Science.gov

Sample records for 2-d image code

  1. Snapshot 2D tomography via coded aperture x-ray scatter imaging

    PubMed Central

    MacCabe, Kenneth P.; Holmgren, Andrew D.; Tornai, Martin P.; Brady, David J.

    2015-01-01

    This paper describes a fan beam coded aperture x-ray scatter imaging system which acquires a tomographic image from each snapshot. This technique exploits cylindrical symmetry of the scattering cross section to avoid the scanning motion typically required by projection tomography. We use a coded aperture with a harmonic dependence to determine range, and a shift code to determine cross-range. Here we use a forward-scatter configuration to image 2D objects and use serial exposures to acquire tomographic video of motion within a plane. Our reconstruction algorithm also estimates the angular dependence of the scattered radiance, a step toward materials imaging and identification. PMID:23842254

  2. Embedded morphological dilation coding for 2D and 3D images

    NASA Astrophysics Data System (ADS)

    Lazzaroni, Fabio; Signoroni, Alberto; Leonardi, Riccardo

    2002-01-01

    Current wavelet-based image coders obtain high performance thanks to the identification and exploitation of the statistical properties of natural images in the transformed domain. Zerotree-based algorithms, such as Embedded Zerotree Wavelets (EZW) and Set Partitioning In Hierarchical Trees (SPIHT), offer high Rate-Distortion (RD) coding performance and low computational complexity by exploiting statistical dependencies among insignificant coefficients on hierarchical subband structures. Another possible approach tries to predict the clusters of significant coefficients by means of some form of morphological dilation. An example of a morphology-based coder is the Significance-Linked Connected Component Analysis (SLCCA), which has shown performance comparable to the zerotree-based coders but is not embedded. A new embedded bit-plane coder is proposed here, based on morphological dilation of significant coefficients and context-based arithmetic coding. The algorithm is able to exploit both intra-band and inter-band statistical dependencies among significant wavelet coefficients. Moreover, the same approach is used for both two- and three-dimensional wavelet-based image compression. Finally, the algorithms are tested on some 2D images and on a medical volume, comparing the RD results to those obtained with state-of-the-art wavelet-based coders.
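
    A minimal Python/NumPy sketch of the dilation step described above, assuming a bit-plane coder that halves its threshold each pass; the structuring element and threshold schedule are illustrative assumptions, not the paper's exact coder.

        # Hedged sketch: predict where new significant wavelet coefficients appear
        # by morphologically dilating the map of already-significant positions.
        import numpy as np
        from scipy.ndimage import binary_dilation

        def newly_significant(coeffs, threshold, significant):
            """coeffs: wavelet coefficients; significant: boolean map of known significant positions."""
            candidates = binary_dilation(significant) & ~significant   # dilated frontier
            return candidates & (np.abs(coeffs) >= threshold)          # test against current bit-plane

        # usage idea: per bit-plane, OR the newly found positions into `significant`
        # and halve `threshold` before the next pass.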

  3. 2-d Finite Element Code Postprocessor

    SciTech Connect

    Sanford, L. A.; Hallquist, J. O.

    1996-07-15

    ORION is an interactive program that serves as a postprocessor for the analysis programs NIKE2D, DYNA2D, TOPAZ2D, and CHEMICAL TOPAZ2D. ORION reads binary plot files generated by the two-dimensional finite element codes currently used by the Methods Development Group at LLNL. Contour and color fringe plots of a large number of quantities may be displayed on meshes consisting of triangular and quadrilateral elements. ORION can compute strain measures, interface pressures along slide lines, reaction forces along constrained boundaries, and momentum. ORION has been applied to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.

  4. Staring 2-D hadamard transform spectral imager

    DOEpatents

    Gentry, Stephen M.; Wehlburg, Christine M.; Wehlburg, Joseph C.; Smith, Mark W.; Smith, Jody L.

    2006-02-07

    A staring imaging system inputs a 2D spatial image containing multi-frequency spectral information. This image is encoded in one dimension with a cyclic Hadamard S-matrix. The resulting image is detected with a spatial 2D detector, and a computer applies a Hadamard transform to recover the encoded image.
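
    A hedged numerical sketch of the encode/decode mathematics only (not the patented optics): one dimension of a toy spectral cube is multiplexed with rows of an S-matrix and recovered by the inverse transform. A Sylvester-derived S-matrix stands in for the cyclic mask, and all sizes are placeholders.

        import numpy as np
        from scipy.linalg import hadamard

        n = 8                                    # Hadamard order (power of 2)
        H = hadamard(n)                          # +1/-1 Sylvester Hadamard matrix
        S = (1 - H[1:, 1:]) // 2                 # (n-1) x (n-1) binary S-matrix (0/1 mask rows)

        cube = np.random.rand(n - 1, 32, 16)     # toy (spectral, y, x) data cube
        encoded = np.tensordot(S, cube, axes=1)  # each measurement is an S-weighted sum

        decoded = np.tensordot(np.linalg.inv(S.astype(float)), encoded, axes=1)
        assert np.allclose(decoded, cube)        # lossless recovery in the noise-free case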

  5. SPECT Imaging of 2-D and 3-D Distributed Sources with Near-Field Coded Aperture Collimation: Computer Simulation and Real Data Validation.

    PubMed

    Mu, Zhiping; Dobrucki, Lawrence W; Liu, Yi-Hwa

    The imaging of distributed sources with near-field coded aperture (CA) remains extremely challenging and is broadly considered unsuitable for single-photon emission computerized tomography (SPECT). This study proposes a novel CA SPECT reconstruction approach and evaluates the feasibilities of imaging and reconstructing distributed hot sources and cold lesions using near-field CA collimation and iterative image reconstruction. Computer simulations were designed to compare CA and pinhole collimations in two-dimensional radionuclide imaging. Digital phantoms were created and CA images of the phantoms were reconstructed using maximum likelihood expectation maximization (MLEM). Errors and the contrast-to-noise ratio (CNR) were calculated and image resolution was evaluated. An ex vivo rat heart with myocardial infarction was imaged using a micro-SPECT system equipped with a custom-made CA module and a commercial 5-pinhole collimator. Rat CA images were reconstructed via the three-dimensional (3-D) MLEM algorithm developed for CA SPECT with and without correction for a large projection angle, and 5-pinhole images were reconstructed using the commercial software provided by the SPECT system. Phantom images of CA were markedly improved in terms of image quality, quantitative root-mean-squared error, and CNR, as compared to pinhole images. CA and pinhole images yielded similar image resolution, while CA collimation resulted in fewer noise artifacts. CA and pinhole images of the rat heart were well reconstructed and the myocardial perfusion defects could be clearly discerned from 3-D CA and 5-pinhole SPECT images, whereas 5-pinhole SPECT images suffered from severe noise artifacts. Image contrast of CA SPECT was further improved after correction for the large projection angle used in the rat heart imaging. The computer simulations and small-animal imaging study presented herein indicate that the proposed 3-D CA SPECT imaging and reconstruction approaches worked reasonably well.

  6. 2D microwave imaging reflectometer electronics

    SciTech Connect

    Spear, A. G.; Domier, C. W.; Hu, X.; Muscatello, C. M.; Ren, X.; Luhmann, N. C.; Tobias, B. J.

    2014-11-15

    A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program.

  7. 2D microwave imaging reflectometer electronics.

    PubMed

    Spear, A G; Domier, C W; Hu, X; Muscatello, C M; Ren, X; Tobias, B J; Luhmann, N C

    2014-11-01

    A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program.

  8. Validation and testing of the VAM2D computer code

    SciTech Connect

    Kool, J.B.; Wu, Y.S.

    1991-10-01

    This document describes two modeling studies conducted by HydroGeoLogic, Inc. for the US NRC under contract no. NRC-04089-090, entitled "Validation and Testing of the VAM2D Computer Code." VAM2D is a two-dimensional, variably saturated flow and transport code, with applications for performance assessment of nuclear waste disposal. The computer code itself is documented in a separate NUREG document (NUREG/CR-5352, 1989). The studies presented in this report involve application of the VAM2D code to two diverse subsurface modeling problems. The first involves modeling of infiltration and redistribution of water and solutes in an initially dry, heterogeneous field soil. This application involves detailed modeling over a relatively short, 9-month time period. The second problem pertains to the application of VAM2D to the modeling of a waste disposal facility in a fractured clay, over much larger space and time scales and with particular emphasis on the applicability and reliability of using an equivalent porous medium approach for simulating flow and transport in fractured geologic media. Reflecting the separate and distinct nature of the two problems studied, this report is organized in two separate parts. 61 refs., 31 figs., 9 tabs.

  9. ORION96. 2-d Finite Element Code Postprocessor

    SciTech Connect

    Sanford, L.A.; Hallquist, J.O.

    1992-02-02

    ORION is an interactive program that serves as a postprocessor for the analysis programs NIKE2D, DYNA2D, TOPAZ2D, and CHEMICAL TOPAZ2D. ORION reads binary plot files generated by the two-dimensional finite element codes currently used by the Methods Development Group at LLNL. Contour and color fringe plots of a large number of quantities may be displayed on meshes consisting of triangular and quadrilateral elements. ORION can compute strain measures, interface pressures along slide lines, reaction forces along constrained boundaries, and momentum. ORION has been applied to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.

  10. ELLIPT2D: A Flexible Finite Element Code Written in Python

    SciTech Connect

    Pletzer, A.; Mollis, J.C.

    2001-03-22

    The use of the Python scripting language for scientific applications, and in particular to solve partial differential equations, is explored. It is shown that Python's rich data structures and object-oriented features can be exploited to write programs that are not only significantly more concise than their counterparts written in Fortran, C or C++, but are also numerically efficient. To illustrate this, a two-dimensional finite element code (ELLIPT2D) has been written. ELLIPT2D provides a flexible and easy-to-use framework for solving a large class of second-order elliptic problems. The program allows for structured or unstructured meshes. All functions defining the elliptic operator are user supplied, and so are the boundary conditions, which can be of Dirichlet, Neumann or Robin type. ELLIPT2D makes extensive use of dictionaries (hash tables) as a way to represent sparse matrices. Other key features of the Python language that have been widely used include operator overloading, error handling, array slicing, and the Tkinter module for building graphical user interfaces. As an example of the utility of ELLIPT2D, a nonlinear solution of the Grad-Shafranov equation is computed using a Newton iterative scheme. A second application focuses on a solution of the toroidal Laplace equation coupled to a magnetohydrodynamic stability code, a problem arising in the context of magnetic fusion research.
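
    A short illustration of the dictionary-as-sparse-matrix idea mentioned above; the class and method names below are hypothetical and do not reproduce ELLIPT2D's actual API.

        class DictSparseMatrix:
            """Toy sparse matrix stored as a Python dict keyed by (row, col)."""

            def __init__(self, n):
                self.n = n
                self.data = {}

            def add(self, i, j, value):
                # accumulate entries, e.g. while assembling finite element contributions
                self.data[(i, j)] = self.data.get((i, j), 0.0) + value

            def matvec(self, x):
                y = [0.0] * self.n
                for (i, j), a in self.data.items():
                    y[i] += a * x[j]
                return y

        # quick smoke test: a 4x4 1-D Laplacian stencil applied to a constant vector
        A = DictSparseMatrix(4)
        for i in range(4):
            A.add(i, i, 2.0)
            if i + 1 < 4:
                A.add(i, i + 1, -1.0)
                A.add(i + 1, i, -1.0)
        print(A.matvec([1.0, 1.0, 1.0, 1.0]))    # [1.0, 0.0, 0.0, 1.0]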

  11. Quantifying Therapeutic and Diagnostic Efficacy in 2D Microvascular Images

    NASA Technical Reports Server (NTRS)

    Parsons-Wingerter, Patricia; Vickerman, Mary B.; Keith, Patricia A.

    2009-01-01

    VESGEN is a newly automated, user-interactive program that maps and quantifies the effects of vascular therapeutics and regulators on microvascular form and function. VESGEN analyzes two-dimensional, black and white vascular images by measuring important vessel morphology parameters. This software guides the user through each required step of the analysis process via a concise graphical user interface (GUI). Primary applications of the VESGEN code are 2D vascular images acquired as clinical diagnostic images of the human retina and as experimental studies of the effects of vascular regulators and therapeutics on vessel remodeling.

  12. PiCode: A New Picture-Embedding 2D Barcode.

    PubMed

    Chen, Changsheng; Huang, Wenjian; Zhou, Baojian; Liu, Chenchen; Mow, Wai Ho

    2016-08-01

    Nowadays, 2D barcodes have been widely used as an interface to connect potential customers and advertisement contents. However, the appearance of a conventional 2D barcode pattern is often too obtrusive to integrate into an aesthetically designed advertisement. Besides, no human-readable information is provided before the barcode is successfully decoded. This paper proposes a new picture-embedding 2D barcode, called PiCode, which mitigates these two limitations by equipping a scannable 2D barcode with a picturesque appearance. PiCode is designed with careful consideration of both the perceptual quality of the embedded image and the decoding robustness of the encoded message. Comparisons with existing beautified 2D barcodes show that PiCode achieves one of the best perceptual qualities for the embedded image, and maintains a better tradeoff between image quality and decoding robustness in various application conditions. PiCode has been implemented in MATLAB on a PC, and some key building blocks have also been ported to Android and iOS platforms. Its practicality for real-world applications has been successfully demonstrated.

  13. 2D FEM Heat Transfer & E&M Field Code

    SciTech Connect

    1992-04-02

    TOPAZ and TOPAZ2D are two-dimensional implicit finite element computer codes for heat transfer analysis. TOPAZ2D can also be used to solve electrostatic and magnetostatic problems. The programs solve for the steady-state or transient temperature or electrostatic and magnetostatic potential field on two-dimensional planar or axisymmetric geometries. Material properties may be temperature or potential-dependent and either isotropic or orthotropic. A variety of time and temperature-dependent boundary conditions can be specified including temperature, flux, convection, and radiation. By implementing the user subroutine feature, users can model chemical reaction kinetics and allow for any type of functional representation of boundary conditions and internal heat generation. The programs can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in the material surrounding the enclosure. Additional features include thermal contact resistance across an interface, bulk fluids, phase change, and energy balances.

  14. CFD code comparison for 2D airfoil flows

    NASA Astrophysics Data System (ADS)

    Sørensen, Niels N.; Méndez, B.; Muñoz, A.; Sieros, G.; Jost, E.; Lutz, T.; Papadakis, G.; Voutsinas, S.; Barakos, G. N.; Colonia, S.; Baldacchino, D.; Baptista, C.; Ferreira, C.

    2016-09-01

    The current paper presents the effort, in the EU AVATAR project, to establish the necessary requirements to obtain consistent lift over drag ratios among seven CFD codes. The flow around a 2D airfoil case is studied, for both transitional and fully turbulent conditions, at Reynolds numbers of 3 × 10^6 and 15 × 10^6. The necessary grid resolution, domain size, and iterative convergence criteria to have consistent results are discussed, and suggestions are given for best practice. For the fully turbulent results, four out of seven codes provide consistent results. For the laminar-turbulent transitional results, only three out of seven provided results, and the agreement is generally lower than for the fully turbulent case.

  15. A 2D histogram representation of images for pooling

    NASA Astrophysics Data System (ADS)

    Yu, Xinnan; Zhang, Yu-Jin

    2011-03-01

    Designing a suitable image representation is one of the most fundamental issues of computer vision. There are three steps in the popular Bag of Words image representation: feature extraction, coding and pooling. In the final step, current methods degrade an M × K encoded feature matrix to a K-dimensional vector (histogram), where M is the number of features and K is the size of the codebook: information is lost dramatically here. In this paper, a novel pooling method, based on a 2-D histogram representation, is proposed to retain more information from the encoded image features. This pooling method can be easily incorporated into state-of-the-art computer vision frameworks. Experiments show that our approach improves current pooling methods, and can achieve satisfactory performance in image classification and image reranking even when using a small codebook and a costless linear SVM.
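
    A hedged sketch of the pooling idea: instead of collapsing the M × K code matrix to a K-bin histogram, keep a per-codeword histogram over quantized activation strengths. The bin count and normalization below are assumptions, not the paper's exact design.

        import numpy as np

        def hist2d_pool(codes, bins=8):
            """codes: (M, K) nonnegative encoded features -> (K, bins) pooled matrix."""
            M, K = codes.shape
            edges = np.linspace(0.0, codes.max() + 1e-12, bins + 1)
            pooled = np.empty((K, bins))
            for k in range(K):
                pooled[k], _ = np.histogram(codes[:, k], bins=edges)
            return pooled / M                    # normalize across image sizes

        # conventional average pooling keeps only codes.mean(axis=0), a K-vector;
        # pooled.ravel() can be fed to a linear SVM in place of that vector.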

  16. Field depth extension of 2D barcode scanner based on wavefront coding and projection algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Tingyu; Ye, Zi; Zhang, Wenzi; Huang, Weiwei; Yu, Feihong

    2008-03-01

    Wavefront coding (WFC) used in 2D barcode scanners can extend the depth of field to a great extent with a simpler structure than an autofocus microscope system. With a cubic phase mask (CPM) placed at the aperture stop, blurred images are obtained on the charge-coupled device (CCD), which can be restored by digital filters. Direct methods are widely used in real-time restoration with good computational efficiency, but they smooth details. Here, the result of the direct method is first filtered by a hard-threshold function. The positions of the steps can then be detected by simple differential operators. With the positions corrected by a projection algorithm, the exact barcode information is restored. A wavefront coding system with a 7 mm effective focal length and an F-number of 6 is designed as an example. Although the magnification differs, images at different object distances can be restored with the single point spread function (PSF) computed for a 200 mm object distance. A QR code (Quick Response Code) of 31 mm × 27 mm is used as the target object. Simulation results show that the sharp imaging object distance ranges from 80 mm to 355 mm. The 2D barcode scanner with wavefront coding extends the depth of field with a simple structure, low cost, and large manufacturing tolerance. The combination of the direct filter and the projection algorithm proposed here recovers the exact 2D barcode information with good computational efficiency.
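
    A minimal sketch of the restoration chain described above for a single 1-D scan line, assuming a known PSF and a Wiener filter as the "direct method"; the noise-to-signal ratio and threshold are placeholders.

        import numpy as np

        def wiener_restore(blurred, psf, nsr=1e-2):
            """1-D Wiener deconvolution in the Fourier domain."""
            H = np.fft.fft(psf, n=blurred.size)
            G = np.conj(H) / (np.abs(H) ** 2 + nsr)
            return np.real(np.fft.ifft(np.fft.fft(blurred) * G))

        def bar_edges(signal, thresh=0.5):
            """Hard-threshold the restored line, then locate steps with a difference operator."""
            binary = (signal > thresh * signal.max()).astype(float)
            return np.nonzero(np.abs(np.diff(binary)) > 0)[0]    # candidate bar-edge positions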

  17. Reconstruction-based 3D/2D image registration.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    In this paper we present a novel 3D/2D registration method, where first, a 3D image is reconstructed from a few 2D X-ray images and next, the preoperative 3D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure. Because the quality of the reconstructed image is generally low, we introduce a novel asymmetric mutual information similarity measure, which is able to cope with low image quality as well as with different imaging modalities. The novel 3D/2D registration method has been evaluated using standardized evaluation methodology and publicly available 3D CT, 3DRX, and MR and 2D X-ray images of two spine phantoms, for which gold standard registrations were known. In terms of robustness, reliability and capture range the proposed method outperformed the gradient-based method and the method based on digitally reconstructed radiographs (DRRs).

  18. Embedded foveation image coding.

    PubMed

    Wang, Z; Bovik, A C

    2001-01-01

    The human visual system (HVS) is highly space-variant in sampling, coding, processing, and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. By taking advantage of this fact, it is possible to remove considerable high-frequency information redundancy from the peripheral regions and still reconstruct a perceptually good quality image. Great success has been obtained previously by a class of embedded wavelet image coding algorithms, such as the embedded zerotree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT) algorithms. Embedded wavelet coding not only provides very good compression performance, but also has the property that the bitstream can be truncated at any point and still be decoded to recreate a reasonably good quality image. In this paper, we propose an embedded foveation image coding (EFIC) algorithm, which orders the encoded bitstream to optimize foveated visual quality at arbitrary bit-rates. A foveation-based image quality metric, namely, foveated wavelet image quality index (FWQI), plays an important role in the EFIC system. We also developed a modified SPIHT algorithm to improve the coding efficiency. Experiments show that EFIC integrates foveation filtering with foveated image coding and demonstrates very good coding performance and scalability in terms of foveated image quality measurement.

  19. Photorealistic image synthesis and camera validation from 2D images

    NASA Astrophysics Data System (ADS)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shape of simple objects and more complex ones from multiple 2D images, including infrared and digital images from indoor scenes and only digital images from outdoor scenes, and then add the reconstructed object to the simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explores different properties in the reconstructions of the scenes, including light, color, texture, shapes and different views. To achieve the highest possible resolution, it was necessary to extract partial textures from visible surfaces. To recover the 3D shapes and the depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. To implement the methods mentioned above, the Matlab tool was used. The technique presented here also lets us simulate short, simple videos by reconstructing a sequence of multiple scenes of the video separated by small margins of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used. Low bandwidth perception based features include edges and motion.

  20. Numerical solution to the Vlasov equation: The 2D code

    NASA Astrophysics Data System (ADS)

    Fijalkow, Eric

    1999-02-01

    The present code solves the two-dimensional Vlasov equation for a system that is periodic in space, in the presence of an external magnetic field B_0. The self-consistent electric field given by Poisson's equation is computed by Fast Fourier Transform (FFT). The output of the code consists of a list of diagnostics, such as total mass conservation, total momentum and energies, and of projections of the distribution function onto different subspaces, such as the x-v_x space, the x-y space, and so on.
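
    A minimal sketch, in Python, of the FFT-based Poisson step mentioned above (the Vlasov advection itself is omitted); the grid, box lengths, and normalization are placeholders.

        import numpy as np

        def poisson_fft(rho, lx=1.0, ly=1.0):
            """Solve laplacian(phi) = -rho on a periodic 2-D grid; rho has shape (ny, nx)."""
            ny, nx = rho.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, d=lx / nx)
            ky = 2 * np.pi * np.fft.fftfreq(ny, d=ly / ny)
            k2 = kx[None, :] ** 2 + ky[:, None] ** 2
            k2[0, 0] = 1.0                        # avoid divide-by-zero for the mean mode
            phi_hat = np.fft.fft2(rho) / k2
            phi_hat[0, 0] = 0.0                   # fix the (arbitrary) mean of phi to zero
            return np.real(np.fft.ifft2(phi_hat))

        # the electric field then follows from E = -grad(phi), e.g. via np.gradient.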

  1. Numerical modelling of spallation in 2D hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Maw, J. R.; Giles, A. R.

    1996-05-01

    A model for spallation based on the void growth model of Johnson has been implemented in 2D Lagrangian and Eulerian hydrocodes. The model has been extended to treat complete separation of material when voids coalesce and to describe the effects of elevated temperatures and melting. The capabilities of the model are illustrated by comparison with data from explosively generated spall experiments. Particular emphasis is placed on the prediction of multiple spall effects in weak, low melting point, materials such as lead. The correlation between the model predictions and observations on the strain rate dependence of spall strength is discussed.

  2. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.

  3. Available information in 2D motional Stark effect imaging.

    PubMed

    Creese, Mathew; Howard, John

    2010-10-01

    Recent advances in imaging techniques have allowed the extension of the standard polarimetric 1D motional Stark effect (MSE) diagnostic to 2D imaging of the internal magnetic field of fusion devices [J. Howard, Plasma Phys. Controlled Fusion 50, 125003 (2008)]. This development is met with the challenge of identifying and extracting the new information, which can then be used to increase the accuracy of plasma equilibrium and current density profile determinations. This paper develops a 2D analysis of the projected MSE polarization orientation and Doppler phase shift. It is found that, for a standard viewing position, the 2D MSE imaging system captures sufficient information to allow imaging of the internal vertical magnetic field component B_Z(r,z) in a tokamak.

  4. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be stated mathematically as a problem of 2D sparse decomposition. Based on CS, we propose a novel measurement strategy for ISAR imaging radar and utilize random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into 1D by a Kronecker product, which increases the size of the dictionary and the computational cost sharply. In this paper, we introduce the 2D-SL0 algorithm into the reconstruction of the image. It is proved that 2D-SL0 achieves results equivalent to other 1D reconstruction methods, but the computational complexity and memory usage are reduced significantly. Moreover, we present the results of simulation experiments and demonstrate the effectiveness and feasibility of our method.
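
    The dictionary-size argument can be made concrete with a toy separable measurement model; the random matrices and sizes below are placeholders, and the snippet illustrates the 2-D model rather than the 2D-SL0 solver itself.

        import numpy as np

        n_r, n_a = 256, 256                      # range and azimuth grid sizes
        m_r, m_a = 64, 64                        # random sub-samples kept per dimension
        Phi_r = np.random.randn(m_r, n_r)        # range measurement matrix
        Phi_a = np.random.randn(m_a, n_a)        # azimuth measurement matrix

        X = np.zeros((n_r, n_a)); X[10, 20] = 1.0    # toy sparse scene
        Y = Phi_r @ X @ Phi_a.T                      # separable 2-D measurements, 64 x 64

        # equivalent vectorized model: vec(Y) = kron(Phi_a, Phi_r) @ vec(X), where the
        # Kronecker matrix is (64*64) x (256*256), roughly 2.7e8 entries -- the cost that
        # working directly with Phi_r and Phi_a (as 2D-SL0 does) avoids.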

  5. 2D/3D Image Registration using Regression Learning

    PubMed Central

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-01-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object’s 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region’s motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method’s application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof. PMID:24058278

  6. CAST2D: A finite element computer code for casting process modeling

    SciTech Connect

    Shapiro, A.B.; Hallquist, J.O.

    1991-10-01

    CAST2D is a coupled thermal-stress finite element computer code for casting process modeling. This code can be used to predict the final shape and stress state of cast parts. CAST2D couples the heat transfer code TOPAZ2D and solid mechanics code NIKE2D. CAST2D has the following features in addition to all the features contained in the TOPAZ2D and NIKE2D codes: (1) a general purpose thermal-mechanical interface algorithm (i.e., slide line) that calculates the thermal contact resistance across the part-mold interface as a function of interface pressure and gap opening; (2) a new phase change algorithm, the delta function method, that is a robust method for materials undergoing isothermal phase change; (3) a constitutive model that transitions between fluid behavior and solid behavior, and accounts for material volume change on phase change; and (4) a modified plot file data base that allows plotting of thermal variables (e.g., temperature, heat flux) on the deformed geometry. Although the code is specialized for casting modeling, it can be used for other thermal stress problems (e.g., metal forming).

  7. 2D Orthogonal Locality Preserving Projection for Image Denoising.

    PubMed

    Shikkenawis, Gitam; Mitra, Suman K

    2016-01-01

    Sparse representations using transform-domain techniques are widely used for better interpretation of the raw data. Orthogonal locality preserving projection (OLPP) is a linear technique that tries to preserve local structure of data in the transform domain as well. Vectorized nature of OLPP requires high-dimensional data to be converted to vector format, hence may lose spatial neighborhood information of raw data. On the other hand, processing 2D data directly, not only preserves spatial information, but also improves the computational efficiency considerably. The 2D OLPP is expected to learn the transformation from 2D data itself. This paper derives mathematical foundation for 2D OLPP. The proposed technique is used for image denoising task. Recent state-of-the-art approaches for image denoising work on two major hypotheses, i.e., non-local self-similarity and sparse linear approximations of the data. Locality preserving nature of the proposed approach automatically takes care of self-similarity present in the image while inferring sparse basis. A global basis is adequate for the entire image. The proposed approach outperforms several state-of-the-art image denoising approaches for gray-scale, color, and texture images.

  8. Numerical Simulation of Supersonic Compression Corners and Hypersonic Inlet Flows Using the RPLUS2D Code

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1994-01-01

    A two-dimensional computational code, RPLUS2D, which was developed for the reactive propulsive flows of ramjets and scramjets, was validated for two-dimensional shock-wave/turbulent-boundary-layer interactions. The problem of compression corners at supersonic speeds was solved using the RPLUS2D code. To validate the RPLUS2D code at hypersonic speeds, it was applied to a realistic hypersonic inlet geometry. Both the Baldwin-Lomax and the Chien two-equation turbulence models were used. Computational results showed that the RPLUS2D code compared very well with experimentally obtained data for supersonic compression corner flows, except in the case of large separated flows resulting from the interactions between the shock wave and the turbulent boundary layer. The computational results also compared well with experimental results for a hypersonic NASA P8 inlet case, with the Chien two-equation turbulence model performing better than the Baldwin-Lomax model.

  9. Real-time 2-D temperature imaging using ultrasound.

    PubMed

    Liu, Dalong; Ebbini, Emad S

    2010-01-01

    We have previously introduced methods for noninvasive estimation of temperature change using diagnostic ultrasound. The basic principle was validated both in vitro and in vivo by several groups worldwide. Some limitations remain, however, that have prevented these methods from being adopted in monitoring and guidance of minimally invasive thermal therapies, e.g., RF ablation and high-intensity focused ultrasound (HIFU). In this letter, we present first results from a real-time system for 2-D imaging of temperature change using pulse-echo ultrasound. The front end of the system is a commercially available scanner equipped with a research interface, which allows control of the imaging sequence and access to the RF data in real time. A high-frame-rate 2-D RF acquisition mode, M2D, is used to capture the transients of tissue motion/deformation in response to pulsed HIFU. The M2D RF data are streamed to the back end of the system, where a 2-D temperature imaging algorithm based on speckle tracking is implemented on a graphics processing unit. The real-time images of temperature change are computed on the same spatial and temporal grid as the M2D RF data, i.e., no decimation. Verification of the algorithm was performed by monitoring localized HIFU-induced heating of a tissue-mimicking elastography phantom. These results clearly demonstrate the repeatability and sensitivity of the algorithm. Furthermore, we present in vitro results demonstrating the possible use of this algorithm for imaging changes in tissue parameters due to HIFU-induced lesions. These results clearly demonstrate the value of real-time data streaming and processing in the monitoring and guidance of minimally invasive thermotherapy.
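
    A hedged sketch of the speckle-tracking principle behind such temperature estimators: the local time shift between pre- and post-heating RF A-lines is found by windowed cross-correlation, and its axial gradient is (to first order) proportional to the temperature change. Window length, hop, and the constant are assumptions, not the authors' GPU implementation.

        import numpy as np

        def local_shifts(rf_pre, rf_post, win=64, hop=32):
            """Return per-window lag (in samples) between two RF A-lines."""
            shifts = []
            for start in range(0, rf_pre.size - win, hop):
                a = rf_pre[start:start + win] - rf_pre[start:start + win].mean()
                b = rf_post[start:start + win] - rf_post[start:start + win].mean()
                xc = np.correlate(b, a, mode='full')
                shifts.append(int(np.argmax(xc)) - (win - 1))
            return np.array(shifts)

        # delta_T ~ k * np.gradient(local_shifts(pre, post)), with k a tissue-dependent constant.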

  10. Confocal coded aperture imaging

    DOEpatents

    Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.

    2001-01-01

    A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of the every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
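
    An illustrative sketch of the correlation-decoding step named in the claim, with a random 50%-open binary mask standing in for the patented confocal coded aperture and a single point source as the target volume.

        import numpy as np
        from scipy.signal import convolve2d, correlate2d

        rng = np.random.default_rng(1)
        aperture = (rng.random((31, 31)) < 0.5).astype(float)   # toy 50%-open mask
        scene = np.zeros((64, 64)); scene[20, 40] = 1.0         # single point source

        shadow = convolve2d(scene, aperture, mode='same')       # detector shadowgram
        decoded = correlate2d(shadow, aperture, mode='same')    # correlation reconstruction

        # the source reappears as a peak near (20, 40) on top of a flat pedestal:
        print(np.unravel_index(decoded.argmax(), decoded.shape))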

  11. Coded source neutron imaging

    SciTech Connect

    Bingham, Philip R; Santos-Villalobos, Hector J

    2011-01-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.

  12. Coded source neutron imaging

    NASA Astrophysics Data System (ADS)

    Bingham, Philip; Santos-Villalobos, Hector; Tobin, Ken

    2011-03-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100μm and 10μm aperture hole diameters show resolutions matching the hole diameters.

  13. [3D display of sequential 2D medical images].

    PubMed

    Lu, Yisong; Chen, Yazhu

    2003-12-01

    A detailed review is given in this paper on various current 3D display methods for sequential 2D medical images and the new development in 3D medical image display. True 3D display, surface rendering, volume rendering, 3D texture mapping and distributed collaborative rendering are discussed in depth. For two kinds of medical applications: Real-time navigation system and high-fidelity diagnosis in computer aided surgery, different 3D display methods are presented.

  14. Building 3D scenes from 2D image sequences

    NASA Astrophysics Data System (ADS)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.

  15. TOPAZ2D heat transfer code users manual and thermal property data base

    SciTech Connect

    Shapiro, A.B.; Edwards, A.L.

    1990-05-01

    TOPAZ2D is a two dimensional implicit finite element computer code for heat transfer analysis. This user's manual provides information on the structure of a TOPAZ2D input file. Also included is a material thermal property data base. This manual is supplemented with The TOPAZ2D Theoretical Manual and the TOPAZ2D Verification Manual. TOPAZ2D has been implemented on the CRAY, SUN, and VAX computers. TOPAZ2D can be used to solve for the steady state or transient temperature field on two dimensional planar or axisymmetric geometries. Material properties may be temperature dependent and either isotropic or orthotropic. A variety of time and temperature dependent boundary conditions can be specified including temperature, flux, convection, and radiation. Time or temperature dependent internal heat generation can be defined locally by element or globally by material. TOPAZ2D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in material surrounding the enclosure. Additional features include thermally controlled reactive chemical mixtures, thermal contact resistance across an interface, bulk fluid flow, phase change, and energy balances. Thermal stresses can be calculated using the solid mechanics code NIKE2D which reads the temperature state data calculated by TOPAZ2D. A three dimensional version of the code, TOPAZ3D is available. The material thermal property data base, Chapter 4, included in this manual was originally published in 1969 by Art Edwards for use with his TRUMP finite difference heat transfer code. The format of the data has been altered to be compatible with TOPAZ2D. Bob Bailey is responsible for adding the high explosive thermal property data.

  16. Multiresolution image representation using combined 2-D and 1-D directional filter banks.

    PubMed

    Tanaka, Yuichi; Ikehara, Masaaki; Nguyen, Truong Q

    2009-02-01

    In this paper, effective multiresolution image representations using a combination of 2-D filter bank (FB) and directional wavelet transform (WT) are presented. The proposed methods yield simple implementation and low computation costs compared to previous 1-D and 2-D FB combinations or adaptive directional WT methods. Furthermore, they are nonredundant transforms and realize quad-tree like multiresolution representations. In applications on nonlinear approximation, image coding, and denoising, the proposed filter banks show visual quality improvements and have higher PSNR than the conventional separable WT or the contourlet.

  17. Seepage and Piping through Levees and Dikes using 2D and 3D Modeling Codes

    DTIC Science & Technology

    2016-06-01

    Cheng, Hwai-Ping; England, Stephen M.; Murray, Clarissa M. Coastal and Hydraulics Laboratory, Flood & Coastal Storm Damage Reduction Program, ERDC/CHL TR-16-6, final report, June 2016: Seepage and Piping through Levees and Dikes Using 2D and 3D Modeling Codes.

  18. Targeted fluorescence imaging enhanced by 2D materials: a comparison between 2D MoS2 and graphene oxide.

    PubMed

    Xie, Donghao; Ji, Ding-Kun; Zhang, Yue; Cao, Jun; Zheng, Hu; Liu, Lin; Zang, Yi; Li, Jia; Chen, Guo-Rong; James, Tony D; He, Xiao-Peng

    2016-08-04

    Here we demonstrate that 2D MoS2 can enhance the receptor-targeting and imaging ability of a fluorophore-labelled ligand. The 2D MoS2 has an enhanced working concentration range when compared with graphene oxide, resulting in the improved imaging of both cell and tissue samples.

  19. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities in synthesizing stereoscopic images. The power of today's DirectX/OpenGL optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software built on the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with moveable and flexible depth-map-altered textured surfaces, and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  20. Volumetric elasticity imaging with a 2-D CMUT array.

    PubMed

    Fisher, Ted G; Hall, Timothy J; Panda, Satchi; Richards, Michael S; Barbone, Paul E; Jiang, Jingfeng; Resnick, Jeff; Barnes, Steve

    2010-06-01

    This article reports the use of a two-dimensional (2-D) capacitive micro-machined ultrasound transducer (CMUT) to acquire radio-frequency (RF) echo data from relatively large volumes of a simple ultrasound phantom to compare three-dimensional (3-D) elasticity imaging methods. Typical 2-D motion tracking for elasticity image formation was compared with three different methods of 3-D motion tracking, with sum-squared difference (SSD) used as the similarity measure. Differences among the algorithms were the degree to which they tracked elevational motion: not at all (2-D search), planar search, combination of multiple planes and plane independent guided search. The cross-correlation between the predeformation and motion-compensated postdeformation RF echo fields was used to quantify motion tracking accuracy. The lesion contrast-to-noise ratio was used to quantify image quality. Tracking accuracy and strain image quality generally improved with increased tracking sophistication. When used as input for a 3-D modulus reconstruction, high quality 3-D displacement estimates yielded accurate and low noise modulus reconstruction.

  1. Volumetric Elasticity Imaging with a 2D CMUT Array

    PubMed Central

    Fisher, Ted G.; Hall, Timothy J.; Panda, Satchi; Richards, Michael S.; Barbone, Paul E.; Jiang, Jingfeng; Resnick, Jeff; Barnes, Steve

    2010-01-01

    This paper reports the use of a two-dimensional (2D) capacitive micro-machined ultrasound transducer (CMUT) to acquire radio frequency (RF) echo data from relatively large volumes of a simple ultrasound phantom to compare 3D elasticity imaging methods. Typical 2D motion tracking for elasticity image formation was compared to three different methods of 3D motion tracking, with sum-squared difference (SSD) used as the similarity measure. Differences among the algorithms were the degree to which they tracked elevational motion: not at all (2D search), planar search, combination of multiple planes, and plane independent guided search. The cross correlation between the pre-deformation and motion-compensated post-deformation RF echo fields was used to quantify motion tracking accuracy. The lesion contrast-to-noise ratio was used to quantify image quality. Tracking accuracy and strain image quality generally improved with increased tracking sophistication. When used as input for a 3D modulus reconstruction, high quality 3D displacement estimates yielded accurate and low noise modulus reconstruction. PMID:20510188

  2. Volume Calculation of Venous Thrombosis Using 2D Ultrasound Images.

    PubMed

    Dhibi, M; Puentes, J; Bressollette, L; Guias, B; Solaiman, B

    2005-01-01

    Venous thrombosis screening exams use 2D ultrasound images, from which medical experts obtain a rough idea of the thrombosis aspect and infer an approximate volume. Such estimation is essential to follow the evolution of the thrombosis. This paper proposes a method to calculate venous thrombosis volume from non-parallel 2D ultrasound images, taking advantage of a priori knowledge about the thrombosis shape. An interactive ellipse-fitting contour segmentation extracts the 2D thrombosis contours. Then, a Delaunay triangulation is applied to the set of 2D segmented contours positioned in 3D, and to the area that each contour defines, to obtain a global 3D surface reconstruction of the thrombosis with a dense triangulation inside the contours. Volume is calculated from the obtained surface and contour triangulation using a maximum unit normal component approach. Preliminary results obtained on 3 plastic phantoms and 3 in vitro venous thromboses, as well as one in vivo case, are presented and discussed. Volume estimation errors below 4.5% for the plastic phantoms and 3.5% for the in vitro venous thromboses were obtained.
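
    Once a closed triangulated surface of the thrombus exists, its volume can be computed from the triangle list alone. The snippet below uses the standard divergence-theorem formula, which is not necessarily identical to the paper's maximum unit normal component variant.

        import numpy as np

        def mesh_volume(vertices, faces):
            """vertices: (N, 3) floats; faces: (M, 3) indices of a closed, consistently oriented mesh."""
            v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
            signed = np.einsum('ij,ij->i', v0, np.cross(v1, v2)) / 6.0   # signed tetrahedron volumes
            return abs(signed.sum())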

  3. Development of an Implicit, Charge and Energy Conserving 2D Electromagnetic PIC Code on Advanced Architectures

    NASA Astrophysics Data System (ADS)

    Payne, Joshua; Taitano, William; Knoll, Dana; Liebs, Chris; Murthy, Karthik; Feltman, Nicolas; Wang, Yijie; McCarthy, Colleen; Cieren, Emanuel

    2012-10-01

    In order to solve problems such as ion coalescence and slow MHD shocks fully kinetically, we developed a fully implicit 2D energy- and charge-conserving electromagnetic PIC code, PlasmaApp2D. PlasmaApp2D differs from previous implicit PIC implementations in that it will utilize advanced architectures such as GPUs and shared-memory CPU systems, with problems too large to fit into cache. PlasmaApp2D will be a hybrid CPU-GPU code developed primarily to run on the DARWIN cluster at LANL, utilizing four 12-core AMD Opteron CPUs and two NVIDIA Tesla GPUs per node. MPI will be used for cross-node communication, OpenMP will be used for on-node parallelism, and CUDA will be used for the GPUs. Development progress and initial results will be presented.

  4. Optical CDMA system using 2-D run-length limited code

    NASA Astrophysics Data System (ADS)

    Liu, Maw-Yang; Jiang, Joe-Air

    2010-10-01

    In this paper, a time-spreading wavelength-hopping optical CDMA system using a 2-D run-length limited code is investigated. The run-length limited code we use here is predicated upon a spatial coding scheme, which can improve system performance significantly. In our proposed system, we employ the carrier-hopping prime code and its shifted versions as signature sequences. Based on the zero auto-correlation sidelobe property of the signature sequences, we propose a two-state trellis coding architecture which utilizes a 2-D parallel detection scheme. The proposed scheme is compact and simple and can be applied to more complicated trellises to further enhance system performance. Multiple access interference is the main deterioration factor that adversely affects performance in optical CDMA systems. Aside from multiple access interference, other adverse impacts on system performance are also taken into consideration, including thermal noise, shot noise, relative intensity noise, and beat noise.

  5. Bayesian 2D Current Reconstruction from Magnetic Images

    NASA Astrophysics Data System (ADS)

    Clement, Colin B.; Bierbaum, Matthew K.; Nowack, Katja; Sethna, James P.

    We employ a Bayesian image reconstruction scheme to recover 2D currents from magnetic flux imaged with scanning SQUIDs (Superconducting Quantum Interferometric Devices). Magnetic flux imaging is a versatile tool to locally probe currents and magnetic moments, however present reconstruction methods sacrifice resolution due to numerical instability. Using state-of-the-art blind deconvolution techniques we recover the currents, point-spread function and height of the SQUID loop by optimizing the probability of measuring an image. We obtain uncertainties on these quantities by sampling reconstructions. This generative modeling technique could be used to develop calibration protocols for scanning SQUIDs, to diagnose systematic noise in the imaging process, and can be applied to many tools beyond scanning SQUIDs.

  6. Microwave Imaging with Infrared 2-D Lock-in Amplifier

    NASA Astrophysics Data System (ADS)

    Chiyo, Noritaka; Arai, Mizuki; Tanaka, Yasuhiro; Nishikata, Atsuhiro; Maeno, Takashi

    We have developed a 3-D electromagnetic field measurement system using a 2-D lock-in amplifier. This system uses an amplitude-modulated electromagnetic wave source to heat a resistive screen. A very small change of temperature on a screen illuminated with the modulated electromagnetic wave is measured using an infrared thermograph camera. In this paper, we attempted to apply our system to microwave imaging. By placing conductor patches in front of the resistive screen and illuminating them with microwaves, the shape of each conductor was clearly observed as a temperature difference image of the screen. In this way, the conductor pattern inside a non-contact type IC card could be visualized. Moreover, we could observe the temperature difference image reflecting the shape of a konnyaku (a gelatinous food made from devil's-tongue starch) or a dried fishbone, both non-conducting materials resembling the human body. These results proved that our method is applicable to microwave see-through imaging.

  7. A novel point cloud registration using 2D image features

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng

    2017-01-01

    Since a 3D scanner only captures a scene of a 3D object at a time, 3D registration of multiple scenes is the key issue of 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing-angle images and then uses the 2D feature-based matching method, SURF, to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between their matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (an SVD-based approach). Therefore, the 3D model of an object can be reconstructed by aligning those point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly; the performance is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
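
    A brief sketch of the SVD-based orthogonal Procrustes step mentioned above, recovering a rigid rotation and translation from matched 3D point pairs; variable names are illustrative.

        import numpy as np

        def procrustes_rigid(P, Q):
            """P, Q: (N, 3) matched points; returns R, t such that Q ~= P @ R.T + t."""
            p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
            H = (P - p_mean).T @ (Q - q_mean)            # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T                           # proper rotation (det = +1)
            t = q_mean - R @ p_mean
            return R, t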

  8. Image Appraisal for 2D and 3D Electromagnetic Inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
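
    As a rough sketch of the linearized appraisal quantities described above: for a damped least-squares inverse, the model resolution matrix is R = (JᵀWJ + λI)⁻¹JᵀWJ, whose columns show how a point perturbation is smeared, and the posterior covariance follows by propagating the data errors through the generalized inverse, its diagonal mapping data noise into parameter error. The Jacobian below is a random stand-in, not a real EM sensitivity matrix, and the regularization is plain Tikhonov rather than the schemes used in the paper.

      # Hedged sketch: model resolution and posterior covariance for a damped least-squares inverse.
      import numpy as np

      def appraisal(J, data_var, lam):
          """Return the model resolution matrix R and posterior model covariance C."""
          W = np.eye(J.shape[0]) / data_var              # inverse data covariance
          A = J.T @ W @ J + lam * np.eye(J.shape[1])
          G_dag = np.linalg.solve(A, J.T @ W)            # generalized inverse
          R = G_dag @ J                                  # columns = point-spread functions
          C = G_dag @ np.linalg.inv(W) @ G_dag.T         # propagated data errors
          return R, C

      J = np.random.randn(200, 50)                       # 200 data, 50 model cells (stand-in)
      R, C = appraisal(J, data_var=0.01, lam=1.0)
      param_error = np.sqrt(np.diag(C))                  # image of parameter uncertainty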

  9. 2-D Drift Velocities from the IMAGE EUV Plasmaspheric Imager

    NASA Technical Reports Server (NTRS)

    Gallagher, D.; Adrian, M.

    2007-01-01

    The IMAGE Mission extreme ultraviolet imager (EUV) observes He+ plasmaspheric ions throughout the inner magnetosphere. Limited by ionizing radiation and viewing close to the Sun, images of the He+ distribution are available every 10 minutes for many hours as the spacecraft passes through apogee in its highly elliptical orbit. As a consistent constituent at about 15%, He+ is an excellent surrogate for monitoring all of the processes that control the dynamics of plasmaspheric plasma. In particular, the motion of He+ transverse to the ambient magnetic field is a direct indication of convective electric fields. The analysis of boundary motions has already achieved new insights into the electrodynamic coupling processes taking place between energetic magnetospheric plasmas and the ionosphere. Yet to be fulfilled, however, is the original promise that global EUV images of the plasmasphere might yield two-dimensional pictures of meso-scale to macro-scale electric fields in the inner magnetosphere. This work details the technique and initial application of an IMAGE EUV analysis that appears capable of following thermal plasma motion on a global basis.

  10. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
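
    A toy sketch of the core approximate-matching idea, assuming plain raster-order block coding and ignoring the paper's k-d tree search, run-length and arithmetic coding stages and adaptive distortion control: each block is either encoded as a pointer to an already-coded block whose mean-squared distortion is below a threshold, or stored literally.

      # Hedged toy sketch of approximate 2D pattern matching (2D-PMC flavour, greatly simplified).
      import numpy as np

      def encode_blocks(img, bs=8, search=32, max_mse=50.0):
          """Return ('ref', dy, dx) pointer tokens or ('raw', block) literals in raster order."""
          H, W = img.shape
          tokens = []
          for y in range(0, H - bs + 1, bs):
              for x in range(0, W - bs + 1, bs):
                  block = img[y:y + bs, x:x + bs].astype(float)
                  best = None
                  # search candidate blocks lying entirely in already-coded rows above
                  for dy in range(max(0, y - search), max(0, y - bs + 1)):
                      for dx in range(max(0, x - search), min(W - bs, x + search) + 1):
                          cand = img[dy:dy + bs, dx:dx + bs].astype(float)
                          mse = np.mean((block - cand) ** 2)
                          if best is None or mse < best[0]:
                              best = (mse, dy, dx)
                  if best is not None and best[0] <= max_mse:
                      tokens.append(('ref', best[1], best[2]))   # cheap pointer token
                  else:
                      tokens.append(('raw', block))              # literal fallback
          return tokens

      img = np.tile(np.arange(64.0), (64, 1)) + 10 * np.random.rand(64, 64)
      tokens = encode_blocks(img)
      matched_fraction = sum(t[0] == 'ref' for t in tokens) / len(tokens)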

  11. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
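
    The analytic-convolution property that makes this fast is elementary: the convolution of two Gaussians is a Gaussian whose mean and covariance are the sums of the inputs', so convolving a galaxy mixture with a PSF mixture simply combines components pairwise. The sketch below is a generic illustration of that idea, not the NGMIX API, and the component parameters are made up.

      # Hedged sketch: analytic convolution and rendering of 2D Gaussian mixtures.
      import numpy as np

      def convolve_mixtures(gal, psf):
          """Components are (weight, mean(2,), cov(2,2)); convolution combines them pairwise."""
          return [(wg * wp, mg + mp, Cg + Cp)
                  for wg, mg, Cg in gal
                  for wp, mp, Cp in psf]

      def render(mixture, shape):
          y, x = np.mgrid[:shape[0], :shape[1]]
          pts = np.stack([y.ravel(), x.ravel()], axis=1).astype(float)
          img = np.zeros(pts.shape[0])
          for w, m, C in mixture:
              d = pts - m
              Cinv = np.linalg.inv(C)
              norm = w / (2 * np.pi * np.sqrt(np.linalg.det(C)))
              img += norm * np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, Cinv, d))
          return img.reshape(shape)

      gal = [(1.0, np.array([32.0, 32.0]), np.array([[9.0, 2.0], [2.0, 4.0]]))]
      psf = [(0.8, np.array([0.0, 0.0]), np.eye(2) * 2.0),
             (0.2, np.array([0.0, 0.0]), np.eye(2) * 6.0)]
      image = render(convolve_mixtures(gal, psf), (64, 64))   # PSF-convolved galaxy model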

  12. Joint 2D and 3D phase processing for quantitative susceptibility mapping: application to 2D echo-planar imaging.

    PubMed

    Wei, Hongjiang; Zhang, Yuyao; Gibbs, Eric; Chen, Nan-Kuei; Wang, Nian; Liu, Chunlei

    2017-04-01

    Quantitative susceptibility mapping (QSM) measures tissue magnetic susceptibility and typically relies on time-consuming three-dimensional (3D) gradient-echo (GRE) MRI. Recent studies have shown that two-dimensional (2D) multi-slice gradient-echo echo-planar imaging (GRE-EPI), which is commonly used in functional MRI (fMRI) and other dynamic imaging techniques, can also be used to produce data suitable for QSM with much shorter scan times. However, the production of high-quality QSM maps is difficult because data obtained by 2D multi-slice scans often have phase inconsistencies across adjacent slices and strong susceptibility field gradients near air-tissue interfaces. To address these challenges in 2D EPI-based QSM studies, we present a new data processing procedure that integrates 2D and 3D phase processing. First, 2D Laplacian-based phase unwrapping and 2D background phase removal are performed to reduce phase inconsistencies between slices and remove in-plane harmonic components of the background phase. This is followed by 3D background phase removal for the through-plane harmonic components. The proposed phase processing was evaluated with 2D EPI data obtained from healthy volunteers, and compared against conventional 3D phase processing using the same 2D EPI datasets. Our QSM results were also compared with QSM values from time-consuming 3D GRE data, which were taken as ground truth. The experimental results show that this new 2D EPI-based QSM technique can produce quantitative susceptibility measures that are comparable with those of 3D GRE-based QSM across different brain regions (e.g. subcortical iron-rich gray matter, cortical gray and white matter). This new 2D EPI QSM reconstruction method is implemented within STI Suite, which is a comprehensive shareware for susceptibility imaging and quantification. Copyright © 2016 John Wiley & Sons, Ltd.
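
    As a rough sketch of the 2D Laplacian-based unwrapping step mentioned above (not the STI Suite implementation): the unwrapped phase can be estimated by solving a Poisson equation whose right-hand side is cos(psi)*lap(sin(psi)) - sin(psi)*lap(cos(psi)), where psi is the wrapped phase. The version below inverts the Laplacian with periodic FFT boundaries for brevity, whereas practical pipelines typically use DCT-based solvers and follow up with background-field removal.

      # Hedged sketch: Laplacian-based 2D phase unwrapping with an FFT Poisson solver.
      import numpy as np

      def _laplacian_multiplier(shape):
          ky = 2 * np.pi * np.fft.fftfreq(shape[0])
          kx = 2 * np.pi * np.fft.fftfreq(shape[1])
          return -(ky[:, None] ** 2 + kx[None, :] ** 2)

      def laplacian_unwrap_2d(psi):
          L = _laplacian_multiplier(psi.shape)
          lap = lambda f: np.real(np.fft.ifft2(L * np.fft.fft2(f)))
          rhs = np.cos(psi) * lap(np.sin(psi)) - np.sin(psi) * lap(np.cos(psi))
          K = L.copy()
          K[0, 0] = 1.0                                  # avoid division by zero at DC
          Fhat = np.fft.fft2(rhs)
          Fhat[0, 0] = 0.0                               # mean phase is undetermined
          phi = np.real(np.fft.ifft2(Fhat / K))
          return phi - phi.mean()

      # Toy usage: a smooth quadratic phase wrapped into (-pi, pi]; expect some
      # edge error from the periodic-boundary assumption.
      y, x = np.mgrid[-64:64, -64:64] / 64.0
      true_phase = 8 * (x ** 2 + y ** 2)
      wrapped = np.angle(np.exp(1j * true_phase))
      unwrapped = laplacian_unwrap_2d(wrapped)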

  13. 2-D fluorescence lifetime imaging using a time-gated image intensifier

    NASA Astrophysics Data System (ADS)

    Dowling, K.; Hyde, S. C. W.; Dainty, J. C.; French, P. M. W.; Hares, J. D.

    1997-02-01

    We report a 2-D fluorescence lifetime imaging system based on a time-gated image intensifier and a Cr:LiSAF regenerative amplifier. We have demonstrated 185 ps temporal resolution. The deleterious effects of optical scattering are demonstrated.

  14. Tracking of deformable target in 2D ultrasound images

    NASA Astrophysics Data System (ADS)

    Royer, Lucas; Marchal, Maud; Le Bras, Anthony; Dardenne, Guillaume; Krupa, Alexandre

    2015-03-01

    In this paper, we propose a novel approach for automatically tracking a deformable target within 2D ultrasound images. Our approach uses only dense information combined with a physically-based model, and therefore has the advantage of not requiring any fiducial markers or a priori knowledge of the anatomical environment. The physical model is represented by a mass-spring-damper system driven by different types of forces, where the external forces are obtained by maximizing an image similarity metric between a reference target and the deformed target across time. This deformation is represented by a parametric warping model whose optimal parameters are estimated from the intensity variation. This warping function is well suited to represent localized deformations in ultrasound images because it directly links the forces applied on each mass with the motion of all the pixels in its vicinity. The internal forces constrain the deformation to physically plausible motions and reduce the sensitivity to speckle noise. The approach was validated on simulated and real data, both for rigid and free-form motions of soft tissues. The results are very promising since the deformable target could be tracked with good accuracy for both types of motion. Our approach opens novel possibilities for computer-assisted interventions where deformable organs are involved and could be used as a new tool for interactive tracking of soft tissues in ultrasound images.
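
    A toy sketch of the mass-spring-damper machinery described above, with illustrative parameters and a placeholder external-force array standing in for the image-similarity forces used in the paper:

      # Hedged sketch: semi-implicit Euler step for a small 2D mass-spring-damper system.
      import numpy as np

      def step(pos, vel, rest, edges, f_ext, k=50.0, c=2.0, m=1.0, dt=1e-2):
          """pos, vel, f_ext: (N, 2) arrays; edges: list of (i, j); rest: dict of rest lengths."""
          f = f_ext.copy()
          for i, j in edges:                             # internal spring + damping forces
              d = pos[j] - pos[i]
              length = np.linalg.norm(d) + 1e-12
              dirn = d / length
              f_spring = k * (length - rest[(i, j)]) * dirn
              f_damp = c * np.dot(vel[j] - vel[i], dirn) * dirn
              f[i] += f_spring + f_damp
              f[j] -= f_spring + f_damp
          vel = vel + dt * f / m                         # semi-implicit (symplectic) Euler
          pos = pos + dt * vel
          return pos, vel

      # Toy usage: two masses joined by a stretched spring relax toward the rest length
      pos = np.array([[0.0, 0.0], [1.2, 0.0]])
      vel = np.zeros_like(pos)
      edges, rest = [(0, 1)], {(0, 1): 1.0}
      for _ in range(200):
          pos, vel = step(pos, vel, rest, edges, f_ext=np.zeros_like(pos))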

  15. A scanning-mode 2D shear wave imaging (s2D-SWI) system for ultrasound elastography.

    PubMed

    Qiu, Weibao; Wang, Congzhi; Li, Yongchuan; Zhou, Juan; Yang, Ge; Xiao, Yang; Feng, Ge; Jin, Qiaofeng; Mu, Peitian; Qian, Ming; Zheng, Hairong

    2015-09-01

    Ultrasound elastography is widely used for the non-invasive measurement of tissue elasticity properties. Shear wave imaging (SWI) is a quantitative method for assessing tissue stiffness. SWI has been demonstrated to be less operator-dependent than quasi-static elastography, and, in contrast with acoustic radiation force impulse (ARFI) imaging, it acquires quantitative elasticity information. However, traditional SWI implementations cannot acquire two-dimensional (2D) quantitative images of the tissue elasticity distribution. This study proposes and evaluates a scanning-mode 2D SWI (s2D-SWI) system. The hardware and image processing algorithms are presented in detail. Programmable devices are used to support flexible control of the system and the image processing algorithms. An analytic-signal-based cross-correlation method and a Radon-transform-based shear wave speed determination method are proposed, both of which can be implemented using parallel computation. Imaging tests on tissue-mimicking phantoms, in vitro, and in vivo are conducted to demonstrate the performance of the proposed system. The s2D-SWI system represents a new choice for the quantitative mapping of tissue elasticity, and has great potential for implementation in commercial ultrasound scanners.
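
    One ingredient named above, estimating the shear wave speed from its time of flight between two lateral positions using the analytic-signal envelope and cross-correlation, can be sketched as follows; the Radon-transform variant and the real hardware pipeline are not reproduced, and the sampling parameters are illustrative.

      # Hedged sketch: shear wave speed from envelope cross-correlation between two positions.
      import numpy as np
      from scipy.signal import hilbert, correlate

      def shear_wave_speed(disp_a, disp_b, dx, fs):
          """disp_a, disp_b: displacement vs. slow time at two positions dx (m) apart; fs in Hz."""
          env_a = np.abs(hilbert(disp_a))                # analytic-signal envelopes
          env_b = np.abs(hilbert(disp_b))
          xc = correlate(env_b - env_b.mean(), env_a - env_a.mean(), mode='full')
          lag = np.argmax(xc) - (len(disp_a) - 1)        # samples by which b trails a
          dt = lag / fs
          return dx / dt if dt != 0 else np.inf

      # Toy usage: a Gaussian pulse arriving 2 ms later at the second position -> ~2 m/s
      fs = 10_000
      t = np.arange(0, 0.02, 1 / fs)
      pulse = lambda t0: np.exp(-((t - t0) / 1e-3) ** 2)
      c = shear_wave_speed(pulse(0.005), pulse(0.007), dx=0.004, fs=fs)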

  16. Detection of Leptomeningeal Metastasis by Contrast-Enhanced 3D T1-SPACE: Comparison with 2D FLAIR and Contrast-Enhanced 2D T1-Weighted Images

    PubMed Central

    Gil, Bomi; Hwang, Eo-Jin; Lee, Song; Jang, Jinhee; Jung, So-Lyung; Ahn, Kook-Jin; Kim, Bum-soo

    2016-01-01

    Introduction: To compare the diagnostic accuracy of contrast-enhanced 3D (three-dimensional) T1-weighted sampling perfection with application-optimized contrasts using different flip angle evolutions (T1-SPACE), 2D fluid-attenuated inversion recovery (FLAIR) images and 2D contrast-enhanced T1-weighted images in the detection of leptomeningeal metastasis, without resorting to invasive procedures such as CSF tapping. Materials and Methods: Three groups of patients were included retrospectively over 9 months (from 2013-04-01 to 2013-12-31): group 1, patients with positive malignant cells in CSF cytology (n = 22); group 2, stroke patients with steno-occlusion in the ICA or MCA (n = 16); and group 3, patients with negative results on MRI, whose symptoms were dizziness or headache (n = 25). A total of 63 sets of MR images were separately collected and randomly arranged: (1) CE 3D T1-SPACE; (2) 2D FLAIR; and (3) CE T1-GRE, acquired on a 3-Tesla MR system. A faculty neuroradiologist with 8 years of experience and a second-year radiology trainee reviewed each MR image, blinded to the results of CSF cytology, and coded their observations as positive or negative for leptomeningeal metastasis. The CSF cytology result was considered the gold standard. Sensitivity and specificity of each MR image set were calculated. Diagnostic accuracy was compared using McNemar's test. A Cohen's kappa analysis was performed to assess inter-observer agreement. Results: Diagnostic accuracy did not differ between 3D T1-SPACE and CSF cytology for either rater. However, the accuracy of 2D FLAIR and 2D contrast-enhanced T1-weighted GRE was inconsistent between the two raters. The kappa statistics were 0.657 (3D T1-SPACE), 0.420 (2D FLAIR), and 0.160 (2D contrast-enhanced T1-weighted GRE); the 3D T1-SPACE images showed the highest inter-observer agreement. Conclusions: Compared to 2D FLAIR and 2D contrast-enhanced T1-weighted GRE, contrast-enhanced 3D T1-SPACE showed a better detection rate of

  17. 2D ERT imaging of tracer dispersion in laboratory experiments

    NASA Astrophysics Data System (ADS)

    Lekmine, G.; Pessel, M.; Auradou, H.

    2009-12-01

    Cross-borehole electrical resistivity tomography is a method often used to follow the invasion process of pollutants. The aim of this work is to test experimentally the electrode arrays and inversion processes used to obtain a spatial representation of tracer propagation in porous media. Experiments were conducted in a plexiglass container filled with glass beads of 166 microns in diameter. The height of the container is 275 mm, its width 85 mm and its thickness 10 mm. Twenty-one equally spaced electrodes are placed along each of the lateral sides of the porous medium; these electrodes are used to perform the electrical measurements. The device is lit from behind and a video camera records the fluid propagation. The tracer (i.e. the pollutant) is a water solution containing a known amount of dye together with NaCl (0.5 g/l up to 1.5 g/l). The medium is first saturated by a water solution containing a slight concentration of NaCl so that its density is smaller than the tracer's. An upward flow is first established, with the denser fluid injected at the bottom and over the full width of the medium. In this way, the flow is stabilized by gravity, avoiding the development of unstable fingers. Still, the fluids are miscible and a mixing front develops during the flow: in the present study, the aim is to estimate the 2D tracer front dispersion by both optical and electrical imaging. The comparison of the two techniques allows us to study the ability of the inversion process to quantify the solute transport. A sensitivity analysis is conducted in order to determine the best measurement sequence for monitoring the evolution of the tracer front through the entire volume of the medium. Hence, each time step consists of the same set of 190 transverse dipole-dipole measurements, lasting 5 minutes between the first and the last measurement. At the laboratory scale, the experimental design affects the measurements through edge effects: most of these artefacts can be partially suppressed by using

  18. Augmented depth perception visualization in 2D/3D image fusion.

    PubMed

    Wang, Jian; Kreiser, Matthias; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal

    2014-12-01

    2D/3D image fusion applications are widely used in endovascular interventions. Complaints from interventionists about existing state-of-the-art visualization software usually relate to the strong compromise between 2D and 3D visibility or to the lack of depth perception. In this paper, we investigate several concepts enabling improvement of the image fusion visualization currently found in the operating room. First, a contour-enhanced visualization is used to circumvent hidden information in the X-ray image. Second, an occlusion and depth color-coding scheme is considered to improve depth perception. To validate our visualization technique, both phantom and clinical data are considered. An evaluation is performed in the form of a questionnaire which included 24 participants: ten clinicians and fourteen non-clinicians. Results indicate that the occlusion correction method provides 100% correctness when determining the true position of an aneurysm in X-ray images. Further, when an RGB or RB color-depth encoding is integrated in the image fusion, both perception and intuitiveness are improved.

  19. A New Family of 2-D Optical Orthogonal Codes and Analysis of Its Performance in Optical CDMA Access Networks

    NASA Astrophysics Data System (ADS)

    Shurong, Sun; Yin, Hongxi; Wang, Ziyu; Xu, Anshi

    2006-04-01

    A new family of two-dimensional optical orthogonal codes (2-D OOC), one-coincidence frequency hop code (OCFHC)/OOC, which employs OCFHC and OOC as the wavelength-hopping and time-spreading patterns, respectively, is proposed in this paper. In contrast to previously constructed 2-D OOCs, OCFHC/OOC provides more choices of the number of available wavelengths, and its cardinality achieves the theoretical upper bound without sacrificing good auto- and cross-correlation properties, i.e., the correlation properties of the code remain ideal. Meanwhile, we utilize a new measure, called effective normalized throughput, to compare the performance of diverse codes applicable to optical code division multiple access (OCDMA) systems, in addition to the conventional measure of bit error rate. The results indicate that our code performs better than existing OCDMA codes, is truly applicable to OCDMA networks as a multiple-access code, and will greatly facilitate the implementation of OCDMA access networks.

  20. Transport simulations of the C-2 and C-2U Field Reversed Configurations with the Q2D code

    NASA Astrophysics Data System (ADS)

    Onofri, Marco; Dettrick, Sean; Barnes, Daniel; Tajima, Toshiki; TAE Team

    2016-10-01

    The Q2D code is a 2D MHD code, which includes a neutral fluid and separate ion and electron temperatures, coupled with a 3D Monte Carlo code, which is used to calculate source terms due to neutral beams. Q2D has been benchmarked against the 1D transport code Q1D and is used to simulate the evolution of the C-2 and C-2U field reversed configuration experiments [1]. Q2D simulations start from an initial equilibrium, and transport coefficients are chosen to match C-2 experimental data. C-2U is an upgrade of C-2, with more beam power and angled beam injection, which demonstrates plasma sustainment for 5+ ms. The simulations use the same transport coefficients for C-2 and C-2U, showing the formation of a steady state in C-2U, sustained by fast ion pressure and current drive.

  1. Computation of nozzle flow fields using the PARC2D Navier-Stokes code

    NASA Technical Reports Server (NTRS)

    Collins, Frank G.

    1986-01-01

    Supersonic nozzles which operate at low Reynolds numbers and have large expansion ratios have very thick boundary layers at their exit. This leads to a very strong viscous/inviscid interaction upon the flow within the nozzle and the traditional nozzle design techniques which correct the inviscid core with a boundary layer displacement do not accurately predict the nozzle exit conditions. A full Navier-Stokes code (PARC2D) was used to compute the nozzle flow field. Grids were generated using the interactive grid generator code TBGG. All computations were made on the NASA MSFC CRAY X-MP computer. Comparison was made between the computations and in-house wall pressure measurements for CO2 flow through a conical nozzle having an area ratio of 40. Satisfactory agreement existed between the computations and measurements for a stagnation pressure of 29.4 psia and stagnation temperature of 1060 R. However, agreement did not exist at a stagnation pressure of 7.4 psia. Several reasons for the lack of agreement are possible. The computational code assumed a constant gas gamma whereas gamma for CO2 varied from 1.22 in the plenum chamber to 1.38 at the nozzle exit. Finally, it is possible that condensation occurred during the expansion at the lower stagnation pressure.
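
    For context, the inviscid baseline such viscous computations are compared against follows from the quasi-1D isentropic area-Mach relation; the sketch below solves it for the supersonic exit Mach number at an area ratio of 40 and shows its sensitivity to the assumed constant gamma, one of the issues noted above for CO2 expansions. This is a generic textbook relation, not part of PARC2D.

      # Hedged sketch: supersonic exit Mach number from the isentropic area-Mach relation.
      import numpy as np
      from scipy.optimize import brentq

      def area_ratio(M, gamma):
          """A/A* as a function of Mach number for isentropic flow."""
          return (1.0 / M) * ((2.0 / (gamma + 1.0)) *
                              (1.0 + 0.5 * (gamma - 1.0) * M ** 2)) ** (
                                  (gamma + 1.0) / (2.0 * (gamma - 1.0)))

      def exit_mach(ar, gamma):
          """Supersonic root of area_ratio(M) = ar."""
          return brentq(lambda M: area_ratio(M, gamma) - ar, 1.0001, 50.0)

      for g in (1.22, 1.30, 1.38):                       # gamma range quoted above for CO2
          print(g, exit_mach(40.0, g))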

  2. Numerical Instability in a 2D Gyrokinetic Code Caused by Divergent E × B Flow

    NASA Astrophysics Data System (ADS)

    Byers, J. A.; Dimits, A. M.; Matsuda, Y.; Langdon, A. B.

    1994-12-01

    In this paper, a numerical instability first observed in a 2D electrostatic gyrokinetic code is described. The instability should also be present in some form in many versions of particle-in-cell simulation codes that employ guiding center drifts. A perturbation analysis of the instability is given, and its results agree quantitatively with the observations from the gyrokinetic code in all respects. The basic mechanism is a false divergence of the E × B flow caused by the interpolation between the grid and the particles, coupled with the specific numerical method for calculating E = -∇φ. Stability or instability depends in detail on the specific choice of particle interpolation method and field method. One common interpolation method, subtracted dipole, is stable. Other commonly used interpolation methods, linear and quadratic, are unstable when combined with a finite difference for the electric field. Linear and quadratic interpolation can be rendered stable if combined with another method for the electric field, the analytic differential of the interpolated potential.

  3. Position coding effects in a 2D scenario: the case of musical notation.

    PubMed

    Perea, Manuel; García-Chamorro, Cristina; Centelles, Arnau; Jiménez, María

    2013-07-01

    How does the cognitive system encode the location of objects in a visual scene? In the past decade, this question has attracted much attention in the field of visual-word recognition (e.g., "jugde" is perceptually very close to "judge"). Letter transposition effects have been explained in terms of perceptual uncertainty or shared "open bigrams". In the present study, we focus on note position coding in music reading (i.e., a 2D scenario). The usual way to display music is the staff (i.e., a set of 5 horizontal lines and their resultant 4 spaces). When reading musical notation, it is critical to identify not only each note (temporal duration), but also its pitch (y-axis) and its temporal sequence (x-axis). To examine note position coding, we employed a same-different task in which two briefly and consecutively presented staves contained four notes. The experiment was conducted with experts (musicians) and non-experts (non-musicians). For the "different" trials, the critical conditions involved staves in which two internal notes were switched vertically, switched horizontally, or fully transposed, as well as the appropriate control conditions. Results revealed that note position coding was only approximate at the early stages of processing and that this encoding process was modulated by expertise. We examine the implications of these findings for models of object position encoding.

  4. [Design of the 2D-FFT image reconstruction software based on Matlab].

    PubMed

    Xu, Hong-yu; Wang, Hong-zhi

    2008-09-01

    This paper presents a Matlab implementation of the 2D-FFT image reconstruction algorithm for magnetic resonance imaging, packaged as a universal COM component that the Windows system can identify. This makes it possible to separate the 2D-FFT image reconstruction algorithm from closed commercial magnetic resonance imaging systems, and provides the ability to process the raw data before reconstruction, which is important for improving image quality, diagnostic value and image post-processing.
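
    The core reconstruction step described above is a centered inverse 2D FFT of the acquired k-space matrix; a minimal NumPy sketch (rather than the paper's Matlab/COM packaging) is:

      # Hedged sketch: basic 2D-FFT magnitude reconstruction from a k-space array.
      import numpy as np

      def recon_2dfft(kspace):
          """Reconstruct a magnitude image from a fully sampled 2D k-space array."""
          img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
          return np.abs(img)

      # Toy usage: simulate k-space from a synthetic phantom and reconstruct it
      phantom = np.zeros((128, 128))
      phantom[40:90, 50:80] = 1.0
      kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
      image = recon_2dfft(kspace)                        # matches the phantom up to numerics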

  5. Bi-sided integral imaging with 2D/3D convertibility using scattering polarizer.

    PubMed

    Yeom, Jiwoon; Hong, Keehoon; Park, Soon-gi; Hong, Jisoo; Min, Sung-Wook; Lee, Byoungho

    2013-12-16

    We propose a two-dimensional (2D) and three-dimensional (3D) convertible bi-sided integral imaging. The proposed system uses the polarization state of projected light for switching its operation mode between 2D and 3D modes. By using an optical module composed of two scattering polarizers and one linear polarizer, the proposed integral imaging system simultaneously provides 3D images with 2D background images for observers who are located in the front and the rear sides of the system. The occlusion effect between 2D images and 3D images is realized by using a compensation mask for 2D images and the elemental images. The principle of proposed system is experimentally verified.

  6. WE-AB-BRA-07: Quantitative Evaluation of 2D-2D and 2D-3D Image Guided Radiation Therapy for Clinical Trial Credentialing, NRG Oncology/RTOG

    SciTech Connect

    Giaddui, T; Yu, J; Xiao, Y; Jacobs, P; Manfredi, D; Linnemann, N

    2015-06-15

    Purpose: 2D-2D kV image guided radiation therapy (IGRT) credentialing evaluation for clinical trial qualification was historically qualitative, based on submitted screen captures of the fusion process. However, as quantitative DICOM 2D-2D and 2D-3D image registration tools are implemented in clinical practice for better precision, especially in centers that treat patients with protons, better IGRT credentialing techniques are needed. The aim of this work is to establish methodologies for quantitatively reviewing IGRT submissions based on DICOM 2D-2D and 2D-3D image registration and to test the methodologies in reviewing 2D-2D and 2D-3D IGRT submissions for RTOG/NRG Oncology clinical trial qualifications. Methods: DICOM 2D-2D and 2D-3D automated and manual image registration have been tested using the Harmony tool in MIM software. 2D kV orthogonal portal images are fused with the reference digitally reconstructed radiographs (DRR) in the 2D-2D registration, while the 2D portal images are fused with the DICOM planning CT image in the 2D-3D registration. The Harmony tool allows alignment of the two images used in the registration process and also calculates the required shifts. Shifts calculated using MIM are compared with those submitted by institutions for IGRT credentialing. Reported shifts are considered acceptable if differences are less than 3 mm. Results: Several tests have been performed on the 2D-2D and 2D-3D registration. The results indicated good agreement between submitted and calculated shifts. A workflow for reviewing these IGRT submissions has been developed and will eventually be used to review IGRT submissions. Conclusion: The IROC Philadelphia RTQA center has developed and tested a new workflow for reviewing DICOM 2D-2D and 2D-3D IGRT credentialing submissions made by different cancer clinical centers, especially proton centers. NRG Center for Innovation in Radiation Oncology (CIRO) and IROC RTQA center continue their collaborative efforts to enhance

  7. Simulation and calculation of particle trapping using a quasistatic 2D simulation code

    NASA Astrophysics Data System (ADS)

    Morshed, Sepehr; Antonsen, Thomas; Huang, Chengkun; Mori, Warren

    2008-11-01

    In LWFA schemes the laser pulse must propagate several centimeters and maintain its coherence over this distance, which corresponds to many Rayleigh lengths. The wakefields and their effect on the laser can be simulated in the quasistatic approximation [1, 2]. In this approximation the assumption is that the driver (laser) does not change shape during the time it takes for it to pass by a plasma particle. As a result, particles that are trapped and moving with near-luminal velocity cannot be treated with this approximation. Here we have modified the 2D code WAKE with an alternate algorithm so that when a plasma particle gains sufficient energy from the wakefields it is promoted to beam-particle status, and may later become trapped in the wakefields of the laser. Similar implementations have been made in the 3D code QUICKPIC [2]. We have also compared WAKE with results from 200 TW laser simulations using OSIRIS [3]. These changes in WAKE will give users a tool that can be used on a desktop machine to simulate GeV acceleration. [1] P. Mora and T. M. Antonsen Jr., Phys. Plasmas 4, 217 (1997); [2] C. Huang et al., J. Comput. Phys. 217 (2006); [3] W. Lu et al., Phys. Rev. ST Accel. Beams 10, 061301 (2007).

  8. Modelling 2001 lahars at Popocatépetl volcano using FLO2D numerical code

    NASA Astrophysics Data System (ADS)

    Caballero, L.; Capra, L.

    2013-12-01

    Popocatépetl volcano is located in the central part of the Transmexican Volcanic Belt. It is one of the most active volcanoes in Mexico and endangers more than 25 million people who live in its surroundings. In recent months, the renewal of its volcanic activity has put the scientific community on alert. One of the possible scenarios is the 2001 explosive activity, which was characterized by an 8 km eruptive column and the subsequent formation of pumice flows up to 4 km from the crater. Lahars were generated a few hours later, remobilizing the new deposits towards the NE flank of the volcano, along Huiloac Gorge, almost reaching the town of Santiago Xalitzintla (Capra et al., 2004). The possibility of a similar scenario makes it very important to reproduce this event in order to accurately delimit lahar hazard zones. In this work, the 2001 lahar deposit is modeled using the FLO2D numerical code. Geophone data are used to reconstruct the initial hydrograph and sediment concentration. A sensitivity study of the most important parameters used by this code, such as the Manning coefficient and the α and β coefficients, was conducted in order to achieve a good simulation. The results obtained were compared with field data and demonstrated good agreement in thickness and flow distribution. A comparison with previously published results from the LAHARZ program (Muñoz-Salinas, 2009) is also made. Additionally, lahars with fluctuating sediment concentrations but similar volume are simulated to observe the influence of the rheological behavior on lahar distribution.

  9. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  10. A review of 3D/2D registration methods for image-guided interventions.

    PubMed

    Markelj, P; Tomaževič, D; Likar, B; Pernuš, F

    2012-04-01

    Registration of pre- and intra-interventional data is one of the key technologies for image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy, and interventional radiology. In this paper, we survey those 3D/2D data registration methods that utilize 3D computer tomography or magnetic resonance images as the pre-interventional data and 2D X-ray projection images as the intra-interventional data. The 3D/2D registration methods are reviewed with respect to image modality, image dimensionality, registration basis, geometric transformation, user interaction, optimization procedure, subject, and object of registration.

  11. 3D-2D registration of cerebral angiograms: a method and evaluation on clinical images.

    PubMed

    Mitrovic, Uroš; Špiclin, Žiga; Likar, Boštjan; Pernuš, Franjo

    2013-08-01

    Endovascular image-guided interventions (EIGI) involve navigation of a catheter through the vasculature followed by application of treatment at the site of anomaly using live 2D projection images for guidance. 3D images acquired prior to EIGI are used to quantify the vascular anomaly and plan the intervention. If fused with the information of live 2D images they can also facilitate navigation and treatment. For this purpose 3D-2D image registration is required. Although several 3D-2D registration methods for EIGI achieve registration accuracy below 1 mm, their clinical application is still limited by insufficient robustness or reliability. In this paper, we propose a 3D-2D registration method based on matching a 3D vasculature model to intensity gradients of live 2D images. To objectively validate 3D-2D registration methods, we acquired a clinical image database of 10 patients undergoing cerebral EIGI and established "gold standard" registrations by aligning fiducial markers in 3D and 2D images. The proposed method had mean registration accuracy below 0.65 mm, which was comparable to tested state-of-the-art methods, and execution time below 1 s. With the highest rate of successful registrations and the highest capture range the proposed method was the most robust and thus a good candidate for application in EIGI.

  12. Featured Image: Tests of an MHD Code

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2016-09-01

    Creating the codes that are used to numerically model astrophysical systems takes a lot of work and a lot of testing! A new, publicly available moving-mesh magnetohydrodynamics (MHD) code, DISCO, is designed to model 2D and 3D orbital fluid motion, such as that of astrophysical disks. In a recent article, DISCO creator Paul Duffell (University of California, Berkeley) presents the code and the outcomes from a series of standard tests of DISCO's stability, accuracy, and scalability. From left to right and top to bottom, the test outputs in the featured image are: a cylindrical Kelvin-Helmholtz flow (showing off DISCO's numerical grid in 2D), a passive scalar in a smooth vortex (can DISCO maintain contact discontinuities?), a global look at the cylindrical Kelvin-Helmholtz flow, a Jupiter-mass planet opening a gap in a viscous disk, an MHD flywheel (a test of DISCO's stability), an MHD explosion revealing shock structures, an MHD rotor (a more challenging version of the explosion), a Flock 3D MRI test (can DISCO study linear growth of the magnetorotational instability in disks?), and a nonlinear 3D MRI test. Citation: Paul C. Duffell 2016 ApJS 226 2. doi:10.3847/0067-0049/226/1/2

  13. 2D Images Recorded With a Single-Sided Magnetic Particle Imaging Scanner.

    PubMed

    Grafe, Ksenija; von Gladiss, Anselm; Bringout, Gael; Ahlborg, Mandy; Buzug, Thorsten M

    2016-04-01

    Magnetic Particle Imaging is a new medical imaging modality, which detects superparamagnetic iron oxide nanoparticles. The particles are excited by magnetic fields. Most scanners have a tube-like measurement field and therefore, both the field of view and the object size are limited. A single-sided scanner has the advantage that the object is not limited in size, only the penetration depth is limited. A single-sided scanner prototype for 1D imaging has been presented in 2009. Simulations have been published for a 2D single-sided scanner and first 1D measurements have been carried out. In this paper, the first 2D single-sided scanner prototype is presented and the first calibration-based reconstruction results of measured 2D phantoms are shown. The field free point is moved on a Lissajous trajectory inside a 30 × 30 mm2 area. Images of phantoms with a maximal distance of 10 mm perpendicular to the scanner surface have been reconstructed. Different cylindrically shaped holes of phantoms have been filled with 6.28 μl undiluted Resovist. After the measurement and image reconstruction of the phantoms, particle volumes could be distinguished with a distance of 2 mm and 6 mm in vertical and horizontal direction, respectively.

  14. Investigation of fast particle driven instabilities by 2D electron cyclotron emission imaging on ASDEX Upgrade

    NASA Astrophysics Data System (ADS)

    Classen, I. G. J.; Lauber, Ph; Curran, D.; Boom, J. E.; Tobias, B. J.; Domier, C. W.; Luhmann, N. C., Jr.; Park, H. K.; Garcia Munoz, M.; Geiger, B.; Maraschek, M.; Van Zeeland, M. A.; da Graça, S.; ASDEX Upgrade Team

    2011-12-01

    Detailed measurements of the 2D mode structure of Alfvén instabilities in the current ramp-up phase of neutral beam heated discharges were performed on ASDEX Upgrade, using the electron cyclotron emission imaging (ECEI) diagnostic. This paper focuses on the observation of reversed shear Alfvén eigenmodes (RSAEs) and bursting modes that, with the use of the information from ECEI, have been identified as beta-induced Alfvén eigenmodes (BAEs). Both RSAEs with first and second radial harmonic mode structures were observed. Calculations with the linear gyro-kinetic code LIGKA revealed that the ratio of the damping rates and the frequency difference between the first and second harmonic modes strongly depended on the shape of the q-profile. The bursting character of the BAE type modes, which were radially localized to rational q surfaces, was observed to sensitively depend on the plasma parameters, ranging from strongly bursting to almost steady state.

  15. Ion cyclotron emission calculations using a 2D full wave numerical code

    NASA Astrophysics Data System (ADS)

    Batchelor, D. B.; Jaeger, E. F.; Colestock, P. L.

    1987-09-01

    Measurement of radiation in the HF band due to cyclotron emission by energetic ions produced by fusion reactions or neutral beam injection promises to be a useful diagnostic on large devices which are entering the reactor regime of operation. A number of complications make the modelling and interpretation of such measurements difficult using conventional geometrical optics methods. In particular, the long wavelength and lack of high directivity of antennas in this frequency regime make observation of a single path across the plasma into a viewing dump impractical. Pickup antennas effectively see the whole plasma, and wall reflection effects are important. We have modified our 2D full wave ICRH code to calculate wave fields due to a distribution of energetic ions in tokamak geometry. The radiation is modeled as due to an ensemble of localized source currents distributed in space. The spatial structure of the coherent wave field is then calculated, including cyclotron harmonic damping, in contrast to the usual procedure of incoherently summing the powers of individual radiators. This method has the advantage that phase information from localized radiating currents is globally retained, so the directivity of the pickup antennas is correctly represented. Also, standing waves and wall reflections are automatically included.

  16. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data

  17. Future trends in image coding

    NASA Astrophysics Data System (ADS)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in advancement of technology and the success of the upcoming commercial products in the market place which will be the main factors in establishing the future stage to image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in developments of theory, software, and hardware coupled with the future needs for use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, predict the future state of image coding. What seems to be certain today is the growing need for bandwidth compression. The television is using a technology which is half a century old and is ready to be replaced by high definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activities in development of theory, software, special purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  18. On Limits of Embedding in 3D Images Based on 2D Watson's Model

    NASA Astrophysics Data System (ADS)

    Kavehvash, Zahra; Ghaemmaghami, Shahrokh

    We extend the Watson image quality metric to 3D images through the concept of integral imaging. In Watson's model, perceptual thresholds for changes to the DCT coefficients of a 2D image are given for information hiding. These thresholds are estimated in such a way that the resulting distortion in the 2D image remains undetectable by the human eye. In this paper, the same perceptual thresholds are estimated for a 3D scene in the integral imaging method. These thresholds are obtained, based on Watson's model, using the relation between the 2D elemental images and the resulting 3D image. The proposed model is evaluated through subjective tests in a typical image steganography scheme.

  19. Multifractal analysis of 2D gray soil images

    NASA Astrophysics Data System (ADS)

    González-Torres, Ivan; Losada, Juan Carlos; Heck, Richard; Tarquis, Ana M.

    2015-04-01

    Soil structure, understood as the spatial arrangement of soil pores, is one of the key factors in soil modelling processes. Geometric properties of individual pores and their morphological parameters can be estimated from thin sections or 3D computed tomography images (Tarquis et al., 2003), but there is no satisfactory method to binarize these images and quantify the complexity of their spatial arrangement (Tarquis et al., 2008; Tarquis et al., 2009; Baveye et al., 2010). The objective of this work was to apply a multifractal technique, based on the singularity (α) and f(α) spectra, to quantify this complexity without applying any threshold (González-Torres, 2014). Intact soil samples were collected from four horizons of an Argisol, formed on the Tertiary Barreiras group of formations in Pernambuco state, Brazil (Itapirema Experimental Station). The natural vegetation of the region is tropical, coastal rainforest. From each horizon, showing different porosities and spatial arrangements, three adjacent samples were taken, giving a set of twelve samples. The intact soil samples were imaged using an EVS (now GE Medical, London, Canada) MS-8 MicroCT scanner with 45 μm pixel-1 resolution (256x256 pixels). Though some samples required paring to fit the 64 mm diameter imaging tubes, field orientation was maintained. References: Baveye, P.C., M. Laba, W. Otten, L. Bouckaert, P. Dello, R.R. Goswami, D. Grinev, A. Houston, Yaoping Hu, Jianli Liu, S. Mooney, R. Pajor, S. Sleutel, A. Tarquis, Wei Wang, Qiao Wei, Mehmet Sezgin. Observer-dependent variability of the thresholding step in the quantitative analysis of soil images and X-ray microtomography data. Geoderma, 157, 51-63, 2010. González-Torres, Iván. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM, 2014. Tarquis, A.M., R.J. Heck, J.B. Grau, J. Fabregat, M.E. Sanchez and J.M. Antón. Influence of Thresholding in Mass and Entropy Dimension of 3-D
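
    One common way to compute a singularity spectrum f(α) directly from gray-level box masses, without any thresholding, is the Chhabra-Jensen moment method sketched below; the box sizes, q range and the synthetic stand-in image are illustrative, and the paper's exact estimator may differ.

      # Hedged sketch: direct (Chhabra-Jensen) estimation of the multifractal spectrum f(alpha).
      import numpy as np

      def box_measures(img, eps):
          """Normalized box masses for boxes of side eps (eps must divide the image size)."""
          h, w = img.shape
          P = img.reshape(h // eps, eps, w // eps, eps).sum(axis=(1, 3))
          return (P / P.sum()).ravel()

      def chhabra_jensen(img, qs, sizes):
          alphas, fs = [], []
          log_eps = np.log(np.array(sizes, dtype=float))
          for q in qs:
              a_num, f_num = [], []
              for eps in sizes:
                  P = box_measures(img, eps)
                  P = P[P > 0]
                  mu = P ** q / np.sum(P ** q)           # q-weighted measure
                  a_num.append(np.sum(mu * np.log(P)))
                  f_num.append(np.sum(mu * np.log(mu)))
              alphas.append(np.polyfit(log_eps, a_num, 1)[0])   # slope -> alpha(q)
              fs.append(np.polyfit(log_eps, f_num, 1)[0])       # slope -> f(alpha(q))
          return np.array(alphas), np.array(fs)

      img = np.random.rand(256, 256) ** 3                # stand-in for a CT soil slice
      alpha, f_alpha = chhabra_jensen(img, qs=np.linspace(-5, 5, 21), sizes=[2, 4, 8, 16, 32])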

  20. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate.

    PubMed

    Trache, Tudor; Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-12-01

    To compare 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, with a focus on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. Thirty-seven patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area and standard 2D longitudinal strain. Regional wall-motion abnormalities were similarly detected by 2D and 3D speckle tracking. 2D speckle tracking of triplane datasets showed results similar to those of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values.

  1. A 2-D imaging heat-flux gauge

    SciTech Connect

    Noel, B.W.; Borella, H.M. ); Beshears, D.L.; Sartory, W.K.; Tobin, K.W.; Williams, R.K. ); Turley, W.D. . Santa Barbara Operations)

    1991-07-01

    This report describes a new leadless two-dimensional imaging optical heat-flux gauge. The gauge is made by depositing arrays of thermographic-phosphor (TP) spots onto the faces of a polymethylpentene insulator. In the first section of the report, we describe several gauge configurations and their prototype realizations. A satisfactory configuration is an array of right triangles on each face that overlay to form squares when the gauge is viewed normal to the surface. The next section of the report treats the thermal conductivity of TPs. We set up an experiment using a comparative longitudinal heat-flow apparatus to measure the previously unknown thermal conductivity of these materials. The thermal conductivity of one TP, Y2O3:Eu, is 0.0137 W/cm·K over the temperature range from about 300 to 360 K. The theories underlying the time response of TP gauges and the imaging characteristics are discussed in the next section. Then we discuss several laboratory experiments to (1) demonstrate that the TP heat-flux gauge can be used in imaging applications; (2) obtain a quantum yield that enumerates what typical optical output signal amplitudes can be obtained from TP heat-flux gauges; and (3) determine whether LANL-designed intensified video cameras have sufficient sensitivity to acquire images from the heat-flux gauges. We obtained positive results from all the measurements. Throughout the text, we note limitations, areas where improvements are needed, and where further research is necessary. 12 refs., 25 figs., 4 tabs.

  2. 3-D Deep Penetration Photoacoustic Imaging with a 2-D CMUT Array.

    PubMed

    Ma, Te-Jen; Kothapalli, Sri Rajasekhar; Vaithilingam, Srikant; Oralkan, Omer; Kamaya, Aya; Wygant, Ira O; Zhuang, Xuefeng; Gambhir, Sanjiv S; Jeffrey, R Brooke; Khuri-Yakub, Butrus T

    2010-10-11

    In this work, we demonstrate 3-D photoacoustic imaging of optically absorbing targets embedded as deep as 5 cm inside a highly scattering background medium using a 2-D capacitive micromachined ultrasonic transducer (CMUT) array with a center frequency of 5.5 MHz. 3-D volumetric images and 2-D maximum intensity projection images are presented to show the objects imaged at different depths. Due to the close proximity of the CMUT to the integrated frontend circuits, the CMUT array imaging system has a low noise floor. This makes the CMUT a promising technology for deep tissue photoacoustic imaging.

  3. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    PubMed

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-01-23

    Multimodal medical image fusion combines information from multiple images in order to improve diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. In this way, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.

  4. 3D/2D image registration: the impact of X-ray views and their number.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2007-01-01

    An important part of image-guided radiation therapy or surgery is registration of a three-dimensional (3D) preoperative image to two-dimensional (2D) images of the patient. It is expected that the accuracy and robustness of a 3D/2D image registration method do not depend solely on the registration method itself but also on the number and projections (views) of intraoperative images. In this study, we systematically investigate these factors by using registered image data, comprising of CT and X-ray images of a cadaveric lumbar spine phantom and the recently proposed 3D/2D registration method. The results indicate that the proportion of successful registrations (robustness) significantly increases when more X-ray images are used for registration.

  5. NEPHTIS: Core depletion validation relying on 2D transport core calculations with the APOLLO2 code

    SciTech Connect

    Damian, F.; Raepsaet, X.; Groizard, M.; Poinot, C.

    2006-07-01

    The CEA, in collaboration with EDF and AREVA-NP, is developing a core modelling tool called NEPHTIS (Neutronic Process for HTGR Innovating Systems), at present dedicated to prismatic block-type HTGRs (High Temperature Gas-Cooled Reactors). Due to the lack of usable HTGR experimental results, confidence in this neutronic computational tool relies essentially on comparisons to reference or best-estimate calculations. In the present analysis, the APOLLO2 deterministic transport code has been selected as the reference for validating core depletion simulations carried out within NEPHTIS. These reference calculations were performed on fully detailed 2D core configurations using the Method of Characteristics. The latter has been validated against the Monte Carlo method for different static core configurations [1], [2] and [3]. All the presented results come from an annular HTGR core loaded with uranium-based fuel (15% enrichment). During the core depletion validation, reactivity, reaction rate distributions and nuclide concentrations were compared. In addition, the impact of various physical and geometrical parameters, such as the core loading (once-through or batch-wise reloading) and the amount of burnable poison, was investigated during the validation phases. The results confirm that NEPHTIS is able to predict the core reactivity with uncertainties of ±350 pcm. At the end of the core irradiation, the U-235 consumption is calculated to within ±0.7%, while the plutonium mass discharged from the core is calculated to within ±1%. As far as the core power distributions are concerned, small discrepancies (< 2.3%) can be observed in the fuel block-averaged power distribution in the core. (authors)

  6. Aircraft target identification based on 2D ISAR images using multiresolution analysis wavelet

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Xiao, Huaitie; Hu, Xiangjiang

    2001-09-01

    The formation of 2D ISAR images for radar target identification holds much promise for additional distinguishability between targets. Since an image contains important information in a wide range of scales, and this information is often independent from one scale to another, wavelet analysis provides a method of identifying the spatial frequency content of an image and the local regions within the image where those spatial frequencies exist. In this paper, a multiresolution analysis wavelet method based on 2D ISAR images is proposed for use in aircraft radar target identification in the context of wideband, high-range-resolution radar. The proposed method is performed in three steps: first, radar backscatter signals are processed to form 2D ISAR images; then Mallat's wavelet algorithm is used to decompose the images; finally, a three-layer perceptron neural net is used as the classifier. The experimental results demonstrate the feasibility of using multiresolution wavelet analysis for target identification.
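
    A minimal sketch of the multiresolution decomposition step, using PyWavelets on a synthetic stand-in image and reducing the subbands to normalized energies that could feed a small neural classifier; the paper's wavelet family, decomposition depth and network are not specified here and are assumed for illustration.

      # Hedged sketch: 2D Mallat decomposition and subband-energy features with PyWavelets.
      import numpy as np
      import pywt

      def wavelet_energy_features(img, wavelet='db4', level=3):
          coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
          feats = [np.sum(coeffs[0] ** 2)]               # approximation-band energy
          for cH, cV, cD in coeffs[1:]:                  # detail subbands per level
              feats += [np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)]
          feats = np.array(feats)
          return feats / feats.sum()                     # normalized energy vector

      isar = np.random.rand(128, 128)                    # stand-in for an ISAR intensity image
      features = wavelet_energy_features(isar)           # length 1 + 3 * level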

  7. Analytic Grad-Shafranov test criteria and checks of a 1-1/2-D BALDUR code

    SciTech Connect

    Seidl, F.G.P.

    1986-05-01

    As discussed by Shafranov, Solov'ev, and others, two special constraints allow the Grad-Shafranov equation to yield simple analytic solutions. From the simplest solution, formulae are derived for properties of the corresponding toroidally symmetric plasma and for the space profile of poloidal magnetic flux density. These formulae constitute test criteria for code performance once the code is made consistent with the two constraints. Obtaining consistency with the first constraint is straightforward, but with the second it is circumstantial. Moreover, the poloidal flux profile of the analytic solution implies a certain artificial form for the resistivity, which is also derived. These criteria have been used to check a composite code which had been assembled by linking a geometrically generalized 1-D BALDUR transport code with a computationally efficient 2-D equilibrium code. A brief description of the composite code is given as well as of its performance with respect to the Grad-Shafranov test criteria.

  8. CHEM2D: a two-dimensional, three-phase, nine-component chemical flood simulator. Volume I. CHEM2D technical description and FORTRAN code

    SciTech Connect

    Fanchi, J.R.

    1985-04-01

    Under the sponsorship of the US Department of Energy, a publicly available chemical simulator has been evaluated and substantially enhanced to serve as a useful tool for projecting polymer or chemical flood performance. The program, CHEM2D, is a two-dimensional, three-phase, nine-component finite-difference numerical simulator. It can model primary depletion, waterfloods, polymer floods, and micellar/polymer floods using heterogeneous linear, areal, or cross-sectional reservoir descriptions. The user may specify well performance as either pressure or rate constrained. Both a constant time step size and a variable time step size based on extrapolation of concentration changes are available as options. A solution technique which is implicit in pressure and explicit in saturations and concentrations is used. The major physical mechanisms that are modeled include adsorption, capillary trapping, cation exchange, dilution, dispersion, interfacial tension, binary or ternary phase behavior, non-Newtonian polymer rheology, and two-phase or three-phase relative permeability. Typical components include water, oil, surfactant, polymer, and three ions (chloride, calcium, and sodium). Components may partition amongst the aqueous, oleic, and microemulsion phases. Volume I of this report provides a discussion of the formulation and algorithms used within CHEM2D. Included in Volume I are a number of validation and illustrative examples, as well as the FORTRAN code. The CHEM2D user's manual, Volume II, contains both the input data sets for the examples presented in Volume I and an example output. All appendices and a phase behavior calculation program are collected in Volume III. 20 references.

  9. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    PubMed Central

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies. PMID:27335531
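
    As a rough illustration of how a mask can enter the 2D similarity computation ("projection masking" in the authors' terms), the sketch below restricts a normalized cross-correlation between a DRR and a radiograph to masked pixels; the arrays are placeholders and the similarity metric actually used in the study is not implied:

    ```python
    # Sketch: similarity between a DRR and a radiograph evaluated only over masked pixels.
    # `drr` and `radiograph` are 2D float arrays of equal shape; `mask` is a boolean array
    # marking pixels considered reliable. All inputs are illustrative placeholders.
    import numpy as np

    def masked_ncc(drr, radiograph, mask):
        a = drr[mask].astype(float)
        b = radiograph[mask].astype(float)
        a -= a.mean()
        b -= b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0
    ```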

  10. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery.

    PubMed

    Ketcha, M D; De Silva, T; Uneri, A; Kleinszig, G; Vogt, S; Wolinsky, J-P; Siewerdsen, J H

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  11. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  12. Fast-neutron, coded-aperture imager

    NASA Astrophysics Data System (ADS)

    Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.

    2015-06-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2D location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to separate the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with 1.8-μCi and 13-μCi 252Cf neutron/γ sources at three standoff distances of 9, 15 and 26 m (the maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led
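
    A schematic view of the decoding step described above (correlating the per-detector counts with a decoding array derived from the known mask pattern to recover a source location) might look like the following; the array sizes and the +1/-1 decoding convention are illustrative and do not reproduce the NRL processing chain:

    ```python
    # Sketch: coded-aperture decoding by cross-correlating detector counts with a
    # decoding array built from the mask pattern. All arrays are illustrative.
    import numpy as np
    from scipy.signal import correlate2d

    mask = np.random.randint(0, 2, (12, 12))      # pseudorandom aperture (1 = open element)
    counts = np.random.poisson(5.0, (12, 12))     # neutron counts behind the mask
    decoder = 2 * mask - 1                        # simple +1/-1 decoding convention
    source_image = correlate2d(counts, decoder, mode="same", boundary="wrap")
    peak = np.unravel_index(np.argmax(source_image), source_image.shape)
    print("estimated source pixel:", peak)
    ```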

  13. Accurate positioning for head and neck cancer patients using 2D and 3D image guidance

    PubMed Central

    Kang, Hyejoo; Lovelock, Dale M.; Yorke, Ellen D.; Kriminiski, Sergey; Lee, Nancy; Amols, Howard I.

    2011-01-01

    Our goal is to determine an optimized image-guided setup by comparing setup errors determined by two-dimensional (2D) and three-dimensional (3D) image guidance for head and neck cancer (HNC) patients immobilized by customized thermoplastic masks. Nine patients received weekly imaging sessions, for a total of 54, throughout treatment. Patients were first set up by matching lasers to surface marks (initial) and then translationally corrected using manual registration of orthogonal kilovoltage (kV) radiographs with DRRs (2D-2D) on bony anatomy. A kV cone beam CT (kVCBCT) was acquired and manually registered to the simulation CT using only translations (3D-3D) on the same bony anatomy to determine further translational corrections. After treatment, a second set of kVCBCT was acquired to assess intrafractional motion. Averaged over all sessions, 2D-2D registration led to translational corrections from initial setup of 3.5 ± 2.2 (range 0–8) mm. The addition of 3D-3D registration resulted in only small incremental adjustment (0.8 ± 1.5 mm). We retrospectively calculated patient setup rotation errors using an automatic rigid-body algorithm with 6 degrees of freedom (DoF) on regions of interest (ROI) of in-field bony anatomy (mainly the C2 vertebral body). Small rotations were determined for most of the imaging sessions; however, occasionally rotations > 3° were observed. The calculated intrafractional motion with automatic registration was < 3.5 mm for eight patients, and < 2° for all patients. We conclude that daily manual 2D-2D registration on radiographs reduces positioning errors for mask-immobilized HNC patients in most cases, and is easily implemented. 3D-3D registration adds little improvement over 2D-2D registration without correcting rotational errors. We also conclude that thermoplastic masks are effective for patient immobilization. PMID:21330971

  14. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require large spaces and entail high costs. On the other hand, a low-cost, small-size 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  15. Methods for 2-D and 3-D Endobronchial Ultrasound Image Segmentation.

    PubMed

    Zang, Xiaonan; Bascom, Rebecca; Gilbert, Christopher; Toth, Jennifer; Higgins, William

    2016-07-01

    Endobronchial ultrasound (EBUS) is now commonly used for cancer-staging bronchoscopy. Unfortunately, EBUS is challenging to use and interpreting EBUS video sequences is difficult. Other ultrasound imaging domains, hampered by related difficulties, have benefited from computer-based image-segmentation methods. Yet, so far, no such methods have been proposed for EBUS. We propose image-segmentation methods for 2-D EBUS frames and 3-D EBUS sequences. Our 2-D method adapts the fast-marching level-set process, anisotropic diffusion, and region growing to the problem of segmenting 2-D EBUS frames. Our 3-D method builds upon the 2-D method while also incorporating the geodesic level-set process for segmenting EBUS sequences. Tests with lung-cancer patient data showed that the methods ran fully automatically for nearly 80% of test cases. For the remaining cases, the only user interaction required was the selection of a seed point. When compared to ground-truth segmentations, the 2-D method achieved an overall Dice index of 90.0% ± 4.9%, while the 3-D method achieved an overall Dice index of 83.9% ± 6.0%. In addition, the computation time (2-D, 0.070 s/frame; 3-D, 0.088 s/frame) was two orders of magnitude faster than interactive contour definition. Finally, we demonstrate the potential of the methods for EBUS localization in a multimodal image-guided bronchoscopy system.
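
    To make the ingredients concrete, the fragment below sketches one plausible combination of anisotropic (Perona-Malik) diffusion followed by tolerance-based region growing from a seed point; the parameter values and the scikit-image call are illustrative and are not the authors' implementation:

    ```python
    # Sketch: anisotropic diffusion followed by region growing on a 2D ultrasound frame.
    # `frame` is a 2D float array scaled to [0, 1]; `seed` is a (row, col) tuple.
    import numpy as np
    from skimage.segmentation import flood

    def perona_malik(img, n_iter=20, kappa=0.1, gamma=0.2):
        u = img.astype(float).copy()
        for _ in range(n_iter):
            dn = np.roll(u, -1, axis=0) - u           # differences to the four neighbours
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
            u += gamma * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    smoothed = perona_malik(frame)
    region = flood(smoothed, seed, tolerance=0.05)    # boolean mask grown from the seed
    ```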

  16. Image coding with geometric wavelets.

    PubMed

    Alani, Dror; Averbuch, Amir; Dekel, Shai

    2007-01-01

    This paper describes a new and efficient method for low bit-rate image coding which is based on recent developments in the theory of multivariate nonlinear piecewise polynomial approximation. It combines a binary space partition scheme with geometric wavelet (GW) tree approximation so as to efficiently capture curve singularities and provide a sparse representation of the image. The GW method successfully competes with state-of-the-art wavelet methods such as the EZW, SPIHT, and EBCOT algorithms. We report a gain of about 0.4 dB over the SPIHT and EBCOT algorithms at a bit rate of 0.0625 bits per pixel (bpp). It also outperforms other recent methods that are based on "sparse geometric representation." For example, we report a gain of 0.27 dB over the Bandelets algorithm at 0.1 bpp. Although the algorithm is computationally intensive, its time complexity can be significantly reduced by collecting a "global" GW n-term approximation to the image from a collection of GW trees, each constructed separately over tiles of the image.

  17. Reproducing 2D breast mammography images with 3D printed phantoms

    NASA Astrophysics Data System (ADS)

    Clark, Matthew; Ghammraoui, Bahaa; Badal, Andreu

    2016-03-01

    Mammography is currently the standard imaging modality used to screen women for breast abnormalities and, as a result, it is a tool of great importance for the early detection of breast cancer. Physical phantoms are commonly used as surrogates of breast tissue to evaluate some aspects of the performance of mammography systems. However, most phantoms do not reproduce the anatomic heterogeneity of real breasts. New fabrication technologies, such as 3D printing, have created the opportunity to build more complex, anatomically realistic breast phantoms that could potentially assist in the evaluation of mammography systems. The primary objective of this work is to present a simple, easily reproducible methodology to design and print 3D objects that replicate the attenuation profile observed in real 2D mammograms. The secondary objective is to evaluate the capabilities and limitations of the competing 3D printing technologies, and characterize the x-ray properties of the different materials they use. Printable phantoms can be created using the open-source code introduced in this work, which processes a raw mammography image to estimate the amount of x-ray attenuation at each pixel, and outputs a triangle mesh object that encodes the observed attenuation map. The conversion from the observed pixel gray value to a column of printed material with equivalent attenuation requires certain assumptions and knowledge of multiple imaging system parameters, such as x-ray energy spectrum, source-to-object distance, compressed breast thickness, and average breast material attenuation. A detailed description of the new software, a characterization of the printed materials using x-ray spectroscopy, and an evaluation of the realism of the sample printed phantoms are presented.
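
    The conversion from an observed pixel value to an equivalent column of printed material can be illustrated with a single-energy Beer-Lambert approximation; the flat-field value and the attenuation coefficient of the printing material below are placeholders, and the open-source code mentioned in the abstract is not reproduced here:

    ```python
    # Sketch: map mammogram pixel values to thicknesses of printing material with
    # equivalent attenuation, using a monoenergetic Beer-Lambert approximation.
    # I0 (unattenuated signal) and mu_print (1/cm) are illustrative placeholders.
    import numpy as np

    def equivalent_thickness(pixel_values, I0=4095.0, mu_print=0.5):
        transmission = np.clip(pixel_values / I0, 1e-6, 1.0)
        return -np.log(transmission) / mu_print       # thickness (cm) of printed material

    heights = equivalent_thickness(np.array([3000.0, 1500.0, 400.0]))
    ```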

  18. Comparison between 1D and 1 1/2D Eulerian Vlasov codes for the numerical simulation of stimulated Raman scattering

    NASA Astrophysics Data System (ADS)

    Ghizzo, A.; Bertrand, P.; Lebas, J.; Shoucri, M.; Johnston, T.; Fijalkow, E.; Feix, M. R.

    1992-10-01

    The present 1 1/2D relativistic Euler-Vlasov code has been used to check the validity of a hydrodynamic description used in a 1D version of the Vlasov code. By these means, detailed numerical results can be compared; good agreement furnishes full support for the 1D electromagnetic Vlasov code, which runs faster than the 1 1/2D code. The results obtained assume a nonrelativistic v(y) velocity.

  19. A Block-matching based technique for the analysis of 2D gel images.

    PubMed

    Freire, Ana; Seoane, José A; Rodríguez, Alvaro; Ruiz-Romero, Cristina; López-Campos, Guillermo; Dorado, Julián

    2010-01-01

    Research at the protein level is a useful practice in personalized medicine. More specifically, 2D gel images obtained after the electrophoresis process can lead to an accurate diagnosis. Several computational approaches try to help clinicians establish the correspondence between pairs of proteins across multiple 2D gel images. Most of them perform the alignment of a patient image with respect to a reference image. In this work, an approach based on block-matching techniques is developed. Its main characteristic is that it does not need to perform a whole-image alignment between the two images, since it considers each protein separately. A comparison with other published methods is presented. It can be concluded that this method works over a broad range of proteomic images, even when they have a high level of difficulty.
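
    A minimal sketch of the block-matching idea (matching a small block around one protein spot in the patient gel against a search window in the reference gel by normalized cross-correlation) is given below; the block and window sizes and the scikit-image call are illustrative, not the published method:

    ```python
    # Sketch: locate one protein spot of the patient gel inside a search window of the
    # reference gel with normalized cross-correlation. Sizes are illustrative only.
    import numpy as np
    from skimage.feature import match_template

    def match_spot(patient, reference, spot_rc, block=15, search=61):
        r, c = spot_rc
        b, s = block // 2, search // 2
        template = patient[r - b:r + b + 1, c - b:c + b + 1]
        window = reference[r - s:r + s + 1, c - s:c + s + 1]
        score = match_template(window, template)          # NCC over all block positions
        dr, dc = np.unravel_index(np.argmax(score), score.shape)
        # displacement of the spot centre between the two gels (row, col)
        return (dr + b) - s, (dc + b) - s
    ```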

  20. Document image retrieval through word shape coding.

    PubMed

    Lu, Shijian; Li, Linlin; Tan, Chew Lim

    2008-11-01

    This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.

  1. Numerical simulations of hydrodynamic instabilities: Perturbation codes PANSY, PERLE, and 2D code CHIC applied to a realistic LIL target

    NASA Astrophysics Data System (ADS)

    Hallo, L.; Olazabal-Loumé, M.; Maire, P. H.; Breil, J.; Morse, R.-L.; Schurtz, G.

    2006-06-01

    This paper deals with simulations of ablation front instabilities in the context of direct-drive ICF. A simplified DT target, representative of a realistic target on LIL, is considered. We describe two numerical approaches: the linear perturbation method, using the perturbation codes Perle (planar) and Pansy (spherical), and the direct simulation method, using our two-dimensional hydrodynamic code Chic. Numerical solutions are shown to converge, in good agreement with analytical models.

  2. Using artificial neural networks to invert 2D DC resistivity imaging data for high resistivity contrast regions: A MATLAB application

    NASA Astrophysics Data System (ADS)

    Neyamadpour, Ahmad; Taib, Samsudin; Wan Abdullah, W. A. T.

    2009-11-01

    MATLAB is a high-level matrix/array language with control flow statements and functions. MATLAB has several useful toolboxes to solve complex problems in various fields of science, such as geophysics. In geophysics, the inversion of 2D DC resistivity imaging data is complex due to its non-linearity, especially for high resistivity contrast regions. In this paper, we investigate the applicability of MATLAB to design, train and test a newly developed artificial neural network in inverting 2D DC resistivity imaging data. We used resilient propagation to train the network. The model used to produce synthetic data is a homogeneous medium of 100 Ω m resistivity with an embedded anomalous body of 1000 Ω m. The location of the anomalous body was moved to different positions within the homogeneous model mesh elements. The synthetic data were generated using a finite element forward modeling code by means of the RES2DMOD. The network was trained using 21 datasets and tested on another 16 synthetic datasets, as well as on real field data. In field data acquisition, the cable covers 120 m between the first and the last take-out, with a 3 m x-spacing. Three different electrode spacings were measured, which gave a dataset of 330 data points. The interpreted result shows that the trained network was able to invert 2D electrical resistivity imaging data obtained by a Wenner-Schlumberger configuration rapidly and accurately.

  3. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    PubMed Central

    Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes. PMID:27019849
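
    In hedged generic notation (the symbols below are illustrative, not the authors'), the objective described here combines a data term over the 2D projections with a regularizer on the displacement vector field defined at the mesh vertices:

    ```latex
    % Generic form of the 3D-2D registration objective sketched in the abstract:
    % I_CT is the planning CT, DVF the displacement field, P_j the measured projections,
    % DRR_j the projection operator for view j, and lambda a regularization weight.
    E(\mathrm{DVF}) \;=\; \sum_{j}\big\|\,\mathrm{DRR}_{j}\!\big(I_{\mathrm{CT}}\circ\mathrm{DVF}\big) - P_{j}\big\|_{2}^{2}
      \;+\; \lambda\,\mathcal{R}(\mathrm{DVF}).
    ```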

  4. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    PubMed

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  5. 3D surface reconstruction of apples from 2D NIR images

    NASA Astrophysics Data System (ADS)

    Zhu, Bin; Jiang, Lu; Cheng, Xuemei; Tao, Yang

    2005-11-01

    Machine vision methods are widely used in apple defect detection and quality grading applications. Currently, 2D near-infrared (NIR) imaging of apples is often used to detect apple defects because the image intensity of defects is different from normal apple parts. However, a drawback of this method is that the apple calyx also exhibits similar image intensity to the apple defects. Since an apple calyx often appears in the NIR image, the false alarm rate is high with the 2D NIR imaging method. In this paper, a 2D NIR imaging method is extended to a 3D reconstruction so that the apple calyx can be differentiated from apple defects according to their different 3D depth information. The Lambertian model is used to evaluate the reflectance map of the apple surface, and then Pentland's Shape-From-Shading (SFS) method is applied to reconstruct the 3D surface information of the apple based on Fast Fourier Transform (FFT). Pentland's method is directly derived from human perception properties, making it close to the way human eyes recover 3D information from a 2D scene. In addition, the FFT reduces the computation time significantly. The reconstructed 3D apple surface maps are shown in the results, and different depths of apple calyx and defects are obtained correctly.
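
    For reference, the Lambertian reflectance map underlying this kind of shape-from-shading, written in standard gradient-space notation (an assumption of this summary rather than a quotation from the paper), is:

    ```latex
    % Lambertian reflectance map in gradient space: p = dZ/dx, q = dZ/dy,
    % (p_s, q_s) the light-source direction in gradient space, rho the surface albedo.
    R(p,q) \;=\; \rho\,\frac{1 + p\,p_{s} + q\,q_{s}}
      {\sqrt{1+p^{2}+q^{2}}\;\sqrt{1+p_{s}^{2}+q_{s}^{2}}}
    ```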

  6. Efficiency of a model human image code

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1987-01-01

    Hypothetical schemes for neural representation of visual information can be expressed as explicit image codes. Here, a code modeled on the simple cells of the primate striate cortex is explored. The Cortex transform maps a digital image into a set of subimages (layers) that are bandpass in spatial frequency and orientation. The layers are sampled so as to minimize the number of samples and still avoid aliasing. Samples are quantized in a manner that exploits the bandpass contrast-masking properties of human vision. The entropy of the samples is computed to provide a lower bound on the code size. Finally, the image is reconstructed from the code. Psychophysical methods are derived for comparing the original and reconstructed images to evaluate the sufficiency of the code. When each resolution is coded at the threshold for detection of artifacts, the image-code size is about 1 bit/pixel.
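
    The entropy lower bound mentioned here is simply the Shannon entropy of the quantized-sample histogram; a small sketch (the quantized array is a placeholder) is:

    ```python
    # Sketch: entropy (bits per sample) of quantized transform samples, used as a
    # lower bound on code size. `quantized` is a placeholder integer array.
    import numpy as np

    def entropy_bits_per_sample(quantized):
        _, counts = np.unique(quantized, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())
    ```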

  7. 2-D nonlinear IIR-filters for image processing - An exploratory analysis

    NASA Technical Reports Server (NTRS)

    Bauer, P. H.; Sartori, M.

    1991-01-01

    A new nonlinear IIR filter structure is introduced and its deterministic properties are analyzed. It is shown to be better suited for image processing applications than its linear shift-invariant counterpart. The new structure is obtained from causality inversion of a 2D quarterplane causal linear filter with respect to the two directions of propagation. It is demonstrated that, by using this design, a nonlinear 2D lowpass filter can be constructed which is capable of effectively suppressing Gaussian or impulse noise without destroying important image information.

  8. Simulation of 2D Kinetic Effects in Plasmas using the Grid Based Continuum Code LOKI

    NASA Astrophysics Data System (ADS)

    Banks, Jeffrey; Berger, Richard; Chapman, Tom; Brunner, Stephan

    2016-10-01

    Kinetic simulation of multi-dimensional plasma waves through direct discretization of the Vlasov equation is a useful tool to study many physical interactions and is particularly attractive for situations where minimal fluctuation levels are desired, for instance, when measuring growth rates of plasma wave instabilities. However, direct discretization of phase space can be computationally expensive, and as a result there are few examples of published results using Vlasov codes in more than a single configuration space dimension. In an effort to fill this gap we have developed the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The code is designed to reduce the cost of phase-space computation by using fully 4th order accurate conservative finite differencing, while retaining excellent parallel scalability that efficiently uses large scale computing resources. In this poster I will discuss the algorithms used in the code as well as some aspects of their parallel implementation using MPI. I will also overview simulation results of basic plasma wave instabilities relevant to laser plasma interaction, which have been obtained using the code.
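
    For context, the electrostatic Vlasov-Poisson system evolved by such grid-based continuum codes can be written, in generic normalized form (the notation is an assumption, not taken from the abstract), as:

    ```latex
    % Electrostatic Vlasov-Poisson system on a 2+2-dimensional phase space (x, v)
    \frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
      + \frac{q}{m}\,\mathbf{E}\cdot\nabla_{\mathbf{v}} f = 0,
    \qquad
    \mathbf{E} = -\nabla_{\mathbf{x}}\phi,
    \qquad
    \nabla_{\mathbf{x}}^{2}\phi = -\frac{\rho}{\varepsilon_{0}}.
    ```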

  9. Nanohole-array-based device for 2D snapshot multispectral imaging.

    PubMed

    Najiminaini, Mohamadreza; Vasefi, Fartash; Kaminska, Bozena; Carson, Jeffrey J L

    2013-01-01

    We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. A NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems.
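
    The least-squares unmixing step amounts to a small linear inverse problem solved per pixel; the calibration matrix and band count in the sketch below are placeholders, not the device's measured responses:

    ```python
    # Sketch: least-squares spectral unmixing of NHA transmission measurements.
    # A (blocks x bands) holds the calibrated response of each NHA block in each
    # spectral band; y holds the signal measured from the blocks at one image pixel.
    import numpy as np

    A = np.random.rand(8, 4)                               # placeholder calibration matrix
    true_bands = np.array([0.2, 0.5, 0.1, 0.9])
    y = A @ true_bands + 0.01 * np.random.randn(8)         # simulated measurement
    band_abundances, *_ = np.linalg.lstsq(A, y, rcond=None)
    ```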

  10. Reliability of astrophysical jet simulations in 2D. On inter-code reliability and numerical convergence

    NASA Astrophysics Data System (ADS)

    Krause, M.; Camenzind, M.

    2001-12-01

    In the present paper, we examine the convergence behavior and inter-code reliability of astrophysical jet simulations in axial symmetry. We consider both pure hydrodynamic jets and jets with a dynamically significant magnetic field. The setups were chosen to match the setups of two other publications, and recomputed with the MHD code NIRVANA. We show that NIRVANA and the two other codes give comparable, but not identical results. We explain the differences by the different application of artificial viscosity in the three codes and numerical details, which can be summarized in a resolution effect, in the case without magnetic field: NIRVANA turns out to be a fair code of medium efficiency. It needs approximately twice the resolution as the code by Lind (Lind et al. 1989) and half the resolution as the code by Kössl (Kössl & Müller 1988). We find that some global properties of a hydrodynamical jet simulation, like e.g. the bow shock velocity, converge at 100 points per beam radius (ppb) with NIRVANA. The situation is quite different after switching on the toroidal magnetic field: in this case, global properties converge even at 10 ppb. In both cases, details of the inner jet structure and especially the terminal shock region are still insufficiently resolved, even at our highest resolution of 70 ppb in the magnetized case and 400 ppb for the pure hydrodynamic jet. The magnetized jet even suffers from a fatal retreat of the Mach disk towards the inflow boundary, which indicates that this simulation does not converge, in the end. This is also in definite disagreement with earlier simulations, and challenges further studies of the problem with other codes. In the case of our highest resolution simulation, we can report two new features: first, small scale Kelvin-Helmholtz instabilities are excited at the contact discontinuity next to the jet head. This slows down the development of the long wavelength Kelvin-Helmholtz instability and its turbulent cascade to smaller

  11. 3D reconstruction of a carotid bifurcation from 2D transversal ultrasound images.

    PubMed

    Yeom, Eunseop; Nam, Kweon-Ho; Jin, Changzhu; Paeng, Dong-Guk; Lee, Sang-Joon

    2014-12-01

    Visualizing and analyzing the morphological structure of carotid bifurcations are important for understanding the etiology of carotid atherosclerosis, which is a major cause of stroke and transient ischemic attack. For delineation of vasculatures in the carotid artery, ultrasound examinations have been widely employed because they are noninvasive and involve no ionizing radiation. However, conventional 2D ultrasound imaging has technical limitations in observing the complicated 3D shapes and asymmetric vasodilation of bifurcations. This study proposes image-processing techniques for better 3D reconstruction of a carotid bifurcation in a rat by using 2D cross-sectional ultrasound images. A high-resolution ultrasound imaging system with a probe centered at 40 MHz was employed to obtain 2D transversal images. The lumen boundaries in each transverse ultrasound image were detected by using three different techniques: ellipse fitting, correlation mapping to visualize the decorrelation of blood flow, and ellipse fitting on the correlation map. When the results are compared, the third technique provides relatively good boundary extraction. The incomplete boundaries of the arterial lumen caused by acoustic artifacts are somewhat resolved by adopting the correlation mapping, and the distortion in boundary detection near the bifurcation apex is largely reduced by using the ellipse-fitting technique. The 3D lumen geometry of a carotid artery was obtained by volumetric rendering of several 2D slices. For the 3D vasodilatation of the carotid bifurcation, lumen geometries in the contraction and expansion states were simultaneously depicted at various view angles. The present 3D reconstruction methods would be useful for efficient extraction and construction of the 3D lumen geometries of carotid bifurcations from 2D ultrasound images.
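
    The ellipse-fitting ingredient can be illustrated with scikit-image's EllipseModel applied to candidate boundary points; the point set below is synthetic and the call only sketches the idea, not the authors' code:

    ```python
    # Sketch: fit an ellipse to candidate lumen-boundary points from one transverse frame.
    # `boundary_pts` is an (N, 2) array of (x, y) points, synthesized here for illustration.
    import numpy as np
    from skimage.measure import EllipseModel

    t = np.linspace(0, 2 * np.pi, 100)
    boundary_pts = np.column_stack([10 + 4 * np.cos(t), 20 + 3 * np.sin(t)])
    boundary_pts += 0.1 * np.random.randn(*boundary_pts.shape)   # measurement noise

    model = EllipseModel()
    if model.estimate(boundary_pts):
        xc, yc, a, b, theta = model.params     # centre, semi-axes, orientation of the lumen
    ```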

  12. TOPAZ - a finite element heat conduction code for analyzing 2-D solids

    SciTech Connect

    Shapiro, A.B.

    1984-03-01

    TOPAZ is a two-dimensional implicit finite element computer code for heat conduction analysis. This report provides a user's manual for TOPAZ and a description of the numerical algorithms used. Sample problems with analytical solutions are presented. TOPAZ has been implemented on the CRAY and VAX computers.

  13. 2D electron temperature diagnostic using soft x-ray imaging technique

    SciTech Connect

    Nishimura, K.; Sanpei, A.; Tanaka, H.; Ishii, G.; Kodera, R.; Ueba, R.; Himura, H.; Masamune, S.; Ohdachi, S.; Mizuguchi, N.

    2014-03-15

    We have developed a two-dimensional (2D) electron temperature (Te) diagnostic system for thermal structure studies in a low-aspect-ratio reversed field pinch (RFP). The system consists of a soft x-ray (SXR) camera with two pinholes for two kinds of absorber foils, combined with a high-speed camera. Two SXR images with almost the same viewing area are formed through different absorber foils on a single micro-channel plate (MCP). A 2D Te image can then be obtained by calculating the intensity ratio for each element of the images. We have succeeded in distinguishing the Te image in the quasi-single helicity (QSH) RFP state from that in the multi-helicity (MH) state, where the former is characterized by a concentrated magnetic fluctuation spectrum and the latter by a broad spectrum of edge magnetic fluctuations.
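
    The two-foil ratio technique relies on a calibration curve relating the measured intensity ratio to Te; a hedged sketch of that per-pixel lookup is shown below (the calibration table is a made-up monotonic placeholder, not the instrument's actual response):

    ```python
    # Sketch: convert a pixel-wise two-foil SXR intensity ratio into an electron
    # temperature map via a precomputed calibration curve. The calibration arrays
    # are illustrative placeholders, not the diagnostic's real response.
    import numpy as np

    ratio_cal = np.array([0.05, 0.10, 0.20, 0.35, 0.50])    # thick-foil / thin-foil ratio
    te_cal = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # corresponding Te in eV

    def te_map(image_thick, image_thin):
        ratio = image_thick / np.clip(image_thin, 1e-6, None)
        return np.interp(ratio, ratio_cal, te_cal)          # requires a monotonic calibration
    ```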

  14. Model-based 3D/2D deformable registration of MR images.

    PubMed

    Marami, Bahram; Sirouspour, Shahin; Capson, David W

    2011-01-01

    A method is proposed for automatic registration of 3D preoperative magnetic resonance images of deformable tissue to a sequence of its 2D intraoperative images. The algorithm employs a dynamic continuum mechanics model of the deformation and similarity (distance) measures such as correlation ratio, mutual information or sum of squared differences for registration. The registration is solely based on information present in the 3D preoperative and 2D intraoperative images and does not require fiducial markers, feature extraction or image segmentation. Results of experiments with a biopsy training breast phantom show that the proposed method can perform well in the presence of large deformations. This is particularly useful for clinical applications such as MR-based breast biopsy where large tissue deformations occur.

  15. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2004-12-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques and on the object state of conservation. However, only when the various images are perfectly registered on each other and on the 3D model, no ambiguity is possible and safe conclusions may be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este", by Pisanello, both painted in the XV century.

  16. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2005-01-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques and on the object state of conservation. However, only when the various images are perfectly registered on each other and on the 3D model, no ambiguity is possible and safe conclusions may be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este", by Pisanello, both painted in the XV century.

  17. Real-time 2D Imaging of Thermal and Mechanical Tissue Response to Focused Ultrasound

    NASA Astrophysics Data System (ADS)

    Liu, Dalong; Ebbini, Emad S.

    2010-03-01

    An integrated system capable of performing high frame-rate two-dimensional (2D) temperature imaging in real time has been developed. The system consists of a SonixRP ultrasound scanner and a custom-built data processing unit connected by Gigabit Ethernet (GbE). The SonixRP scanner, which serves as the front end of the integrated system, provides the flexibility to control the beam sequence and access the radio frequency (RF) data in real time through its research interface. The RF data are then streamed to the back end of the system over GbE, where they are processed using a 2D temperature estimation algorithm running on a general-purpose graphics processing unit (GPU). Using this system, we have developed a 2D high frame-rate imaging mode, M2D, for imaging the mechanical and thermal tissue response to subtherapeutic HIFU beams. In this paper, we present results from imaging subtherapeutic HIFU beams in in vitro porcine heart before and after lesion formation. The results demonstrate the feasibility of imaging tissue parameter changes due to HIFU-induced lesions.

  18. ZEUS-2D: A radiation magnetohydrodynamics code for astrophysical flows in two space dimensions. I - The hydrodynamic algorithms and tests.

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    A detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows including a self-consistent treatment of the effects of magnetic fields and radiation transfer is presented. Attention is given to the hydrodynamic (HD) algorithms which form the foundation for the more complex MHD and radiation HD algorithms. The effect of self-gravity on the flow dynamics is accounted for by an iterative solution of the sparse-banded matrix resulting from discretizing the Poisson equation in multidimensions. The results of an extensive series of HD test problems are presented. A detailed description of the MHD algorithms in ZEUS-2D is presented. A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-constrained transport method provides for the accurate evolution of all modes of MHD wave families.

  19. Combining 2D synchrosqueezed wave packet transform with optimization for crystal image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Wirth, Benedikt; Yang, Haizhao

    2016-04-01

    We develop a variational optimization method for crystal analysis in atomic resolution images, which uses information from a 2D synchrosqueezed transform (SST) as input. The synchrosqueezed transform is applied to extract initial information from atomic crystal images: crystal defects, rotations and the gradient of elastic deformation. The deformation gradient estimate is then improved outside the identified defect region via a variational approach, to obtain more robust results agreeing better with the physical constraints. The variational model is optimized by a nonlinear projected conjugate gradient method. Both examples of images from computer simulations and imaging experiments are analyzed, with results demonstrating the effectiveness of the proposed method.

  20. Scalable still image coding based on wavelet

    NASA Astrophysics Data System (ADS)

    Yan, Yang; Zhang, Zhengbing

    2005-02-01

    Scalable image coding is an important objective of future image coding technologies. In this paper, we present a scalable image coding scheme based on the wavelet transform. The method uses the well-known EZW (Embedded Zerotree Wavelet) algorithm: a high-quality encoding is given to the ROI (region of interest) of the original image and a coarse encoding to the rest. The method is well suited to limited-memory conditions, since the background region can be encoded according to the available memory capacity. In this way, the encoded image can be stored in a limited memory space without losing its main information. Simulation results show that the approach is effective.
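
    One simple way to realize the ROI/background quality split on top of a wavelet coder is to attenuate coefficients outside the ROI before embedded coding, which effectively quantizes the background more coarsely; the sketch below (Haar wavelet, mask resampling, and scaling factor are illustrative choices, not the paper's scheme) shows the idea:

    ```python
    # Sketch: emphasize an ROI in the wavelet domain by attenuating background
    # coefficients before an embedded coder such as EZW is applied. Illustrative only.
    import numpy as np
    import pywt
    from skimage.transform import resize

    def roi_weighted_coeffs(image, roi_mask, background_scale=0.25, wavelet="haar", level=3):
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        out = [coeffs[0]]                                        # approximation band kept as-is
        for (cH, cV, cD) in coeffs[1:]:
            m = resize(roi_mask.astype(float), cH.shape) > 0.5   # ROI mask at this scale
            w = np.where(m, 1.0, background_scale)
            out.append((cH * w, cV * w, cD * w))
        return out   # pass to the embedded coder
    ```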

  1. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.

  2. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, D.; Kwatra, S. C.

    1992-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.

  3. A comparison of 2D and 3D digital image correlation for a membrane under inflation

    PubMed Central

    Murienne, Barbara J.; Nguyen, Thao D.

    2015-01-01

    Three-dimensional (3D) digital image correlation (DIC) is becoming widely used to characterize the behavior of structures undergoing 3D deformations. However, the use of 3D-DIC can be challenging under certain conditions, such as high magnification, and therefore small depth of field, or a highly controlled environment with limited access for two-angled cameras. The purpose of this study is to compare 2D-DIC and 3D-DIC for the same inflation experiment and evaluate whether 2D-DIC can be used when conditions discourage the use of a stereo-vision system. A latex membrane was inflated vertically to 5.41 kPa (reference pressure), then to 7.87 kPa (deformed pressure). A two-camera stereo-vision system acquired top-down images of the membrane, while a single camera system simultaneously recorded images of the membrane in profile. 2D-DIC and 3D-DIC were used to calculate horizontal (in the membrane plane) and vertical (out of the membrane plane) displacements, and meridional strain. Under static conditions, the baseline uncertainties in horizontal displacement and strain were smaller for 3D-DIC than 2D-DIC. However, the opposite was observed for the vertical displacement, for which 2D-DIC had a smaller baseline uncertainty. The baseline absolute errors in vertical displacement and strain were similar for both DIC methods, but the absolute error in horizontal displacement was larger for 2D-DIC than 3D-DIC. Under inflation, the variability in the measurements was larger than under static conditions for both DIC methods. 2D-DIC showed a smaller variability in displacements than 3D-DIC, especially for the vertical displacement, but a similar strain uncertainty. The absolute differences in the average displacements and strain between 3D-DIC and 2D-DIC were in the range of the 3D-DIC variability. Those findings suggest that 2D-DIC might be used as an alternative to 3D-DIC to study the inflation response of materials under certain conditions. PMID:26543296

  4. A comparison of 2D and 3D digital image correlation for a membrane under inflation.

    PubMed

    Murienne, Barbara J; Nguyen, Thao D

    2016-02-01

    Three-dimensional (3D) digital image correlation (DIC) is becoming widely used to characterize the behavior of structures undergoing 3D deformations. However, the use of 3D-DIC can be challenging under certain conditions, such as high magnification, and therefore small depth of field, or a highly controlled environment with limited access for two-angled cameras. The purpose of this study is to compare 2D-DIC and 3D-DIC for the same inflation experiment and evaluate whether 2D-DIC can be used when conditions discourage the use of a stereo-vision system. A latex membrane was inflated vertically to 5.41 kPa (reference pressure), then to 7.87 kPa (deformed pressure). A two-camera stereo-vision system acquired top-down images of the membrane, while a single camera system simultaneously recorded images of the membrane in profile. 2D-DIC and 3D-DIC were used to calculate horizontal (in the membrane plane) and vertical (out of the membrane plane) displacements, and meridional strain. Under static conditions, the baseline uncertainties in horizontal displacement and strain were smaller for 3D-DIC than 2D-DIC. However, the opposite was observed for the vertical displacement, for which 2D-DIC had a smaller baseline uncertainty. The baseline absolute errors in vertical displacement and strain were similar for both DIC methods, but the absolute error in horizontal displacement was larger for 2D-DIC than 3D-DIC. Under inflation, the variability in the measurements was larger than under static conditions for both DIC methods. 2D-DIC showed a smaller variability in displacements than 3D-DIC, especially for the vertical displacement, but a similar strain uncertainty. The absolute differences in the average displacements and strain between 3D-DIC and 2D-DIC were in the range of the 3D-DIC variability. Those findings suggest that 2D-DIC might be used as an alternative to 3D-DIC to study the inflation response of materials under certain conditions.

  5. A comparison of 2D and 3D digital image correlation for a membrane under inflation

    NASA Astrophysics Data System (ADS)

    Murienne, Barbara J.; Nguyen, Thao D.

    2016-02-01

    Three-dimensional (3D) digital image correlation (DIC) is becoming widely used to characterize the behavior of structures undergoing 3D deformations. However, the use of 3D-DIC can be challenging under certain conditions, such as high magnification, and therefore small depth of field, or a highly controlled environment with limited access for two-angled cameras. The purpose of this study is to compare 2D-DIC and 3D-DIC for the same inflation experiment and evaluate whether 2D-DIC can be used when conditions discourage the use of a stereo-vision system. A latex membrane was inflated vertically to 5.41 kPa (reference pressure), then to 7.87 kPa (deformed pressure). A two-camera stereo-vision system acquired top-down images of the membrane, while a single camera system simultaneously recorded images of the membrane in profile. 2D-DIC and 3D-DIC were used to calculate horizontal (in the membrane plane) and vertical (out of the membrane plane) displacements, and meridional strain. Under static conditions, the baseline uncertainties in horizontal displacement and strain were smaller for 3D-DIC than 2D-DIC. However, the opposite was observed for the vertical displacement, for which 2D-DIC had a smaller baseline uncertainty. The baseline absolute errors in vertical displacement and strain were similar for both DIC methods, but the absolute error in horizontal displacement was larger for 2D-DIC than 3D-DIC. Under inflation, the variability in the measurements was larger than under static conditions for both DIC methods. 2D-DIC showed a smaller variability in displacements than 3D-DIC, especially for the vertical displacement, but a similar strain uncertainty. The absolute differences in the average displacements and strain between 3D-DIC and 2D-DIC were in the range of the 3D-DIC variability. Those findings suggest that 2D-DIC might be used as an alternative to 3D-DIC to study the inflation response of materials under certain conditions.

  6. Novel security enhancement technique against eavesdropper for OCDMA system using 2-D modulation format with code switching scheme

    NASA Astrophysics Data System (ADS)

    Singh, Simranjit; Kaur, Ramandeep; Singh, Amanvir; Kaler, R. S.

    2015-03-01

    In this paper, security of the spectrally encoded optical code division multiple access (OCDMA) system is enhanced by using a 2-D (orthogonal) modulation technique. This is an effective approach for simultaneously improving system capacity and security. The results show that the hybrid modulation technique is a better option for enhancing data confidentiality at higher data rates with minimal bandwidth utilization in a multiuser environment. Further, the proposed system performance is compared with the current state-of-the-art OCDMA schemes.

  7. A 2D Benchmark for the Verification of the PEBBED Code

    SciTech Connect

    Barry D. Ganapol; Hans A. Gougar; A. O. Ougouag

    2008-09-01

    A new benchmarking concept is presented for verifying the PEBBED 3D multigroup finite difference/nodal diffusion code with application to pebble bed modular reactors (PBMRs). The key idea is to perform convergence acceleration, also called extrapolation to zero discretization, of a basic finite difference numerical algorithm to give extremely high accuracy. The method is first demonstrated on a 1D cylindrical shell and then on an r-θ wedge, where the order of the second-order finite difference scheme is confirmed to four places.
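
    Extrapolation to zero discretization is essentially Richardson extrapolation over a sequence of refined meshes; a generic sketch (the grid-halving factor and the second-order assumption reflect the scheme's nominal order, not the report's exact procedure) is:

    ```python
    # Sketch: Richardson extrapolation of a grid-converged quantity (e.g. k-eff)
    # computed on meshes with spacing h and h/2, assuming a scheme of order p.
    def richardson_extrapolate(f_h, f_h2, p=2, refinement=2.0):
        return f_h2 + (f_h2 - f_h) / (refinement ** p - 1.0)

    # usage: k_extrapolated = richardson_extrapolate(k_coarse, k_fine)
    ```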

  8. The Ultrasonic Measurement of Crystallographic Orientation for Imaging Anisotropic Components with 2d Arrays

    NASA Astrophysics Data System (ADS)

    Lane, C. J. L.; Dunhill, A. K.; Drinkwater, B. W.; Wilcox, P. D.

    2011-06-01

    Single crystal components are used widely in the gas-turbine industry. However, these components are elastically anisotropic which causes difficulties when performing NDE inspections with ultrasound. Recently an ultrasonic algorithm for a 2D array has been corrected to perform the reliable volumetric inspection of single crystals. For the algorithm to be implemented the crystallographic orientation of the components must be known. This paper, therefore, develops and reviews crystallographic orientation methods using 2D ultrasonic arrays. The methods under examination are based on the anisotropic propagation of surface and bulk waves and an image-based orientation method is also considered.

  9. 2D Doppler backscattering using synthetic aperture microwave imaging of MAST edge plasmas

    NASA Astrophysics Data System (ADS)

    Thomas, D. A.; Brunner, K. J.; Freethy, S. J.; Huang, B. K.; Shevchenko, V. F.; Vann, R. G. L.

    2016-02-01

    Doppler backscattering (DBS) is already established as a powerful diagnostic; its extension to 2D enables imaging of turbulence characteristics from an extended region of the cut-off surface. The Synthetic Aperture Microwave Imaging (SAMI) diagnostic has conducted proof-of-principle 2D DBS experiments of MAST edge plasma. SAMI actively probes the plasma edge using a wide (±40° vertical and horizontal) and tuneable (10-34.5 GHz) beam. The Doppler backscattered signal is digitised in vector form using an array of eight Vivaldi PCB antennas. This allows the receiving array to be focused in any direction within the field of view simultaneously to an angular range of 6-24° FWHM at 10-34.5 GHz. This capability is unique to SAMI and is a novel way of conducting DBS experiments. In this paper the feasibility of conducting 2D DBS experiments is explored. Initial observations of phenomena previously measured by conventional DBS experiments are presented; such as momentum injection from neutral beams and an abrupt change in power and turbulence velocity coinciding with the onset of H-mode. In addition, being able to carry out 2D DBS imaging allows a measurement of magnetic pitch angle to be made; preliminary results are presented. Capabilities gained through steering a beam using a phased array and the limitations of this technique are discussed.

  10. New applications for the touchscreen in 2D and 3D medical imaging workstations

    NASA Astrophysics Data System (ADS)

    Hinckley, Ken; Goble, John C.; Pausch, Randy; Kassell, Neal F.

    1995-04-01

    We present a new interface technique which augments a 3D user interface based on the physical manipulation of tools, or props, with a touchscreen. This hybrid interface intuitively and seamlessly combines 3D input with more traditional 2D input in the same user interface. Example 2D interface tasks of interest include selecting patient images from a database, browsing through axial, coronal, and sagittal image slices, or adjusting image center and window parameters. Note the facility with which a touchscreen can be used: the surgeon can move in 3D using the props, and then, without having to put the props down, the surgeon can reach out and touch the screen to perform 2D tasks. Based on previous work by Sears, we provide touchscreen users with visual feedback in the form of a small cursor which appears above the finger, allowing targets much smaller than the finger itself to be selected. Based on our informal user observations to date, this touchscreen stabilization algorithm allows targets as small as 1.08 mm X 1.08 mm to be selected by novices, and makes possible selection of targets as small as 0.27 mm X 0.27 mm after some training. Based on implemented prototype systems, we suggest that touchscreens offer not only intuitive 2D input which is well accepted by physicians, but that touchscreens also offer fast and accurate input which blends well with 3D interaction techniques.

  11. Multichannel Linear Predictive Coding of Color Images,

    DTIC Science & Technology

    1984-01-01


  12. A positioning QA procedure for 2D/2D (kV/MV) and 3D/3D (CT/CBCT) image matching for radiotherapy patient setup.

    PubMed

    Guan, Huaiqun; Hammoud, Rabih; Yin, Fang-Fang

    2009-10-06

    A positioning QA procedure for Varian's 2D/2D (kV/MV) and 3D/3D (planCT/CBCT) matching was developed. The procedure was to check: (1) the coincidence of the on-board imager (OBI), portal imager (PI), and cone beam CT (CBCT) isocenters (digital graticules) with the linac's isocenter (to a pre-specified accuracy); (2) that the positioning difference detected by 2D/2D (kV/MV) and 3D/3D (planCT/CBCT) matching can be reliably transferred to couch motion. A cube phantom with a 2 mm metal ball (bb) at the center was used. The bb was used to define the isocenter. Two additional bbs were placed on two phantom surfaces in order to define a spatial location 1.5 cm anterior, 1.5 cm inferior, and 1.5 cm right of the isocenter. An axial scan of the phantom was acquired from a multislice CT simulator. The phantom was set at the linac's isocenter (lasers); either AP MV/R Lat kV images or CBCT images were taken for 2D/2D or 3D/3D matching, respectively. For 2D/2D, the accuracy of each device's isocenter was obtained by checking the distance between the central bb and the digital graticule. Then the central bb in the orthogonal DRRs was manually moved to overlay the off-axis bbs in the kV/MV images. For 3D/3D, the CBCT was first matched to the planCT to check the isocenter difference between the two CTs. Manual shifts were then made by moving the CBCT such that the point defined by the two off-axis bbs overlaid the central bb in the planCT. (The planCT cannot be moved in the current version of OBI 1.4.) The manual shifts were then applied to remotely move the couch. The room laser was used to check the accuracy of the couch movement. For Trilogy (or Ix-21) linacs, the coincidence of the imager and linac isocenters was better than 1 mm (or 1.5 mm). The couch shift accuracy was better than 2 mm.

  13. Efficient simulation of pitch angle collisions in a 2+2-D Eulerian Vlasov code

    NASA Astrophysics Data System (ADS)

    Banks, Jeff; Berger, R.; Brunner, S.; Tran, T.

    2014-10-01

    Here we discuss pitch angle scattering collisions in the context of the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space. The collision operator is discretized using 4th order accurate conservative finite-differencing. The treatment of the Vlasov operator in phase-space uses an approach based on a minimally diffuse, fourth-order-accurate discretization (Banks and Hittinger, IEEE T. Plasma Sci. 39, 2198). The overall scheme is therefore discretely conservative and controls unphysical oscillations. Some details of the numerical scheme will be presented, and the implementation on modern highly concurrent parallel computers will be discussed. We will present results of collisional effects on linear and non-linear Landau damping of electron plasma waves (EPWs). In addition we will present initial results showing the effect of collisions on the evolution of EPWs in two space dimensions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the LDRD program at LLNL under project tracking code 12-ERD-061.

  14. 2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, Johb C.

    2009-01-01

    A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. The collaboration in this project includes work by NASA research engineers, where CFD validation and flow physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL-type aircraft is focusing on geometries that depend on advanced flow control technologies that include Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent/ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.

  15. Fast Threshold image segmentation based on 2D Fuzzy Fisher and Random Local Optimized QPSO.

    PubMed

    Zhang, Chunming; Xie, Yongchun; Liu, Da; Wang, Li

    2016-10-26

    In this paper, a real-time segmentation method that separates the target signal from the navigation image is proposed. In the approach and docking stage, the navigation image is composed of target and non-target signals, which are the bright spots and the space vehicle itself, respectively. Since the non-target signal is the main part of the navigation image, the traditional entropy-related and Otsu-related criteria produce inadequate segmentation, while the plain 2D Fisher criterion causes over-segmentation; all of these methods show their shortcomings in this kind of case. To guarantee a precise image segmentation, a revised 2D fuzzy Fisher criterion is proposed in this paper to make a trade-off between positioning target regions and retaining fuzzy target boundaries. First, to reduce redundant computations in finding the threshold pair, a 2D fuzzy Fisher criterion based on integral images is established by simplifying the corresponding fuzzy domains. Then, to speed up convergence, a random orthogonal component is added to the quasi-optimum particle to enhance its local searching capacity in each iteration. Experimental results show the method is capable of fast segmentation.
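
    As a rough illustration of the threshold-selection family this record builds on, the sketch below (not the authors' code) exhaustively maximizes a plain 1D Fisher separation criterion over gray levels; the paper itself replaces this with a 2D fuzzy Fisher criterion evaluated on integral images and searched with a randomized QPSO variant.

      import numpy as np

      def fisher_threshold(img, levels=256):
          """Return the gray level t maximizing (mu0 - mu1)^2 / (var0 + var1)."""
          hist, _ = np.histogram(img, bins=levels, range=(0, levels))
          p = hist.astype(float) / hist.sum()
          g = np.arange(levels)
          best_t, best_score = 0, -np.inf
          for t in range(1, levels - 1):
              w0, w1 = p[:t].sum(), p[t:].sum()
              if w0 == 0 or w1 == 0:
                  continue
              mu0 = (g[:t] * p[:t]).sum() / w0
              mu1 = (g[t:] * p[t:]).sum() / w1
              var0 = ((g[:t] - mu0) ** 2 * p[:t]).sum() / w0
              var1 = ((g[t:] - mu1) ** 2 * p[t:]).sum() / w1
              score = (mu0 - mu1) ** 2 / (var0 + var1 + 1e-12)
              if score > best_score:
                  best_t, best_score = t, score
          return best_t

      # Example on a synthetic "bright spot on dark background" navigation-like image
      rng = np.random.default_rng(0)
      img = rng.normal(40, 10, (128, 128))
      img[60:70, 60:70] += 150                      # bright target spot
      img = np.clip(img, 0, 255).astype(np.uint8)
      t = fisher_threshold(img)
      mask = img > t                                 # segmented target region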

  16. A wavelet relational fuzzy C-means algorithm for 2D gel image segmentation.

    PubMed

    Rashwan, Shaheera; Faheem, Mohamed Talaat; Sarhan, Amany; Youssef, Bayumy A B

    2013-01-01

    One of the most famous algorithms that appeared in the area of image segmentation is the Fuzzy C-Means (FCM) algorithm. This algorithm has been used in many applications such as data analysis, pattern recognition, and image segmentation. It has the advantages of producing high quality segmentation compared to the other available algorithms. Many modifications have been made to the algorithm to improve its segmentation quality. The proposed segmentation algorithm in this paper is based on the Fuzzy C-Means algorithm adding the relational fuzzy notion and the wavelet transform to it so as to enhance its performance especially in the area of 2D gel images. Both proposed modifications aim to minimize the oversegmentation error incurred by previous algorithms. The experimental results of comparing both the Fuzzy C-Means (FCM) and the Wavelet Fuzzy C-Means (WFCM) to the proposed algorithm on real 2D gel images acquired from human leukemias, HL-60 cell lines, and fetal alcohol syndrome (FAS) demonstrate the improvement achieved by the proposed algorithm in overcoming the segmentation error. In addition, we investigate the effect of denoising on the three algorithms. This investigation proves that denoising the 2D gel image before segmentation can improve (in most of the cases) the quality of the segmentation.
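
    For context, the following is a minimal, generic Fuzzy C-Means sketch on pixel intensities, assuming NumPy only; the wavelet and relational extensions described in this record are not reproduced.

      import numpy as np

      def fcm(x, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
          """x: 1D array of feature values (e.g. flattened pixel intensities)."""
          rng = np.random.default_rng(seed)
          u = rng.random((c, x.size))
          u /= u.sum(axis=0)                       # fuzzy memberships sum to 1 per pixel
          for _ in range(n_iter):
              um = u ** m
              centers = (um @ x) / um.sum(axis=1)  # membership-weighted centroids
              d = np.abs(x[None, :] - centers[:, None]) + 1e-12
              u_new = d ** (-2.0 / (m - 1))        # standard FCM membership update
              u_new /= u_new.sum(axis=0)
              if np.abs(u_new - u).max() < tol:
                  u = u_new
                  break
              u = u_new
          return centers, u

      # Usage: separate synthetic "spot" and "background" intensities of a gel-like image
      pixels = np.concatenate([np.random.normal(50, 5, 500), np.random.normal(180, 10, 100)])
      centers, u = fcm(pixels, c=2)
      labels = u.argmax(axis=0)                    # hard labels from fuzzy memberships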

  17. A quantitative damage imaging technique based on enhanced CCRTM for composite plates using 2D scan

    NASA Astrophysics Data System (ADS)

    He, Jiaze; Yuan, Fuh-Gwo

    2016-10-01

    A two-dimensional (2D) non-contact areal scan system was developed to image and quantify impact damage in a composite plate using an enhanced zero-lag cross-correlation reverse-time migration (E-CCRTM) technique. The system comprises a single piezoelectric wafer mounted on the composite plate and a laser Doppler vibrometer (LDV) for scanning a region in the vicinity of the PZT to capture the scattered wavefield. The proposed damage imaging technique takes into account the amplitude, phase, geometric spreading, and all of the frequency content of the Lamb waves propagating in the plate; thus, a reflectivity coefficient of the delamination is calculated and can potentially be related to damage severity. Comparisons are made in terms of damage imaging quality between 2D areal scans and 1D line scans, as well as between the proposed and existing imaging conditions. The experimental results show that the 2D E-CCRTM performs robustly when imaging and quantifying impact damage in large-scale composites using a single PZT actuator with a nearby areal scan using the LDV.

  18. A faster method for 3D/2D medical image registration—a simulation study

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Wirth, Joachim; Burgstaller, Wolfgang; Baumann, Bernard; Staedele, Harald; Hammer, Beat; Claudius Gellrich, Niels; Jacob, Augustinus Ludwig; Regazzoni, Pietro; Messmer, Peter

    2003-08-01

    3D/2D patient-to-computed-tomography (CT) registration is a method to determine a transformation that maps two coordinate systems by comparing a projection image rendered from CT to a real projection image. Iterative variation of the CT's position between rendering steps finally leads to exact registration. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 3D/2D registration is the fact that finding a registration includes solving a minimization problem in six degrees of freedom (dof) in motion. This results in considerable time requirements since for each iteration step at least one volume rendering has to be computed. We show that by choosing an appropriate world coordinate system and by applying a 2D/2D registration method in each iteration step, the number of iterations can be grossly reduced from n^6 to n^5. Here, n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations necessary to at least 1/3 of its original value. The method was implemented and extensively tested on simulated x-ray images of a tibia, a pelvis and a skull base. When using one projective image and a discrete full parameter space search for solving the optimization problem, average accuracy was found to be 1.0 +/- 0.6 degrees and 4.1 +/- 1.9 mm for a registration in six parameters, and 1.0 +/- 0.7 degrees and 4.2 +/- 1.6 mm when using the 5 + 1 dof method described in this paper. Time requirements were reduced by a factor of 3.1. We conclude that this hardware-independent optimization of 3D/2D registration is a step towards increasing the acceptance of this promising method for a wide number of clinical applications.

  19. A faster method for 3D/2D medical image registration--a simulation study.

    PubMed

    Birkfellner, Wolfgang; Wirth, Joachim; Burgstaller, Wolfgang; Baumann, Bernard; Staedele, Harald; Hammer, Beat; Gellrich, Niels Claudius; Jacob, Augustinus Ludwig; Regazzoni, Pietro; Messmer, Peter

    2003-08-21

    3D/2D patient-to-computed-tomography (CT) registration is a method to determine a transformation that maps two coordinate systems by comparing a projection image rendered from CT to a real projection image. Iterative variation of the CT's position between rendering steps finally leads to exact registration. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 3D/2D registration is the fact that finding a registration includes solving a minimization problem in six degrees of freedom (dof) in motion. This results in considerable time requirements since for each iteration step at least one volume rendering has to be computed. We show that by choosing an appropriate world coordinate system and by applying a 2D/2D registration method in each iteration step, the number of iterations can be grossly reduced from n^6 to n^5. Here, n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations necessary to at least 1/3 of its original value. The method was implemented and extensively tested on simulated x-ray images of a tibia, a pelvis and a skull base. When using one projective image and a discrete full parameter space search for solving the optimization problem, average accuracy was found to be 1.0 +/- 0.6 degrees and 4.1 +/- 1.9 mm for a registration in six parameters, and 1.0 +/- 0.7 degrees and 4.2 +/- 1.6 mm when using the 5 + 1 dof method described in this paper. Time requirements were reduced by a factor of 3.1. We conclude that this hardware-independent optimization of 3D/2D registration is a step towards increasing the acceptance of this promising method for a wide number of clinical applications.

  20. Comparison of spatiotemporal interpolators for 4D image reconstruction from 2D transesophageal ultrasound

    NASA Astrophysics Data System (ADS)

    Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.

    2012-03-01

    For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal US by scanplane rotation. The image acquisition is continuous but unsynchronized to the heart rate, which results in a sparsely and irregularly sampled dataset and a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by 3 different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phase (τ, range: 0-1). Four different distributions of rotated 2D images with 600, 900, 1350, and 1800 2D input images were created from all TEE sets. A 2D Gaussian kernel was used for NC and optimal kernel sizes (σθ and στ) were found by performing an exhaustive search. The RMS gray value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the range of σθ = 3.24 - 3.69°/ στ = 0.045 - 0.048, σθ = 2.79°/ στ = 0.031 - 0.038, σθ = 2.34°/ στ = 0.023 - 0.026, and σθ = 1.89°/ στ = 0.021 - 0.023 for 600, 900, 1350, and 1800 input images respectively. We showed that NC outperforms NN and LTB. For a small number of input images the advantage of NC is more pronounced.
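
    A hedged sketch of normalized convolution on a sparse (θ, τ) grid follows; the Gaussian kernel widths and the synthetic data are illustrative stand-ins, not the study's actual TEE data or optimal kernels.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def normalized_convolution(samples, certainty, sigma):
          """samples: 2D grid of measured values; certainty: 1 where a sample exists, 0 elsewhere."""
          num = gaussian_filter(samples * certainty, sigma)   # applicability-weighted signal
          den = gaussian_filter(certainty, sigma)             # applicability-weighted certainty
          return num / np.maximum(den, 1e-9)

      # Example: reconstruct a dense (theta, tau) image from 10% random samples
      rng = np.random.default_rng(1)
      theta, tau = np.meshgrid(np.linspace(0, 180, 181), np.linspace(0, 1, 101), indexing="ij")
      truth = np.sin(np.radians(theta)) * np.cos(2 * np.pi * tau)
      certainty = (rng.random(truth.shape) < 0.1).astype(float)
      sparse = truth * certainty
      recon = normalized_convolution(sparse, certainty, sigma=(3.0, 2.0))  # (sigma_theta, sigma_tau) in grid units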

  1. Simultaneous 3D–2D image registration and C-arm calibration: Application to endovascular image-guided interventions

    SciTech Connect

    Mitrović, Uroš; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2015-11-15

    Purpose: Three-dimensional to two-dimensional (3D–2D) image registration is a key to fusion and simultaneous visualization of valuable information contained in 3D pre-interventional and 2D intra-interventional images with the final goal of image guidance of a procedure. In this paper, the authors focus on 3D–2D image registration within the context of intracranial endovascular image-guided interventions (EIGIs), where the 3D and 2D images are generally acquired with the same C-arm system. The accuracy and robustness of any 3D–2D registration method, to be used in a clinical setting, is influenced by (1) the method itself, (2) uncertainty of initial pose of the 3D image from which registration starts, (3) uncertainty of C-arm’s geometry and pose, and (4) the number of 2D intra-interventional images used for registration, which is generally one and at most two. The study of these influences requires rigorous and objective validation of any 3D–2D registration method against a highly accurate reference or “gold standard” registration, performed on clinical image datasets acquired in the context of the intervention. Methods: The registration process is split into two sequential, i.e., initial and final, registration stages. The initial stage is either machine-based or template matching. The latter aims to reduce possibly large in-plane translation errors by matching a projection of the 3D vessel model and 2D image. In the final registration stage, four state-of-the-art intrinsic image-based 3D–2D registration methods, which involve simultaneous refinement of rigid-body and C-arm parameters, are evaluated. For objective validation, the authors acquired an image database of 15 patients undergoing cerebral EIGI, for which accurate gold standard registrations were established by fiducial marker coregistration. Results: Based on target registration error, the obtained success rates of 3D to a single 2D image registration after initial machine-based and

  2. Shadow scanning lens-free microscopy with tomographic reconstruction of 2D images

    NASA Astrophysics Data System (ADS)

    Manturov, Alexey O.; Blushtein, Eugeny A.; Morev, Vladislav S.

    2016-04-01

    Shadow Scanning Lens-free Microscopy (SSLM) is a possible method for optical imaging that can potentially achieve high spatial resolution. In the present work we discuss SSLM and analyse the resolution limit, set by light scattering from the edge, of a scanning imaging system that uses the shadow of a moving knife edge or wire to collect sets of tomographic projection data of two-dimensional objects. The results of a numerical estimation of the SSLM resolution for the reconstruction of a 2D object image are presented. An experimental SSLM setup with a wire scanning element was developed. The device operates in the UV band and demonstrates a spatial resolution of about 90 nm.

  3. Development of a novel 2D color map for interactive segmentation of histological images

    PubMed Central

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H.; Wang, May D.

    2016-01-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, like K-means, use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our interactive 2D color map segmentation technique, based on human color perception information and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. The results of our proposed method show good accuracy with low response and computational times, making it a feasible method for user-interactive applications involving segmentation of histological images.

  4. Occluded target viewing and identification high-resolution 2D imaging laser radar

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Dippel, George F.; Cecchetti, Kristen D.; Wikman, John C.; Drouin, David P.; Egbert, Paul I.

    2007-09-01

    BAE SYSTEMS has developed a high-resolution 2D imaging laser radar (LADAR) system that has proven its ability to detect and identify hard targets in occluded environments, through battlefield obscurants, and through naturally occurring image-degrading atmospheres. Limitations of passive infrared imaging for target identification using the medium wavelength infrared (MWIR) and long wavelength infrared (LWIR) atmospheric windows are well known. Of particular concern is that as the wavelength is increased, the aperture must be increased to maintain resolution; hence, apertures become very large for long-range identification, which is impractical because of size, weight, and optics cost. Conversely, at smaller apertures and with large f-numbers, images may become photon starved with long integration times. Here, images are most susceptible to distortion from atmospheric turbulence, platform vibration, or both. Additionally, long-range identification using passive thermal imaging is clutter limited, arising from objects in close proximity to the target object.

  5. Blockiness in JPEG-coded images

    NASA Astrophysics Data System (ADS)

    Meesters, Lydia; Martens, Jean-Bernard

    1999-05-01

    In two experiments, dissimilarity data and numerical scaling data were obtained to determine the underlying attributes of image quality in baseline sequential JPEG coded images. Although several distortions were perceived, i.e., blockiness, ringing and blur, the subjective data for all attributes were highly correlated, so that image quality could approximately be described by one independent attribute. We therefore proceeded by developing an instrumental measure for one of these distortions, i.e., blockiness. In this paper a single-ended blockiness measure is proposed, i.e., one that uses only the coded image. Our approach is therefore fundamentally different from most image quality models that use both the original and the degraded image. The measure is based on detecting the low-amplitude edges that result from blocking and estimating their amplitudes. Because of the approximately one-dimensional nature of the underlying psychological space, the proposed blockiness measure also predicts the image quality of sequential baseline coded JPEG images.
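
    The following is an illustrative single-ended blockiness estimate in the spirit of the measure described (not the authors' exact formulation): it compares the mean luminance step across assumed 8x8 JPEG block boundaries with the step inside blocks, using only the coded image.

      import numpy as np

      def blockiness(img, block=8):
          img = img.astype(float)
          dh = np.abs(np.diff(img, axis=1))              # horizontal neighbour differences
          cols = np.arange(dh.shape[1])
          at_boundary = (cols % block) == (block - 1)    # columns straddling a block edge
          edge_step = dh[:, at_boundary].mean()
          inner_step = dh[:, ~at_boundary].mean()
          return edge_step / (inner_step + 1e-12)        # ratio >> 1 indicates visible block structure

      # Crude blocky test pattern: the ratio grows as block edges become more prominent
      tile = np.tile(np.repeat(np.arange(4) * 32, 8), (32, 1))
      score = blockiness(tile)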

  6. 2D Ultrasound and 3D MR Image Registration of the Prostate for Brachytherapy Surgical Navigation

    PubMed Central

    Zhang, Shihui; Jiang, Shan; Yang, Zhiyong; Liu, Ranlu

    2015-01-01

    Two-dimensional (2D) ultrasound (US) images are widely used in minimally invasive prostate procedures for their noninvasive nature and convenience. However, the poor quality of US images makes them difficult to use as a guiding utility. To overcome this limitation, we propose a multimodality image-guided navigation module that registers 2D US images with magnetic resonance imaging (MRI) based on high-quality preoperative models. A two-step spatial registration method is used to complete the procedure, combining manual alignment with a rapid mutual information (MI) optimization algorithm. In addition, a three-dimensional (3D) reconstruction model of the prostate with surrounding organs is combined with the registered images to conduct the navigation. Registration accuracy is measured by calculating the target registration error (TRE). The results show that the error between the US and preoperative MR images of a polyvinyl alcohol hydrogel phantom is 1.37 ± 0.14 mm, with similar performance observed in patient experiments. PMID:26448009
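
    A generic mutual-information similarity term of the kind used in such intensity-based registration is sketched below; it is not the navigation module's implementation, and the bin count is an arbitrary choice.

      import numpy as np

      def mutual_information(a, b, bins=32):
          """MI between two registered 2D images from their joint intensity histogram."""
          hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = hist / hist.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

      # During registration the transform of the US slice is varied to maximize this score
      # against the corresponding reformatted MR slice.
      a = np.random.rand(64, 64)
      b = 0.8 * a + 0.2 * np.random.rand(64, 64)   # partially dependent "second modality"
      score = mutual_information(a, b)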

  7. 2D Resistive Magnetohydrodynamics Calculations with an Arbitrary Lagrange Eulerian Code

    NASA Astrophysics Data System (ADS)

    Rousculp, C. L.; Gianakon, T. A.; Lipnikov, K. N.; Nelson, E. M.

    2015-11-01

    Single-fluid resistive MHD is useful for modeling Z-pinch configurations in cylindrical geometry. One such example is thin-walled liners for shock physics or HEDP experiments driven by capacitor banks such as LANL's PHELIX or Sandia-Z. MHD is also useful for modeling high-explosive-driven flux compression generators (FCGs) and their high-current switches. The resistive MHD in our arbitrary Lagrange Eulerian (ALE) code operates in one and two dimensions in both Cartesian and cylindrical geometry. It is implemented as a time-step split operator, which consists of ideal MHD coupled to the explicit hydro momentum and energy equations, and a second-order mimetic discretization solver for implicit solution of the magnetic diffusion equation. In a staggered grid scheme, a single component of cell-centered magnetic flux is conserved exactly in the Lagrangian frame, while magnetic forces are accumulated at the nodes. Total energy is conserved to round-off. Total flux is conserved under the ALE relaxation and remap. The diffusion solver consistently computes Ohmic heating. Both Neumann and Dirichlet boundary conditions are available, with coupling to external circuit models. Example calculations will be shown.

  8. The 1963 Vajont landslide (Italy) simulated through a numerical 2D code

    NASA Astrophysics Data System (ADS)

    Zaniboni, Filippo; Ausilia Paparo, Maria; Elsen, Katharina; Tinti, Stefano

    2013-04-01

    On October 9th, 1963, a huge mass of about 260 million m³ collapsed along the Mt. Toc flank into the artificial lake called Vajont and generated a gigantic wave that struck the town of Longarone (North-East Italy, about 100 km north of Venice), causing about 2000 casualties. The event started a public debate on the responsibilities for the disaster, and also raised crucial issues for the scientific and engineering community regarding reservoir flank instability and the safety of the hydroelectric plant. The peculiar features of the event were immediately evident. The clay layers remained uncovered in the upper part of the detachment niche, supporting the hypothesis of a well-defined pre-existing sliding surface that could explain the high falling velocity (up to about 20 m/s) and the compactness of the deposit layers, which were found to sit almost unperturbed on the bottom of the valley. The numerical study presented here contributes to the understanding of the dynamics of the Vajont landslide. It is found that accurate knowledge of the pre- and post-slide morphology provides tight constraints on the parameters of the numerical model, which are tuned to fit the observed deposit. Numerical simulations are carried out by means of the in-house code UBO-BLOCK2. The initial sliding body is divided into a mesh of interacting volume-conserving blocks, whose motion is computed numerically. The friction coefficient at the base of the landslide is determined through a best-fit search by maximizing the degree of overlap between the calculated and observed deposits. Our best solution is also able to account for the observed slight easterly rotation of the mass, the different behaviors of the eastern and western parts of the sliding surface, and the retrogressive motion of the slide, which, after climbing up the opposite flank of the valley, reversed its velocity to settle down on the bottom of the valley.

  9. Linear distance coding for image classification.

    PubMed

    Wang, Zilei; Feng, Jiashi; Yan, Shuicheng; Xi, Hongsheng

    2013-02-01

    The feature coding-pooling framework is shown to perform well in image classification tasks, because it can generate discriminative and robust image representations. The unavoidable information loss incurred by feature quantization in the coding process and the undesired dependence of pooling on the image spatial layout, however, may severely limit the classification. In this paper, we propose a linear distance coding (LDC) method to capture the discriminative information lost in traditional coding methods while simultaneously alleviating the dependence of pooling on the image spatial layout. The core of the LDC lies in transforming local features of an image into more discriminative distance vectors, where the robust image-to-class distance is employed. These distance vectors are further encoded into sparse codes to capture the salient features of the image. The LDC is theoretically and experimentally shown to be complementary to the traditional coding methods, and thus their combination can achieve higher classification accuracy. We demonstrate the effectiveness of LDC on six data sets, two of each of three types (specific object, scene, and general object), i.e., Flower 102 and PFID 61, Scene 15 and Indoor 67, Caltech 101 and Caltech 256. The results show that our method generally outperforms the traditional coding methods, and achieves or is comparable to the state-of-the-art performance on these data sets.

  10. Preliminary work of real-time ultrasound imaging system for 2-D array transducer.

    PubMed

    Li, Xu; Yang, Jiali; Ding, Mingyue; Yuchi, Ming

    2015-01-01

    Ultrasound (US) has emerged as a non-invasive imaging modality that can provide anatomical structure information in real time. To enable the experimental analysis of new 2-D array ultrasound beamforming methods, a pre-beamformed parallel raw data acquisition system was developed for 3-D data capture with a 2-D array transducer. The transducer interconnection adopted the row-column addressing (RCA) scheme, where the columns and rows were activated sequentially for transmit and receive events, respectively. The DAQ system captured the raw data in parallel, and the digitized data were fed through a field programmable gate array (FPGA) to implement the pre-beamforming. Finally, 3-D images were reconstructed through the devised platform in real time.

  11. Image Pretreatment Tools II: Normalization Techniques for 2-DE and 2-D DIGE.

    PubMed

    Robotti, Elisa; Marengo, Emilio; Quasso, Fabio

    2016-01-01

    Gel electrophoresis is usually applied to identify different protein expression profiles in biological samples (e.g., control vs. pathological, control vs. treated). Information about the effect to be investigated (a pathology, a drug, a ripening effect, etc.) is however generally confounded with experimental variability, which is quite large in 2-DE and may arise from small variations in the sample preparation, reagents, sample loading, electrophoretic conditions, staining and image acquisition. Obtaining valid quantitative estimates of protein abundances in each map, before the differential analysis, is therefore fundamental to provide robust candidate biomarkers. Normalization procedures are applied to reduce experimental noise and make the images comparable, improving the accuracy of differential analysis. Certainly, they may deeply influence the final results, and in this respect they have to be applied with care. Here, the most widespread normalization procedures are described, as applied both to 2-DE and to 2D Difference Gel Electrophoresis (2-D DIGE) maps.

  12. Advanced Imaging Optics Utilizing Wavefront Coding.

    SciTech Connect

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.

  13. 3D/2D Model-to-Image Registration for Quantitative Dietary Assessment.

    PubMed

    Chen, Hsin-Chen; Jia, Wenyan; Li, Zhaoxin; Sun, Yung-Nien; Sun, Mingui

    2012-12-31

    Image-based dietary assessment is important for health monitoring and management because it can provide quantitative and objective information, such as food volume, nutrition type, and calorie intake. In this paper, a new framework, 3D/2D model-to-image registration, is presented for estimating food volume from a single-view 2D image containing a reference object (i.e., a circular dining plate). First, the food is segmented from the background image based on Otsu's thresholding and morphological operations. Next, the food volume is obtained from a user-selected, 3D shape model. The position, orientation and scale of the model are optimized by a model-to-image registration process. Then, the circular plate in the image is fitted and its spatial information is used as constraints for solving the registration problem. Our method takes the global contour information of the shape model into account to obtain a reliable food volume estimate. Experimental results using regularly shaped test objects and realistically shaped food models with known volumes both demonstrate the effectiveness of our method.
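
    The segmentation step described above can be sketched roughly as follows, using NumPy/SciPy in place of whatever toolkit the authors used; the threshold and morphology settings are illustrative.

      import numpy as np
      from scipy import ndimage

      def otsu_threshold(gray, levels=256):
          """Gray value maximizing the between-class variance of the histogram."""
          hist, edges = np.histogram(gray, bins=levels)
          p = hist / hist.sum()
          w = np.cumsum(p)
          mu = np.cumsum(p * np.arange(levels))
          mu_t = mu[-1]
          with np.errstate(divide="ignore", invalid="ignore"):
              sigma_b = (mu_t * w - mu) ** 2 / (w * (1 - w))
          return edges[np.nanargmax(sigma_b)]

      def segment_food(gray):
          mask = gray > otsu_threshold(gray)                   # foreground vs. plate/background
          mask = ndimage.binary_opening(mask, iterations=2)    # remove small speckles
          mask = ndimage.binary_fill_holes(mask)               # fill interior gaps
          return mask

      # Toy example: a bright "food" region on a darker plate
      gray = np.zeros((120, 120))
      gray[40:90, 30:100] = 180.0
      gray += np.random.rand(120, 120) * 20.0
      food_mask = segment_food(gray)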

  14. Image compression and encryption scheme based on 2D compressive sensing and fractional Mellin transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Li, Haolin; Wang, Di; Pan, Shumin; Zhou, Zhihong

    2015-05-01

    Most of the existing image encryption techniques bear security risks for taking linear transform or suffer encryption data expansion for adopting nonlinear transformation directly. To overcome these difficulties, a novel image compression-encryption scheme is proposed by combining 2D compressive sensing with nonlinear fractional Mellin transform. In this scheme, the original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by the nonlinear fractional Mellin transform. The measurement matrices are controlled by chaos map. The Newton Smoothed l0 Norm (NSL0) algorithm is adopted to obtain the decryption image. Simulation results verify the validity and the reliability of this scheme.
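
    A minimal sketch of the 2D measurement step is shown below, assuming logistic-map-generated measurement matrices applied along rows and columns (Y = Phi_r X Phi_c^T); the chaotic parameters are placeholders, and the NSL0 reconstruction is not shown.

      import numpy as np

      def logistic_matrix(m, n, x0=0.37, mu=3.99):
          """Measurement matrix filled from a logistic chaotic map, roughly zero-mean and scaled."""
          seq = np.empty(m * n)
          x = x0
          for i in range(m * n):
              x = mu * x * (1 - x)          # logistic map iteration
              seq[i] = x
          return (seq.reshape(m, n) - 0.5) * 2.0 / np.sqrt(m)

      n = 256
      x = np.random.rand(n, n)              # stand-in for the plaintext image
      m = n // 2                             # 2:1 compression per dimension
      phi_r = logistic_matrix(m, n, x0=0.37)
      phi_c = logistic_matrix(m, n, x0=0.71)
      y = phi_r @ x @ phi_c.T               # compressed-and-encrypted measurements (m x m)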

  15. Filters in 2D and 3D Cardiac SPECT Image Processing

    PubMed Central

    Ploussi, Agapi; Synefia, Stella

    2014-01-01

    Nuclear cardiac imaging is a noninvasive, sensitive method providing information on cardiac structure and physiology. Single photon emission tomography (SPECT) evaluates myocardial perfusion, viability, and function and is widely used in clinical routine. The quality of the tomographic image is a key for accurate diagnosis. Image filtering, a mathematical processing, compensates for loss of detail in an image while reducing image noise, and it can improve the image resolution and limit the degradation of the image. SPECT images are then reconstructed, either by the filtered back projection (FBP) analytical technique or iteratively, by algebraic methods. The aim of this study is to review filters in cardiac 2D, 3D, and 4D SPECT applications and how these affect the image quality mirroring the diagnostic accuracy of SPECT images. Several filters, including the Hanning, Butterworth, and Parzen filters, were evaluated in combination with the two reconstruction methods as well as with a specified MatLab program. Results showed that for both 3D and 4D cardiac SPECT the Butterworth filter, for different critical frequencies and orders, produced the best results. Between the two reconstruction methods, the iterative one might be more appropriate for cardiac SPECT, since it improves lesion detectability due to the significant improvement of image contrast. PMID:24804144

  16. Filters in 2D and 3D Cardiac SPECT Image Processing.

    PubMed

    Lyra, Maria; Ploussi, Agapi; Rouchota, Maritina; Synefia, Stella

    2014-01-01

    Nuclear cardiac imaging is a noninvasive, sensitive method providing information on cardiac structure and physiology. Single photon emission tomography (SPECT) evaluates myocardial perfusion, viability, and function and is widely used in clinical routine. The quality of the tomographic image is a key for accurate diagnosis. Image filtering, a mathematical processing, compensates for loss of detail in an image while reducing image noise, and it can improve the image resolution and limit the degradation of the image. SPECT images are then reconstructed, either by the filtered back projection (FBP) analytical technique or iteratively, by algebraic methods. The aim of this study is to review filters in cardiac 2D, 3D, and 4D SPECT applications and how these affect the image quality mirroring the diagnostic accuracy of SPECT images. Several filters, including the Hanning, Butterworth, and Parzen filters, were evaluated in combination with the two reconstruction methods as well as with a specified MatLab program. Results showed that for both 3D and 4D cardiac SPECT the Butterworth filter, for different critical frequencies and orders, produced the best results. Between the two reconstruction methods, the iterative one might be more appropriate for cardiac SPECT, since it improves lesion detectability due to the significant improvement of image contrast.
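
    As an illustration of the kind of filtering compared in the study (not the exact MatLab program used), a generic 2D Butterworth low-pass filter applied to a reconstructed slice can be written as:

      import numpy as np

      def butterworth_lowpass(img, cutoff=0.25, order=5):
          """Frequency-domain Butterworth low-pass; cutoff is in cycles/pixel."""
          fy = np.fft.fftfreq(img.shape[0])[:, None]
          fx = np.fft.fftfreq(img.shape[1])[None, :]
          f = np.sqrt(fx ** 2 + fy ** 2)                  # radial spatial frequency
          h = 1.0 / (1.0 + (f / cutoff) ** (2 * order))   # Butterworth transfer function
          return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

      # Lower cutoff (critical frequency) -> stronger smoothing; higher order -> sharper roll-off.
      # e.g. filtered = butterworth_lowpass(reconstructed_slice, cutoff=0.2, order=5)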

  17. Hardware support for shape decoding from 2D-region-based image representations

    NASA Astrophysics Data System (ADS)

    Privat, Gilles; Le Hin, Ivan

    1997-01-01

    Graphics systems have long been using standard libraries or APIs to insulate applications from implementation specifics. The same approach is applicable to natural image representations based on object primitives, such as those proposed for MPEG-4 standardization. The rendering of these image objects can be hidden behind APIs and supported either in hardware or software, depending on the level of representation they address, so that higher-level manipulation of these objects is made independent of the pixel level. We evaluate the trade-offs involved in the choice of these primitives to be used as pivotal intermediate representations. The example addressed is shape coding for image regions obtained from segmentation. Shape coding primitives based on contours (chain codes), unions of elementary patterns, and alpha planes are evaluated with regard to both the possibility of supporting them on different architecture models and the level of functionality they make available.

  18. An edge preserving differential image coding scheme

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1992-01-01

    Differential encoding techniques are fast and easy to implement. However, a major problem with the use of differential encoding for images is the rapid edge degradation encountered when using such systems. This makes differential encoding techniques of limited utility, especially when coding medical or scientific images, where edge preservation is of utmost importance. A simple, easy to implement differential image coding system with excellent edge preservation properties is presented. The coding system can be used over variable rate channels, which makes it especially attractive for use in the packet network environment.
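
    A plain row-wise DPCM coder is sketched below purely to illustrate the class of scheme being improved; the paper's edge-preserving refinements are not reproduced, and the quantizer step is arbitrary.

      import numpy as np

      def dpcm_encode(row, q_step=8):
          """Previous-pixel prediction with a uniform residual quantizer (encoder tracks the decoder)."""
          residuals = np.empty_like(row, dtype=np.int32)
          pred = 0
          for i, x in enumerate(row.astype(np.int32)):
              e = x - pred
              eq = int(np.round(e / q_step)) * q_step   # quantized prediction residual
              residuals[i] = eq
              pred = np.clip(pred + eq, 0, 255)         # decoder-matched reconstruction
          return residuals

      def dpcm_decode(residuals):
          out = np.empty_like(residuals)
          pred = 0
          for i, eq in enumerate(residuals):
              pred = np.clip(pred + eq, 0, 255)
              out[i] = pred
          return out

      row = np.array([10, 12, 15, 200, 205, 203, 50, 48], dtype=np.uint8)
      rec = dpcm_decode(dpcm_encode(row))               # the large steps (edges) show the most error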

  19. Code-excited linear predictive coding of multispectral MR images

    NASA Astrophysics Data System (ADS)

    Hu, Jian-Hong; Wang, Yao; Cahill, Patrick

    1996-02-01

    This paper reports a multispectral code excited linear predictive coding method for the compression of well-registered multispectral MR images. Different linear prediction models and adaptation schemes have been compared. The method which uses a forward adaptive autoregressive (AR) model has proven to achieve a good compromise between performance, complexity and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over non-overlapping square macroblocks. Each macroblock is further divided into several microblocks, and the best excitation signals for each microblock are determined through an analysis-by-synthesis procedure. To satisfy the high quality requirement for medical images, the error between the original images and the synthesized ones is further specified using a vector quantizer. The MFCELP method has been applied to 26 sets of clinical MR neuro images (20 slices/set, 3 spectral bands/slice, 256 by 256 pixels/image, 12 bits/pixel). It provides a significant improvement over the discrete cosine transform (DCT) based JPEG method, a wavelet transform based embedded zero-tree wavelet (EZW) coding method, as well as the MSARMA method we developed before.

  20. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA. It is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT

  1. JetCurry: Modeling 3D geometry of AGN jets from 2D images

    NASA Astrophysics Data System (ADS)

    Kosak, Katie; Li, KunYang; Avachat, Sayali S.; Perlman, Eric S.

    2017-02-01

    Written in Python, JetCurry models the 3D geometry of jets from 2-D images. JetCurry requires NumPy and SciPy and incorporates emcee (ascl:1303.002) and AstroPy (ascl:1304.002), and optionally uses VPython. From a defined initial part of the jet that serves as a reference point, JetCurry finds the position of highest flux within a bin of data in the image matrix and fits along the x axis for the general location of the bends in the jet. A spline fitting is used to smooth out the resulting jet stream.

  2. 2D-CELL: image processing software for extraction and analysis of 2-dimensional cellular structures

    NASA Astrophysics Data System (ADS)

    Righetti, F.; Telley, H.; Leibling, Th. M.; Mocellin, A.

    1992-01-01

    2D-CELL is a software package for processing and analyzing photographic images of cellular structures in a largely interactive way. Starting from a binary digitized image, the programs extract the line network (skeleton) of the structure and determine the graph representation that best models it. Provision is made for manually correcting defects such as incorrect node positions or dangling bonds. Then a suitable algorithm retrieves polygonal contours which define individual cells (local boundary curvatures are neglected for simplicity). Using elementary analytical geometry relations, a range of metric and topological parameters describing the population is then computed, organized into statistical distributions and graphically displayed.
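
    The first processing step described above (skeleton extraction from a binary image) can be approximated with off-the-shelf tools, e.g. scikit-image, as in this hedged sketch; it is not the 2D-CELL code itself, and graph construction and defect correction are not shown.

      import numpy as np
      from skimage.morphology import skeletonize

      binary = np.zeros((64, 64), dtype=bool)
      binary[20:23, 5:60] = True            # a thick "cell wall" segment
      binary[5:60, 30:33] = True            # a crossing wall, forming a junction
      skeleton = skeletonize(binary)        # one-pixel-wide line network of the structure
      # 2D-CELL would then build a graph from such a skeleton and let the user correct nodes interactively.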

  3. JetCurry: Modeling 3D geometry of AGN jets from 2D images

    NASA Astrophysics Data System (ADS)

    Li, Kunyang; Kosak, Katie; Avachat, Sayali S.; Perlman, Eric S.

    2017-02-01

    Written in Python, JetCurry models the 3D geometry of AGN jets from 2-D images. JetCurry requires NumPy and SciPy and incorporates emcee (ascl:1303.002) and AstroPy (ascl:1304.002), and optionally uses VPython. From a defined initial part of the jet that serves as a reference point, JetCurry finds the position of highest flux within a bin of data in the image matrix and fits along the x axis for the general location of the bends in the jet. A spline fitting is used to smooth out the resulting jet stream.
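
    A toy version of the first step described above is sketched below: per-column peak-flux positions followed by spline smoothing. The function names and synthetic image are illustrative, not those of the JetCurry package.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      def peak_flux_track(image):
          """For each column (x bin), return the row index of the brightest pixel."""
          x = np.arange(image.shape[1])
          y_peak = image.argmax(axis=0)
          return x, y_peak

      rng = np.random.default_rng(2)
      img = rng.random((100, 200)) * 0.1
      rows = (50 + 20 * np.sin(np.linspace(0, np.pi, 200))).astype(int)
      img[rows, np.arange(200)] = 1.0                   # synthetic bent "jet" ridge
      x, y = peak_flux_track(img)
      spline = UnivariateSpline(x, y, s=len(x))         # smoothed jet stream through the bends
      y_smooth = spline(x)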

  4. The design of wavefront coded imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Shun; Cen, Zhaofeng; Li, Xiaotong

    2016-10-01

    Wavefront coding is a new method to extend the depth of field, which combines optical design and signal processing. Using the optical design software ZEMAX, we designed a practical wavefront coded imaging system based on a conventional Cooke triplet system. Unlike a conventional optical system, the wavefront of this new system is modulated by a specially designed phase mask, which makes the point spread function (PSF) of the optical system insensitive to defocus. Therefore, a series of similarly blurred images is obtained at the image plane. In addition, the optical transfer function (OTF) of the wavefront coded imaging system is independent of focus: it is nearly constant with misfocus and has no regions of zeros. All object information can be completely recovered through digital filtering at different defocus positions. The focus invariance of the MTF is selected as the merit function in this design, and the coefficients of the phase mask are set as optimization goals. Compared to the conventional optical system, the wavefront coded imaging system obtains better quality images at different object distances. Some deficiencies appear in the restored images due to the influence of the digital filtering algorithm, which are also analyzed in this paper. The depth of field of the designed wavefront coded imaging system is about 28 times larger than that of the initial optical system, while keeping higher optical power and resolution at the image plane.
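
    A minimal sketch of a cubic phase mask and its PSF, assuming a normalized circular pupil and an arbitrary phase strength alpha (not the designed coefficients), looks like this:

      import numpy as np

      def cubic_phase_pupil(n=256, alpha=20 * np.pi):
          """Circular pupil with cubic phase alpha*(x^3 + y^3), the classic wavefront-coding mask."""
          x = np.linspace(-1, 1, n)
          X, Y = np.meshgrid(x, x)
          aperture = (X ** 2 + Y ** 2) <= 1.0
          phase = alpha * (X ** 3 + Y ** 3)
          return aperture * np.exp(1j * phase)

      def psf(pupil):
          """Incoherent PSF = |Fourier transform of the pupil function|^2."""
          field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
          return np.abs(field) ** 2

      blurred_but_defocus_stable_psf = psf(cubic_phase_pupil())
      # Deconvolving with this (nearly defocus-invariant) PSF is the digital filtering step mentioned above.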

  5. 2D image classification for 3D anatomy localization: employing deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    de Vos, Bob D.; Wolterink, Jelmer M.; de Jong, Pim A.; Viergever, Max A.; Išgum, Ivana

    2016-03-01

    Localization of anatomical regions of interest (ROIs) is a preprocessing step in many medical image analysis tasks. While trivial for humans, it is complex for automatic methods. Classic machine learning approaches face the challenge of hand-crafting features to describe differences between ROIs and background. Deep convolutional neural networks (CNNs) alleviate this by automatically finding hierarchical feature representations from raw images. We employ this trait to detect anatomical ROIs in 2D image slices in order to localize them in 3D. In 100 low-dose non-contrast enhanced non-ECG synchronized screening chest CT scans, a reference standard was defined by manually delineating rectangular bounding boxes around three anatomical ROIs: the heart, aortic arch, and descending aorta. Every anatomical ROI was automatically identified using a combination of three CNNs, each analyzing one orthogonal image plane. While single CNNs predicted the presence or absence of a specific ROI in the given plane, the combination of their results provided a 3D bounding box around it. Classification performance of each CNN, expressed as area under the receiver operating characteristic curve, was >=0.988. Additionally, the performance of ROI localization was evaluated. Median Dice scores for automatically determined bounding boxes around the heart, aortic arch, and descending aorta were 0.89, 0.70, and 0.85 respectively. The results demonstrate that accurate automatic 3D localization of anatomical structures by CNN-based 2D image classification is feasible.

  6. Breast density measurement: 3D cone beam computed tomography (CBCT) images versus 2D digital mammograms

    NASA Astrophysics Data System (ADS)

    Han, Tao; Lai, Chao-Jen; Chen, Lingyun; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Yang, Wei T.; Shaw, Chris C.

    2009-02-01

    Breast density has been recognized as one of the major risk factors for breast cancer. However, breast density is currently estimated using mammograms which are intrinsically 2D in nature and cannot accurately represent the real breast anatomy. In this study, a novel technique for measuring breast density based on the segmentation of 3D cone beam CT (CBCT) images was developed and the results were compared to those obtained from 2D digital mammograms. 16 mastectomy breast specimens were imaged with a bench top flat-panel based CBCT system. The reconstructed 3D CT images were corrected for the cupping artifacts and then filtered to reduce the noise level, followed by using threshold-based segmentation to separate the dense tissue from the adipose tissue. For each breast specimen, volumes of the dense tissue structures and the entire breast were computed and used to calculate the volumetric breast density. BI-RADS categories were derived from the measured breast densities and compared with those estimated from conventional digital mammograms. The results show that in 10 of 16 cases the BI-RADS categories derived from the CBCT images were lower than those derived from the mammograms by one category. Thus, breasts considered as dense in mammographic examinations may not be considered as dense with the CBCT images. This result indicates that the relation between breast cancer risk and true (volumetric) breast density needs to be further investigated.
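
    The volumetric density computation described above reduces to a voxel-counting ratio once the corrected volume is segmented; a minimal sketch, with an illustrative threshold, is:

      import numpy as np

      def volumetric_breast_density(volume, breast_mask, dense_threshold=0.0):
          """volume: 3D array of (cupping-corrected, denoised) voxel values; breast_mask: boolean breast region."""
          dense = (volume > dense_threshold) & breast_mask   # fibroglandular (dense) voxels
          return dense.sum() / breast_mask.sum()             # fraction of dense tissue by volume

      # Toy usage with synthetic data; in practice the threshold separates dense from adipose tissue.
      volume = np.random.normal(-50, 60, (64, 64, 64))
      breast = np.zeros_like(volume, dtype=bool)
      breast[8:56, 8:56, 8:56] = True
      density = volumetric_breast_density(volume, breast)
      # The resulting fraction can then be mapped onto BI-RADS-like bands, e.g. <25%, 25-50%, 50-75%, >75%.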

  7. Adaptive discrete cosine transform based image coding

    NASA Astrophysics Data System (ADS)

    Hu, Neng-Chung; Luoh, Shyan-Wen

    1996-04-01

    In this discrete cosine transform (DCT) based image coding, the DCT kernel matrix is decomposed into a product of two matrices. The first matrix is called the discrete cosine preprocessing transform (DCPT), whose kernels are plus or minus 1 or plus or minus one-half. The second matrix is the postprocessing stage treated as a correction stage that converts the DCPT to the DCT. On applying the DCPT to image coding, image blocks are processed by the DCPT, then a decision is made to determine whether the processed image blocks are inactive or active in the DCPT domain. If the processed image blocks are inactive, then the compactness of the processed image blocks is the same as that of the image blocks processed by the DCT. However, if the processed image blocks are active, a correction process is required; this is achieved by multiplying the processed image block by the postprocessing stage. As a result, this adaptive image coding achieves the same performance as the DCT image coding, and both the overall computation and the round-off error are reduced, because both the DCPT and the postprocessing stage can be implemented by distributed arithmetic or fast computation algorithms.

  8. A Programmable Liquid Collimator for Both Coded Aperture Adaptive Imaging and Multiplexed Compton Scatter Tomography

    DTIC Science & Technology

    2012-03-01

    Thesis by Jack G. M. FitzGerald; the indexed excerpt also references the report "Assessment of COMSCAN, a Compton Backscatter Imaging Camera, for the One-Sided Non-Destructive Inspection of Aerospace Components."

  9. Smart time-pulse coding photoconverters as basic components 2D-array logic devices for advanced neural networks and optical computers

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Michalnichenko, Nikolay N.

    2004-04-01

    The article deals with a concept for building arithmetic-logic devices (ALDs) with a 2D structure and optical 2D-array inputs and outputs, intended as advanced, high-productivity parallel basic operational training modules for realizing the basic operations of continuous, neuro-fuzzy, multilevel, threshold and other logics, as well as vector-matrix and vector-tensor procedures in neural networks. The approach consists in the use of a time-pulse coding (TPC) architecture and 2D-array smart optoelectronic pulse-width (or pulse-phase) modulators (PWM or PPM) for transformation of the input pictures. The input grayscale image is transformed into a group of corresponding short optical pulses or time positions of an optical two-level signal swing. We consider optoelectronic implementations of universal (quasi-universal) picture elements of two-valued ALDs, multi-valued ALDs, analog-to-digital converters, and multilevel threshold discriminators, and we show that 2D-array time-pulse photoconverters are the base elements for these devices. We show simulation results for the time-pulse photoconverters as base components. The considered devices have the following technical parameters: input optical signal power of 200 nW to 200 μW (for a photodiode responsivity of 0.5 A/W), conversion time from tens of microseconds to a millisecond, supply voltage of 1.5-15 V, power consumption from tens of microwatts to a milliwatt, and conversion nonlinearity of less than 1%. One cell consists of 2-3 photodiodes and about ten CMOS transistors. This simplicity of the cells allows their integration into arrays of 32x32, 64x64 elements and more.

  10. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaos systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by the measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by the cycle shift operation controlled by a hyper-chaotic system. Cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies the keys distribution simultaneously as a nonlinear encryption system. Simulation results verify the validity and the reliability of the proposed algorithm with acceptable compression and security performance.

  11. Electron Microscopy: From 2D to 3D Images with Special Reference to Muscle

    PubMed Central

    2015-01-01

    This is a brief and necessarily very sketchy presentation of the evolution in electron microscopy (EM) imaging that was driven by the necessity of extracting 3-D views from the essentially 2-D images produced by the electron beam. The lens design of the standard transmission electron microscope has not been greatly altered since its inception. However, technical advances in specimen preparation, image collection and analysis gradually induced an astounding progression over a period of about 50 years. From the early images that redefined tissues, cells and cell organelles at the sub-micron level, to the current nano-resolution reconstructions of organelles and proteins, the step is very large. The review is written by an investigator who has followed the field for many years, but often from the sidelines, and with great wonder. Her interest in muscle ultrastructure colors the writing. More specific detailed reviews are presented in this issue. PMID:26913146

  12. 2D dose distribution images of a hybrid low field MRI-γ detector

    NASA Astrophysics Data System (ADS)

    Abril, A.; Agulles-Pedrós, L.

    2016-07-01

    The proposed hybrid system is a combination of a low-field MRI and a dosimetric gel acting as a γ detector. The readout is based on the polymerization process induced in the gel by radiation. A gel dose map is obtained, which represents the functional part of the hybrid image alongside the anatomical MRI image. Both images should be taken while the patient, carrying a radiopharmaceutical, is located inside the MRI system with a gel detector matrix. A relevant aspect of this proposal is that dosimetric gel has never been used to acquire medical images. The results presented show the interaction of a 99mTc source with the dosimetric gel simulated in Geant4. The purpose was to obtain the planar 2D γ image. Different source configurations are studied to explore the ability of the gel as a radiation detector through the following parameters: resolution, shape definition and radiopharmaceutical concentration.

  13. Compressive imaging using fast transform coding

    NASA Astrophysics Data System (ADS)

    Thompson, Andrew; Calderbank, Robert

    2016-10-01

    We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.
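
    For context, the 2D Haar wavelet transform that the measurements are related to can be computed with one level of averaging and differencing along rows and columns. The sketch below shows that transform only; it does not implement the Delsarte-Goethals sampling itself, and it assumes even image dimensions.

    ```python
    import numpy as np

    def haar2d_one_level(img):
        """One level of the orthonormal 2D Haar wavelet transform (averaging/differencing)."""
        a, b = img[0::2, :], img[1::2, :]
        lo, hi = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)       # transform along rows
        def split_cols(x):
            c, d = x[:, 0::2], x[:, 1::2]
            return (c + d) / np.sqrt(2), (c - d) / np.sqrt(2)     # transform along columns
        LL, LH = split_cols(lo)
        HL, HH = split_cols(hi)
        return LL, LH, HL, HH   # approximation and three detail subbands
    ```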

  14. Ultrasound strain imaging using Barker code

    NASA Astrophysics Data System (ADS)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissue. In this paper, the Barker code is applied to strain imaging to improve its quality. The Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging because a high sidelobe level causes high decorrelation noise. Instead of using the conventional matched filter, we use a Wiener filter to decode the Barker-coded echo signal and suppress the range sidelobes. We also compare the performance of the Barker code and the conventional short pulse in simulation. The simulation results demonstrate that the Wiener filter performs much better than the matched filter, and that the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse in low-eSNR or great-depth conditions due to the increased eSNR.
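
    A minimal numpy sketch of the two decoding options is given below, assuming a toy two-reflector echo; the Barker-13 sequence is standard, while the noise level and the SNR constant regularizing the Wiener inverse filter are illustrative assumptions.

    ```python
    import numpy as np

    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

    # Simulated echo: coded pulse convolved with a sparse reflectivity, plus noise.
    rng = np.random.default_rng(0)
    refl = np.zeros(200)
    refl[[50, 120]] = [1.0, 0.5]
    echo = np.convolve(refl, barker13) + 0.05 * rng.standard_normal(212)

    # Matched filter: correlate with the code (range sidelobes around -22 dB).
    mf = np.correlate(echo, barker13, mode='full')

    # Wiener inverse filter in the frequency domain to suppress the range sidelobes.
    N = len(echo)
    C = np.fft.fft(barker13, N)
    snr = 100.0                                   # assumed echo SNR (power) for regularization
    W = np.conj(C) / (np.abs(C) ** 2 + 1.0 / snr)
    wf = np.real(np.fft.ifft(np.fft.fft(echo) * W))
    ```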

  15. ZEUS-2D: A Radiation Magnetohydrodynamics Code for Astrophysical Flows in Two Space Dimensions. II. The Magnetohydrodynamic Algorithms and Tests

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    In this, the second of a series of three papers, we continue a detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows in astrophysics including a self-consistent treatment of the effects of magnetic fields and radiation transfer. In this paper, we give a detailed description of the magnetohydrodynamical (MHD) algorithms in ZEUS-2D. The recently developed constrained transport (CT) algorithm is implemented for the numerical evolution of the components of the magnetic field for MHD simulations. This formalism guarantees the numerically evolved field components will satisfy the divergence-free constraint at all times. We find, however, that the method used to compute the electromotive forces must be chosen carefully to propagate accurately all modes of MHD wave families (in particular shear Alfvén waves). A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-CT method provides for the accurate evolution of all modes of MHD wave families.

  16. Clinical Assessment of 2D/3D Registration Accuracy in 4 Major Anatomic Sites Using On-Board 2D Kilovoltage Images for 6D Patient Setup.

    PubMed

    Li, Guang; Yang, T Jonathan; Furtado, Hugo; Birkfellner, Wolfgang; Ballangrud, Åse; Powell, Simon N; Mechalakos, James

    2015-06-01

    To provide a comprehensive assessment of patient setup accuracy in 6 degrees of freedom (DOFs) using 2-dimensional/3-dimensional (2D/3D) image registration with on-board 2-dimensional kilovoltage (OB-2DkV) radiographic images, we evaluated cranial, head and neck (HN), and thoracic and abdominal sites under clinical conditions. A fast 2D/3D image registration method using a graphics processing unit (GPU) was modified for registration between OB-2DkV and 3D simulation computed tomography (simCT) images, with 3D/3D registration as the gold standard for 6DOF alignment. In 2D/3D registration, body roll rotation was obtained solely by matching orthogonal OB-2DkV images with a series of digitally reconstructed radiographs (DRRs) from simCT with a small rotational increment along the gantry rotation axis. The window/level adjustments for optimal visualization of the bone in OB-2DkV and DRRs were performed prior to registration. Ideal patient alignment at the isocenter was calculated and used as an initial registration position. In 3D/3D registration, cone-beam CT (CBCT) was aligned to simCT on bony structures using a bone density filter in 6DOF. Included in this retrospective study were 37 patients treated in 55 fractions with frameless stereotactic radiosurgery or stereotactic body radiotherapy for cranial and paraspinal cancer. A cranial phantom was used to serve as a control. In all cases, CBCT images were acquired for patient setup with subsequent OB-2DkV verification. It was found that the accuracy of the 2D/3D registration was 0.0 ± 0.5 mm and 0.1° ± 0.4° in phantom. In patients, accuracy is site dependent due to anatomical deformation: 0.2 ± 1.6 mm and -0.4° ± 1.2° on average for each dimension for the cranial site, 0.7 ± 1.6 mm and 0.3° ± 1.3° for HN, 0.7 ± 2.0 mm and -0.7° ± 1.1° for the thorax, and 1.1 ± 2.6 mm and -0.5° ± 1.9° for the abdomen. Anatomical deformation and presence of soft tissue in 2D/3D registration affect the consistency with

  17. Clinical Assessment of 2D/3D Registration Accuracy in 4 Major Anatomic Sites Using On-Board 2D Kilovoltage Images for 6D Patient Setup

    PubMed Central

    Li, Guang; Yang, T. Jonathan; Furtado, Hugo; Birkfellner, Wolfgang; Ballangrud, Åse; Powell, Simon N.; Mechalakos, James

    2015-01-01

    To provide a comprehensive assessment of patient setup accuracy in 6 degrees of freedom (DOFs) using 2-dimensional/3-dimensional (2D/3D) image registration with on-board 2-dimensional kilovoltage (OB-2DkV) radiographic images, we evaluated cranial, head and neck (HN), and thoracic and abdominal sites under clinical conditions. A fast 2D/3D image registration method using a graphics processing unit (GPU) was modified for registration between OB-2DkV and 3D simulation computed tomography (simCT) images, with 3D/3D registration as the gold standard for 6DOF alignment. In 2D/3D registration, body roll rotation was obtained solely by matching orthogonal OB-2DkV images with a series of digitally reconstructed radiographs (DRRs) from simCT with a small rotational increment along the gantry rotation axis. The window/level adjustments for optimal visualization of the bone in OB-2DkV and DRRs were performed prior to registration. Ideal patient alignment at the isocenter was calculated and used as an initial registration position. In 3D/3D registration, cone-beam CT (CBCT) was aligned to simCT on bony structures using a bone density filter in 6DOF. Included in this retrospective study were 37 patients treated in 55 fractions with frameless stereotactic radiosurgery or stereotactic body radiotherapy for cranial and paraspinal cancer. A cranial phantom was used to serve as a control. In all cases, CBCT images were acquired for patient setup with subsequent OB-2DkV verification. It was found that the accuracy of the 2D/3D registration was 0.0 ± 0.5 mm and 0.1° ± 0.4° in phantom. In patients, accuracy is site dependent due to anatomical deformation: 0.2 ± 1.6 mm and −0.4° ± 1.2° on average for each dimension for the cranial site, 0.7 ± 1.6 mm and 0.3° ± 1.3° for HN, 0.7 ± 2.0 mm and −0.7° ± 1.1° for the thorax, and 1.1 ± 2.6 mm and −0.5° ± 1.9° for the abdomen. Anatomical deformation and presence of soft tissue in 2D/3D registration affect the consistency with

  18. 2D Imaging in a Lightweight Portable MRI Scanner without Gradient Coils

    PubMed Central

    Cooley, Clarissa Zimmerman; Stockmann, Jason P.; Armstrong, Brandon D.; Sarracanie, Mathieu; Lev, Michael H.; Rosen, Matthew S.; Wald, Lawrence L.

    2014-01-01

    Purpose As the premiere modality for brain imaging, MRI could find wider applicability if lightweight, portable systems were available for siting in unconventional locations such as Intensive Care Units, physician offices, surgical suites, ambulances, emergency rooms, sports facilities, or rural healthcare sites. Methods We construct and validate a truly portable (<100kg) and silent proof-of-concept MRI scanner which replaces conventional gradient encoding with a rotating lightweight cryogen-free, low-field magnet. When rotated about the object, the inhomogeneous field pattern is used as a rotating Spatial Encoding Magnetic field (rSEM) to create generalized projections which encode the iteratively reconstructed 2D image. Multiple receive channels are used to disambiguate the non-bijective encoding field. Results The system is validated with experimental images of 2D test phantoms. Similar to other non-linear field encoding schemes, the spatial resolution is position dependent with blurring in the center, but is shown to be likely sufficient for many medical applications. Conclusion The presented MRI scanner demonstrates the potential for portability by simultaneously relaxing the magnet homogeneity criteria and eliminating the gradient coil. This new architecture and encoding scheme shows convincing proof of concept images that are expected to be further improved with refinement of the calibration and methodology. PMID:24668520

  19. Volumetric synthetic aperture imaging with a piezoelectric 2D row-column probe

    NASA Astrophysics Data System (ADS)

    Bouzari, Hamed; Engholm, Mathias; Christiansen, Thomas Lehrmann; Beers, Christopher; Lei, Anders; Stuart, Matthias Bo; Nikolov, Svetoslav Ivanov; Thomsen, Erik Vilain; Jensen, Jørgen Arendt

    2016-04-01

    The synthetic aperture (SA) technique can be used for achieving real-time volumetric ultrasound imaging using 2-D row-column addressed transducers. This paper investigates SA volumetric imaging performance of an in-house prototyped 3 MHz λ/2-pitch 62+62 element piezoelectric 2-D row-column addressed transducer array. Utilizing single element transmit events, a volume rate of 90 Hz down to 14 cm deep is achieved. Data are obtained using the experimental ultrasound scanner SARUS with a 70 MHz sampling frequency and beamformed using a delay-and-sum (DAS) approach. A signal-to-noise ratio of up to 32 dB is measured on the beamformed images of a tissue mimicking phantom with attenuation of 0.5 dB cm⁻¹ MHz⁻¹, from the surface of the probe to the penetration depth of 300λ. Measured lateral resolution as Full-Width-at-Half-Maximum (FWHM) is between 4λ and 10λ for 18% to 65% of the penetration depth from the surface of the probe. The averaged contrast is 13 dB for the same range. The imaging performance assessment results may represent a reference guide for possible applications of such an array in different medical fields.
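
    The delay-and-sum (DAS) stage referred to above can be sketched generically. The function below is a minimal single-transmit-event example with assumed inputs (RF traces, element positions, image grid); apodization, sample interpolation and the summation over all transmit events of the synthetic aperture sequence are omitted.

    ```python
    import numpy as np

    def das_beamform(rf, fs, c, tx_pos, rx_pos, grid):
        """Delay-and-sum beamforming of single-transmit synthetic-aperture data.
        rf: (n_rx, n_samples) received RF traces for one transmit event,
        tx_pos: (3,) transmit element position, rx_pos: (n_rx, 3), grid: (n_pix, 3)."""
        img = np.zeros(len(grid))
        for p, pt in enumerate(grid):
            d_tx = np.linalg.norm(pt - tx_pos)                 # transmit path length
            d_rx = np.linalg.norm(rx_pos - pt, axis=1)         # receive path lengths
            idx = np.round((d_tx + d_rx) / c * fs).astype(int)  # two-way delay in samples
            valid = idx < rf.shape[1]
            img[p] = rf[np.arange(rf.shape[0])[valid], idx[valid]].sum()
        return img
    ```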

  20. Designing of sparse 2D arrays for Lamb wave imaging using coarray concept

    NASA Astrophysics Data System (ADS)

    Ambroziński, Łukasz; Stepinski, Tadeusz; Uhl, Tadeusz

    2015-03-01

    2D ultrasonic arrays have considerable application potential in Lamb wave based SHM systems, since they enable unequivocal damage imaging and even in some cases wave-mode selection. Recently, it has been shown that the 2D arrays can be used in SHM applications in a synthetic focusing (SF) mode, which is much more effective than the classical phased array mode commonly used in NDT. The SF mode assumes single-element excitation of subsequent transmitters and off-line processing of the acquired data. In the simplest implementation of the technique, only single multiplexed input and output channels are required, which results in significant hardware simplification. Application of the SF mode for 2D arrays creates additional degrees of freedom during the design of the array topology, which complicates the array design process; however, it enables sparse array designs with performance similar to that of the fully populated dense arrays. In this paper we present the coarray concept to facilitate the synthesis process of an array's aperture used in the multistatic synthetic focusing approach in Lamb waves-based imaging systems. In coherent imaging, performed in the transmit/receive mode, the sum coarray is a morphological convolution of the transmit/receive sub-arrays. It can be calculated as the set of sums of the individual sub-arrays' element locations. The coarray framework will be presented here using an example of a star-shaped array. The approach will be discussed in terms of beampatterns of the resulting imaging systems. Both simulated and experimental results will be included.
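
    The sum coarray as described, the set of pairwise sums of transmit and receive element positions, can be computed directly. The sketch below uses a toy pair of line sub-arrays rather than the star-shaped array of the paper.

    ```python
    import numpy as np

    def sum_coarray(tx_xy, rx_xy):
        """Sum coarray: set of pairwise sums of transmit and receive element positions."""
        sums = tx_xy[:, None, :] + rx_xy[None, :, :]   # (n_tx, n_rx, 2) virtual positions
        return np.unique(sums.reshape(-1, 2), axis=0)

    # Toy example: a horizontal transmit line and a vertical receive line of elements.
    tx = np.array([[x, 0.0] for x in range(4)])
    rx = np.array([[0.0, y] for y in range(4)])
    print(sum_coarray(tx, rx))   # 16 distinct virtual element positions on a 4x4 grid
    ```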

  1. Designing of sparse 2D arrays for Lamb wave imaging using coarray concept

    SciTech Connect

    Ambroziński, Łukasz Stepinski, Tadeusz Uhl, Tadeusz

    2015-03-31

    2D ultrasonic arrays have considerable application potential in Lamb wave based SHM systems, since they enable unequivocal damage imaging and even in some cases wave-mode selection. Recently, it has been shown that the 2D arrays can be used in SHM applications in a synthetic focusing (SF) mode, which is much more effective than the classical phased array mode commonly used in NDT. The SF mode assumes single-element excitation of subsequent transmitters and off-line processing of the acquired data. In the simplest implementation of the technique, only single multiplexed input and output channels are required, which results in significant hardware simplification. Application of the SF mode for 2D arrays creates additional degrees of freedom during the design of the array topology, which complicates the array design process; however, it enables sparse array designs with performance similar to that of the fully populated dense arrays. In this paper we present the coarray concept to facilitate the synthesis process of an array’s aperture used in the multistatic synthetic focusing approach in Lamb waves-based imaging systems. In coherent imaging, performed in the transmit/receive mode, the sum coarray is a morphological convolution of the transmit/receive sub-arrays. It can be calculated as the set of sums of the individual sub-arrays’ element locations. The coarray framework will be presented here using an example of a star-shaped array. The approach will be discussed in terms of beampatterns of the resulting imaging systems. Both simulated and experimental results will be included.

  2. 3D Materials image segmentation by 2D propagation: a graph-cut approach considering homomorphism.

    PubMed

    Waggoner, Jarrell; Zhou, Youjie; Simmons, Jeff; De Graef, Marc; Wang, Song

    2013-12-01

    Segmentation propagation, similar to tracking, is the problem of transferring a segmentation of an image to a neighboring image in a sequence. This problem is of particular importance to materials science, where the accurate segmentation of a series of 2D serial-sectioned images of multiple, contiguous 3D structures has important applications. Such structures may have distinct shape, appearance, and topology, which can be considered to improve segmentation accuracy. For example, some materials images may have structures with a specific shape or appearance in each serial section slice, which only changes minimally from slice to slice, and some materials may exhibit specific inter-structure topology that constrains their neighboring relations. Some of these properties have been individually incorporated to segment specific materials images in prior work. In this paper, we develop a propagation framework for materials image segmentation where each propagation is formulated as an optimal labeling problem that can be efficiently solved using the graph-cut algorithm. Our framework makes three key contributions: 1) a homomorphic propagation approach, which considers the consistency of region adjacency in the propagation; 2) incorporation of shape and appearance consistency in the propagation; and 3) a local non-homomorphism strategy to handle newly appearing and disappearing substructures during this propagation. To show the effectiveness of our framework, we conduct experiments on various 3D materials images, and compare the performance against several existing image segmentation methods.

  3. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    NASA Astrophysics Data System (ADS)

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    The stereoscopic image is often captured using dual cameras arranged side-by-side and optical path switching systems such as two separate solid lenses or biprism/mirrors. Miniaturizing current stereoscopic devices down to several millimeters comes at the expense of light entry: the limited light entry worsens the final image resolution and brightness. It is known that optofluidics offers good reconfigurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens 10 mm in diameter and a pair of binocular lenses, each 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focusing power tuning capability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is thus expected to have applications in robotic vision, stereoscopy, laparoendoscopy and miniaturized zoom lens systems.

  4. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortions has often not been discussed. However, the visualization of the distortion level is highly desired for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of transformation parameters, which were applied to the endoscopic images during the registration process of the mosaicking algorithm. For a global first view impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons identify geometrically distorted structures in the panoramic image, which allows a more objective medical interpretation of tumor tissue shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method to map panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. Then the panoramic image is mapped by the Hammer-Aitoff equal-area projection onto the 3-D surface using texture mapping. Finally the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation and documentation, as well as surgical planning.
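
    The Hammer-Aitoff equal-area projection used for the texture mapping has a simple closed form. The sketch below gives the forward mapping from longitude/latitude on the hemisphere model to planar coordinates; the inverse mapping needed for the actual texture lookup is analogous and is not shown.

    ```python
    import numpy as np

    def hammer_aitoff(lon, lat):
        """Hammer-Aitoff equal-area projection (lon, lat in radians) -> planar (x, y)."""
        z = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
        x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / z
        y = np.sqrt(2.0) * np.sin(lat) / z
        return x, y
    ```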

  5. Efficient simulation of 2+2-D multi-species plasmas waves using an Eulerian Vlasov code

    NASA Astrophysics Data System (ADS)

    Banks, Jeffrey; Berger, Richard; Chapman, Thomas; Hittinger, Jeffrey; Bruner, Stephan

    2013-10-01

    We discuss multi-species aspects of the Eulerian-based kinetic code LOKI that evolves the Vlasov-Poisson system in 2+2-dimensional phase space (Banks et al., Phys. Plasmas 18, 052102 (2011)). In order to control the inherent cost associated with phase-space simulation, our approach uses a minimally diffuse, fourth-order-accurate finite-volume discretization (Banks and Hittinger, IEEE T. Plasma Sci. 39, 2198-2207). The scheme is discretely conservative and controls unphysical oscillations. The details of the numerical scheme will be presented, and the implementation on modern highly concurrent parallel computers will be discussed. We will present results of 2D simulations of propagating ion acoustic waves (IAWs) created using an external driving potential. The evolution of the plasma wave field and associated self-consistent distribution of trapped electrons and ions is studied after the external drive is turned off. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and funded by the Laboratory Research and Development Program at LLNL under project tracking code 12-ERD-061.

  6. Verification and benchmarking of MAGNUM-2D: a finite element computer code for flow and heat transfer in fractured porous media

    SciTech Connect

    Eyler, L.L.; Budden, M.J.

    1985-03-01

    The objective of this work is to assess prediction capabilities and features of the MAGNUM-2D computer code in relation to its intended use in the Basalt Waste Isolation Project (BWIP). This objective is accomplished through a code verification and benchmarking task. Results are documented which support correctness of prediction capabilities in areas of intended model application. 10 references, 43 figures, 11 tables.

  7. Evaluation of the channelized Hotelling observer for signal detection in 2D tomographic imaging

    NASA Astrophysics Data System (ADS)

    LaRoque, Samuel J.; Sidky, Emil Y.; Edwards, Darrin C.; Pan, Xiaochuan

    2007-03-01

    Signal detection by the channelized Hotelling (ch-Hotelling) observer is studied for tomographic application by employing a small, tractable 2D model of a computed tomography (CT) system. The primary goal of this manuscript is to develop a practical method for evaluating the ch-Hotelling observer that can generalize to larger 3D cone-beam CT systems. The use of the ch-Hotelling observer for evaluating tomographic image reconstruction algorithms is also demonstrated. For a realistic model for CT, the ch-Hotelling observer can be a good approximation to the ideal observer. The ch-Hotelling observer is applied to both the projection data and the reconstructed images. The difference in signal-to-noise ratio for signal detection in both of these domains provides a metric for evaluating the image reconstruction algorithm.
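
    As a reference for the figure of merit used above, the channelized Hotelling observer reduces each image to a few channel outputs and computes a Hotelling (prewhitened matched filter) SNR in that space. A minimal sketch, with the channel matrix and the two image ensembles left as assumed inputs:

    ```python
    import numpy as np

    def ch_hotelling_snr(signal_imgs, noise_imgs, channels):
        """Channelized Hotelling observer SNR.
        signal_imgs, noise_imgs: (n_images, n_pixels); channels: (n_pixels, n_channels)."""
        v1 = signal_imgs @ channels            # channel outputs, signal-present ensemble
        v0 = noise_imgs @ channels             # channel outputs, signal-absent ensemble
        dv = v1.mean(0) - v0.mean(0)           # mean channel-output difference
        S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
        w = np.linalg.solve(S, dv)             # Hotelling template in channel space
        return np.sqrt(dv @ w)                 # SNR = sqrt(dv^T S^-1 dv)
    ```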

  8. A preliminary evaluation work on a 3D ultrasound imaging system for 2D array transducer

    NASA Astrophysics Data System (ADS)

    Zhong, Xiaoli; Li, Xu; Yang, Jiali; Li, Chunyu; Song, Junjie; Ding, Mingyue; Yuchi, Ming

    2016-04-01

    This paper presents a preliminary evaluation work on a pre-designed 3-D ultrasound imaging system. The system mainly consists of four parts: a 7.5 MHz, 24×24 2-D array transducer, the transmit/receive circuit, the power supply, and the data acquisition and real-time imaging module. The row-column addressing scheme is adopted for the transducer fabrication, which greatly reduces the number of active channels. The element area of the transducer is 4.6 mm by 4.6 mm. Four kinds of tests were carried out to evaluate the imaging performance, including the penetration depth range, axial and lateral resolution, positioning accuracy and 3-D imaging frame rate. Several strongly reflecting metal objects, fixed in a water tank, were selected for imaging due to the low signal-to-noise ratio of the transducer. The distance between the transducer and the tested objects, the thickness of the aluminum, and the seam width of the aluminum sheet were measured by a calibrated micrometer to evaluate the penetration depth, the axial resolution and the lateral resolution, respectively. The experimental results showed that the imaging penetration depth ranged from 1.0 cm to 6.2 cm, the axial and lateral resolution were 0.32 mm and 1.37 mm, respectively, the imaging speed was up to 27 frames per second, and the positioning accuracy was 9.2%.

  9. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    PubMed

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
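
    To make the DRR idea above concrete, a toy digitally reconstructed radiograph can be formed by integrating attenuation along parallel rays through the CT volume and applying Beer-Lambert attenuation. The sketch below uses a parallel-ray geometry along one volume axis for simplicity; the paper's GPU implementation casts perspective rays and uses further approximations.

    ```python
    import numpy as np

    def drr_parallel(ct_mu, axis=2, dz=1.0):
        """Toy digitally reconstructed radiograph with parallel rays along one volume axis.
        ct_mu: 3-D array of linear attenuation coefficients; returns a 2-D intensity image."""
        path = ct_mu.sum(axis=axis) * dz   # line integral of attenuation along each ray
        return np.exp(-path)               # Beer-Lambert attenuation of the primary beam
    ```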

  10. Development of ultra-fast 2D ion Doppler tomography using image intensified CMOS fast camera

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroshi; Kuwahata, Akihiro; Yamanaka, Haruki; Inomoto, Michiaki; Ono, Yasushi; TS-group Team

    2015-11-01

    The world's fastest time-resolved 2D ion Doppler tomography diagnostic has been developed using a fast camera with a high-speed gated image intensifier (frame rate: 200 kfps; phosphor decay time: ~1 μs). The time evolution of line-integrated spectra is dispersed by an f = 1 m, F/8.3, g = 2400 L/mm Czerny-Turner polychromator, whose output is intensified and recorded by the high-speed camera with a spectral resolution of ~0.005 nm/pixel. The system can accommodate up to 36 (9×4) spatial points recorded at 5 μs time resolution; tomographic reconstruction is applied to the line-integrated spectra, and time-resolved (5 μs/frame) local 2D ion temperature measurement has been achieved without any assumption of shot repeatability. Ion heating during intermittent reconnection events, which tend to occur during high-guide-field merging tokamak operation, was measured around the diffusion region in UTST. The measured 2D profile shows ion heating inside the acceleration channel of the reconnection outflow jet, at the stagnation point, and in the downstream region where the reconnected field forms a thick closed flux surface, as in MAST. The achieved maximum ion temperature increases as a function of Brec² and fits well with the MAST experiment, demonstrating a promising CS-less startup scenario for spherical tokamaks. This work is supported by JSPS KAKENHI Grant Numbers 15H05750 and 15K20921.

  11. Multi-shot compressed coded aperture imaging

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Du, Juan; Wu, Tengfei; Jin, Zhenhua

    2013-09-01

    The classical methods of compressed coded aperture (CCA) imaging still require a high-resolution optical sensor, even though the sampling rate is already below the Nyquist rate. A novel architecture of multi-shot compressed coded aperture imaging (MCCAI) using a low-resolution optical sensor is proposed, based mainly on a 4-f imaging system combined with two spatial light modulators (SLMs) to achieve compressive imaging. The first SLM, employed for random convolution, is placed at the frequency spectrum plane of the 4-f imaging system, while the second SLM, acting as a selecting filter, is positioned in front of the optical sensor. By altering the random coded pattern of the second SLM and sampling, a set of observations can easily be obtained with a low-resolution optical sensor; these observations are then combined mathematically and used to reconstruct the high-resolution image. In other words, MCCAI aims at super-resolution imaging with multiple random samplings using a low-resolution optical sensor. To improve the computational imaging performance, total variation (TV) regularization is introduced into the super-resolution reconstruction model to suppress artifacts, and the alternating direction method of multipliers (ADM) is used to solve for the optimal result efficiently. The results show that the MCCAI architecture is suitable for super-resolution computational imaging using a much lower-resolution optical sensor than traditional CCA imaging methods by capturing multiple frames.

  12. Adaptive directional lifting-based wavelet transform for image coding.

    PubMed

    Ding, Wenpeng; Wu, Feng; Wu, Xiaolin; Li, Shipeng; Li, Houqiang

    2007-02-01

    We present a novel 2-D wavelet transform scheme of adaptive directional lifting (ADL) in image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, ADL performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The ADL transform is achieved by existing 1-D wavelets and is seamlessly integrated into the global wavelet transform. The predicting and updating signals of ADL can be derived even at the fractional pixel precision level to achieve high directional resolution, while still maintaining perfect reconstruction. To enhance the ADL performance, a rate-distortion optimized directional segmentation scheme is also proposed to form and code a hierarchical image partition adapting to local features. Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with the improvement up to 2.0 dB on images with rich orientation features.

  13. Wideband 2-D Array Design Optimization With Fabrication Constraints for 3-D US Imaging.

    PubMed

    Roux, Emmanuel; Ramalli, Alessandro; Liebgott, Herve; Cachard, Christian; Robini, Marc C; Tortoli, Piero

    2017-01-01

    Ultrasound (US) 2-D arrays are of increasing interest due to their electronic steering capability to investigate 3-D regions without requiring any probe movement. These arrays are typically populated by thousands of elements that, ideally, should be individually driven by the companion scanner. Since this is not convenient, the so-called microbeamforming methods, yielding a prebeamforming stage performed in the probe handle by suitable custom integrated circuits, have so far been implemented in a few commercial high-end scanners. A possible approach to implement relatively cheap and efficient 3-D US imaging systems is using 2-D sparse arrays in which a limited number of elements can be coupled to an equal number of independent transmit/receive channels. In order to obtain US beams with adequate characteristics all over the investigated volume, the layout of such arrays must be carefully designed. This paper provides guidelines to design, by using simulated annealing optimization, 2-D sparse arrays capable of fitting specific applications or fabrication/implementation constraints. In particular, an original energy function based on multidepth 3-D analysis of the beam pattern is also exploited. A tutorial example is given, addressed to find the Ne elements that should be activated in a 2-D fully populated array to yield efficient acoustic radiating performance over the entire volume. The proposed method is applied to a 32×32 array centered at 3 MHz to select the 128, 192, and 256 elements that provide the best acoustic performance. It is shown that the 256-element optimized array yields sidelobe levels even lower than those of the reference 716-element circular array (by 5.7 dB) and the reference 1024-element array (by 10.3 dB).

  14. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  15. Diesel combustion and emissions formation using multiple 2-D imaging diagnostics

    SciTech Connect

    Dec, J.E.

    1997-12-31

    Understanding how emissions are formed during diesel combustion is central to developing new engines that can comply with increasingly stringent emission standards while maintaining or improving performance levels. Laser-based planar imaging diagnostics are uniquely capable of providing the temporally and spatially resolved information required for this understanding. Using an optically accessible research engine, a variety of two-dimensional (2-D) imaging diagnostics have been applied to investigations of direct-injection (DI) diesel combustion and emissions formation. These optical measurements have included the following laser-sheet imaging data: Mie scattering to determine liquid-phase fuel distributions, Rayleigh scattering for quantitative vapor-phase-fuel/air mixture images, laser-induced incandescence (LII) for relative soot concentrations, simultaneous LII and Rayleigh scattering for relative soot particle-size distributions, planar laser-induced fluorescence (PLIF) to obtain early PAH (polyaromatic hydrocarbon) distributions, PLIF images of the OH radical that show the diffusion flame structure, and PLIF images of the NO radical showing the onset of NOx production. In addition, natural-emission chemiluminescence images were obtained to investigate autoignition. The experimental setup is described, and the image data showing the most relevant results are presented. Then the conceptual model of diesel combustion is summarized in a series of idealized schematics depicting the temporal and spatial evolution of a reacting diesel fuel jet during the time period investigated. Finally, recent PLIF images of the NO distribution are presented and shown to support the timing and location of NO formation hypothesized from the conceptual model.

  16. Beamforming of Ultrasound Signals from 1-D and 2-D Arrays under Challenging Imaging Conditions

    NASA Astrophysics Data System (ADS)

    Jakovljevic, Marko

    Beamforming of ultrasound signals in the presence of clutter or partial aperture blockage by an acoustic obstacle can lead to reduced visibility of the structures of interest and diminished diagnostic value of the resulting image. We propose new beamforming methods to recover the quality of ultrasound images under such challenging conditions. Of special interest are the signals from large apertures, which are more susceptible to partial blockage, and from commercial matrix arrays that suffer from low sensitivity due to inherent design/hardware limitations. A coherence-based beamforming method designed for suppressing the in vivo clutter, namely Short-lag Spatial Coherence (SLSC) Imaging, is first implemented on a 1-D array to enhance visualization of liver vasculature in 17 human subjects. The SLSC images show statistically significant improvements in vessel contrast and contrast-to-noise ratio over the matched B-mode images. The concept of SLSC imaging is then extended to matrix arrays, and the first in vivo demonstration of volumetric SLSC imaging on a clinical ultrasound system is presented. The effective suppression of clutter via volumetric SLSC imaging indicates it could potentially compensate for the low sensitivity associated with most commercial matrix arrays. The rest of the dissertation assesses image degradation due to elements blocked by ribs in a transthoracic scan. A method to detect the blocked elements is demonstrated using simulated, ex vivo, and in vivo data from the fully-sampled 2-D apertures. The results show that turning off the blocked elements both reduces the near-field clutter and improves visibility of anechoic/hypoechoic targets. Most importantly, the ex vivo data from large synthetic apertures indicates that the adaptive weighting of the non-blocked elements can recover the loss of focus quality due to the periodic rib structure, allowing large apertures to realize their full resolution potential in transthoracic ultrasound.

  17. Encrypting 2D/3D image using improved lensless integral imaging in Fresnel domain

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Wei; Wang, Qiong-Hua; Kim, Seok-Tae; Lee, In-Kwon

    2016-12-01

    We propose a new image encryption technique that, for the first time to our knowledge, combines the Fresnel transform with an improved lensless integral imaging technique. In this work, before encryption, the input image is first recorded into an elemental image array (EIA) using the improved lensless integral imaging technique. The recorded EIA is encrypted into random noise by means of two phase masks located in the Fresnel domain. The positions of the phase masks and the operating wavelength, as well as the integral imaging system parameters, are used as encryption keys to ensure security. Compared with previous works, the main novelty of the proposed method is that the elemental images possess a distributed-memory characteristic, which greatly improves the robustness of the image encryption algorithm. Meanwhile, the proposed pixel averaging algorithm effectively addresses the overlapping problem in the computational integral imaging reconstruction process. Numerical simulations are presented to demonstrate the feasibility and effectiveness of the proposed method. The results also indicate high robustness against data loss attacks.

  18. Predictive depth coding of wavelet transformed images

    NASA Astrophysics Data System (ADS)

    Lehtinen, Joonas

    1999-10-01

    In this paper, a new prediction based method, predictive depth coding, for lossy wavelet image compression is presented. It compresses a wavelet pyramid composition by predicting the number of significant bits in each wavelet coefficient quantized by the universal scalar quantization and then by coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients on lower scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW and context based algorithms. Even though the algorithm is very simple and it does not require any extra memory, the compression results are relatively good.
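
    The "depth" being predicted is essentially the number of significant bits of each quantized coefficient. A minimal sketch of that idea follows, using a simplified causal (left/top) prediction context rather than the adaptively selected multi-band context of the paper, and omitting the arithmetic coding of the errors.

    ```python
    import numpy as np

    def sig_bits(c, q=1.0):
        """Number of significant bits of a uniformly quantized coefficient (its 'depth')."""
        return int(abs(c) / q).bit_length()

    def depth_prediction_errors(subband, q=1.0):
        """Predict each coefficient's depth from its causal (left/top) neighbours and
        return the prediction errors that an entropy coder would then encode."""
        rows, cols = subband.shape
        d = np.array([[sig_bits(subband[i, j], q) for j in range(cols)] for i in range(rows)])
        err = np.zeros_like(d)
        for i in range(rows):
            for j in range(cols):
                ctx = ([d[i, j - 1]] if j > 0 else []) + ([d[i - 1, j]] if i > 0 else [])
                pred = int(round(sum(ctx) / len(ctx))) if ctx else 0
                err[i, j] = d[i, j] - pred
        return err
    ```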

  19. Design of the 2D electron cyclotron emission imaging instrument for the J-TEXT tokamak

    NASA Astrophysics Data System (ADS)

    Pan, X. M.; Yang, Z. J.; Ma, X. D.; Zhu, Y. L.; Luhmann, N. C.; Domier, C. W.; Ruan, B. W.; Zhuang, G.

    2016-11-01

    A new 2D Electron Cyclotron Emission Imaging (ECEI) diagnostic is being developed for the J-TEXT tokamak. It will provide the 2D electron temperature information with high spatial, temporal, and temperature resolution. The new ECEI instrument is being designed to support fundamental physics investigations on J-TEXT including MHD, disruption prediction, and energy transport. The diagnostic contains two dual dipole antenna arrays corresponding to F band (90-140 GHz) and W band (75-110 GHz), respectively, and comprises a total of 256 channels. The system can observe the same magnetic surface at both the high field side and low field side simultaneously. An advanced optical system has been designed which permits the two arrays to focus on a wide continuous region or two radially separate regions with high imaging spatial resolution. It also incorporates excellent field curvature correction with field curvature adjustment lenses. An overview of the diagnostic and the technical progress including the new remote control technique are presented.

  20. 2D Feature Recognition And 3d Reconstruction In Solar Euv Images

    NASA Astrophysics Data System (ADS)

    Aschwanden, Markus J.

    2005-05-01

    EUV images show the solar corona in a typical temperature range of T ≳ 1 MK, which encompasses the most common coronal structures: loops, filaments, and other magnetic structures in active regions, the quiet Sun, and coronal holes. Quantitative analysis increasingly demands automated 2D feature recognition and 3D reconstruction, in order to localize, track, and monitor the evolution of such coronal structures. We discuss numerical tools that “fingerprint” curvilinear 1D features (e.g., loops and filaments). We discuss existing fingerprinting algorithms, such as the brightness-gradient method, the oriented-connectivity method, stereoscopic methods, time-differencing, and space-time feature recognition. We discuss improved 2D feature recognition and 3D reconstruction techniques that make use of additional a priori constraints, using guidance from magnetic field extrapolations, curvature radii constraints, and acceleration and velocity constraints in time-dependent image sequences. Applications of these algorithms aid the analysis of SOHO/EIT, TRACE, and STEREO/SECCHI data, such as disentangling, 3D reconstruction, and hydrodynamic modeling of coronal loops, postflare loops, filaments, prominences, and 3D reconstruction of the coronal magnetic field in general.

  1. Groundwater Exploration Using 2-D Resistivity Imaging Technique in Marang, Terengganu, Malaysia

    NASA Astrophysics Data System (ADS)

    Kadri, Muhammad; Nawawi, M. N. M.

    2010-07-01

    Surface water is critically important for supplying water to streams and wetlands and for providing water for irrigation, manufacturing, electric power and other uses; it is an important source of water supply in various regions of Malaysia and becomes ever more important with an increasing population. Groundwater, however, can be an alternative source of water for this growing population. Groundwater is water located beneath the ground surface in soil pore spaces and in the fractures of lithologic formations, and it provides an alternative freshwater source. In order to determine the existence of usable groundwater for agricultural purposes in Marang, Terengganu, the 2-D resistivity imaging technique was utilized. Three lines were surveyed at the site. The 2-D resistivity imaging technique utilized the pole-dipole array because it gives relatively good horizontal coverage and significantly higher signal strength. The total length of the survey lines is 400 meters. Three lines were surveyed for groundwater delineation purposes. At Marang, the survey site shows the existence of groundwater. The maximum depth of investigation for the surveys is 125 meters. In general, the results show that the subsurface is made up of sand and clay (resistivity values of less than 100 ohm-m) and sandstone with resistivity of more than 2000 ohm-m in all the sections. This zone can be a source of groundwater.

  2. Groundwater exploration using 2D Resistivity Imaging in Pagoh, Johor, Malaysia

    NASA Astrophysics Data System (ADS)

    Kadri, Muhammad; Nawawi, M. N. M.

    2010-12-01

    Groundwater is a very important component of water resources in nature. Since the demand for groundwater increases with population growth, it is necessary to explore groundwater more intensively. In Malaysia, less than 2% of the water presently used is developed from groundwater. In order to determine the existence of usable groundwater for irrigation and drinking purposes in Pagoh, the 2D resistivity imaging technique was utilized. The technique employed the Wenner-Schlumberger electrode array configuration because this array is moderately sensitive to both horizontal and vertical structures. Three lines were surveyed for groundwater delineation purposes. The length of each survey line is 400 meters. At Pagoh, the survey site shows the existence of groundwater, indicated by resistivity values of about 10-100 ohm-m. The maximum depth of investigation is 77 meters. In general, the results show that the subsurface is made up of alluvium and clay; the high resistivity values of more than 1000 ohm-m near the surface are due to laterite, and the bottom of the section can be interpreted as a mixture of weathered material or bedrock.

  3. SNARK09 - a software package for reconstruction of 2D images from 1D projections.

    PubMed

    Klukowska, Joanna; Davidi, Ran; Herman, Gabor T

    2013-06-01

    The problem of reconstruction of slices and volumes from 1D and 2D projections has arisen in a large number of scientific fields (including computerized tomography, electron microscopy, X-ray microscopy, radiology, radio astronomy and holography). Many different methods (algorithms) have been suggested for its solution. In this paper we present a software package, SNARK09, for reconstruction of 2D images from their 1D projections. In the area of image reconstruction, researchers often desire to compare two or more reconstruction techniques and assess their relative merits. SNARK09 provides a uniform framework to implement algorithms and evaluate their performance. It has been designed to treat both parallel and divergent projection geometries and can either create test data (with or without noise) for use by reconstruction algorithms or use data collected by another software or a physical device. A number of frequently-used classical reconstruction algorithms are incorporated. The package provides a means for easy incorporation of new algorithms for their testing, comparison and evaluation. It comes with tools for statistical analysis of the results and ten worked examples.
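
    As an illustration of the kind of reconstruction-from-1D-projections problem SNARK09 addresses, the sketch below simulates parallel projections of a 2D image and runs a simple SIRT-like iterative reconstruction. It is a toy stand-in, not one of the package's algorithms; the rotation-based projector and the relaxation factor are simplifying assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import rotate

    def project(img, angles):
        """Parallel 1-D projections of a 2-D image at the given angles (degrees)."""
        return np.array([rotate(img, a, reshape=False, order=1).sum(axis=0) for a in angles])

    def backproject(profiles, angles, shape):
        """Smear each 1-D profile across the image and rotate it back into place."""
        out = np.zeros(shape)
        for row, a in zip(profiles, angles):
            out += rotate(np.tile(row, (shape[0], 1)), -a, reshape=False, order=1)
        return out

    def sirt(sino, angles, shape, n_iter=20, relax=0.1):
        """Toy iterative (SIRT-like) reconstruction of a 2-D image from 1-D projections."""
        rec = np.zeros(shape)
        for _ in range(n_iter):
            residual = sino - project(rec, angles)
            rec += relax * backproject(residual, angles, shape) / len(angles)
        return rec
    ```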

  4. Design of the 2D electron cyclotron emission imaging instrument for the J-TEXT tokamak.

    PubMed

    Pan, X M; Yang, Z J; Ma, X D; Zhu, Y L; Luhmann, N C; Domier, C W; Ruan, B W; Zhuang, G

    2016-11-01

    A new 2D Electron Cyclotron Emission Imaging (ECEI) diagnostic is being developed for the J-TEXT tokamak. It will provide the 2D electron temperature information with high spatial, temporal, and temperature resolution. The new ECEI instrument is being designed to support fundamental physics investigations on J-TEXT including MHD, disruption prediction, and energy transport. The diagnostic contains two dual dipole antenna arrays corresponding to F band (90-140 GHz) and W band (75-110 GHz), respectively, and comprises a total of 256 channels. The system can observe the same magnetic surface at both the high field side and low field side simultaneously. An advanced optical system has been designed which permits the two arrays to focus on a wide continuous region or two radially separate regions with high imaging spatial resolution. It also incorporates excellent field curvature correction with field curvature adjustment lenses. An overview of the diagnostic and the technical progress including the new remote control technique are presented.

  5. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimations (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.

  6. Image compression with embedded multiwavelet coding

    NASA Astrophysics Data System (ADS)

    Liang, Kai-Chieh; Li, Jin; Kuo, C.-C. Jay

    1996-03-01

    An embedded image coding scheme using the multiwavelet transform and inter-subband prediction is proposed in this research. The new proposed coding scheme consists of the following building components: GHM multiwavelet transform, prediction across subbands, successive approximation quantization, and adaptive binary arithmetic coding. Our major contribution is the introduction of a set of prediction rules to fully exploit the correlations between multiwavelet coefficients in different frequency bands. The performance of the proposed new method is comparable to that of state-of-the-art wavelet compression methods.

  7. Computing Challenges in Coded Mask Imaging

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for Coded Mask Imaging telescopes. The coded mask technique is used when there is no other way to create the telescope (i.e., when wide fields of view are needed, when energies are too high for focusing optics or too low for Compton/tracker techniques, and when very good angular resolution is required). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, a comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.
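
    The image-recovery step mentioned above, correlating the recorded shadowgram with a decoding array derived from the mask pattern, can be sketched in a few lines. The random mask, source positions and cyclic boundary model below are illustrative assumptions, not any particular instrument's design; the recovered sources appear up to the shift/flip implied by the centering convention.

    ```python
    import numpy as np
    from scipy.signal import convolve2d, correlate2d

    rng = np.random.default_rng(1)
    mask = rng.integers(0, 2, (31, 31)).astype(float)   # hypothetical open/closed mask pattern
    sky = np.zeros((31, 31))
    sky[10, 20], sky[18, 5] = 1.0, 0.6                  # two point sources

    # Detector shadowgram: each source casts a shifted copy of the mask pattern.
    detector = convolve2d(sky, mask, mode='same', boundary='wrap')

    # Balanced cross-correlation decoding: open cells weighted +1, closed cells -1.
    decoder = 2.0 * mask - 1.0
    image = correlate2d(detector, decoder, mode='same', boundary='wrap')
    ```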

  8. Image Coding Based on Address Vector Quantization.

    NASA Astrophysics Data System (ADS)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match, and the index is sent to the channel. Reconstruction of the image is done by a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the use of the Kohonen neural network for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but at a bit rate of about 1/2 to 1/3 that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme suitable for color video coding, called "A Self-Organizing
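
    The baseline VQ pipeline described above (codebook design by iterative clustering, encoding as nearest-codeword indices, decoding by table lookup) can be sketched with scipy's k-means tools; the block size and codebook size below are illustrative, and none of the address-VQ extensions of the thesis are implemented.

    ```python
    import numpy as np
    from scipy.cluster.vq import kmeans, vq

    def vq_encode_decode(img, block=4, n_codewords=64):
        """Plain VQ: k-means codebook over image blocks, index transmission, table lookup."""
        h, w = (d - d % block for d in img.shape)        # crop to whole blocks
        blocks = (img[:h, :w].astype(float)
                  .reshape(h // block, block, w // block, block)
                  .swapaxes(1, 2).reshape(-1, block * block))
        codebook, _ = kmeans(blocks, n_codewords)        # codebook design (k-means clustering)
        indices, _ = vq(blocks, codebook)                # encoder: index of best-matching codeword
        rec = (codebook[indices]                         # decoder: simple table lookup
               .reshape(h // block, w // block, block, block)
               .swapaxes(1, 2).reshape(h, w))
        return indices, rec
    ```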

  9. TU-CD-207-08: Intrinsic Image Quality Comparison of Synthesized 2-D and FFDM Images

    SciTech Connect

    Nelson, J; Wells, J; Samei, E

    2015-06-15

    Purpose: With the combined interest of managing patient dose, maintaining or improving image quality, and maintaining or improving the diagnostic utility of mammographic data, this study aims to compare the intrinsic image quality of Hologic’s synthesized 2-D (C-View) and 2-D FFDM images in terms of resolution, contrast, and noise. Methods: This study utilized a novel 3-D printed anthropomorphic breast phantom in addition to the American College of Radiology (ACR) mammography accreditation phantom. Analysis of the 3-D anthropomorphic phantom included visual assessment of resolution and analysis of the normalized noise power spectrum. Analysis of the ACR phantom included both visual inspection and objective automated analysis using in-house software. The software incorporates image- and object-specific CNR visibility thresholds which account for image characteristics such as noise texture which affect object visualization. T-test statistical analysis was also performed on ACR phantom scores. Results: The spatial resolution of C-View images is markedly lower (at least 50% worse) than that of FFDM. And while this is generally associated with the benefit of reduced relative noise magnitude, the noise in C-View images tends to have a more mottled (predominantly low-frequency) texture. In general, for high contrast objects, C-View provides superior visualization over FFDM; however, this benefit diminishes for low contrast objects and is applicable only to objects that are sufficiently larger than the spatial resolution threshold. Based on both observer and automated ACR phantom analysis, between 50–70% of C-View images failed to meet ACR minimum accreditation requirements – primarily due to insufficient (unbroken) fiber visibility. Conclusion: Compared to FFDM, C-View offers better depiction of objects of certain size and contrast, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of C-View images in the clinical

  10. Development of a 2D Image Reconstruction and Viewing System for Histological Images from Multiple Tissue Blocks: Towards High-Resolution Whole-Organ 3D Histological Images.

    PubMed

    Hashimoto, Noriaki; Bautista, Pinky A; Haneishi, Hideaki; Snuderl, Matija; Yagi, Yukako

    2016-01-01

    High-resolution 3D histology image reconstruction of the whole brain organ starts from reconstructing the high-resolution 2D histology images of a brain slice. In this paper, we introduced a method to automatically align the histology images of thin tissue sections cut from the multiple paraffin-embedded tissue blocks of a brain slice. For this method, we employed template matching and incorporated an optimization technique to further improve the accuracy of the 2D reconstructed image. In the template matching, we used the gross image of the brain slice as a reference to the reconstructed 2D histology image of the slice, while in the optimization procedure, we utilized the Jaccard index as the metric of the reconstruction accuracy. The results of our experiment on the initial 3 different whole-brain tissue slices showed that while the method works, it is also constrained by tissue deformations introduced during the tissue processing and slicing. The size of the reconstructed high-resolution 2D histology image of a brain slice is huge, and designing an image viewer that makes particularly efficient use of the computing power of a standard computer used in our laboratories is of interest. We also present the initial implementation of our 2D image viewer system in this paper.
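
    The Jaccard index used above as the reconstruction-accuracy metric is straightforward to compute; below is a minimal sketch for two binary masks (for example a rasterized gross-image mask and a reconstructed-slide mask), with toy arrays standing in for real data.

        import numpy as np

        def jaccard_index(mask_a, mask_b):
            """Jaccard index = |A intersect B| / |A union B| for two binary masks."""
            a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
            union = np.logical_or(a, b).sum()
            return np.logical_and(a, b).sum() / union if union else 1.0

        # Toy example: two overlapping squares on a small grid
        a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
        b = np.zeros((10, 10), bool); b[4:9, 4:9] = True
        print(jaccard_index(a, b))    # 9 / 41, roughly 0.22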

  11. Rotationally symmetric triangulation sensor with integrated object imaging using only one 2D detector

    NASA Astrophysics Data System (ADS)

    Eckstein, Johannes; Lei, Wang; Becker, Jonathan; Jun, Gao; Ott, Peter

    2006-04-01

    In this paper a distance measurement sensor is introduced, equipped with two integrated optical systems, the first one for rotationally symmetric triangulation and the second one for imaging the object, while using only one 2D detector for both purposes. Rotationally symmetric triangulation, introduced in [1], eliminates some disadvantages of classical triangulation sensors, especially at steps or strong curvatures of the object, so that the measurement result no longer depends on the angular orientation of the sensor. This is achieved by imaging the scattered light from an illuminated object point to a centered and sharp ring on a low-cost area detector. The diameter of the ring is proportional to the distance of the object. The optical system consists of two off-axis aspheric reflecting surfaces. This system allows for integrating a second optical system in order to capture images of the object on the same 2D detector. A mock-up was realized for the first time; it consists of the reflecting optics for triangulation, manufactured by diamond turning. An appropriate, commercially available small lens system for imaging was mechanically integrated into the reflecting optics. Alternatively, some designs of retrofocus lens systems for larger fields of view were investigated. The optical designs allow overlaying the image of the object and the ring for distance measurement in the same plane. In this plane a CCD detector is mounted, centered on the optical axis of both channels. A fast algorithm for the evaluation of the ring is implemented. The characteristic, i.e. the ring diameter versus object distance, shows very linear behavior. For illumination of the object point for distance measurement, the beam of a red laser diode system is reflected by a wavelength bandpass filter onto the axis of the optical system. Additionally, the surface of the object is illuminated by LEDs in the green spectrum. The LEDs are located on the outside rim of the reflecting optics. The

  12. Modelling of ELM-averaged power exhaust on JET using the EDGE2D code with variable transport coefficients

    NASA Astrophysics Data System (ADS)

    Kirnev, G.; Fundamenski, W.; Corrigan, G.

    2007-06-01

    The scrape-off layer (SOL) of the JET tokamak has been modelled using a two-dimensional plasma/neutral code, EDGE2D/NIMBUS, with variable transport coefficients, chosen according to nine candidate theories for radial heat transport in the SOL. Comparison of the radial power width on the outer divertor plates, λq, predicted by modelling and measured experimentally in L-mode and ELM-averaged H-mode at JET is presented. Transport coefficients based on classical and neo-classical ion conduction are found to offer the best agreement with experimentally measured λq magnitude and scaling with target power, upstream density and toroidal field. These results reinforce the findings of an earlier study, based on a simplified model of the SOL (Chankin 1997 Plasma Phys. Control. Fusion 39 1059), and support the earlier estimate of the power width at the entrance of the outer divertor volume in ITER, λq ≈ 4 mm mapped to the outer mid-plane (Fundamenski et al 2004 Nucl. Fusion 44 20).

  13. A survey among Brazilian thoracic surgeons about the use of preoperative 2D and 3D images

    PubMed Central

    Cipriano, Federico Enrique Garcia; Arcêncio, Livia; Dessotte, Lycio Umeda; Rodrigues, Alfredo José; Vicente, Walter Villela de Andrade

    2016-01-01

    Background To describe how thoracic surgeons use 2D/3D medical imaging for surgical planning, clinical practice and teaching in thoracic surgery, and to compare the initial and final choices of Brazilian thoracic surgeons between 2D and 3D model images before and after acquiring theoretical knowledge of the generation, manipulation and interactive viewing of 3D images. Methods A descriptive cross-sectional survey of data provided by Brazilian thoracic surgeons (members of the Brazilian Society of Thoracic Surgery) who responded to an online questionnaire via the internet on their computers or personal devices. Results Of the 395 invitations distributed by email and visualized, 107 surgeons completed the survey. There was no statistically significant difference when comparing 2D vs. 3D model images for the following purposes: diagnosis, assessment of the extent of disease, preoperative surgical planning, communication among physicians, resident training, and undergraduate medical education. The type of tomographic image display routinely used in clinical practice (2D, 3D, or a combined 2D–3D model image) was compared with the one preferred by the surgeon at the end of the questionnaire. For exclusive use of 2D images: initial choice = 50.47% and final preference = 14.02%. For 3D models used in combination with 2D images: initial choice = 48.60% and final preference = 85.05%. There was a significant change toward the final selection of 3D models used together with 2D images (P<0.0001). Conclusions There is a lack of knowledge of 3D imaging, as well as of its use and interactive manipulation in dedicated 3D applications, with a consequent lack of uniformity in surgical planning based on CT images. These findings confirm a change in the preference of thoracic surgeons from 2D views toward 3D imaging technologies. PMID:27621874

  14. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

    Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method to reconstruct physical parameters of the earth interior at different scales ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the resolution of the frequency-domain 2D PSV elastodynamics equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows the gradient of the objective function to be computed efficiently. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy which helps convergence towards the global minimum. In place of the expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, LBFGS, ...) improves the convergence of the iterative inversion. The distribution of forward problem solutions over processors, driven by a mesh partitioning performed by METIS, allows most of the inversion to be performed in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performance with realistic synthetic case studies.

  15. On the use of steady-state signal equations for 2D TrueFISP imaging.

    PubMed

    Coolen, Bram F; Heijman, Edwin; Nicolay, Klaas; Strijkers, Gustav J

    2009-07-01

    To explain the signal behavior in 2D-TrueFISP imaging, a slice excitation profile should be considered that describes a variation of effective flip angles and magnetization phases after excitation. These parameters can be incorporated into steady-state equations to predict the final signal within a pixel. The use of steady-state equations assumes that excitation occurs instantaneously, although in reality this is a nonlinear process. In addition, often only the flip angle variation within the slice excitation profile is considered when using steady-state equations, while TrueFISP is especially known for its sensitivity to phase variations. The purpose of this study was therefore to evaluate the precision of steady-state equations in calculating signal intensities in 2D TrueFISP imaging. To that end, steady-state slice profiles and corresponding signal intensities were calculated as a function of flip angle, RF phase advance and pulse shape. More complex Bloch simulations, which described every excitation within the sequence until steady state was reached, were considered as a gold standard. They were used to analyze two different methods based on steady-state equations. In addition, measurements on phantoms were performed with corresponding imaging parameters. Although the Bloch simulations described the steady-state slice profile formation better than methods based on steady-state equations, the latter performed well in predicting the steady-state signal resulting from it. In certain cases the phase variation within the slice excitation profile did not even have to be taken into account.

  16. Singular value decomposition-based 2D image reconstruction for computed tomography.

    PubMed

    Liu, Rui; He, Lu; Luo, Yan; Yu, Hengyong

    2017-01-01

    Singular value decomposition (SVD)-based 2D image reconstruction methods are developed and evaluated for a broad class of inverse problems for which there are no analytical solutions. The proposed methods are fast and accurate for reconstructing images in a non-iterative fashion. The multi-resolution strategy is adopted to reduce the size of the system matrix to reconstruct large images using limited memory capacity. A modified high-contrast Shepp-Logan phantom, a low-contrast FORBILD head phantom, and a physical phantom are employed to evaluate the proposed methods with different system configurations. The results show that the SVD methods can accurately reconstruct images from standard scan and interior scan projections and that they outperform other benchmark methods. The general SVD method outperforms the other SVD methods. The truncated SVD and Tikhonov regularized SVD methods accurately reconstruct a region-of-interest (ROI) from an internal scan with a known sub-region inside the ROI. Furthermore, the SVD methods are much faster and more flexible than the benchmark algorithms, especially in the ROI reconstructions in our experiments.
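
    As a rough illustration of the truncated-SVD and Tikhonov-regularized variants mentioned above, the sketch below solves a small toy system Ax = b by filtering the singular values; the random system matrix is a stand-in only and has nothing to do with the actual CT system matrices or the multi-resolution strategy used in the paper.

        import numpy as np

        def svd_reconstruct(A, b, method="tikhonov", k=None, lam=1e-2):
            """Solve A x = b via SVD: truncated or Tikhonov-regularized pseudo-inverse."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            coeffs = U.T @ b
            if method == "truncated":
                k = k or int(np.sum(s > 1e-10 * s[0]))
                return Vt[:k].T @ (coeffs[:k] / s[:k])
            # Tikhonov filter factors s / (s^2 + lam^2)
            return Vt.T @ (coeffs * s / (s**2 + lam**2))

        # Toy projection system: 200 random "rays" through an 8x8 image (64 unknowns)
        rng = np.random.default_rng(0)
        A = rng.random((200, 64))
        x_true = np.zeros(64); x_true[27] = 1.0
        b = A @ x_true + 0.01 * rng.standard_normal(200)
        print(int(np.argmax(svd_reconstruct(A, b))))   # expected: 27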

  17. Vector lifting schemes for stereo image coding.

    PubMed

    Kaaniche, Mounir; Benazza-Benyahia, Amel; Pesquet-Popescu, Béatrice; Pesquet, Jean-Christophe

    2009-11-01

    Many research efforts have been devoted to the improvement of stereo image coding techniques for storage or transmission. In this paper, we are mainly interested in lossy-to-lossless coding schemes for stereo images allowing progressive reconstruction. The most commonly used approaches for stereo compression are based on disparity compensation techniques. The basic principle involved in this technique first consists of estimating the disparity map. Then, one image is considered as a reference and the other is predicted in order to generate a residual image. In this paper, we propose a novel approach, based on vector lifting schemes (VLS), which offers the advantage of generating two compact multiresolution representations of the left and the right views. We present two versions of this new scheme. A theoretical analysis of the performance of the considered VLS is also conducted. Experimental results indicate a significant improvement using the proposed structures compared with conventional methods.
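
    The lifting idea underlying vector lifting schemes (predict one polyphase component from the other, then update) can be illustrated with the scalar Haar case. This toy sketch is not the VLS of the paper, which lifts the left and right views jointly and exploits disparity information, but it shows the predict/update steps and the perfect-reconstruction property.

        import numpy as np

        def haar_lift_forward(x):
            """One level of the Haar wavelet transform written as a lifting scheme."""
            even, odd = x[0::2].astype(float), x[1::2].astype(float)
            detail = odd - even              # predict: odd samples predicted by even ones
            approx = even + detail / 2       # update: preserve the local mean
            return approx, detail

        def haar_lift_inverse(approx, detail):
            even = approx - detail / 2
            odd = detail + even
            x = np.empty(even.size * 2)
            x[0::2], x[1::2] = even, odd
            return x

        x = np.array([3, 5, 2, 8, 7, 7, 1, 0], dtype=float)
        a, d = haar_lift_forward(x)
        print(np.allclose(haar_lift_inverse(a, d), x))   # True: perfect reconstruction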

  18. A 2D to 3D ultrasound image registration algorithm for robotically assisted laparoscopic radical prostatectomy

    NASA Astrophysics Data System (ADS)

    Esteghamatian, Mehdi; Pautler, Stephen E.; McKenzie, Charles A.; Peters, Terry M.

    2011-03-01

    Robotically assisted laparoscopic radical prostatectomy (RARP) is an effective approach to resect the diseased organ, with stereoscopic views of the targeted tissue improving the dexterity of the surgeons. However, since the laparoscopic view acquires only the surface image of the tissue, the underlying distribution of the cancer within the organ is not observed, making it difficult to make informed decisions on surgical margins and sparing of neurovascular bundles. One option to address this problem is to exploit registration to integrate the laparoscopic view with images of pre-operatively acquired dynamic contrast enhanced (DCE) MRI that can demonstrate the regions of malignant tissue within the prostate. Such a view potentially allows the surgeon to visualize the location of the malignancy with respect to the surrounding neurovascular structures, permitting a tissue-sparing strategy to be formulated directly based on the observed tumour distribution. If the tumour is close to the capsule, it may be determined that the adjacent neurovascular bundle (NVB) needs to be sacrificed within the surgical margin to ensure that any erupted tumour was resected. On the other hand, if the cancer is sufficiently far from the capsule, one or both NVBs may be spared. However, in order to realize such image integration, the pre-operative image needs to be fused with the laparoscopic view of the prostate. During the initial stages of the operation, the prostate must be tracked in real time so that the pre-operative MR image remains aligned with the patient coordinate system. In this study, we propose and investigate a novel 2D to 3D ultrasound image registration algorithm to track the prostate motion with an accuracy of 2.68 ± 1.31 mm.

  19. Coded Access Optical Sensor (CAOS) Imager

    NASA Astrophysics Data System (ADS)

    Riza, N. A.; Amin, M. J.; La Torre, J. P.

    2015-04-01

    High spatial resolution, low inter-pixel crosstalk, high signal-to-noise ratio (SNR), adequate application dependent speed, economical and energy efficient design are common goals sought after for optical image sensors. In optical microscopy, overcoming the diffraction limit in spatial resolution has been achieved using materials chemistry, optimal wavelengths, precision optics and nanomotion-mechanics for pixel-by-pixel scanning. Imagers based on pixelated imaging devices such as CCD/CMOS sensors avoid pixel-by-pixel scanning as all sensor pixels operate in parallel, but these imagers are fundamentally limited by inter-pixel crosstalk, in particular with interspersed bright and dim light zones. In this paper, we propose an agile pixel imager sensor design platform called Coded Access Optical Sensor (CAOS) that can greatly alleviate the mentioned fundamental limitations, empowering smart optical imaging for particular environments. Specifically, this novel CAOS imager engages an application dependent electronically programmable agile pixel platform using hybrid space-time-frequency coded multiple-access of the sampled optical irradiance map. We demonstrate the foundational working principles of the first experimental electronically programmable CAOS imager using hybrid time-frequency multiple access sampling of a known high contrast laser beam irradiance test map, with the CAOS instrument based on a Texas Instruments (TI) Digital Micromirror Device (DMD). This CAOS instrument provides imaging data that exhibits 77 dB electrical SNR and the measured laser beam image irradiance specifications closely match (i.e., within 0.75% error) the laser manufacturer provided beam image irradiance radius numbers. The proposed CAOS imager can be deployed in many scientific and non-scientific applications where pixel agility via electronic programmability can pull out desired features in an irradiance map subject to the CAOS imaging operation.

  20. A high speed 2D time-to-impact algorithm targeted for smart image sensors

    NASA Astrophysics Data System (ADS)

    Åström, Anders; Forchheimer, Robert

    2014-03-01

    In this paper we present a 2D extension of a previously described 1D method for a time-to-impact sensor [5][6]. As in the earlier paper, the approach is based on measuring time instead of the apparent motion of points in the image plane to obtain data similar to the optical flow. The specific properties of the motion field in the time-to-impact application are exploited, such as the use of simple feature points which are tracked from frame to frame. Compared to the 1D case, the features will be proportionally fewer, which will affect the quality of the estimation; we propose a way to address this problem. The results obtained are as promising as those obtained from the 1D sensor.
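
    Although the paper's sensor estimates time-to-impact from per-pixel timing rather than from optical flow, the quantity it targets can be illustrated with the classical size-based relation tau ≈ s/(ds/dt) for a tracked feature. The sketch below uses a synthetic pinhole-camera size series and is only a generic illustration, not the authors' algorithm.

        import numpy as np

        def time_to_impact(sizes, dt):
            """Estimate time-to-impact from the apparent size s(t) of a tracked feature.

            For a constant approach speed, tau is approximately s / (ds/dt): the image of
            the object grows hyperbolically as impact approaches.
            """
            sizes = np.asarray(sizes, float)
            ds_dt = np.gradient(sizes, dt)
            return sizes[-1] / ds_dt[-1]

        # Synthetic example: true impact at t = 2.0 s, frames every 0.1 s up to t = 1.0 s
        dt = 0.1
        t = np.arange(0.0, 1.0 + 1e-9, dt)
        sizes = 1.0 / (2.0 - t)           # pinhole model: size proportional to 1 / range
        print(time_to_impact(sizes, dt))  # about 1.0-1.1 s (finite differences add a small bias)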

  1. Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Liu, Feng (Inventor); Lax, Melvin (Inventor); Das, Bidyut B. (Inventor)

    1999-01-01

    A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detector sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into the following inverse reconstruction algorithm to form an image of the medium: ##EQU1## wherein W is a matrix relating the output at source and detector positions r_s and r_d, at time t, to position r; Λ is a regularization matrix, chosen for convenience to be diagonal but selected in a way related to the ratio of the noise to the fluctuations in the absorption (or diffusion) X_j that we are trying to determine: Λ_ij = λ_j δ_ij with λ_j = ⟨…⟩/⟨ΔX_j ΔX_j⟩; Y is the data collected at the detectors; and X^k is the kth iterate toward the desired absorption information. An algorithm which combines a two-dimensional (2D) matrix inversion with a one-dimensional (1D) Fourier transform inversion is used to obtain images of three-dimensional hidden objects in turbid scattering media.

  2. Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Gayen, Swapan K. (Inventor)

    2000-01-01

    A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detector sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into the following inverse reconstruction algorithm to form an image of the medium: wherein W is a matrix relating the output at source and detector positions r_s and r_d, at time t, to position r; Λ is a regularization matrix, chosen for convenience to be diagonal but selected in a way related to the ratio of the noise to the fluctuations in the absorption (or diffusion) X_j that we are trying to determine: Λ_ij = λ_j δ_ij with λ_j = ⟨…⟩/⟨ΔX_j ΔX_j⟩; Y is the data collected at the detectors; and X^k is the kth iterate toward the desired absorption information. An algorithm which combines a two-dimensional (2D) matrix inversion with a one-dimensional (1D) Fourier transform inversion is used to obtain images of three-dimensional hidden objects in turbid scattering media.

  3. Extending Ripley’s K-Function to Quantify Aggregation in 2-D Grayscale Images

    PubMed Central

    Amgad, Mohamed; Itoh, Anri; Tsui, Marco Man Kin

    2015-01-01

    In this work, we describe the extension of Ripley’s K-function to allow for overlapping events at very high event densities. We show that problematic edge effects introduce significant bias to the function at very high densities and small radii, and propose a simple correction method that successfully restores the function’s centralization. Using simulations of homogeneous Poisson distributions of events, as well as simulations of event clustering under different conditions, we investigate various aspects of the function, including its shape-dependence and correspondence between true cluster radius and radius at which the K-function is maximized. Furthermore, we validate the utility of the function in quantifying clustering in 2-D grayscale images using three modalities: (i) Simulations of particle clustering; (ii) Experimental co-expression of soluble and diffuse protein at varying ratios; (iii) Quantifying chromatin clustering in the nuclei of wt and crwn1 crwn2 mutant Arabidopsis plant cells, using a previously-published image dataset. Overall, our work shows that Ripley’s K-function is a valid abstract statistical measure whose utility extends beyond the quantification of clustering of non-overlapping events. Potential benefits of this work include the quantification of protein and chromatin aggregation in fluorescent microscopic images. Furthermore, this function has the potential to become one of various abstract texture descriptors that are utilized in computer-assisted diagnostics in anatomic pathology and diagnostic radiology. PMID:26636680
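
    For reference, the classical (unweighted, point-event) Ripley K-function that the paper generalizes can be estimated with a few lines of NumPy. The naive estimator below ignores the edge corrections that the paper shows are critical at high densities, and the points are simulated rather than extracted from grayscale images.

        import numpy as np

        def ripley_k(points, radii, area):
            """Naive Ripley's K-function for a 2-D point pattern (no edge correction)."""
            pts = np.asarray(points, float)
            n = len(pts)
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)                 # exclude self-pairs
            lam = n / area                              # intensity: points per unit area
            return np.array([(d < r).sum() / (n * lam) for r in radii])

        # Under complete spatial randomness, K(r) is roughly pi * r^2
        rng = np.random.default_rng(0)
        pts = rng.random((500, 2))                      # unit square, area = 1
        r = np.array([0.05, 0.1, 0.2])
        print(ripley_k(pts, r, area=1.0))
        print(np.pi * r**2)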

  4. New float equivalent calibration method for 2D image measuring system

    NASA Astrophysics Data System (ADS)

    Gou, Jiansong; Wang, Zhong; Lu, Ruijun; Shen, Xinlan

    2015-08-01

    Pixel equivalent is an important parameter describing the relationship between the pixels of a digital image and the actual size of the measured piece in a 2D image measuring system. It is usually calibrated with the standard component method, which is traditionally performed off-line and requires the measuring conditions and the attitude of the devices to remain constant between calibration and measurement. To overcome these limitations, a new calibration method, defined here as the float equivalent method, is proposed in this paper. This method requires the standard component and the measured piece to be placed in the image measuring system simultaneously. Every time before measuring, whether or not the same measuring point is targeted, the pixel equivalent is calibrated for that specific time, condition, measuring point, and object distance. This method has the advantage of reducing the influence of changing conditions on accuracy without additional calibration equipment or operations. A steel tape verification system is taken as an example to verify the effectiveness of the method.
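
    The arithmetic behind a per-measurement (float) pixel equivalent is simple: because the standard component appears in the same image as the measured piece, the mm-per-pixel factor is recomputed for every shot and then applied to the piece's pixel measurement. The numbers and function names below are hypothetical and for illustration only.

        def pixel_equivalent(standard_length_mm, standard_length_px):
            """Pixel equivalent: physical size represented by one pixel (mm/pixel)."""
            return standard_length_mm / standard_length_px

        def measure(length_px, pixel_equiv_mm):
            """Convert an image-space measurement to physical units."""
            return length_px * pixel_equiv_mm

        # Hypothetical shot: a 10.000 mm gauge length spans 2000.0 pixels in this image
        k = pixel_equivalent(10.000, 2000.0)   # 0.005 mm/pixel, valid for this shot only
        print(measure(3456.0, k))              # measured piece: 17.28 mm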

  5. Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Hall, Christopher S.

    2014-03-01

    Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been used as an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, but the task is quite challenging in US images, due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application dependent, while manual tracing methods are extremely time consuming and have a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to balance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach to improve the boundary searching ability of the live wire method, reducing the necessary user interaction while keeping the segmentation performance. Based on the results of segmentation of 50 2D breast lesions in US images, less user interaction is required to achieve desired accuracy, i.e. < 80%, when auto-enhancement is applied for live-wire segmentation.

  6. Digital Image Analysis for DETCHIP® Code Determination

    PubMed Central

    Lyon, Marcus; Wilson, Mark V.; Rouhier, Kerry A.; Symonsbergen, David J.; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E.

    2013-01-01

    DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940

  7. Absorption and scattering 2-D volcano images from numerically calculated space-weighting functions

    NASA Astrophysics Data System (ADS)

    Del Pezzo, Edoardo; Ibañez, Jesus; Prudencio, Janire; Bianco, Francesca; De Siena, Luca

    2016-08-01

    Short-period, small-magnitude seismograms mainly comprise scattered waves in the form of coda waves (the tail part of the seismogram, starting after the S waves and ending when the noise prevails), spanning more than 70 per cent of the whole seismogram duration. The corresponding coda envelopes provide important information about the heterogeneity of the earth, which can be stochastically modeled in terms of the distribution of scatterers in a random medium. In suitable experimental conditions (i.e. high earth heterogeneity), either the two parameters describing heterogeneity (the scattering coefficient) and intrinsic energy dissipation (the intrinsic attenuation coefficient), or a combination of them (extinction length and seismic albedo), can be used to image Earth structures. Once a set of such parameter couples has been measured in a given area for a number of sources and receivers, imaging their space distribution with standard methods is straightforward. However, as for finite-frequency and full-waveform tomography, the essential problem for correct imaging is the determination of the weighting function describing the spatial sensitivity of the observable data to scattering and absorption anomalies. Due to the nature of coda waves, the measured parameter couple can be seen as a weighted space average of the real parameters characterizing the rock volumes illuminated by the scattered waves. This paper uses the Monte Carlo numerical solution of the Energy Transport Equation to find approximate but realistic 2-D space-weighting functions for coda waves. Separate images for scattering and absorption based on these sensitivity functions are then compared with those obtained with commonly used sensitivity functions in an application to data from an active seismic experiment carried out at Deception Island (Antarctica). Results show that these novel functions are based on a reliable and physically grounded method to image the magnitude and shape of scattering and absorption anomalies. Their

  8. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To compete with extendibility of the current local medical image processing software, each layer is highly independent of other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources in the client side. The user-interface is designed such that the users can select appropriate parameters for practical research and clinical studies.

  9. Model-based segmentation and quantification of subcellular structures in 2D and 3D fluorescent microscopy images

    NASA Astrophysics Data System (ADS)

    Wörz, Stefan; Heinzer, Stephan; Weiss, Matthias; Rohr, Karl

    2008-03-01

    We introduce a model-based approach for segmenting and quantifying GFP-tagged subcellular structures of the Golgi apparatus in 2D and 3D microscopy images. The approach is based on 2D and 3D intensity models, which are directly fitted to an image within 2D circular or 3D spherical regions-of-interest (ROIs). We also propose automatic approaches for the detection of candidates, for the initialization of the model parameters, and for adapting the size of the ROI used for model fitting. Based on the fitting results, we determine statistical information about the spatial distribution and the total amount of intensity (fluorescence) of the subcellular structures. We demonstrate the applicability of our new approach based on 2D and 3D microscopy images.
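
    A minimal 2-D stand-in for the model-fitting step can be sketched with SciPy's curve_fit: an isotropic Gaussian plus offset is fitted inside a square ROI and the integrated intensity is derived from the fitted parameters. This is only an assumed simplification of the paper's 2D/3D intensity models, with synthetic data in place of microscopy images.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, amp, x0, y0, sigma, offset):
            """Isotropic 2-D Gaussian intensity model, returned flattened for curve_fit."""
            x, y = coords
            g = amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset
            return g.ravel()

        # Synthetic spot inside a 21x21 ROI (a stand-in for a GFP-tagged structure)
        y, x = np.mgrid[0:21, 0:21]
        rng = np.random.default_rng(0)
        true = (50.0, 10.3, 9.7, 2.0, 5.0)
        data = gauss2d((x, y), *true).reshape(21, 21) + rng.normal(0, 1.0, (21, 21))

        p0 = (data.max() - data.min(), 10, 10, 3, data.min())     # crude initialization
        popt, _ = curve_fit(gauss2d, (x, y), data.ravel(), p0=p0)
        total_intensity = 2 * np.pi * popt[0] * popt[3] ** 2      # integrated fluorescence
        print(popt, total_intensity)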

  10. Coded-aperture imaging in nuclear medicine

    NASA Technical Reports Server (NTRS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-01-01

    Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  11. Coded-aperture imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-11-01

    Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  12. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are desirable, but they occupy more storage space and consume more bandwidth when transferred over the Internet, so the study of image compression technology is necessary. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This paper presents an analysis of the Discrete Cosine Transform (DCT). Firstly, the principle of the DCT is presented; it is a widely used basis for image compression. Secondly, a deeper understanding of the DCT is developed using Matlab, covering the process of image compression based on the DCT and an analysis of Huffman coding. Thirdly, image compression based on the DCT is demonstrated in Matlab, and the quality of the compressed picture is analyzed. The DCT is certainly not the only algorithm for image compression, and other algorithms can also achieve high quality in compressed images; image compression technology will continue to be widely used in networks and communications in the future.
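
    A small sketch of block-DCT compression in the spirit described above (the paper itself works in Matlab; this uses Python and SciPy): each 8x8 block is transformed, most coefficients are discarded as a crude stand-in for quantization and entropy coding, and the block is inverse-transformed. The random image is a placeholder only.

        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_block(block, keep=10):
            """2-D DCT of an 8x8 block, keeping only the largest-magnitude coefficients."""
            c = dctn(block, norm="ortho")
            thresh = np.sort(np.abs(c).ravel())[-keep]
            c[np.abs(c) < thresh] = 0.0              # crude "quantization" by zeroing
            return idctn(c, norm="ortho")

        rng = np.random.default_rng(0)
        img = rng.random((64, 64))                   # stand-in for a real grayscale image
        recon = np.zeros_like(img)
        for i in range(0, 64, 8):
            for j in range(0, 64, 8):
                recon[i:i+8, j:j+8] = compress_block(img[i:i+8, j:j+8], keep=10)
        print("MSE:", np.mean((img - recon) ** 2))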

  13. Development of 2D imaging of SXR plasma radiation by means of GEM detectors

    NASA Astrophysics Data System (ADS)

    Chernyshova, M.; Czarski, T.; Jabłoński, S.; Kowalska-Strzeciwilk, E.; Poźniak, K.; Kasprowicz, G.; Zabołotny, W.; Wojeński, A.; Byszuk, A.; Burza, M.; Juszczyk, B.; Zienkiewicz, P.

    2014-11-01

    The presented 2D gaseous detector system has been developed and designed to provide energy-resolved, fast, dynamic plasma radiation imaging in the soft X-ray region with 0.1 kHz exposure frequency in an online, real-time data acquisition (DAQ) mode. The detection structure is based on a triple Gas Electron Multiplier (GEM) amplification structure followed by a pixel readout electrode. The efficiency of the detecting unit was adjusted for the radiation energy region of tungsten in high-temperature plasma, the main candidate plasma-facing material for future thermonuclear reactors. Here we present preliminary laboratory results and detector parameters obtained for the developed system. The operational characteristics and conditions of the detector were designed for the X-ray range of 2-17 keV. The detector linearity was checked using the fluorescence lines of different elements and was found to be sufficient for good photon energy reconstruction. Images of two sources through various screens were acquired with an X-ray laboratory source and a 55Fe source, showing good imaging capability. Finally, an offline stream-handling data acquisition mode has been developed for the detecting system, with timing down to the ADC sampling period (~13 ns) and up to 2.5 MHz exposure frequency, which could pave the way to invaluable physics information about plasma dynamics thanks to its very good time-resolving ability. Here we present results on the spatial resolution and imaging properties of the detector under laboratory conditions of moderate counting rates and high gain.

  14. Ultrasound 2D strain estimator based on image registration for ultrasound elastography

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Torres, Mylin; Kirkpatrick, Stephanie; Curran, Walter J.; Liu, Tian

    2014-03-01

    In this paper, we present a new approach to calculate 2D strain through the registration of pre- and post-compression (deformation) B-mode image sequences using an intensity-based non-rigid registration algorithm (INRA). Compared with the most commonly used cross-correlation (CC) method, our approach is not constrained to any particular set of directions and can overcome displacement estimation errors introduced by incoherent motion and variations in the signal under high compression. The INRA method was tested using phantom and in vivo data. The robustness of our approach was demonstrated in the axial direction as well as in the lateral direction, where the standard CC method frequently fails. In addition, our approach copes well with large compression (over 6%). In the phantom study, we computed the strain image under various compressions and calculated the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). The SNR and CNR values of the INRA method were much higher than those calculated from the CC-based method. Furthermore, the clinical feasibility of our approach was demonstrated with in vivo data from patients with arm lymphedema.
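
    Once a registration has produced a dense displacement field, the 2-D strain components follow from finite differences of that field. The sketch below shows this generic post-processing step (not the INRA registration itself) on a synthetic uniform-compression displacement field.

        import numpy as np

        def strain_2d(u, v, dx=1.0, dy=1.0):
            """Small-deformation 2-D strain components from a displacement field (u, v).

            u, v: displacement maps of the same shape, e.g. output of image registration.
            """
            du_dy, du_dx = np.gradient(u, dy, dx)
            dv_dy, dv_dx = np.gradient(v, dy, dx)
            exx = du_dx
            eyy = dv_dy
            exy = 0.5 * (du_dy + dv_dx)
            return exx, eyy, exy

        # Synthetic uniform 2% compression along x over a 100x100 region
        y, x = np.mgrid[0:100, 0:100].astype(float)
        u = -0.02 * x                      # displacement along x
        v = np.zeros_like(u)
        exx, eyy, exy = strain_2d(u, v)
        print(exx.mean())                  # approximately -0.02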

  15. Coronary arteries motion modeling on 2D x-ray images

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Sundar, Hari

    2012-02-01

    During interventional procedures, 3D imaging modalities like CT and MRI are not commonly used due to interference with the surgery and radiation exposure concerns. Therefore, real-time information is usually limited and building models of cardiac motion is difficult. In such cases, vessel motion modeling based on 2-D angiography images becomes indispensable. Due to issues with existing vessel segmentation algorithms and the lack of contrast in occluded vessels, manual segmentation of certain branches is usually necessary. In addition, such occluded branches are the most important vessels during coronary interventions, and obtaining motion models for them can greatly help in reducing procedure time and radiation exposure. Segmenting different cardiac phases independently does not guarantee temporal consistency and is not efficient for occluded branches that require manual segmentation. In this paper, we propose a coronary motion modeling system which extracts the coronary tree for every cardiac phase, maintaining the segmentation by tracking the coronary tree during the cardiac cycle. It is able to map every frame to its specific cardiac phase, thereby inferring the shape information of the coronary arteries using the model corresponding to that phase. Our experiments show that our motion modeling system can achieve promising results with real-time performance.

  16. Measurement of food volume based on single 2-D image without conventional camera calibration.

    PubMed

    Yue, Yaofeng; Jia, Wenyan; Sun, Mingui

    2012-01-01

    Food portion size measurement combined with a database of calories and nutrients is important in the study of metabolic disorders such as obesity and diabetes. In this work, we present a convenient and accurate approach to the calculation of food volume by measuring several dimensions using a single 2-D image as the input. This approach does not require the conventional checkerboard based camera calibration since it is burdensome in practice. The only prior requirements of our approach are: 1) a circular container with a known size, such as a plate, a bowl or a cup, is present in the image, and 2) the picture is taken under a reasonable assumption that the camera is always held level with respect to its left and right sides and its lens is tilted down towards foods on the dining table. We show that, under these conditions, our approach provides a closed form solution to camera calibration, allowing convenient measurement of food portion size using digital pictures.
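
    The final scaling step of such an approach can be sketched as follows: the circular container of known size fixes the mm-per-pixel factor, and a simple solid model converts the measured image dimensions into a volume. This toy example ignores the perspective and tilt geometry that the paper's closed-form calibration actually solves; all numbers and the cylindrical food model are assumptions for illustration.

        import numpy as np

        def mm_per_pixel(plate_diameter_mm, plate_diameter_px):
            """Scale factor from a circular reference container of known size."""
            return plate_diameter_mm / plate_diameter_px

        def cylinder_volume_ml(diameter_px, height_px, scale):
            """Approximate a roughly cylindrical food item from two image dimensions."""
            d_mm, h_mm = diameter_px * scale, height_px * scale
            return np.pi * (d_mm / 2) ** 2 * h_mm / 1000.0   # 1 mL = 1000 mm^3

        # Hypothetical measurements taken from one picture
        scale = mm_per_pixel(260.0, 1300.0)             # a 26 cm plate spans 1300 px
        print(cylinder_volume_ml(400.0, 150.0, scale))  # roughly 151 mL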

  17. Numerical model of water flow and solute accumulation in vertisols using HYDRUS 2D/3D code

    NASA Astrophysics Data System (ADS)

    Weiss, Tomáš; Dahan, Ofer; Turkeltub, Tuvia

    2015-04-01

    boundary to the wall of the crack (so that the solute can accumulate due to evaporation on the crack block wall, and infiltrating fresh water can push the solute further down); in order to do so, the HYDRUS 2D/3D code had to be modified by its developers. Unconventionally, the main fitting parameters were the parameters a and n of the soil water retention curve and the saturated hydraulic conductivity. The amount of infiltrated water (within a reasonable range), the infiltration function in the crack and the actual evaporation from the crack were also used as secondary fitting parameters. The model supports the previous findings that a significant amount (~90%) of water from rain events must infiltrate through the crack. It was also noted that infiltration from the crack has to increase with depth and that the highest infiltration rate should occur somewhere between 1 and 3 m depth. This paper suggests a new way to model vertisols in semi-arid regions. It also supports the previous findings about vertisols, especially the utmost importance of soil cracks as preferential pathways for water and contaminants and as deep evaporators.

  18. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, Vinay A.; Farvardin, Nariman

    1990-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; the method involves the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed assuming no channel errors.

  19. Optimal block cosine transform image coding for noisy channels

    NASA Technical Reports Server (NTRS)

    Vaishampayan, V.; Farvardin, N.

    1986-01-01

    The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is to be transmitted over a memoryless noisy channel; the method involves the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons were made against a reference system designed assuming no channel errors.
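
    The bit-assignment idea can be illustrated with a simplified greedy allocation under the standard high-rate model D_i(b_i) = var_i * 2^(-2*b_i). Unlike the steepest-descent procedure of these papers, this sketch ignores channel errors and channel-optimized quantizers, and the coefficient variances are hypothetical.

        import numpy as np

        def greedy_bit_allocation(variances, total_bits):
            """Greedy (marginal-return) bit allocation across transform coefficients.

            Assumes D_i(b_i) = var_i * 2**(-2*b_i); each step gives one bit to the
            coefficient whose distortion drops the most.
            """
            variances = np.asarray(variances, float)
            bits = np.zeros(variances.size, dtype=int)
            for _ in range(total_bits):
                current = variances * 2.0 ** (-2 * bits)
                gain = current - current / 4.0       # reduction from adding one more bit
                bits[int(np.argmax(gain))] += 1
            return bits

        # Example: 8 zig-zag ordered DCT coefficient variances (hypothetical values)
        var = [100.0, 40.0, 20.0, 10.0, 5.0, 2.0, 1.0, 0.5]
        print(greedy_bit_allocation(var, total_bits=16))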

  20. Dynamic tracking of a deformable tissue based on 3D-2D MR-US image registration

    NASA Astrophysics Data System (ADS)

    Marami, Bahram; Sirouspour, Shahin; Fenster, Aaron; Capson, David W.

    2014-03-01

    Real-time registration of pre-operative magnetic resonance (MR) or computed tomography (CT) images with intra-operative ultrasound (US) images can be a valuable tool in image-guided therapies and interventions. This paper presents an automatic method for dynamically tracking the deformation of a soft tissue based on registering pre-operative three-dimensional (3D) MR images to intra-operative two-dimensional (2D) US images. The registration algorithm is based on concepts in state estimation, where a dynamic finite element (FE)-based linear elastic deformation model correlates the imaging data in the spatial and temporal domains. A Kalman-like filtering process estimates the unknown deformation states of the soft tissue using the deformation model and a measure of error between the predicted and the observed intra-operative imaging data. The error is computed based on an intensity-based distance metric, namely the modality independent neighborhood descriptor (MIND), and no segmentation or feature extraction from images is required. The performance of the proposed method is evaluated by dynamically deforming 3D pre-operative MR images of a breast phantom tissue based on real-time 2D images obtained from a US probe. Experimental results on different registration scenarios showed that deformation tracking converges in a few iterations. The average target registration error on the plane of the 2D US images for manually selected fiducial points was between 0.3 and 1.5 mm, depending on the size of the deformation.

  1. Self-calibration of cone-beam CT geometry using 3D–2D image registration

    PubMed Central

    Ouadah, S; Stayman, J W; Gang, G J; Ehtiati, T; Siewerdsen, J H

    2016-01-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM = 0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p < 0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE = 0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p < 0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional

  2. Self-calibration of cone-beam CT geometry using 3D-2D image registration.

    PubMed

    Ouadah, S; Stayman, J W; Gang, G J; Ehtiati, T; Siewerdsen, J H

    2016-04-07

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a 'self-calibration' of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM-e.g. on the CBCT bench, FWHM  =  0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p  <  0.001). Similar improvements were measured in RPE-e.g. on the robotic C-arm, RPE  =  0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p  <  0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is

  3. Self-calibration of cone-beam CT geometry using 3D-2D image registration

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Stayman, J. W.; Gang, G. J.; Ehtiati, T.; Siewerdsen, J. H.

    2016-04-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM  =  0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p  <  0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE  =  0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p  <  0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is

  4. Determining ice water content from 2D crystal images in convective cloud systems

    NASA Astrophysics Data System (ADS)

    Leroy, Delphine; Coutris, Pierre; Fontaine, Emmanuel; Schwarzenboeck, Alfons; Strapp, J. Walter

    2016-04-01

    Cloud microphysical in-situ instrumentation measures bulk parameters like total water content (TWC) and/or derives particle size distributions (PSD) (utilizing optical spectrometers and optical array probes (OAP)). The goal of this work is to introduce a comprehensive methodology to compute TWC from OAP measurements, based on the dataset collected during recent HAIC (High Altitude Ice Crystals)/HIWC (High Ice Water Content) field campaigns. Indeed, the HAIC/HIWC field campaigns in Darwin (2014) and Cayenne (2015) provide a unique opportunity to explore the complex relationship between cloud particle mass and size in ice crystal environments. Numerous mesoscale convective systems (MCSs) were sampled with the French Falcon 20 research aircraft at different temperature levels from -10°C down to -50°C. The aircraft instrumentation included an IKP-2 (isokinetic probe) to get reliable measurements of TWC and the optical array probes 2D-S and PIP recording images over the entire ice crystal size range. Based on the known principle relating crystal mass and size with a power law (m = α·D^β), Fontaine et al. (2014) performed extended 3D crystal simulations and thereby demonstrated that it is possible to estimate the value of the exponent β from OAP data, by analyzing the surface-size relationship for the 2D images as a function of time. Leroy et al. (2015) proposed an extended version of this method that produces estimates of β from the analysis of both the surface-size and perimeter-size relationships. Knowing the value of β, α is then deduced from the simultaneous IKP-2 TWC measurements for the entire HAIC/HIWC dataset. The statistical analysis of α and β values for the HAIC/HIWC dataset firstly shows that α is closely linked to β and that this link changes with temperature. From these trends, a generalized parameterization for α is proposed. Finally, the comparison with the initial IKP-2 measurements demonstrates that the method is able to predict TWC values
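
    A short worked example of the central relation m = α·D^β applied to a particle size distribution to obtain a bulk water content. All numbers (coefficients, size bins, PSD shape, units) are hypothetical and only illustrate the integration step, not the HAIC/HIWC retrieval.

      import numpy as np

      alpha, beta = 0.005, 2.1              # assumed power-law coefficients (illustrative, CGS-style)
      D = np.linspace(50e-4, 1.0, 500)      # maximum particle dimension, cm (hypothetical bins)
      N = 0.1 * np.exp(-5.0 * D)            # hypothetical exponential PSD, cm^-3 per cm
      dD = np.gradient(D)

      iwc = np.sum(alpha * D**beta * N * dD)        # g cm^-3 in these toy units
      print(f"IWC ~ {iwc * 1e6:.1f} g m^-3")        # a few g/m^3, i.e. a high-IWC regime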

  5. Image Outlier Detection and Feature Extraction via L1-Norm-Based 2D Probabilistic PCA.

    PubMed

    Ju, Fujiao; Sun, Yanfeng; Gao, Junbin; Hu, Yongli; Yin, Baocai

    2015-12-01

    This paper introduces an L1-norm-based probabilistic principal component analysis model on 2D data (L1-2DPPCA) based on the assumption of the Laplacian noise model. The Laplacian or L1 density function can be expressed as a superposition of an infinite number of Gaussian distributions. Under this expression, a Bayesian inference can be established based on the variational expectation maximization approach. All the key parameters in the probabilistic model can be learned by the proposed variational algorithm. It has experimentally been demonstrated that the newly introduced hidden variables in the superposition can serve as an effective indicator for data outliers. Experiments on some publicly available databases show that the performance of L1-2DPPCA has largely been improved after identifying and removing sample outliers, resulting in more accurate image reconstruction than the existing PCA-based methods. The performance of feature extraction of the proposed method generally outperforms other existing algorithms in terms of reconstruction errors and classification accuracy.
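
    A quick numerical check (illustrative only, not the paper's variational EM algorithm) of the modeling fact the abstract relies on: a Laplace density is a Gaussian scale mixture with exponentially distributed variance.

      import numpy as np

      rng = np.random.default_rng(1)
      b = 1.0                                             # Laplace scale parameter
      W = rng.exponential(scale=2 * b**2, size=200_000)   # variance draws, mean 2*b^2
      X = rng.normal(0.0, np.sqrt(W))                     # Gaussian samples given the variance

      edges = np.linspace(-4, 4, 9)
      emp, _ = np.histogram(X, bins=edges, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      lap = np.exp(-np.abs(centers) / b) / (2 * b)        # analytic Laplace pdf
      print(np.round(emp, 3))
      print(np.round(lap, 3))                             # the two rows should roughly agree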

  6. Large resistive 2D Micromegas with genetic multiplexing and some imaging applications

    NASA Astrophysics Data System (ADS)

    Bouteille, S.; Attié, D.; Baron, P.; Calvet, D.; Magnier, P.; Mandjavidze, I.; Procureur, S.; Riallot, M.

    2016-10-01

    The performance of the first large resistive Micromegas detectors with 2D readout and genetic multiplexing is presented. These detectors have a 50 × 50 cm² active area and are equipped with 1024 strips both in X- and Y-directions. The same genetic multiplexing pattern is applied on both coordinates, resulting in the compression of signals on 2 × 61 readout channels. Four such detectors have been built at CERN, and extensively tested with cosmics. The resistive strip film allows for very high gain operation, compensating for the charge spread on the 2 dimensions as well as the S/N loss due to the huge, 1 nF input capacitance. This film also creates a significantly different signal shape in the X- and Y-coordinates due to the charge evacuation along the resistive strips. All in all a detection efficiency above 95% is achieved with a 1 cm drift gap. Though not yet optimal, the measured 300 μm spatial resolution allows for very precise imaging in the field of muon tomography, and some applications of these detectors are presented.

  7. A 2D MTF approach to evaluate and guide dynamic imaging developments

    PubMed Central

    Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W. Scott; Madore, Bruno

    2010-01-01

    As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a 2D modulation transfer function (MTF), an easy-to-interpret visual rendering of a method’s ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R=4 and 8), and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factor of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity-encoding (k-t SENSE), unaliasing by Fourier encoding the overlaps in the temporal dimension-SENSE (UNFOLD-SENSE), generalized autocalibrating partially parallel acquisition (GRAPPA), sensitivity profiles from an array of coils for encoding and reconstruction in parallel (SPACE-RIP), self, hybrid referencing with UNFOLD and GRAPPA (SHRUG) and GRAPPA-enhanced sensitivity maps for SENSE reconstructions (GEYSER). PMID:19877276

  8. Craniosynostosis: prenatal diagnosis by 2D/3D ultrasound, magnetic resonance imaging and computed tomography.

    PubMed

    Helfer, Talita Micheletti; Peixoto, Alberto Borges; Tonni, Gabriele; Araujo Júnior, Edward

    2016-09-01

    Craniosynostosis is defined as the process of premature fusion of one or more of the cranial sutures. It is a common condition that occurs in about 1 in 2,000 live births. Craniosynostosis may be classified as primary or secondary. It is also classified as nonsyndromic or syndromic. According to suture commitment, craniosynostosis may affect a single suture or multiple sutures. There is a wide range of syndromes involving craniosynostosis and the most common are Apert, Pfeiffer, Crouzon, Saethre-Chotzen and Muenke syndromes. The underlying etiology of nonsyndromic craniosynostosis is unknown. Mutations in the fibroblast growth factor (FGF) signalling pathway play a crucial role in the etiology of craniosynostosis syndromes. Prenatal ultrasound's detection rate of craniosynostosis is low. Nowadays, different methods can be applied for prenatal diagnosis of craniosynostosis, such as two-dimensional (2D) and three-dimensional (3D) ultrasound, magnetic resonance imaging (MRI), computed tomography (CT) scan and, finally, molecular diagnosis. The presence of craniosynostosis may affect the birthing process. Fetuses with craniosynostosis also have higher rates of perinatal complications. In order to avoid the risks of untreated craniosynostosis, children are usually treated surgically soon after postnatal diagnosis.

  9. Substorm development as observed by Interball UV imager and 2-D magnetic array

    NASA Astrophysics Data System (ADS)

    Lyatsky, W.; Cogger, L. L.; Jackel, B.; Hamza, A. M.; Hughes, W. J.; Murr, D.; Rasmussen, O.

    2001-10-01

    Results of the study of two substorms from Interball auroral UV measurements and two-dimensional patterns of equivalent ionospheric currents derived from the MACCS/CANOPUS and Greenland magnetometer arrays are presented. Substorm development in 2-D equivalent ionospheric current patterns may be described in terms of the formation of two vortices in the equivalent currents: a morning vortex related to downward field-aligned current and an evening vortex related to upward field-aligned current. Poleward propagation of the magnetic disturbances during the substorm expansive phase was found to be associated mainly with a poleward displacement of the morning vortex, whereas the evening vortex remained approximately at the same position. As a result, the initial quasi-azimuthal separation of the vortices was replaced by their quasi-meridional separation at substorm maximum. Interball UV images during this period showed the formation of a bright auroral border at the poleward edge of the substorm auroral bulge. The auroral UV images also showed that the auroral distribution in the region between the polar border and the main auroral oval tends to take the form of bubbles or petals growing from a bright protuberant region on the equatorward boundary of the auroral oval. However, the resolution of the UV imager was not sufficient for reliable separation of such structures; therefore, this result should be considered preliminary. Overlapping the auroral UV images onto equivalent current patterns shows that the bright substorm surge was well collocated with the evening vortex, whereas the poleward auroral border did not coincide with any evident feature in equivalent ionospheric currents and was located several degrees equatorward of the morning current vortex center related to downward field-aligned current. The ground-based magnetic array, allowing us to obtain instantaneous patterns of equivalent ionospheric currents, makes it possible to propose a new index for

  10. Medical image registration using sparse coding of image patches.

    PubMed

    Afzali, Maryam; Ghaffari, Aboozar; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid

    2016-06-01

    Image registration is a basic task in medical image processing applications like group analysis and atlas construction. Similarity measure is a critical ingredient of image registration. Intensity distortion of medical images is not considered in most previous similarity measures. Therefore, in the presence of bias field distortions, they do not generate an acceptable registration. In this paper, we propose a sparse based similarity measure for mono-modal images that considers non-stationary intensity and spatially-varying distortions. The main idea behind this measure is that the aligned image is constructed by an analysis dictionary trained using the image patches. For this purpose, we use "Analysis K-SVD" to train the dictionary and find the sparse coefficients. We utilize image patches to construct the analysis dictionary and then we employ the proposed sparse similarity measure to find a non-rigid transformation using free form deformation (FFD). Experimental results show that the proposed approach is able to robustly register 2D and 3D images in both simulated and real cases. The proposed method outperforms other state-of-the-art similarity measures and decreases the transformation error compared to the previous methods. Even in the presence of bias field distortion, the proposed method aligns images without any preprocessing.

  11. Parametric phase information based 2D Cepstrum PSF estimation method for blind de-convolution of ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Kang, Jooyoung; Park, Sung-Chan; Kim, Jung-ho; Song, Jongkeun

    2014-02-01

    In the ultrasound imaging system, blurring that occurs after passing through the ultrasound scanner represents the point spread function (PSF), which describes the response of the ultrasound imaging system to a point source distribution. De-blurring can therefore be achieved by de-convolving the ultrasound images with an estimate of the corresponding PSF. However, it is hard to attain an accurate estimate of the PSF due to the unknown properties of the tissues of the human body through which the ultrasound signal propagates. In this paper, we present a new method for PSF estimation in the Fourier domain (FD) based on parametric minimum-phase information that simultaneously performs fast 2D de-convolution in the ultrasound imaging system. Although most complex cepstrum methods [14] estimate the FD-phase information of the PSF using complex 2D phase unwrapping [18] [19], our algorithm estimates the 2D PSF using 2D FD-phase information with the parametric weighting factors α and β, which affect the shape of the PSF. This makes the computations much simpler and the estimation more accurate. Our algorithm works on the beam-formed uncompressed radio-frequency data, with pre-measured and estimated 2D PSF databases from the actual probe used. We have tested our algorithm with a Verasonics system and a commercial ultrasound scanner (Philips C4-2), on phantoms with known speed of sound and on in vivo scans with unknown speeds.
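
    A minimal sketch of the 2D cepstrum computation that underlies this family of PSF-estimation methods; the parametric minimum-phase weighting (α, β) described in the record is not reproduced, and the test frame is synthetic.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def cepstrum_2d(image, eps=1e-12):
          # Real 2D cepstrum: inverse FFT of the log-magnitude spectrum.
          spectrum = np.fft.fft2(image)
          return np.real(np.fft.ifft2(np.log(np.abs(spectrum) + eps)))

      # Toy RF-like frame: two point scatterers blurred by a Gaussian stand-in "PSF".
      frame = np.zeros((128, 128))
      frame[40, 40] = frame[90, 70] = 1.0
      blurred = gaussian_filter(frame, sigma=3.0)
      print(round(cepstrum_2d(blurred)[0, 0], 3))   # zero-quefrency term (mean of the log spectrum)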

  12. Wound Measurement Techniques: Comparing the Use of Ruler Method, 2D Imaging and 3D Scanner.

    PubMed

    Shah, Aj; Wollak, C; Shah, J B

    2013-12-01

    The statistics on the growing number of non-healing wounds is alarming. In the United States, chronic wounds affect 6.5 million patients. An estimated US $25 billion is spent annually on treatment of chronic wounds and the burden is rapidly growing due to increasing health care costs, an aging population and a sharp rise in the incidence of diabetes and obesity worldwide.(1) Accurate wound measurement techniques will help health care personnel to monitor the wounds which will indirectly help improving care.(7,9) The clinical practice of measuring wounds has not improved even today.(2,3) A common method like the ruler method to measure wounds has poor interrater and intrarater reliability.(2,3) Measuring the greatest length by the greatest width perpendicular to the greatest length, the perpendicular method, is more valid and reliable than other ruler based methods.(2) Another common method like acetate tracing is more accurate than the ruler method but still has its disadvantages. These common measurement techniques are time consuming with variable inaccuracies. In this study, volumetric measurements taken with a non-contact 3-D scanner are benchmarked against the common ruler method, acetate grid tracing, and 2-D image planimetry volumetric measurement technique. A liquid volumetric fill method is used as the control volume. Results support the hypothesis that the 3-D scanner consistently shows accurate volumetric measurements in comparison to standard volumetric measurements obtained by the waterfill technique (average difference of 11%). The 3-D scanner measurement technique was found more reliable and valid compared to other three techniques, the ruler method (average difference of 75%), acetate grid tracing (average difference of 41%), and 2D planimetric measurements (average difference of 52%). Acetate tracing showed more accurate measurements compared to the ruler method (average difference of 41% (acetate tracing) compared to 75% (ruler method)). Improving

  13. Wound Measurement Techniques: Comparing the Use of Ruler Method, 2D Imaging and 3D Scanner

    PubMed Central

    Shah, Aj; Wollak, C.; Shah, J.B.

    2015-01-01

    The statistics on the growing number of non-healing wounds is alarming. In the United States, chronic wounds affect 6.5 million patients. An estimated US $25 billion is spent annually on treatment of chronic wounds and the burden is rapidly growing due to increasing health care costs, an aging population and a sharp rise in the incidence of diabetes and obesity worldwide.1 Accurate wound measurement techniques will help health care personnel to monitor the wounds which will indirectly help improving care.7,9 The clinical practice of measuring wounds has not improved even today.2,3 A common method like the ruler method to measure wounds has poor interrater and intrarater reliability.2,3 Measuring the greatest length by the greatest width perpendicular to the greatest length, the perpendicular method, is more valid and reliable than other ruler based methods.2 Another common method like acetate tracing is more accurate than the ruler method but still has its disadvantages. These common measurement techniques are time consuming with variable inaccuracies. In this study, volumetric measurements taken with a non-contact 3-D scanner are benchmarked against the common ruler method, acetate grid tracing, and 2-D image planimetry volumetric measurement technique. A liquid volumetric fill method is used as the control volume. Results support the hypothesis that the 3-D scanner consistently shows accurate volumetric measurements in comparison to standard volumetric measurements obtained by the waterfill technique (average difference of 11%). The 3-D scanner measurement technique was found more reliable and valid compared to other three techniques, the ruler method (average difference of 75%), acetate grid tracing (average difference of 41%), and 2D planimetric measurements (average difference of 52%). Acetate tracing showed more accurate measurements compared to the ruler method (average difference of 41% (acetate tracing) compared to 75% (ruler method)). Improving the

  14. Multifractal and Singularity Maps of soil surface moisture distribution derived from 2D image analysis.

    NASA Astrophysics Data System (ADS)

    Cumbrera, Ramiro; Millán, Humberto; Martín-Sotoca, Juan Jose; Pérez Soto, Luis; Sanchez, Maria Elena; Tarquis, Ana Maria

    2016-04-01

    methods for mapping geochemical anomalies caused by buried sources and for predicting undiscovered mineral deposits in covered areas. Journal of Geochemical Exploration, 122, 55-70. Cumbrera, R., Ana M. Tarquis, Gabriel Gascó, Humberto Millán (2012) Fractal scaling of apparent soil moisture estimated from vertical planes of Vertisol pit images. Journal of Hydrology (452-453), 205-212. Martin Sotoca; J.J. Antonio Saa-Requejo, Juan Grau and Ana M. Tarquis (2016). Segmentation of singularity maps in the context of soil porosity. Geophysical Research Abstracts, 18, EGU2016-11402. Millán, H., Cumbrera, R. and Ana M. Tarquis (2016) Multifractal and Levy-stable statistics of soil surface moisture distribution derived from 2D image analysis. Applied Mathematical Modelling, 40(3), 2384-2395.

  15. Application of conformal map theory for design of 2-D ultrasonic array structure for NDT imaging application: a feasibility study.

    PubMed

    Ramadas, Sivaram N; Jackson, Joseph C; Dziewierz, Jerzy; O'Leary, Richard; Gachagan, Anthony

    2014-03-01

    Two-dimensional ultrasonic phased arrays are becoming increasingly popular in nondestructive evaluation (NDE). Sparse array element configurations are required to fully exploit the potential benefits of 2-D phased arrays. This paper applies the conformal mapping technique as a means of designing sparse 2-D array layouts for NDE applications. Modeling using both Huygens' field prediction theory and 2-D fast Fourier transformation is employed to study the resulting new structure. A conformal power map was used that, for fixed beam width, was shown in simulations to have a greater contrast than rectangular or random arrays. A prototype aperiodic 2-D array configuration for direct contact operation in steel, with operational frequency ~3 MHz, was designed using the array design principle described in this paper. Experimental results demonstrate a working sparse-array transducer capable of performing volumetric imaging.

  16. Register cardiac fiber orientations from 3D DTI volume to 2D ultrasound image of rat hearts

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Wang, Silun; Shen, Ming; Zhang, Xiaodong; Lerakis, Stamatios; Wagner, Mary B.; Fei, Baowei

    2015-03-01

    Two-dimensional (2D) ultrasound or echocardiography is one of the most widely used examinations for the diagnosis of cardiac diseases. However, it only supplies the geometric and structural information of the myocardium. In order to supply more detailed microstructure information of the myocardium, this paper proposes a registration method to map cardiac fiber orientations from three-dimensional (3D) magnetic resonance diffusion tensor imaging (MR-DTI) volume to the 2D ultrasound image. It utilizes a 2D/3D intensity based registration procedure including rigid, log-demons, and affine transformations to search the best similar slice from the template volume. After registration, the cardiac fiber orientations are mapped to the 2D ultrasound image via fiber relocations and reorientations. This method was validated by six images of rat hearts ex vivo. The evaluation results indicated that the final Dice similarity coefficient (DSC) achieved more than 90% after geometric registrations; and the inclination angle errors (IAE) between the mapped fiber orientations and the gold standards were less than 15 degree. This method may provide a practical tool for cardiologists to examine cardiac fiber orientations on ultrasound images and have the potential to supply additional information for diagnosis of cardiac diseases.
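
    A small sketch of the Dice similarity coefficient used above as the geometric figure of merit; the binary masks are synthetic stand-ins for the registered heart contours.

      import numpy as np

      def dice(mask_a, mask_b):
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      a = np.zeros((100, 100), dtype=bool); a[20:80, 20:80] = True
      b = np.zeros((100, 100), dtype=bool); b[25:85, 22:82] = True
      print(f"DSC = {dice(a, b):.3f}")   # values above ~0.9 indicate good geometric overlap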

  17. 2D harmonic filtering of MR phase images in multicenter clinical setting: toward a magnetic signature of cerebral microbleeds.

    PubMed

    Kaaouana, Takoua; de Rochefort, Ludovic; Samaille, Thomas; Thiery, Nathalie; Dufouil, Carole; Delmaire, Christine; Dormont, Didier; Chupin, Marie

    2015-01-01

    Cerebral microbleeds (CMBs) have emerged as a new imaging marker of small vessel disease. Composed of hemosiderin, CMBs are paramagnetic and can be detected with MRI sequences sensitive to magnetic susceptibility (typically, gradient recalled echo T2* weighted images). Nevertheless, their identification remains challenging on T2* magnitude images because of confounding structures and lesions. In this context, T2* phase image may play a key role in better characterizing CMBs because of its direct relationship with local magnetic field variations due to magnetic susceptibility difference. To address this issue, susceptibility-based imaging techniques were proposed, such as Susceptibility Weighted Imaging (SWI) and Quantitative Susceptibility Mapping (QSM). But these techniques have not yet been validated for 2D clinical data in multicenter settings. Here, we introduce 2DHF, a fast 2D phase processing technique embedding both unwrapping and harmonic filtering designed for data acquired in 2D, even with slice-to-slice inconsistencies. This method results in internal field maps which reveal local field details due to magnetic inhomogeneity within the region of interest only. This technique is based on the physical properties of the induced magnetic field and should yield consistent results. A synthetic phantom was created for numerical simulations. It simulates paramagnetic and diamagnetic lesions within a 'brain-like' tissue, within a background. The method was evaluated on both this synthetic phantom and multicenter 2D datasets acquired in standardized clinical setting, and compared with two state-of-the-art methods. It proved to yield consistent results on synthetic images and to be applicable and robust on patient data. As a proof-of-concept, we finally illustrate that it is possible to find a magnetic signature of CMBs and CMCs on internal field maps generated with 2DHF on 2D clinical datasets that give consistent results with CT-scans in a subsample of 10 subjects

  18. Effect of segmentation errors on 3D-to-2D registration of implant models in X-ray images.

    PubMed

    Mahfouz, Mohamed R; Hoff, William A; Komistek, Richard D; Dennis, Douglas A

    2005-02-01

    In many biomedical applications, it is desirable to estimate the three-dimensional (3D) position and orientation (pose) of a metallic rigid object (such as a knee or hip implant) from its projection in a two-dimensional (2D) X-ray image. If the geometry of the object is known, as well as the details of the image formation process, then the pose of the object with respect to the sensor can be determined. A common method for 3D-to-2D registration is to first segment the silhouette contour from the X-ray image; that is, identify all points in the image that belong to the 2D silhouette and not to the background. This segmentation step is then followed by a search for the 3D pose that will best match the observed contour with a predicted contour. Although the silhouette of a metallic object is often clearly visible in an X-ray image, adjacent tissue and occlusions can make the exact location of the silhouette contour difficult to determine in places. Occlusion can occur when another object (such as another implant component) partially blocks the view of the object of interest. In this paper, we argue that common methods for segmentation can produce errors in the location of the 2D contour, and hence errors in the resulting 3D estimate of the pose. We show, on a typical fluoroscopy image of a knee implant component, that interactive and automatic methods for segmentation result in segmented contours that vary significantly. We show how the variability in the 2D contours (quantified by two different metrics) corresponds to variability in the 3D poses. Finally, we illustrate how traditional segmentation methods can fail completely in the (not uncommon) cases of images with occlusion.

  19. Coupled 2-dimensional cascade theory for noise and unsteady aerodynamics of blade row interaction in turbofans. Volume 2: Documentation for computer code CUP2D

    NASA Technical Reports Server (NTRS)

    Hanson, Donald B.

    1994-01-01

    A two dimensional linear aeroacoustic theory for rotor/stator interaction with unsteady coupling was derived and explored in Volume 1 of this report. Computer program CUP2D has been written in FORTRAN embodying the theoretical equations. This volume (Volume 2) describes the structure of the code, installation and running, preparation of the input file, and interpretation of the output. A sample case is provided with printouts of the input and output. The source code is included with comments linking it closely to the theoretical equations in Volume 1.

  20. Registration of 2D x-ray images to 3D MRI by generating pseudo-CT data

    NASA Astrophysics Data System (ADS)

    van der Bom, M. J.; Pluim, J. P. W.; Gounis, M. J.; van de Kraats, E. B.; Sprinkhuizen, S. M.; Timmer, J.; Homan, R.; Bartels, L. W.

    2011-02-01

    Spatial and soft tissue information provided by magnetic resonance imaging can be very valuable during image-guided procedures, where usually only real-time two-dimensional (2D) x-ray images are available. Registration of 2D x-ray images to three-dimensional (3D) magnetic resonance imaging (MRI) data, acquired prior to the procedure, can provide optimal information to guide the procedure. However, registering x-ray images to MRI data is not a trivial task because of their fundamental difference in tissue contrast. This paper presents a technique that generates pseudo-computed tomography (CT) data from multi-spectral MRI acquisitions which is sufficiently similar to real CT data to enable registration of x-ray to MRI with comparable accuracy as registration of x-ray to CT. The method is based on a k-nearest-neighbors (kNN)-regression strategy which labels voxels of MRI data with CT Hounsfield Units. The regression method uses multi-spectral MRI intensities and intensity gradients as features to discriminate between various tissue types. The efficacy of using pseudo-CT data for registration of x-ray to MRI was tested on ex vivo animal data. 2D-3D registration experiments using CT and pseudo-CT data of multiple subjects were performed with a commonly used 2D-3D registration algorithm. On average, the median target registration error for registration of two x-ray images to MRI data was approximately 1 mm larger than for x-ray to CT registration. The authors have shown that pseudo-CT data generated from multi-spectral MRI facilitate registration of MRI to x-ray images. From the experiments it could be concluded that the accuracy achieved was comparable to that of registering x-ray images to CT data.
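
    A hedged sketch of the kNN-regression idea described above: label MR voxels with CT Hounsfield units using multi-spectral intensities and gradient magnitudes as features. The synthetic training data, feature choice, and k are assumptions, not the authors' settings.

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(2)
      n = 5000
      mr = rng.random((n, 2))                        # two MR "channels" per training voxel
      grad = np.abs(rng.normal(size=(n, 2)))         # gradient-magnitude features
      X = np.hstack([mr, grad])
      hu = 1000 * mr[:, 0] - 200 * mr[:, 1] + rng.normal(0, 20, n)   # toy HU labels

      knn = KNeighborsRegressor(n_neighbors=10).fit(X, hu)
      X_new = np.hstack([rng.random((4, 2)), np.abs(rng.normal(size=(4, 2)))])
      print(knn.predict(X_new))                      # pseudo-CT HU estimates for new voxels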

  1. Dual-sided coded-aperture imager

    DOEpatents

    Ziock, Klaus-Peter

    2009-09-22

    In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is, the two shadow masks are inverses of each other, i.e., one is a mask and the other is the anti-mask. All of the data that is collected is processed through two versions of an image reconstruction algorithm. One treats the data as if it were obtained through the mask, the other as though the data is obtained through the anti-mask.
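
    A toy one-dimensional illustration (not the patent's geometry or algorithm details) of correlation decoding through a mask and its inverse: a source on the mask side produces a strong positive response when decoded with the mask pattern and a response of opposite sign when decoded with the anti-mask, which is what lets the two sides be distinguished.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 64
      mask = rng.integers(0, 2, n)                 # random binary aperture (illustrative)
      anti = 1 - mask
      source = np.zeros(n); source[20] = 100.0     # point source at position 20

      def project(src, m):
          # circular shift-and-add model of a coded-aperture shadowgram
          return np.array([np.dot(src, np.roll(m, -x)) for x in range(n)])

      def decode(data, m):
          # correlate detector data with a balanced (+1/-1) version of the pattern
          g = 2 * m - 1
          return np.array([np.dot(data, np.roll(g, -s)) for s in range(n)])

      data = project(source, mask) + rng.poisson(5, n)   # shadowgram plus flat background
      rec_mask, rec_anti = decode(data, mask), decode(data, anti)
      print(int(np.argmax(rec_mask)), rec_mask[20], rec_anti[20])   # peak at 20; opposite signs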

  2. Spectroscopic-tomography of biological membrane with high-spatial resolution by the imaging-type 2D Fourier spectroscopy

    NASA Astrophysics Data System (ADS)

    Inui, Asuka; Tsutsumi, Ryosuke; Qi, Wei; Takuma, Takashi; Ishimaru, Ichirou

    2011-07-01

    We proposed imaging-type 2-dimensional Fourier spectroscopy, which is phase-shift interferometry between the objective lights. The proposed method can measure the 2D spectral image at a limited depth. Because of the imaging optical system, the 2D spectral images can be measured at high spatial resolution, and in the depth direction we obtain the spectral distribution only in the focal plane. In this report, we describe the principle of the proposed wide-field imaging-type 2D Fourier spectroscopy, and we obtained spectroscopic tomography of biological tissue from a mouse's ear. In the visible region, we confirmed the difference in spectral characteristics between the blood-vessel region and other regions. In the near-infrared region (λ = 900 nm~1700 nm), we can obtain a high-contrast blood-vessel image of the mouse's ear in the deeper part with an InGaAs camera. Furthermore, in the mid-infrared region (λ = 8 μm~14 μm), we have successfully measured radiation spectroscopic imaging with a wide field of view using an infrared module, for example of house plants. Additionally, we propose a geometrical correction model that converts the mechanical phase-shift value into the substantial phase difference along each oblique optical axis. We verified the effectiveness of the proposed correction model and can reduce the spectral error to within +/-3 nm using a He-Ne laser whose wavelength is 632.8 nm.
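
    A compact sketch of the basic Fourier-spectroscopy step behind such an instrument: recovering line spectra from a sampled interferogram by Fourier transform. The sampling range, test wavelengths, and two-line source are assumptions; the oblique-axis phase correction proposed in the record is not modeled.

      import numpy as np

      n = 1024
      opd = np.linspace(-50e-6, 50e-6, n)                 # optical path difference, m (assumed span)
      lines = [632.8e-9, 900e-9]                          # two test wavelengths, m
      igram = sum(0.5 * (1 + np.cos(2 * np.pi * opd / wl)) for wl in lines)

      spec = np.abs(np.fft.rfft(igram - igram.mean()))
      wavenumber = np.fft.rfftfreq(n, d=opd[1] - opd[0])  # cycles per metre
      top2 = np.argsort(spec)[-2:]
      print(np.sort(1.0 / wavenumber[top2]) * 1e9)        # ~633 and ~900 nm, within bin resolution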

  3. 2D/4D marker-free tumor tracking using 4D CBCT as the reference image.

    PubMed

    Wang, Mengjiao; Sharp, Gregory C; Rit, Simon; Delmon, Vivien; Wang, Guangzhi

    2014-05-07

    Tumor motion caused by respiration is an important issue in image-guided radiotherapy. A 2D/4D matching method between 4D volumes derived from cone beam computed tomography (CBCT) and 2D fluoroscopic images was implemented to track the tumor motion without the use of implanted markers. In this method, firstly, 3DCBCT and phase-rebinned 4DCBCT are reconstructed from cone beam acquisition. Secondly, 4DCBCT volumes and a streak-free 3DCBCT volume are combined to improve the image quality of the digitally reconstructed radiographs (DRRs). Finally, the 2D/4D matching problem is converted into a 2D/2D matching between incoming projections and DRR images from each phase of the 4DCBCT. The diaphragm is used as a target surrogate for matching instead of using the tumor position directly. This relies on the assumption that if a patient has the same breathing phase and diaphragm position as the reference 4DCBCT, then the tumor position is the same. From the matching results, the phase information, diaphragm position and tumor position at the time of each incoming projection acquisition can be derived. The accuracy of this method was verified using 16 candidate datasets, representing lung and liver applications and one-minute and two-minute acquisitions. The criteria for the eligibility of datasets were described: 11 eligible datasets were selected to verify the accuracy of diaphragm tracking, and one eligible dataset was chosen to verify the accuracy of tumor tracking. The diaphragm matching accuracy was 1.88 ± 1.35 mm in the isocenter plane and the 2D tumor tracking accuracy was 2.13 ± 1.26 mm in the isocenter plane. These features make this method feasible for real-time marker-free tumor motion tracking purposes.
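
    A simplified sketch of the final 2D/2D matching step: pick the respiratory phase whose DRR best matches an incoming projection under a normalized cross-correlation score. The toy DRRs, the NCC metric, and the omission of the diaphragm surrogate are assumptions made for illustration only.

      import numpy as np

      def ncc(a, b):
          a = (a - a.mean()) / (a.std() + 1e-8)
          b = (b - b.mean()) / (b.std() + 1e-8)
          return float(np.mean(a * b))

      rng = np.random.default_rng(4)
      drrs = [rng.random((64, 64)) for _ in range(10)]        # one toy DRR per 4D-CBCT phase
      incoming = drrs[4] + 0.05 * rng.normal(size=(64, 64))   # projection acquired near phase 4

      best_phase = max(range(len(drrs)), key=lambda p: ncc(incoming, drrs[p]))
      print(best_phase)   # expected: 4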

  4. 3D reconstruction of light flux distribution on arbitrary surfaces from 2D multi-photographic images.

    PubMed

    Chen, Xueli; Gao, Xinbo; Chen, Duofang; Ma, Xiaopeng; Zhao, Xiaohui; Shen, Man; Li, Xiangsi; Qu, Xiaochao; Liang, Jimin; Ripoll, Jorge; Tian, Jie

    2010-09-13

    Optical tomography can demonstrate accurate three-dimensional (3D) imaging that recovers the 3D spatial distribution and concentration of the luminescent probes in biological tissues, compared with planar imaging. However, the tomographic approach is extremely difficult to implement due to the complexity in the reconstruction of 3D surface flux distribution from multi-view two dimensional (2D) measurements on the subject surface. To handle this problem, a novel and effective method is proposed in this paper to determine the surface flux distribution from multi-view 2D photographic images acquired by a set of non-contact detectors. The method is validated with comparison experiments involving both regular and irregular surfaces. Reconstruction of the inside probes based on the reconstructed surface flux distribution further demonstrates the potential of the proposed method in its application in optical tomography.

  5. Estimation of 3-D pore network coordination number of rocks from watershed segmentation of a single 2-D image

    NASA Astrophysics Data System (ADS)

    Rabbani, Arash; Ayatollahi, Shahab; Kharrat, Riyaz; Dashti, Nader

    2016-08-01

    In this study, we have utilized 3-D micro-tomography images of real and synthetic rocks to introduce two mathematical correlations which estimate the distribution parameters of 3-D coordination number using a single 2-D cross-sectional image. By applying a watershed segmentation algorithm, it is found that the distribution of 3-D coordination number is acceptably predictable by statistical analysis of the network extracted from 2-D images. In this study, we have utilized 25 volumetric images of rocks in order to propose two mathematical formulas. These formulas aim to approximate the average and standard deviation of coordination number in 3-D pore networks. Then, the formulas are applied for five independent test samples to evaluate the reliability. Finally, pore network flow modeling is used to find the error of absolute permeability prediction using estimated and measured coordination numbers. Results show that the 2-D images are considerably informative about the 3-D network of the rocks and can be utilized to approximate the 3-D connectivity of the porous spaces with determination coefficient of about 0.85 that seems to be acceptable considering the variety of the studied samples.

  6. DSD2D-FLS 2010: Bdzil's 2010 DSD Code Base; Computing tb and Dn with Edits to Reduce the Noise in the Dn Field Near HE Boundaries

    SciTech Connect

    Bdzil, John Bohdan

    2016-09-21

    The full level-set function code, DSD3D, is fully described in LA-14336 (2007) [1]. This ASCI-supported, DSD code project was the last such LANL DSD code project that I was involved with before my retirement in 2007. My part in the project was to design and build the core DSD3D solver, which was to include a robust DSD boundary condition treatment. A robust boundary condition treatment was required, since for an important local “customer,” the only description of the explosives’ boundary was through volume fraction data. Given this requirement, the accuracy issues I had encountered with our “fast-tube,” narrowband, DSD2D solver, and the difficulty we had building an efficient MPI-parallel version of the narrowband DSD2D, I decided DSD3D should be built as a full level-set function code, using a totally local DSD boundary condition algorithm for the level-set function, phi, which did not rely on the gradient of the level-set function being one, |grad(phi)| = 1. The narrowband DSD2D solver was built on the assumption that |grad(phi)| could be driven to one, and near the boundaries of the explosive this condition was not being satisfied. Since the narrowband is typically no more than 10*dx wide, narrowband methods are discrete methods with a fixed, non-resolvable error, where the error is related to the thickness of the band: the narrower the band, the larger the errors. Such a solution represents a discrete approximation to the true solution and does not converge to the solution of the underlying PDEs under grid refinement.

  7. FACET: a radiation view factor computer code for axisymmetric, 2D planar, and 3D geometries with shadowing

    SciTech Connect

    Shapiro, A.B.

    1983-08-01

    The computer code FACET calculates the radiation geometric view factor (alternatively called shape factor, angle factor, or configuration factor) between surfaces for axisymmetric, two-dimensional planar and three-dimensional geometries with interposed third surface obstructions. FACET was developed to calculate view factors for input to finite-element heat-transfer analysis codes. The first section of this report is a brief review of previous radiation-view-factor computer codes. The second section presents the defining integral equation for the geometric view factor between two surfaces and the assumptions made in its derivation. Also in this section are the numerical algorithms used to integrate this equation for the various geometries. The third section presents the algorithms used to detect self-shadowing and third-surface shadowing between the two surfaces for which a view factor is being calculated. The fourth section provides a user's input guide followed by several example problems.
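
    A worked numerical example of the defining view-factor integral, F12 = (1/A1) ∫∫ cosθ1 cosθ2 / (π r²) dA1 dA2, evaluated by Monte Carlo for two directly opposed parallel unit squares at unit separation. This illustrates the integral itself, not FACET's quadrature or shadowing algorithms.

      import numpy as np

      rng = np.random.default_rng(5)
      n, c = 200_000, 1.0
      p1 = np.column_stack([rng.random(n), rng.random(n), np.zeros(n)])    # samples on plate 1 (z = 0)
      p2 = np.column_stack([rng.random(n), rng.random(n), np.full(n, c)])  # samples on plate 2 (z = c)

      d = p2 - p1
      r2 = np.sum(d * d, axis=1)
      cos1 = d[:, 2] / np.sqrt(r2)          # cos(theta1) = cos(theta2) here: parallel plates, normals along z
      A1 = A2 = 1.0
      F12 = A2 * np.mean(cos1 * cos1 / (np.pi * r2)) / A1
      print(F12)   # ~0.200; the analytic value for this configuration is about 0.1998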

  8. Assessment of some problematic factors in facial image identification using a 2D/3D superimposition technique.

    PubMed

    Atsuchi, Masaru; Tsuji, Akiko; Usumoto, Yosuke; Yoshino, Mineo; Ikeda, Noriaki

    2013-09-01

    The number of criminal cases requiring facial image identification of a suspect has been increasing, because surveillance cameras are installed everywhere in cities and, furthermore, intercoms with a recording function are installed in homes. In this study, we aimed to analyze the usefulness of a 2D/3D facial image superimposition system for image identification when facial aging, facial expression, and twins are under consideration. As a result, the mean values of the average distances calculated from the 16 anatomical landmarks between the 3D facial images of the 50s group and the 2D facial images of the 20s, 30s, and 40s groups were 2.6, 2.3, and 2.2 mm, respectively (facial aging). The mean values of the average distances calculated from 12 anatomical landmarks between the 3D normal facial images and four emotional expressions were 4.9 (laughter), 2.9 (anger), 2.9 (sadness), and 3.6 mm (surprise), respectively (facial expressions). The average distance obtained from 11 anatomical landmarks between the same person in twins was 1.1 mm, while the average distance between different persons in twins was 2.0 mm (twins). Facial image identification using the 2D/3D facial image superimposition system demonstrated adequate statistical power and identified individuals with high accuracy, suggesting its usefulness. However, as computer technology for video image processing and superimposition progresses, it remains necessary to stay familiar with the underlying morphology and anatomy.
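
    A trivial sketch of the comparison metric used in the record, the mean Euclidean distance over corresponding anatomical landmarks; the coordinates below are made up.

      import numpy as np

      ref = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 12.0, 0.0]])          # 3D model landmarks, mm (hypothetical)
      sup = ref + np.array([[0.5, -0.3, 0.2], [0.1, 0.4, -0.2], [0.3, 0.2, 0.1]])    # matched landmarks after superimposition

      mean_dist = np.mean(np.linalg.norm(ref - sup, axis=1))
      print(f"average landmark distance: {mean_dist:.2f} mm")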

  9. 2D Transducer Array for High-Speed 3D Imaging System

    DTIC Science & Technology

    2007-11-02

    low voltage differential signaling (LVDS) interface and a peripheral component interconnect (PCI) bus. The maximum numbers of transmission and...32-channel analog to digital converter (ADC) was attached to the developed transducer array.

  10. Coherent diffractive imaging using randomly coded masks

    SciTech Connect

    Seaberg, Matthew H.; D'Aspremont, Alexandre; Turner, Joshua J.

    2015-12-07

    We experimentally demonstrate an extension to coherent diffractive imaging that encodes additional information through the use of a series of randomly coded masks, removing the need for typical object-domain constraints while guaranteeing a unique solution to the phase retrieval problem. Phase retrieval is performed using a numerical convex relaxation routine known as “PhaseCut,” an iterative algorithm valued for its stability, its ability to find the global solution efficiently, and its robustness to noise. The experiment is performed using a laser diode at 532.2 nm, enabling rapid prototyping for future X-ray synchrotron and even free electron laser experiments.

  11. Coding depth perception from image defocus.

    PubMed

    Supèr, Hans; Romeo, August

    2014-12-01

    As a result of the spider experiments in Nagata et al. (2012), it was hypothesized that the depth perception mechanisms of these animals should be based on how much images are defocused. In the present paper, assuming that relative chromatic aberrations or blur radii values are known, we develop a formulation relating the values of these cues to the actual depth distance. Taking into account the form of the resulting signals, we propose the use of latency coding from a spiking neuron obeying Izhikevich's 'simple model'. If spider jumps can be viewed as approximately parabolic, some estimates allow for a sensory-motor relation between the time to the first spike and the magnitude of the initial velocity of the jump.
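
    A small sketch of latency coding with Izhikevich's 'simple model' (regular-spiking parameters assumed): stronger input currents drive the neuron to its first spike sooner, which is the sensory-motor quantity the abstract relates to the jump. The defocus-to-current front end is not modeled here.

      import numpy as np

      def time_to_first_spike(I, a=0.02, b=0.2, dt=0.1, t_max=200.0):
          # Izhikevich simple model, Euler integration; returns first-spike latency in ms.
          v, u = -65.0, b * (-65.0)
          t = 0.0
          while t < t_max:
              v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
              u += dt * a * (b * v - u)
              if v >= 30.0:        # spike threshold reached
                  return t
              t += dt
          return np.inf

      for current in (5.0, 10.0, 20.0):
          print(current, round(time_to_first_spike(current), 1))   # latency shrinks as the drive grows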

  12. Integration of 3D and 2D imaging data for assured navigation in unknown environments: initial steps

    NASA Astrophysics Data System (ADS)

    Dill, Evan; Uijt de Haag, Maarten

    2009-05-01

    This paper discusses the initial steps of the development of a novel navigation method that integrates three-dimensional (3D) point cloud data, two-dimensional (2D) gray-level (intensity), and data from an Inertial Measurement Unit (IMU). A time-of-flight camera such as MESA's Swissranger will output both the 3D and 2D data. The target application is position and attitude determination of unmanned aerial vehicles (UAV) and autonomous ground vehicles (AGV) in urban or indoor environments. In urban and indoor environments a GPS position capability may not only be unavailable due to shadowing, significant signal attenuation or multipath, but also due to intentional denial or deception. The proposed algorithm extracts key features such as planar surfaces, lines and corner-points from both the 3D (point-cloud) and 2D (intensity) imagery. Consecutive observations of corresponding features in the 3D and 2D image frames are then used to compute estimates of position and orientation changes. Since the use of 3D image features for positioning suffers from limited feature observability resulting in deteriorated position accuracies, and the 2D imagery suffers from an unknown depth when estimating the pose from consecutive image frames, it is expected that the integration of both data sets will alleviate the problems with the individual methods, resulting in a position and attitude determination method with a high level of assurance. An Inertial Measurement Unit (IMU) is used to set up the tracking gates necessary to perform data association of the features in consecutive frames. Finally, the position and orientation change estimates can be used to correct for the IMU drift errors.

  13. SU-E-T-431: Feasibility of Using CT Scout Images for 2D LDR Brachytherapy Planning

    SciTech Connect

    Ha, J; Weaver, R

    2015-06-15

    Purpose: i) To show the feasibility of using CT scout images for 2D low-dose rate brachytherapy planning with BrachyVision (version 10.4); ii) to show their advantages and disadvantages over DRRs. Methods: A phantom was constructed to house a Fletcher-Suit applicator. The phantom is made of Styrofoam with metal BBs positioned at well-defined separations. These markers are used to assess the image distortion in the scout images. Unlike DRRs, scout images are distorted only in the direction normal to the couch direction; therefore, they needed to be scaled unidirectionally prior to importing into BrachyVision. In addition to confirming the scaling is performed correctly by measuring distances between well-positioned BBs, we also compare an LDR plan using scout images to a 3D CT-based plan. Results: There is no distortion of the image along the couch direction due to the collimation of the CT scanner. The distortion in the transverse plane can be corrected by multiplying by the ratio of distances between source-to-isocenter and source-to-detector. The results show the distance separations between BBs as measured in scout images and by a caliper are within a few millimeters. Dosimetrically, the differences between the dose rates to points A and B based on scout images and on 3D CT are less than a few percent. The accuracy can be improved by correcting for the distortion on the transverse plane. Conclusion: It is possible to use CT scout images for 2D planning in BrachyVision. This is an advantage because scout images have no metal artifacts often present in CT images or DRRs. Another advantage is the lack of distortion in the couch direction. One major disadvantage is that the image distortion due to beam divergence can be large. This is due to the inherently short distance between source-to-isocenter and source-to-detector on a CT scanner.
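
    A tiny worked example of the stated magnification correction: in-plane distances measured on a scout image are rescaled by the ratio of the source-to-isocenter and source-to-detector distances. The geometry values are assumed, not taken from the abstract.

      source_to_isocenter = 600.0    # mm, hypothetical CT geometry
      source_to_detector = 1100.0    # mm, hypothetical
      measured_on_scout = 22.0       # mm, separation of two BBs as projected onto the detector

      corrected = measured_on_scout * source_to_isocenter / source_to_detector
      print(f"corrected transverse separation: {corrected:.1f} mm")   # 12.0 mm at isocenter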

  14. Simulation of decay heat removal by natural convection in a pool type fast reactor model-ramona-with coupled 1D/2D thermal hydraulic code system

    SciTech Connect

    Kasinathan, N.; Rajakumar, A.; Vaidyanathan, G.; Chetal, S.C.

    1995-09-01

    Post shutdown decay heat removal is an important safety requirement in any nuclear system. In order to improve the reliability of this function, liquid metal (sodium) cooled fast breeder reactors (LMFBRs) are equipped with redundant hot pool dipped immersion coolers connected to natural draught air cooled heat exchangers through intermediate sodium circuits. During decay heat removal, flow through the core, the immersion cooler primary side, and the intermediate sodium circuits is also by natural convection. In order to establish the viability and validate the computer codes used in making predictions, a 1:20 scale experimental model called RAMONA with water as coolant has been built, and experimental simulation of the decay heat removal situation has been performed at KfK Karlsruhe. Results of two such experiments have been compiled and published as benchmarks. This paper brings out the results of the numerical simulation of one of the benchmark cases through a 1D/2D coupled code system, DHDYN-1D/THYC-2D, and the salient features of the comparisons. A brief description of the formulations of the codes is also included.

  15. TRAC code assessment using data from SCTF Core-III, a large-scale 2D/3D facility

    SciTech Connect

    Boyack, B.E.; Shire, P.R.; Harmony, S.C.; Rhee, G.

    1988-01-01

    Nine tests from the SCTF Core-III configuration have been analyzed using TRAC-PF1/MOD1. The objectives of these assessment activities were to obtain a better understanding of the phenomena occurring during the refill and reflood phases of a large-break loss-of-coolant accident, to determine the accuracy to which key parameters are calculated, and to identify deficiencies in key code correlations and models that provide closure for the differential equations defining thermal-hydraulic phenomena in pressurized water reactors. Overall, the agreement between calculated and measured values of peak cladding temperature is reasonable. In addition, TRAC adequately predicts many of the trends observed in both the integral effect and separate effect tests conducted in SCTF Core-III. The importance of assessment activities that consider potential contributors to discrepancies between the measured and calculated results arising from three sources are described as those related to (1) knowledge about the facility configuration and operation, (2) facility modeling for code input, and (3) deficiencies in code correlations and models. An example is provided. 8 refs., 7 figs., 2 tabs.

  16. ZORNOC: a 1 1/2-D tokamak data analysis code for studying noncircular high beta plasmas

    SciTech Connect

    Zurro, B.; Wieland, R.M.; Murakami, M.; Swain, D.W.

    1980-03-01

    A new tokamak data analysis code, ZORNOC, was developed to study noncircular, high beta plasmas in the Impurity Study Experiment (ISX-B). These plasmas exhibit significant flux surface shifts and elongation in both ohmically heated and beam-heated discharges. The MHD equilibrium flux surface geometry is determined by solving the Grad-Shafranov equation based on: (1) the shape of the outermost flux surface, deduced from the magnetic loop probes; (2) a pressure profile, deduced by means of Thomson scattering data (electrons), charge exchange data (ions), and a Fokker-Planck model (fast ions); and (3) a safety factor profile, determined from the experimental data using a simple model (Z_eff = const) that is self-consistently altered while the plasma equilibrium is iterated. For beam-heated discharges the beam deposition profile is determined by means of a Monte Carlo scheme and the slowing down of the fast ions by means of an analytical solution of the Fokker-Planck equation. The code also carries out an electron power balance and calculates various confinement parameters. The code is described and examples of its operation are given.

  17. Comparison of DP3 Signals Evoked by Comfortable 3D Images and 2D Images — an Event-Related Potential Study using an Oddball Task

    NASA Astrophysics Data System (ADS)

    Ye, Peng; Wu, Xiang; Gao, Dingguo; Liang, Haowen; Wang, Jiahui; Deng, Shaozhi; Xu, Ningsheng; She, Juncong; Chen, Jun

    2017-02-01

    The horizontal binocular disparity is a critical factor for the visual fatigue induced by watching stereoscopic TVs. Stereoscopic images that possess the disparity within the ‘comfort zones’ and remain still in the depth direction are considered comfortable to the viewers as 2D images. However, the difference in brain activities between processing such comfortable stereoscopic images and 2D images is still less studied. The DP3 (differential P3) signal refers to an event-related potential (ERP) component indicating attentional processes, which is typically evoked by odd target stimuli among standard stimuli in an oddball task. The present study found that the DP3 signal elicited by the comfortable 3D images exhibits the delayed peak latency and enhanced peak amplitude over the anterior and central scalp regions compared to the 2D images. The finding suggests that compared to the processing of the 2D images, more attentional resources are involved in the processing of the stereoscopic images even though they are subjectively comfortable.

  18. Comparison of DP3 Signals Evoked by Comfortable 3D Images and 2D Images - an Event-Related Potential Study using an Oddball Task.

    PubMed

    Ye, Peng; Wu, Xiang; Gao, Dingguo; Liang, Haowen; Wang, Jiahui; Deng, Shaozhi; Xu, Ningsheng; She, Juncong; Chen, Jun

    2017-02-22

    The horizontal binocular disparity is a critical factor for the visual fatigue induced by watching stereoscopic TVs. Stereoscopic images that possess the disparity within the 'comfort zones' and remain still in the depth direction are considered comfortable to the viewers as 2D images. However, the difference in brain activities between processing such comfortable stereoscopic images and 2D images is still less studied. The DP3 (differential P3) signal refers to an event-related potential (ERP) component indicating attentional processes, which is typically evoked by odd target stimuli among standard stimuli in an oddball task. The present study found that the DP3 signal elicited by the comfortable 3D images exhibits the delayed peak latency and enhanced peak amplitude over the anterior and central scalp regions compared to the 2D images. The finding suggests that compared to the processing of the 2D images, more attentional resources are involved in the processing of the stereoscopic images even though they are subjectively comfortable.

  19. Comparison of DP3 Signals Evoked by Comfortable 3D Images and 2D Images — an Event-Related Potential Study using an Oddball Task

    PubMed Central

    Ye, Peng; Wu, Xiang; Gao, Dingguo; Liang, Haowen; Wang, Jiahui; Deng, Shaozhi; Xu, Ningsheng; She, Juncong; Chen, Jun

    2017-01-01

    The horizontal binocular disparity is a critical factor for the visual fatigue induced by watching stereoscopic TVs. Stereoscopic images that possess the disparity within the ‘comfort zones’ and remain still in the depth direction are considered comfortable to the viewers as 2D images. However, the difference in brain activities between processing such comfortable stereoscopic images and 2D images is still less studied. The DP3 (differential P3) signal refers to an event-related potential (ERP) component indicating attentional processes, which is typically evoked by odd target stimuli among standard stimuli in an oddball task. The present study found that the DP3 signal elicited by the comfortable 3D images exhibits the delayed peak latency and enhanced peak amplitude over the anterior and central scalp regions compared to the 2D images. The finding suggests that compared to the processing of the 2D images, more attentional resources are involved in the processing of the stereoscopic images even though they are subjectively comfortable. PMID:28225044

  20. Tracking objects outside the line of sight using 2D intensity images

    PubMed Central

    Klein, Jonathan; Peters, Christoph; Martín, Jaime; Laurenzis, Martin; Hullin, Matthias B.

    2016-01-01

    The observation of objects located in inaccessible regions is a recurring challenge in a wide variety of important applications. Recent work has shown that using rare and expensive optical setups, indirect diffuse light reflections can be used to reconstruct objects and two-dimensional (2D) patterns around a corner. Here we show that occluded objects can be tracked in real time using much simpler means, namely a standard 2D camera and a laser pointer. Our method fundamentally differs from previous solutions by approaching the problem in an analysis-by-synthesis sense. By repeatedly simulating light transport through the scene, we determine the set of object parameters that most closely fits the measured intensity distribution. We experimentally demonstrate that this approach is capable of following the translation of unknown objects, and translation and orientation of a known object, in real time. PMID:27577969

  1. Comparison of simultaneous and sequential two-view registration for 3D/2D registration of vascular images.

    PubMed

    Pathak, Chetna; Van Horn, Mark; Weeks, Susan; Bullitt, Elizabeth

    2005-01-01

    Accurate 3D/2D vessel registration is complicated by issues of image quality, occlusion, and other problems. This study performs a quantitative comparison of 3D/2D vessel registration in which vessels segmented from preoperative CT or MR are registered with biplane x-ray angiograms by either a) simultaneous two-view registration with advance calculation of the relative pose of the two views, or b) sequential registration with each view. We conclude on the basis of phantom studies that, even in the absence of image errors, simultaneous two-view registration is more accurate than sequential registration. In more complex settings, including clinical conditions, the relative accuracy of simultaneous two-view registration is even greater.

  2. 2-D Fused Image Reconstruction approach for Microwave Tomography: a theoretical assessment using FDTD Model.

    PubMed

    Bindu, G; Semenov, S

    2013-01-01

    This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using a thin-wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction, with the extremity imaging performed using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm was tested with 10% added noise, and successful image reconstruction demonstrated its robustness.

  3. 2-D/3-D ECE imaging data for validation of turbulence simulations

    NASA Astrophysics Data System (ADS)

    Choi, Minjun; Lee, Jaehyun; Yun, Gunsu; Lee, Woochang; Park, Hyeon K.; Park, Young-Seok; Sabbagh, Steve A.; Wang, Weixing; Luhmann, Neville C., Jr.

    2015-11-01

    The 2-D/3-D KSTAR ECEI diagnostic can provide a local 2-D/3-D measurement of ECE intensity. Application of spectral analysis techniques to the ECEI data allows local estimation of frequency spectra S(f), wavenumber spectra S(k), wavenumber-frequency spectra S(k, f), and bispectra b(f1, f2) of ECE intensity over the 2-D/3-D space, which can be used to validate turbulence simulations. However, the minimum detectable fluctuation amplitude and the maximum detectable wavenumber are limited by the temporal and spatial resolutions of the diagnostic system, respectively. Also, the finite measurement area of the diagnostic channel could introduce uncertainty in the spectra estimation. The limitations and accuracy of the ECEI-estimated spectra have been tested by a synthetic ECEI diagnostic with the model and/or fluctuations calculated by GTS. Supported by the NRF of Korea under Contract No. NRF-2014M1A7A1A03029881 and NRF-2014M1A7A1A03029865 and by U.S. DOE grant DE-FG02-99ER54524.
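
    A minimal sketch of the kind of spectral estimates described above, using standard Welch-averaged auto- and cross-spectra for a pair of ECEI channels. The reduction to two channels, the segment length, and the function name are illustrative assumptions; wavenumber spectra and bispectra require the full multi-channel array and are omitted here.

    import numpy as np
    from scipy.signal import csd, welch

    def ece_channel_spectra(sig1, sig2, fs, nperseg=1024):
        """Welch-averaged auto-spectra, squared coherence and cross-phase for two ECEI channels."""
        f, s11 = welch(sig1, fs=fs, nperseg=nperseg)      # auto-spectrum S(f) of channel 1
        _, s22 = welch(sig2, fs=fs, nperseg=nperseg)      # auto-spectrum S(f) of channel 2
        _, s12 = csd(sig1, sig2, fs=fs, nperseg=nperseg)  # complex cross-spectrum
        coherence = np.abs(s12) ** 2 / (s11 * s22)        # squared coherence between channels
        cross_phase = np.angle(s12)                       # local wavenumber follows from phase / channel spacing
        return f, s11, s22, coherence, cross_phase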

  4. Magnetic resonance imaging of the cervical spine: comparison of 2D T2-weighted turbo spin echo, 2D T2*weighted gradient-recalled echo and 3D T2-weighted variable flip-angle turbo spin echo sequences.

    PubMed

    Meindl, T; Wirth, S; Weckbach, S; Dietrich, O; Reiser, M; Schoenberg, S O

    2009-03-01

    To compare an isotropic three-dimensional (3D) high-resolution T2-weighted (w) MR sequence and its reformations with conventional sequences for imaging of the cervical spine. Fifteen volunteers were examined at 1.5 T using sagittal and axial 3D T2-w, sagittal and axial 2D T2w, and axial 2D T2*w MR sequences. Axial reformations of the sagittal 3D dataset were generated (3D MPR T2w). Signal-to-noise and image homogeneity were evaluated in a phantom and in vivo. Visibility of ten anatomical structures of the cervical spine was evaluated. Artifacts were assessed. For statistical analysis, Cohen's kappa, Wilcoxon matched pairs, and t-testing were utilized. There were no significant differences in homogeneity between the sequences. Sagittal 3D T2w enabled better delineation of nerve roots, neural foramina, and intraforaminal structures compared to sagittal 2D T2w. Axial 3D T2w and axial 3D MPR T2w resulted in superior visibility of most anatomical structures compared to axial 2D T2w and comparable results to 2D T2*w concerning the spinal cord, nerve roots, intraforaminal structures, and fat. Artifacts were most pronounced in axial 2D T2w and axial 3D T2w. Acquisition of a 3D T2w data set is feasible in the cervical spine with superior delineation of anatomical structures compared to 2D sequences.

  5. 2D wavelet-analysis-based calibration technique for flat-panel imaging detectors: application in cone beam volume CT

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Ning, Ruola; Yu, Rongfeng; Conover, David L.

    1999-05-01

    The application of the newly developed flat panel x-ray imaging detector in cone beam volume CT has attracted increasing interest recently. Due to an imperfect solid state array manufacturing process, however, defective elements, gain non-uniformity and offset image unavoidably exist in all kinds of flat panel x-ray imaging detectors, which will cause severe streak and ring artifacts in a cone beam reconstruction image and severely degrade image quality. A calibration technique, in which the artifacts resulting from the defective elements, gain non-uniformity and offset image can be reduced significantly, is presented in this paper. The detection of defective elements is distinctively based upon two-dimensional (2D) wavelet analysis. Because of its inherent localizability in recognizing singularities or discontinuities, wavelet analysis possesses the capability of detecting defective elements over a rather large x-ray exposure range, e.g., 20% to approximately 60% of the dynamic range of the detector used. Three-dimensional (3D) images of a low-contrast CT phantom have been reconstructed from projection images acquired by a flat panel x-ray imaging detector with and without the calibration process applied. The artifacts caused individually by defective elements, gain non-uniformity and offset image have been separated and investigated in detail, and their mutual correlation has also been examined explicitly. The investigation is supported by quantitative analysis of the signal to noise ratio (SNR) and the image uniformity of the cone beam reconstruction image. It has been demonstrated that the ring and streak artifacts resulting from the imperfect performance of a flat panel x-ray imaging detector can be reduced dramatically, and then the image qualities of a cone beam reconstruction image, such as contrast resolution and image uniformity, are improved significantly. Furthermore, with little modification, the calibration technique presented here is also applicable
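
    As a rough illustration of the defect-detection idea (not the authors' algorithm), the sketch below applies a single-level 2D discrete wavelet transform to a flood-field frame and flags pixels whose detail-band response is anomalously large. The wavelet, the robust threshold, and the function name are assumptions chosen for the example.

    import numpy as np
    import pywt

    def flag_defective_elements(flood_frame, wavelet="db2", k=6.0):
        """Flag detector elements whose wavelet detail response marks them as singular."""
        # Single-level 2D DWT: isolated defective pixels show up strongly in the detail bands.
        _, (cH, cV, cD) = pywt.dwt2(flood_frame.astype(float), wavelet)
        detail = np.abs(cH) + np.abs(cV) + np.abs(cD)
        # Robust threshold from the median absolute deviation of the detail energy.
        med = np.median(detail)
        thr = med + k * 1.4826 * np.median(np.abs(detail - med))
        mask_half = detail > thr
        # The detail bands are half resolution; expand the mask back to the detector grid.
        mask = np.repeat(np.repeat(mask_half, 2, axis=0), 2, axis=1)
        return mask[:flood_frame.shape[0], :flood_frame.shape[1]]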

  6. Quantification and geometric analysis of coiling patterns in gastropod shells based on 3D and 2D image data.

    PubMed

    Noshita, Koji

    2014-12-21

    The morphology of gastropod shells has been a focus of analyses in ecology and evolution. It has recently emerged as an important issue in developmental biology, thanks to recent advancements in molecular biological techniques. The growing tube model is a theoretical morphological model for describing various coiling patterns of molluscan shells, and it is a useful theoretical tool to relate local tissue growth with global shell morphology. However, the growing tube model has rarely been adopted in empirical research owing to the difficulty in estimating the parameters of the model from morphological data. In this article, I solve this problem by developing methods of parameter estimation when (1) 3D Computed Tomography (CT) data are available and (2) only 2D image data (such as photographs) are available. When 3D CT data are available, the parameters can be estimated by fitting an analytical solution of the growing tube model to the data. When only 2D image data are available, we first fit Raup's model to the 2D image data and then convert the parameters of Raup's model to those of the growing tube model. To illustrate the use of these methods, I apply them to data generated by a computer simulation of the model. Both methods work well, except when shells grow without coiling. I also demonstrate the effectiveness of the methods by applying the model to actual 3D CT data and 2D image data of land snails. I conclude that the method proposed in this article can reconstruct the coiling pattern from observed data.

  7. Two Fibonacci P-code based image scrambling algorithms

    NASA Astrophysics Data System (ADS)

    Zhou, Yicong; Agaian, Sos; Joyner, Valencia M.; Panetta, Karen

    2008-02-01

    Image scrambling is used to make images visually unrecognizable such that unauthorized users have difficulty decoding the scrambled image to access the original image. This article presents two new image scrambling algorithms based on Fibonacci p-code, a parametric sequence. The first algorithm works in the spatial domain and the second in the frequency domain (including the JPEG domain). A parameter, p, is used as a security key and has many possible choices to guarantee the high security of the scrambled images. The presented algorithms can be implemented for encoding/decoding both in full and partial image scrambling, and can be used in real-time applications, such as image data hiding and encryption. Examples of image scrambling are provided. Computer simulations are shown to demonstrate that the presented methods also have good performance under common image attacks such as cutting (data loss), compression and noise. The new scrambling methods can be applied to grey level images and to the three color components of color images. A new Lucas p-code is also introduced. The images scrambled with the Fibonacci p-code are also compared to the scrambling results of the classic Fibonacci sequence and the Lucas p-code. This demonstrates that the classical Fibonacci sequence is a special case of the Fibonacci p-code and shows the different scrambling results of the Fibonacci p-code and the Lucas p-code.
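
    To make the parametric sequence concrete, the sketch below generates Fibonacci p-numbers (which reduce to the classical Fibonacci sequence for p = 1) and uses them to drive a simple, invertible row permutation. The permutation scheme, the initial terms and the function names are illustrative assumptions, not the exact transform used in the paper.

    import numpy as np

    def fibonacci_p_sequence(p, length):
        """Fibonacci p-numbers: F(n) = F(n-1) + F(n-p-1); p = 1 gives the classical Fibonacci sequence."""
        seq = [1] * (p + 1)                      # one common choice of initial terms
        while len(seq) < length:
            seq.append(seq[-1] + seq[-(p + 1)])
        return seq[:length]

    def scramble_rows(image, p):
        """Illustrative scrambling: permute the rows by a key derived from Fibonacci p-numbers."""
        h = image.shape[0]
        keys = np.array(fibonacci_p_sequence(p, h)) % h
        order = np.argsort(keys, kind="stable")  # key-dependent, invertible permutation
        return image[order], order               # keep 'order' to invert the scrambling later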

  8. Adaptive clutter filter in 2-D color flow imaging based on in vivo I/Q signal.

    PubMed

    Zhou, Xiaoming; Zhang, Congyao; Liu, Dong C

    2014-01-01

    Color flow imaging is widely used in clinical diagnosis. To obtain high-quality color flow images, clutter filtering is essential for separating the Doppler signals of blood from those of tissue. Traditional clutter filters, such as finite impulse response, infinite impulse response and regression filters, rely on the assumption that the clutter signal is stationary or that tissue moves slowly. In realistic clinical color flow imaging, however, the signals are non-stationary because of accelerating tissue motion. Moreover, most related studies rely on simulated RF signals rather than in vivo I/Q data. Hence, in this paper, an adaptive polynomial regression filter that down-mixes with the instantaneous clutter frequency is proposed and evaluated on in vivo carotid I/Q signals from realistic color flow imaging. To obtain the best performance, the optimal polynomial order of the regression filter and the optimal polynomial order for estimating the instantaneous clutter frequency were determined. Comparisons of mean blood velocity and 2-D color flow image quality show that the adaptive polynomial regression filter with instantaneous-clutter-frequency down-mixing significantly improves the mean blood velocity estimate and yields high-quality 2-D color flow images.
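
    A minimal sketch of the (non-adaptive) polynomial regression clutter filter underlying the method described above: the slow-time IQ ensemble is projected onto a low-order polynomial subspace that models tissue clutter, and the residual is taken as the blood signal. The adaptive down-mixing with the instantaneous clutter frequency is omitted; array shapes, the basis construction and the function name are assumptions for illustration.

    import numpy as np

    def polynomial_regression_clutter_filter(iq, order=3):
        """Remove tissue clutter by projecting out a polynomial subspace along slow time.

        iq: complex array of shape (n_ensemble, n_samples); order: clutter model order.
        """
        n = iq.shape[0]
        t = np.linspace(-1.0, 1.0, n)
        # Orthonormal polynomial basis over slow time (QR of a Vandermonde matrix).
        basis, _ = np.linalg.qr(np.vander(t, order + 1, increasing=True))
        clutter = basis @ (basis.conj().T @ iq)   # projection onto the clutter subspace
        return iq - clutter                       # residual kept as the blood (Doppler) signal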

  9. Twin robotic x-ray system for 2D radiographic and 3D cone-beam CT imaging

    NASA Astrophysics Data System (ADS)

    Fieselmann, Andreas; Steinbrener, Jan; Jerebko, Anna K.; Voigt, Johannes M.; Scholz, Rosemarie; Ritschl, Ludwig; Mertelmeier, Thomas

    2016-03-01

    In this work, we provide an initial characterization of a novel twin robotic X-ray system. This system is equipped with two motor-driven telescopic arms carrying X-ray tube and flat-panel detector, respectively. 2D radiographs and fluoroscopic image sequences can be obtained from different viewing angles. Projection data for 3D cone-beam CT reconstruction can be acquired during simultaneous movement of the arms along dedicated scanning trajectories. We provide an initial evaluation of the 3D image quality based on phantom scans and clinical images. Furthermore, initial evaluation of patient dose is conducted. The results show that the system delivers high image quality for a range of medical applications. In particular, high spatial resolution enables adequate visualization of bone structures. This system allows 3D X-ray scanning of patients in standing and weight-bearing position. It could enable new 2D/3D imaging workflows in musculoskeletal imaging and improve diagnosis of musculoskeletal disorders.

  10. A challenge problem for 2D/3D imaging of targets from a volumetric data set in an urban environment

    NASA Astrophysics Data System (ADS)

    Casteel, Curtis H., Jr.; Gorham, LeRoy A.; Minardi, Michael J.; Scarborough, Steven M.; Naidu, Kiranmai D.; Majumder, Uttam K.

    2007-04-01

    This paper describes a challenge problem whose scope is the 2D/3D imaging of stationary targets from a volumetric data set of X-band Synthetic Aperture Radar (SAR) data collected in an urban environment. The data for this problem was collected at a scene consisting of numerous civilian vehicles and calibration targets. The radar operated in circular SAR mode and completed 8 circular flight paths around the scene with varying altitudes. Data consists of phase history data, auxiliary data, processing algorithms, processed images, as well as ground truth data. Interest is focused on mitigating the large side lobes in the point spread function. Due to the sparse nature of the elevation aperture, traditional imaging techniques introduce excessive artifacts in the processed images. Further interests include the formation of high-resolution 3D SAR images with single-pass data and feature extraction for 3D SAR automatic target recognition applications. The purpose of releasing the Gotcha Volumetric SAR Data Set is to provide the community with X-band SAR data that supports the development of new algorithms for high-resolution 2D/3D imaging.

  11. Adaptive zero-tree structure for curved wavelet image coding

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Wang, Demin; Vincent, André

    2006-02-01

    We investigate the issue of efficient data organization and representation of the curved wavelet coefficients [curved wavelet transform (WT)]. We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. In the embedded zero-tree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT) coders, the parent-child relationship is defined in such a way that a parent has four children restricted to a square of 2×2 pixels; in the adaptive zero-tree structure, the parent-child relationship varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results using synthetic and natural images showed the effectiveness of the proposed adaptive zero-tree structure for encoding of the curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in terms of peak SNR (PSNR) compared to the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.

  12. PynPoint code for exoplanet imaging

    NASA Astrophysics Data System (ADS)

    Amara, A.; Quanz, S. P.; Akeret, J.

    2015-04-01

    We announce the public release of PynPoint, a Python package that we have developed for analysing exoplanet data taken with the angular differential imaging observing technique. In particular, PynPoint is designed to model the point spread function of the central star and to subtract its flux contribution to reveal nearby faint companion planets. The current version of the package does this correction by using a principal component analysis method to build a basis set for modelling the point spread function of the observations. We demonstrate the performance of the package by reanalysing publicly available data on the exoplanet β Pictoris b, which consists of close to 24,000 individual image frames. We show that PynPoint is able to analyse this typical data in roughly 1.5 min on a Mac Pro, when the number of images is reduced by co-adding in sets of 5. The main computational work, the calculation of the Singular-Value-Decomposition, parallelises well as a result of a reliance on the SciPy and NumPy packages. For this calculation the peak memory load is 6 GB, which can be run comfortably on most workstations. A simpler calculation, by co-adding over 50, takes 3 s with a peak memory usage of 600 MB. This can be performed easily on a laptop. In developing the package we have modularised the code so that we will be able to extend functionality in future releases, through the inclusion of more modules, without it affecting the users application programming interface. We distribute the PynPoint package under GPLv3 licence through the central PyPI server, and the documentation is available online (http://pynpoint.ethz.ch).
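
    The core principal-component step can be sketched in a few lines of NumPy; this is only the central idea, not PynPoint's actual pipeline (which also handles co-adding, masking and derotation of the angular-differential-imaging frames). The function name and the fixed number of components are assumptions.

    import numpy as np

    def pca_psf_subtraction(frames, n_components=20):
        """Subtract a PCA model of the stellar PSF from a stack of ADI frames.

        frames: array of shape (n_frames, ny, nx); returns residual frames of the same shape.
        """
        n, ny, nx = frames.shape
        x = frames.reshape(n, ny * nx).astype(float)
        x -= x.mean(axis=0)                       # remove the mean image
        # Principal components of the stack via SVD (rows of vt are eigen-images).
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        basis = vt[:n_components]
        model = (x @ basis.T) @ basis             # projection of each frame onto the PSF basis
        return (x - model).reshape(n, ny, nx)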

  13. Calibration model of a dual gain flat panel detector for 2D and 3D x-ray imaging

    SciTech Connect

    Schmidgunst, C.; Ritter, D.; Lang, E.

    2007-09-15

    The continuing research and further development in flat panel detector technology have led to its integration into more and more medical x-ray systems for two-dimensional (2D) and three-dimensional (3D) imaging, such as fixed or mobile C arms. Besides the obvious advantages of flat panel detectors, like the slim design and the resulting optimum accessibility to the patient, their success is primarily a product of the image quality that can be achieved. The benefits in the physical and performance-related features as opposed to conventional image intensifier systems (e.g., distortion-free reproduction of imaging information or almost linear signal response over a large dynamic range) can be fully exploited, however, only if the raw detector images are correctly calibrated and postprocessed. Previous procedures for processing raw data contain idealizations that, in the real world, lead to artifacts or losses in image quality. Thus, for example, temperature dependencies or changes in beam geometry, as can occur with mobile C arm systems, have not been taken into account up to this time. Additionally, adverse characteristics such as image lag or aging effects have to be compensated to attain the best possible image quality. In this article a procedure is presented that takes into account the important dependencies of the individual pixel sensitivity of flat panel detectors used in 2D or 3D imaging and simultaneously minimizes the work required for an extensive recalibration. It is suitable for conventional detectors with only one gain mode as well as for the detectors specially developed for 3D imaging with dual gain read-out technology.
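
    For orientation, the idealized offset/gain/defect correction that the article's calibration model refines (it additionally accounts for dual-gain read-out, temperature and beam-geometry dependence) can be sketched as follows. The function name, the 3×3 defect interpolation and the normalization choice are assumptions.

    import numpy as np
    from scipy.ndimage import median_filter

    def correct_flat_panel_frame(raw, offset, flood, defect_mask=None):
        """Idealized single-gain offset/gain correction of a flat-panel frame.

        raw, offset, flood: 2D frames; flood is an (offset-uncorrected) flat exposure.
        """
        gain = flood.astype(float) - offset            # per-pixel sensitivity estimate
        gain[gain <= 0] = np.nan                       # guard against dead pixels
        corrected = (raw.astype(float) - offset) / gain
        corrected *= np.nanmean(gain)                  # restore the mean signal level
        if defect_mask is not None:
            # Replace defective pixels by the median of their 3x3 neighbourhood.
            filled = median_filter(np.nan_to_num(corrected), size=3)
            corrected[defect_mask] = filled[defect_mask]
        return corrected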

  14. Calibration model of a dual gain flat panel detector for 2D and 3D x-ray imaging.

    PubMed

    Schmidgunst, C; Ritter, D; Lang, E

    2007-09-01

    The continuing research and further development in flat panel detector technology have led to its integration into more and more medical x-ray systems for two-dimensional (2D) and three-dimensional (3D) imaging, such as fixed or mobile C arms. Besides the obvious advantages of flat panel detectors, like the slim design and the resulting optimum accessibility to the patient, their success is primarily a product of the image quality that can be achieved. The benefits in the physical and performance-related features as opposed to conventional image intensifier systems (e.g., distortion-free reproduction of imaging information or almost linear signal response over a large dynamic range) can be fully exploited, however, only if the raw detector images are correctly calibrated and postprocessed. Previous procedures for processing raw data contain idealizations that, in the real world, lead to artifacts or losses in image quality. Thus, for example, temperature dependencies or changes in beam geometry, as can occur with mobile C arm systems, have not been taken into account up to this time. Additionally, adverse characteristics such as image lag or aging effects have to be compensated to attain the best possible image quality. In this article a procedure is presented that takes into account the important dependencies of the individual pixel sensitivity of flat panel detectors used in 2D or 3D imaging and simultaneously minimizes the work required for an extensive recalibration. It is suitable for conventional detectors with only one gain mode as well as for the detectors specially developed for 3D imaging with dual gain read-out technology.

  15. Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform

    NASA Astrophysics Data System (ADS)

    Liu, Bao-Lei; Yang, Zhao-Hua; Liu, Xia; Wu, Ling-An

    2017-02-01

    We propose and demonstrate a computational imaging technique that uses structured illumination based on a two-dimensional discrete cosine transform to perform imaging with a single-pixel detector. A scene is illuminated by a projector with two sets of orthogonal patterns, then by applying an inverse cosine transform to the spectra obtained from the single-pixel detector a full-color image is retrieved. This technique can retrieve an image from sub-Nyquist measurements, and the background noise is easily canceled to give excellent image quality. Moreover, the experimental setup is very simple.
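
    Since each projected pattern is a 2D DCT basis function, each bucket-detector reading is (up to scale) one DCT coefficient of the scene, and keeping only low-order patterns gives sub-Nyquist acquisition. The simulation below illustrates that relationship; the real system projects complementary pattern pairs per colour channel and differences the readings, which is omitted here, and the function name is an assumption.

    import numpy as np
    from scipy.fft import dctn, idctn

    def simulate_single_pixel_dct(scene, keep_fraction=0.25):
        """Simulate single-pixel imaging with 2D-DCT structured illumination (one colour channel)."""
        coeffs = dctn(scene, norm="ortho")            # what the bucket detector would measure
        ny, nx = scene.shape
        ky, kx = int(ny * keep_fraction), int(nx * keep_fraction)
        measured = np.zeros_like(coeffs)
        measured[:ky, :kx] = coeffs[:ky, :kx]         # acquire only the low-frequency patterns
        return idctn(measured, norm="ortho")          # inverse cosine transform = reconstruction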

  16. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered-subset expectation maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
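
    The reconstruction step from the assembled sinograms can be illustrated with the basic MLEM update (ordered-subset EM applies the same update to subsets of the projection bins). The dense system-matrix representation and function name below are assumptions chosen for clarity; real implementations use on-the-fly projectors.

    import numpy as np

    def mlem(sys_matrix, projections, n_iter=20):
        """Maximum-likelihood EM reconstruction for emission tomography.

        sys_matrix: (n_bins, n_voxels) forward model; projections: measured counts, shape (n_bins,).
        """
        x = np.ones(sys_matrix.shape[1])                   # uniform initial image
        sensitivity = sys_matrix.sum(axis=0)               # A^T 1, the per-voxel sensitivity
        for _ in range(n_iter):
            expected = sys_matrix @ x                      # forward projection of current image
            ratio = projections / np.maximum(expected, 1e-12)
            x *= (sys_matrix.T @ ratio) / np.maximum(sensitivity, 1e-12)
        return x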

  17. A New Cell-Centered Implicit Numerical Scheme for Ions in the 2-D Axisymmetric Code Hall2de

    NASA Technical Reports Server (NTRS)

    Lopez Ortega, Alejandro; Mikellides, Ioannis G.

    2014-01-01

    We present a new algorithm in the Hall2De code to simulate the ion hydrodynamics in the acceleration channel and near plume regions of Hall-effect thrusters. This implementation constitutes an upgrade of the capabilities built in the Hall2De code. The equations of mass conservation and momentum for unmagnetized ions are solved using a conservative, finite-volume, cell-centered scheme on a magnetic-field-aligned grid. Major computational savings are achieved by making use of an implicit predictor/multi-corrector algorithm for time evolution. Inaccuracies in the prediction of the motion of low-energy ions in the near plume in hydrodynamics approaches are addressed by implementing a multi-fluid algorithm that tracks ions of different energies separately. A wide range of comparisons with measurements are performed to validate the new ion algorithms. Several numerical experiments with the location and value of the anomalous collision frequency are also presented. Differences in the plasma properties in the near-plume between the single fluid and multi-fluid approaches are discussed. We complete our validation by comparing predicted erosion rates at the channel walls of the thruster with measurements. Erosion rates predicted by the plasma properties obtained from simulations replicate accurately measured rates of erosion within the uncertainty range of the sputtering models employed.

  18. The FlatModel: a 2D numerical code to evaluate debris flow dynamics. Eastern Pyrenees basins application.

    NASA Astrophysics Data System (ADS)

    Bateman, A.; Medina, V.; Hürlimann, M.

    2009-04-01

    Debris flows are present in every country where a combination of high mountains and flash floods exists. In the northern part of the Iberian Peninsula, in the Pyrenees, sporadic debris-flow events occur. We selected two different events. The first was triggered at La Guingueta by the exceptional 1982 flood event, which produced many debris flows spread all over the Catalonian Pyrenees. The second, more local event occurred in 2000 at the Montserrat mountain in the Pre-litoral mountain chain. We present here some results of the FLATModel, developed entirely at the Research Group in Sediment Transport of the Hydraulic, Marine and Environmental Engineering Department (GITS-UPC). The 2D FLATModel is a finite volume method that uses the Godunov scheme. Several numerical arrangements have been made to analyze the entrainment process during the events, the stop-and-go phenomena, and the final deposit of the material. The material rheology implemented is the Voellmy approach, because it represents the frictional and turbulent behavior well. The FLATModel uses a GIS environment that facilitates data analysis as well as the comparison between field and numerical data. The two events have different characteristics: one is practically a one-dimensional problem of 1400 m in length, while the other has a more two-dimensional behavior that forms a large fan.

  19. Fast ion induced shearing of 2D Alfvén eigenmodes measured by electron cyclotron emission imaging.

    PubMed

    Tobias, B J; Classen, I G J; Domier, C W; Heidbrink, W W; Luhmann, N C; Nazikian, R; Park, H K; Spong, D A; Van Zeeland, M A

    2011-02-18

    Two-dimensional images of electron temperature perturbations are obtained with electron cyclotron emission imaging (ECEI) on the DIII-D tokamak and compared to Alfvén eigenmode structures obtained by numerical modeling using both ideal MHD and hybrid MHD-gyrofluid codes. While many features of the observations are found to be in excellent agreement with simulations using an ideal MHD code (NOVA), other characteristics distinctly reveal the influence of fast ions on the mode structures. These features are found to be well described by the nonperturbative hybrid MHD-gyrofluid model TAEFL.

  20. Fast Ion Induced Shearing of 2D Alfvén Eigenmodes Measured by Electron Cyclotron Emission Imaging

    NASA Astrophysics Data System (ADS)

    Tobias, B. J.; Classen, I. G. J.; Domier, C. W.; Heidbrink, W. W.; Luhmann, N. C., Jr.; Nazikian, R.; Park, H. K.; Spong, D. A.; van Zeeland, M. A.

    2011-02-01

    Two-dimensional images of electron temperature perturbations are obtained with electron cyclotron emission imaging (ECEI) on the DIII-D tokamak and compared to Alfvén eigenmode structures obtained by numerical modeling using both ideal MHD and hybrid MHD-gyrofluid codes. While many features of the observations are found to be in excellent agreement with simulations using an ideal MHD code (NOVA), other characteristics distinctly reveal the influence of fast ions on the mode structures. These features are found to be well described by the nonperturbative hybrid MHD-gyrofluid model TAEFL.

  1. Assessment of the effects of scrape-off layer fluctuations on first wall sputtering with the TOKAM-2D turbulence code

    NASA Astrophysics Data System (ADS)

    Marandet, Y.; Nace, N.; Valentinuzzi, M.; Tamain, P.; Bufferand, H.; Ciraolo, G.; Genesio, P.; Mellet, N.

    2016-11-01

    Plasma material interactions on the first wall of future tokamaks such as ITER and DEMO are likely to play an important role, because of turbulent radial transport. The latter results to a large extent from the radial propagation of plasma filaments through a tenuous background. In such a situation, mean field descriptions (on which transport codes rely) become questionable. First wall sputtering is of particular interest, especially in a full W machine, since it has been shown experimentally that first wall sources control core contamination. In ITER, beryllium sources will be one of the important actors in determining the fuel retention level through codeposition. In this work, we study the effect of turbulent fluctuations on mean sputtering yields and fluxes, relying on a new version of the TOKAM-2D code which includes ion temperature fluctuations. We show that fluctuations enhance sputtering at sub-threshold impact energies, by more than an order of magnitude when fluctuation levels are of order unity.

  2. Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images

    PubMed Central

    Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.

    2012-01-01

    Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC) that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773

  3. Registration of 2D C-Arm and 3D CT Images for a C-Arm Image-Assisted Navigation System for Spinal Surgery

    PubMed Central

    Chang, Chih-Ju; Lin, Geng-Li; Tse, Alex; Chu, Hong-Yu; Tseng, Ching-Shiow

    2015-01-01

    C-Arm image-assisted surgical navigation system has been broadly applied to spinal surgery. However, accurate path planning on the C-Arm AP-view image is difficult. This research studies 2D-3D image registration methods to obtain the optimum transformation matrix between C-Arm and CT image frames. Through the transformation matrix, the surgical path planned on preoperative CT images can be transformed and displayed on the C-Arm images for surgical guidance. The positions of surgical instruments will also be displayed on both CT and C-Arm in real time. Five similarity measure methods of 2D-3D image registration including Normalized Cross-Correlation, Gradient Correlation, Pattern Intensity, Gradient Difference Correlation, and Mutual Information combined with three optimization methods including Powell's method, Downhill simplex algorithm, and genetic algorithm are applied to evaluate their performance in convergence range, efficiency, and accuracy. Experimental results show that the combination of the Normalized Cross-Correlation measure method with the Downhill simplex algorithm obtains maximum correlation and similarity in C-Arm and Digitally Reconstructed Radiograph (DRR) images. Spine sawbones are used in the experiment to evaluate 2D-3D image registration accuracy. The average error in displacement is 0.22 mm. The success rate is approximately 90%, and the average registration time is 16 seconds. PMID:27018859
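
    A minimal sketch of the best-performing combination reported above (normalized cross-correlation optimized with the downhill simplex method). The DRR renderer is passed in as a hypothetical callable render_drr(pose); the pose parameterization and function names are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def ncc(a, b):
        """Normalized cross-correlation between two images of identical shape."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    def register_pose(pose0, c_arm_image, render_drr):
        """Find the CT-to-C-Arm pose maximizing NCC between the DRR and the C-Arm image."""
        cost = lambda pose: -ncc(render_drr(pose), c_arm_image)   # maximize NCC
        result = minimize(cost, pose0, method="Nelder-Mead")      # downhill simplex search
        return result.x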

  4. Diagnostic algorithm: how to make use of new 2D, 3D and 4D ultrasound technologies in breast imaging.

    PubMed

    Weismann, C F; Datz, L

    2007-11-01

    The aim of this publication is to present a time saving diagnostic algorithm consisting of two-dimensional (2D), three-dimensional (3D) and four-dimensional (4D) ultrasound (US) technologies. This algorithm of eight steps combines different imaging modalities and render modes which allow a step by step analysis of 2D, 3D and 4D diagnostic criteria. Advanced breast US systems with broadband high frequency linear transducers, full digital data management and high resolution are the actual basis for two-dimensional breast US studies in order to detect early breast cancer (step 1). The continuous developments of 2D US technologies including contrast resolution imaging (CRI) and speckle reduction imaging (SRI) have a direct influence on the high quality of three-dimensional and four-dimensional presentation of anatomical breast structures and pathological details. The diagnostic options provided by static 3D volume datasets according to US BI-RADS analogue assessment, concerning lesion shape, orientation, margin, echogenic rim sign, lesion echogenicity, acoustic transmission, associated calcifications, 3D criteria of the coronal plane, surrounding tissue composition (step 2) and lesion vascularity (step 6) are discussed. Static 3D datasets offer the combination of long axes distance measurements and volume calculations, which are the basis for an accurate follow-up in BI-RADS II and BI-RADS III lesions (step 3). Real time 4D volume contrast imaging (VCI) is able to demonstrate tissue elasticity (step 5). Glass body rendering is a static 3D tool which presents greyscale and colour information to study the vascularity and the vascular architecture of a lesion (step 6). Tomographic ultrasound imaging (TUI) is used for a slice by slice documentation in different investigation planes (A-,B- or C-plane) (steps 4 and 7). The final step 8 uses the panoramic view technique (XTD-View) to document the localisation within the breast and to make the position of a lesion simply

  5. Multi-imaging capabilities of a 2D diffraction grating in combination with digital holography.

    PubMed

    Paturzo, Melania; Merola, Francesco; Ferraro, Pietro

    2010-04-01

    In this Letter we report on an alternative approach to obtaining multiple images in microscopy, exploiting the capabilities of both a lithium niobate diffraction grating and the digital holographic technique. We demonstrate that multi-imaging can be achieved in a lensless configuration by using a hexagonal diffraction grating while overcoming, thanks to digital holography (DH), the many constraints imposed by the grating parameters in multi-imaging with the Talbot effect or Talbot array illuminators. In fact, DH permits the numerical reconstruction of the optical field diffracted by the grating, thus obtaining in-focus multiple images in a plane different from the fractional or entire Talbot ones.

  6. Characterization of controlled bone defects using 2D and 3D ultrasound imaging techniques.

    PubMed

    Parmar, Biren J; Longsine, Whitney; Sabonghy, Eric P; Han, Arum; Tasciotti, Ennio; Weiner, Bradley K; Ferrari, Mauro; Righetti, Raffaella

    2010-08-21

    Ultrasound is emerging as an attractive alternative modality to standard x-ray and CT methods for bone assessment applications. As of today, however, there is a lack of systematic studies that investigate the performance of diagnostic ultrasound techniques in bone imaging applications. This study aims at understanding the performance limitations of new ultrasound techniques for imaging bones in controlled experiments in vitro. Experiments are performed on samples of mammalian and non-mammalian bones with controlled defects with sizes ranging from 400 μm to 5 mm. Ultrasound findings are statistically compared with those obtained from the same samples using standard x-ray imaging modalities and optical microscopy. The results of this study demonstrate that it is feasible to use diagnostic ultrasound imaging techniques to assess sub-millimeter bone defects in real time and with high accuracy and precision. These results also demonstrate that ultrasound imaging techniques perform better than x-ray imaging and optical imaging methods in the assessment of a wide range of controlled defects both in mammalian and non-mammalian bones. In the future, ultrasound imaging techniques might provide a cost-effective, real-time, safe and portable diagnostic tool for bone imaging applications.

  7. Soft-tissues Image Processing: Comparison of Traditional Segmentation Methods with 2D active Contour Methods

    NASA Astrophysics Data System (ADS)

    Mikulka, J.; Gescheidtova, E.; Bartusek, K.

    2012-01-01

    The paper deals with modern methods of image processing, especially image segmentation, classification and evaluation of parameters. It focuses primarily on processing medical images of soft tissues obtained by magnetic resonance tomography (MR). It is easy to describe edges of the sought objects using segmented images. The edges found can be useful for further processing of monitored object such as calculating the perimeter, surface and volume evaluation or even three-dimensional shape reconstruction. The proposed solutions can be used for the classification of healthy/unhealthy tissues in MR or other imaging. Application examples of the proposed segmentation methods are shown. Research in the area of image segmentation focuses on methods based on solving partial differential equations. This is a modern method for image processing, often called the active contour method. It is of great advantage in the segmentation of real images degraded by noise with fuzzy edges and transitions between objects. In the paper, results of the segmentation of medical images by the active contour method are compared with results of the segmentation by other existing methods. Experimental applications which demonstrate the very good properties of the active contour method are given.
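
    As a small, hedged example of the active contour idea (using the snake implementation available in scikit-image rather than the authors' own PDE solver), the sketch below fits a closed contour around a roughly circular structure in an MR slice. The initialization, the weighting parameters and the function name are illustrative assumptions.

    import numpy as np
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    def segment_round_structure(image, center, radius, n_points=200):
        """Fit a closed active contour (snake) around a roughly circular soft-tissue structure."""
        s = np.linspace(0, 2 * np.pi, n_points)
        init = np.column_stack([center[0] + radius * np.sin(s),   # (row, col) initial circle
                                center[1] + radius * np.cos(s)])
        smoothed = gaussian(image, sigma=3, preserve_range=True)  # suppress noise before evolution
        return active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)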

  8. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e., they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation as well as finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities makes it possible to enhance the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described, generating a set of three different dimensional images representing the same anatomies. First, an in-vitro MRI measurement is performed which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimension and are used for the spatial three-dimensional reconstruction of the object performed by image

  9. Infrared imaging of 2-D temperature distribution during cryogen spray cooling.

    PubMed

    Choi, Bernard; Welch, Ashley J

    2002-12-01

    Cryogen spray cooling (CSC) is used in conjunction with pulsed laser irradiation for treatment of dermatologic indications. The main goal of this study was to determine the radial temperature distribution created by CSC and evaluate the importance of radial temperature gradients upon the subsequent analysis of tissue cooling throughout the skin. Since direct measurement of surface temperatures during CSC is hindered by the formation of a liquid cryogen layer, temperature distributions were estimated using a thin, black aluminum sheet. An infrared focal plane array camera was used to determine the 2-D backside temperature distribution during a cryogen spurt, which preliminary measurements have shown is a good indicator of the front-side temperature distribution. The measured temperature distribution was approximately Gaussian in shape. Next, the transient temperature distributions in skin were calculated for two cases: 1) the standard 1-D solution which assumes a uniform cooling temperature distribution, and 2) a 2-D solution using a nonuniform surface cooling temperature distribution based upon the back-side infrared temperature measurements. At the end of a 100-ms cryogen spurt, calculations showed that, for the two cases, large discrepancies in temperatures at the surface and at a 60-micron depth were found at radii greater than 2.5 mm. These results suggest that it is necessary to consider radial temperature gradients during cryogen spray cooling of tissue.

  10. Nonrigid Registration of 2-D and 3-D Dynamic Cell Nuclei Images for Improved Classification of Subcellular Particle Motion

    PubMed Central

    Kim, Il-Han; Chen, Yi-Chun M.; Spector, David L.; Eils, Roland; Rohr, Karl

    2012-01-01

    The observed motion of subcellular particles in fluorescence microscopy image sequences of live cells is generally a superposition of the motion and deformation of the cell and the motion of the particles. Decoupling the two types of movements to enable accurate classification of the particle motion requires the application of registration algorithms. We have developed an intensity-based approach for nonrigid registration of multi-channel microscopy image sequences of cell nuclei. First, based on 3-D synthetic images we demonstrate that cell nucleus deformations change the observed motion types of particles and that our approach allows to recover the original motion. Second, we have successfully applied our approach to register 2-D and 3-D real microscopy image sequences. A quantitative experimental comparison with previous approaches for nonrigid registration of cell microscopy has also been performed. PMID:20840894

  11. Recent advances in CZT strip detectors and coded mask imagers

    NASA Astrophysics Data System (ADS)

    Matteson, J. L.; Gruber, D. E.; Heindl, W. A.; Pelling, M. R.; Peterson, L. E.; Rothschild, R. E.; Skelton, R. T.; Hink, P. L.; Slavis, K. R.; Binns, W. R.; Tumer, T.; Visser, G.

    1999-09-01

    The UCSD, WU, UCR and Nova collaboration has made significant progress on the necessary techniques for coded mask imaging of gamma-ray bursts: position sensitive CZT detectors with good energy resolution, ASIC readout, coded mask imaging, and background properties at balloon altitudes. Results on coded mask imaging techniques appropriate for wide field imaging and localization of gamma-ray bursts are presented, including a shadowgram and deconvolved image taken with a prototype detector/ASIC and MURA mask. This research was supported by NASA Grants NAG5-5111, NAG5-5114, and NGT5-50170.

  12. 2D and 3D imaging resolution trade-offs in quantifying pore throats for prediction of permeability

    SciTech Connect

    Beckingham, Lauren E.; Peters, Catherine A.; Um, Wooyong; Jones, Keith W.; Lindquist, W.Brent

    2013-09-03

    Although the impact of subsurface geochemical reactions on porosity is relatively well understood, changes in permeability remain difficult to estimate. In this work, pore-network modeling was used to predict permeability based on pore- and pore-throat size distributions determined from analysis of 2D scanning electron microscopy (SEM) images of thin sections and 3D X-ray computed microtomography (CMT) data. The analyzed specimens were a Viking sandstone sample from the Alberta sedimentary basin and an experimental column of reacted Hanford sediments. For the column, a decrease in permeability due to mineral precipitation was estimated, but the permeability estimates were dependent on imaging technique and resolution. X-ray CT imaging has the advantage of reconstructing a 3D pore network while 2D SEM imaging can easily analyze sub-grain and intragranular variations in mineralogy. Pore network models informed by analyses of 2D and 3D images at comparable resolutions produced permeability estimates with relatively good agreement. Large discrepancies in predicted permeabilities resulted from small variations in image resolution. Images with resolutions of 0.4 to 4 μm predicted permeabilities differing by orders of magnitude. While lower-resolution scans can analyze larger specimens, small pore throats may be missed due to resolution limitations, which in turn overestimates permeability in a pore-network model in which pore-to-pore conductances are statistically assigned. Conversely, high-resolution scans are capable of capturing small pore throats, but if they are not actually flow-conducting, predicted permeabilities will be below expected values. In addition, permeability is underestimated due to misinterpreting surface-roughness features as small pore throats. Comparison of permeability predictions with expected and measured permeability values showed that the largest discrepancies resulted from the highest resolution images and the best predictions of

  13. Enhanced 2D-image upconversion using solid-state lasers.

    PubMed

    Pedersen, Christian; Karamehmedović, Emir; Dam, Jeppe Seidelin; Tidemand-Lichtenberg, Peter

    2009-11-09

    Based on enhanced upconversion, we demonstrate a highly efficient method for converting a full image from one part of the electromagnetic spectrum into a new desired wavelength region. By illuminating a metal transmission mask with a 765 nm Gaussian beam to create an image and subsequently focusing the image inside a nonlinear PPKTP crystal located in the high intra-cavity field of a 1342 nm solid-state Nd:YVO4 laser, an upconverted image at 488 nm is generated. We have experimentally achieved an upconversion efficiency of 40% under CW conditions. The proposed technique can be further adapted for high efficiency mid-infrared image upconversion where direct and fast detection is difficult or impossible to perform with existing detector technologies.

  14. Experimental validation of 2D uncertainty quantification for digital image correlation.

    SciTech Connect

    Reu, Phillip L.

    2010-03-01

    Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true with high-speed applications where actual test images are often less than ideal. Work has previously been completed on the mathematical underpinnings of DIC uncertainty quantification and is already published; this paper presents corresponding experimental work used to check the validity of the uncertainty equations.

  15. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°±1.19°, 0.45°±2.17°, 0.23°±1.05°) and (0.03±0.55, -0.03±0.54, -2.73±1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53±0.30 mm distance error.

  16. Visualizing 3D objects from 2D cross sectional images displayed in-situ versus ex-situ.

    PubMed

    Wu, Bing; Klatzky, Roberta L; Stetten, George

    2010-03-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to visualize an object posed in 3D space. Participants used a hand-held tool to reveal a virtual rod as a sequence of cross-sectional images, which were displayed either directly in the space of exploration (in-situ) or displaced to a remote screen (ex-situ). They manipulated a response stylus to match the virtual rod's pitch (vertical slant), yaw (horizontal slant), or both. Consistent with the hypothesis that spatial colocation of image and source object facilitates mental visualization, we found that although single dimensions of slant were judged accurately with both displays, judging pitch and yaw simultaneously produced differences in systematic error between in-situ and ex-situ displays. Ex-situ imaging also exhibited errors such that the magnitude of the response was approximately correct but the direction was reversed. Regression analysis indicated that the in-situ judgments were primarily based on spatiotemporal visualization, while the ex-situ judgments relied on an ad hoc, screen-based heuristic. These findings suggest that in-situ displays may be useful in clinical practice by reducing error and facilitating the ability of radiologists to visualize 3D anatomy from cross sectional images.

  17. Known-Component 3D-2D Registration for Image Guidance and Quality Assurance in Spine Surgery Pedicle Screw Placement

    PubMed Central

    Uneri, A.; Stayman, J. W.; De Silva, T.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Wolinsky, J.-P.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2015-01-01

    Purpose: To extend the functionality of radiographic/fluoroscopic imaging systems already within standard spine surgery workflow to: 1) provide guidance of surgical device analogous to an external tracking system; and 2) provide intraoperative quality assurance (QA) of the surgical product. Methods: Using fast, robust 3D-2D registration in combination with 3D models of known components (surgical devices), the 3D pose determination was solved to relate known components to 2D projection images and 3D preoperative CT in near-real-time. Exact and parametric models of the components were used as input to the algorithm to evaluate the effects of model fidelity. The proposed algorithm employs the covariance matrix adaptation evolution strategy (CMA-ES) to maximize gradient correlation (GC) between measured projections and simulated forward projections of components. Geometric accuracy was evaluated in a spine phantom in terms of target registration error at the tool tip (TREx), and angular deviation (TREϕ) from planned trajectory. Results: Transpedicle surgical devices (probe tool and spine screws) were successfully guided with TREx <2 mm and TREϕ <0.5° given projection views separated by at least >30° (easily accommodated on a mobile C-arm). QA of the surgical product based on 3D-2D registration demonstrated the detection of pedicle screw breach with TREx <1 mm, demonstrating a trend of improved accuracy correlated to the fidelity of the component model employed. Conclusions: 3D-2D registration combined with 3D models of known surgical components provides a novel method for near-real-time guidance and quality assurance using a mobile C-arm without external trackers or fiducial markers. Ongoing work includes determination of optimal views based on component shape and trajectory, improved robustness to anatomical deformation, and expanded preclinical testing in spine and intracranial surgeries. PMID:26028805
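
    For reference, the gradient correlation (GC) similarity that the CMA-ES search maximizes can be sketched as the mean normalized cross-correlation of the two gradient components; the optimizer itself and the forward projection of the component models are omitted here, and the function name is an assumption.

    import numpy as np

    def gradient_correlation(measured, simulated):
        """Gradient correlation: mean NCC of the gradient components of a projection pair."""
        def ncc(a, b):
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float((a * b).mean())
        gy_m, gx_m = np.gradient(measured.astype(float))      # row and column gradients
        gy_s, gx_s = np.gradient(simulated.astype(float))
        return 0.5 * (ncc(gx_m, gx_s) + ncc(gy_m, gy_s))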

  18. Known-component 3D-2D registration for image guidance and quality assurance in spine surgery pedicle screw placement

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Stayman, J. W.; De Silva, T.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Wolinsky, J.-P.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2015-03-01

    Purpose. To extend the functionality of radiographic / fluoroscopic imaging systems already within standard spine surgery workflow to: 1) provide guidance of surgical device analogous to an external tracking system; and 2) provide intraoperative quality assurance (QA) of the surgical product. Methods. Using fast, robust 3D-2D registration in combination with 3D models of known components (surgical devices), the 3D pose determination was solved to relate known components to 2D projection images and 3D preoperative CT in near-real-time. Exact and parametric models of the components were used as input to the algorithm to evaluate the effects of model fidelity. The proposed algorithm employs the covariance matrix adaptation evolution strategy (CMA-ES) to maximize gradient correlation (GC) between measured projections and simulated forward projections of components. Geometric accuracy was evaluated in a spine phantom in terms of target registration error at the tool tip (TREx), and angular deviation (TREΦ) from planned trajectory. Results. Transpedicle surgical devices (probe tool and spine screws) were successfully guided with TREx<2 mm and TREΦ <0.5° given projection views separated by at least >30° (easily accommodated on a mobile C-arm). QA of the surgical product based on 3D-2D registration demonstrated the detection of pedicle screw breach with TREx<1 mm, demonstrating a trend of improved accuracy correlated to the fidelity of the component model employed. Conclusions. 3D-2D registration combined with 3D models of known surgical components provides a novel method for near-real-time guidance and quality assurance using a mobile C-arm without external trackers or fiducial markers. Ongoing work includes determination of optimal views based on component shape and trajectory, improved robustness to anatomical deformation, and expanded preclinical testing in spine and intracranial surgeries.

  19. Preparation of 2D sequences of corneal images for 3D model building.

    PubMed

    Elbita, Abdulhakim; Qahwaji, Rami; Ipson, Stanley; Sharif, Mhd Saeed; Ghanchi, Faruque

    2014-04-01

    A confocal microscope provides a sequence of images, at incremental depths, of the various corneal layers and structures. From these, medical practitioners can extract clinical information on the state of health of the patient's cornea. In this work we are addressing problems associated with capturing and processing these images, including blurring, non-uniform illumination and noise, as well as the displacement of images laterally and in the anterior-posterior direction caused by subject movement. The latter may cause some of the captured images to be out of sequence in terms of depth. In this paper we introduce automated algorithms for classification, reordering, registration and segmentation to solve these problems. The successful implementation of these algorithms could open the door for another interesting development, which is the 3D modelling of these sequences.

  20. Integrated circuits for volumetric ultrasound imaging with 2-D CMUT arrays.

    PubMed

    Bhuyan, Anshuman; Choe, Jung Woo; Lee, Byung Chul; Wygant, Ira O; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T

    2013-12-01

    Real-time volumetric ultrasound imaging systems require transmit and receive circuitry to generate ultrasound beams and process received echo signals. The complexity of building such a system is high because the front-end electronics need to be very close to the transducer. A large number of elements also need to be interfaced to the back-end system, and image processing of a large dataset could affect the imaging volume rate. In this work, we present a 3-D imaging system using capacitive micromachined ultrasonic transducer (CMUT) technology that addresses many of the challenges in building such a system. We demonstrate two approaches in integrating the transducer and the front-end electronics. The transducer is a 5-MHz CMUT array with an 8 mm × 8 mm aperture size. The aperture consists of 1024 elements (32 × 32) with an element pitch of 250 μm. An integrated circuit (IC) consists of a transmit beamformer and receive circuitry to improve the noise performance of the overall system. The assembly was interfaced with an FPGA and a back-end system (comprising a data acquisition system and PC). The FPGA provided the digital I/O signals for the IC, and the back-end system was used to process the received RF echo data (from the IC) and reconstruct the volume image using a phased array imaging approach. Imaging experiments were performed using wire and spring targets, a ventricle model and a human prostate. Real-time volumetric images were captured at 5 volumes per second and are presented in this paper.
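
    As a rough illustration of the phased-array reconstruction step mentioned above, the sketch below forms one scan line by delay-and-sum beamforming of per-element RF data. The array geometry, sampling parameters, and function name are assumptions; the actual system performs transmit beamforming in the IC and volume reconstruction in the back end.

```python
import numpy as np

def das_scanline(rf, elem_x, fs, c, theta, depths):
    """Delay-and-sum one phased-array scan line at steering angle theta (rad).
    rf: (n_elements, n_samples) received RF data; elem_x: element x-positions [m];
    fs: sampling rate [Hz]; c: speed of sound [m/s]; depths: focal depths [m]."""
    n_elem, n_samp = rf.shape
    line = np.zeros(len(depths))
    for i, r in enumerate(depths):
        fx, fz = r * np.sin(theta), r * np.cos(theta)      # focal point
        # two-way path: array center to focus (r) plus focus back to each element
        dist = r + np.sqrt((elem_x - fx) ** 2 + fz ** 2)
        idx = np.round(dist / c * fs).astype(int)
        valid = idx < n_samp
        line[i] = rf[np.flatnonzero(valid), idx[valid]].sum()
    return line
```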

  1. Registration of 2D to 3D joint images using phase-based mutual information

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul

    2007-03-01

    Registration of two dimensional to three dimensional orthopaedic medical image data has important applications particularly in the area of image guided surgery and sports medicine. Fluoroscopy to computer tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating that into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best consistently producing the lowest errors.
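
    A minimal sketch of the mutual information measure underlying all three compared metrics is given below; for phase-based MI the two inputs would be local-phase maps computed with a complex wavelet transform rather than raw intensities. The bin count and function name are assumptions.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information from a joint histogram. For phase-based MI the two
    inputs would be local-phase maps (e.g., from a complex wavelet transform)
    rather than raw intensities."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = h / h.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())
```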

  2. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  3. Applying a 2D based CAD scheme for detecting micro-calcification clusters using digital breast tomosynthesis images: an assessment

    NASA Astrophysics Data System (ADS)

    Park, Sang Cheol; Zheng, Bin; Wang, Xiao-Hui; Gur, David

    2008-03-01

    Digital breast tomosynthesis (DBT) has emerged as a promising imaging modality for screening mammography. However, visually detecting micro-calcification clusters depicted on DBT images is a difficult task. Computer-aided detection (CAD) schemes for detecting micro-calcification clusters depicted on mammograms can achieve high performance and the use of CAD results can assist radiologists in detecting subtle micro-calcification clusters. In this study, we compared the performance of an available 2D based CAD scheme with one that includes a new grouping and scoring method when applied to both projection and reconstructed DBT images. We selected a dataset involving 96 DBT examinations acquired on 45 women. Each DBT image set included 11 low dose projection images and a varying number of reconstructed image slices ranging from 18 to 87. In this dataset 20 true-positive micro-calcification clusters were visually detected on the projection images and 40 were visually detected on the reconstructed images, respectively. We first applied the CAD scheme that was previously developed in our laboratory to the DBT dataset. We then tested a new grouping method that defines an independent cluster by grouping the same cluster detected on different projection or reconstructed images. We then compared four scoring methods to assess the CAD performance. The maximum sensitivity level observed for the different grouping and scoring methods were 70% and 88% for the projection and reconstructed images with a maximum false-positive rate of 4.0 and 15.9 per examination, respectively. This preliminary study demonstrates that (1) among the maximum, the minimum or the average CAD generated scores, using the maximum score of the grouped cluster regions achieved the highest performance level, (2) the histogram based scoring method is reasonably effective in reducing false-positive detections on the projection images but the overall CAD sensitivity is lower due to lower signal-to-noise ratio
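
    The grouping and maximum-score rule described above might look like the following sketch, in which per-image detections are merged when their in-plane centers coincide within a tolerance and each group takes the maximum CAD score of its members. The tolerance value and data layout are hypothetical.

```python
def group_and_score(detections, xy_tol=10.0):
    """detections: list of (image_index, x, y, cad_score) for one exam.
    Detections whose in-plane centers fall within xy_tol pixels of an existing
    group member are merged into that group, and each group is scored with the
    maximum member score (the best-performing rule reported above).
    The tolerance and tuple layout are illustrative choices."""
    groups = []
    for det in detections:
        _, x, y, _ = det
        for g in groups:
            if any(abs(x - gx) <= xy_tol and abs(y - gy) <= xy_tol
                   for _, gx, gy, _ in g):
                g.append(det)
                break
        else:
            groups.append([det])
    return [max(score for _, _, _, score in g) for g in groups]
```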

  4. Coded excitation for diverging wave cardiac imaging: a feasibility study

    NASA Astrophysics Data System (ADS)

    Zhao, Feifei; Tong, Ling; He, Qiong; Luo, Jianwen

    2017-02-01

    Diverging wave (DW) based cardiac imaging has gained increasing interest in recent years given its capacity to achieve ultrahigh frame rate. However, the signal-to-noise ratio (SNR), contrast, and penetration depth of the resulting B-mode images are typically low as DWs spread energy over a large region. Coded excitation is known to be capable of increasing the SNR and penetration for ultrasound imaging. The aim of this study was therefore to test the feasibility of applying coded excitation in DW imaging to improve the corresponding SNR, contrast and penetration depth. To this end, two types of codes, i.e. a linear frequency modulated chirp code and a set of complementary Golay codes were tested in three different DW imaging schemes, i.e. 1 angle DW transmit without compounding, 3 and 5 angles DW transmits with coherent compounding. The performances (SNR, contrast ratio (CR), contrast-to-noise ratio (CNR), and penetration) of different imaging schemes were investigated by means of simulations and in vitro experiments. As for benchmark, corresponding DW imaging schemes with regular pulsed excitation as well as the conventional focused imaging scheme were also included. The results showed that the SNR was improved by about 10 dB using coded excitation while the penetration depth was increased by 2.5 cm and 1.8 cm using chirp code and Golay codes, respectively. The CNR and CR gains varied with the depth for different DW schemes using coded excitations. Specifically, for non-compounded DW imaging schemes, the gain in the CR was about 5 dB and 3 dB while the gain in the CNR was about 4.5 dB and 3.5 dB at larger depths using chirp code and Golay codes, respectively. For compounded imaging schemes, using coded excitation, the gain in the penetration and contrast were relatively smaller compared to non-compounded ones. Overall, these findings indicated the feasibility of coded excitation in improving the image quality of DW imaging. Preliminary in vivo cardiac images
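
    The two code families tested above can be generated in a few lines; the sketch below builds a linear FM chirp and a length-8 complementary Golay pair whose autocorrelations sum to a single spike, the property that enables sidelobe-free compression. The sampling rate, pulse length, and bandwidth are assumptions, not the values used in the study.

```python
import numpy as np
from scipy.signal import chirp

fs = 50e6                           # sampling rate [Hz] (assumed)
t = np.arange(0, 10e-6, 1 / fs)     # 10-microsecond coded pulse (assumed)
# Linear frequency-modulated chirp spanning an assumed 2-4 MHz band
lfm = chirp(t, f0=2e6, f1=4e6, t1=t[-1], method='linear')

# Length-8 complementary Golay pair: aperiodic autocorrelations sum to a spike
a = np.array([1, 1, 1, -1, 1, 1, -1, 1])
b = np.array([1, 1, 1, -1, -1, -1, 1, -1])
acf_sum = np.correlate(a, a, 'full') + np.correlate(b, b, 'full')
# acf_sum is zero at every lag except the center tap, where it equals 16
```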

  5. Tracking contrast agents using real-time 2D photoacoustic imaging system for cardiac applications

    NASA Astrophysics Data System (ADS)

    Olafsson, Ragnar; Montilla, Leonardo; Ingram, Pier; Witte, Russell S.

    2009-02-01

    Photoacoustic (PA) imaging is a rapidly developing imaging modality that can detect optical contrast agents with high sensitivity. While detectors in PA imaging have traditionally been single element ultrasound transducers, use of array systems is desirable because they potentially provide high frame rates to capture dynamic events, such as injection and distribution of contrast in clinical applications. We present preliminary data consisting of 40 second sequences of coregistered pulse-echo (PE) and PA images acquired simultaneously in real time using a clinical ultrasonic machine. Using a 7 MHz linear array, the scanner allowed simultaneous acquisition of inphase-quadrature (IQ) data on 64 elements at a rate limited by the illumination source (Q-switched laser at 20 Hz) with spatial resolution determined to be 0.6 mm (axial) and 0.4 mm (lateral). PA images had a signal-to-noise ratio of approximately 35 dB without averaging. The sequences captured the injection and distribution of an infrared-absorbing contrast agent into a cadaver rat heart. From these data, a perfusion time constant of 0.23 s-1 was estimated. After further refinement, the system will be tested in live animals. Ultimately, an integrated system in the clinic could facilitate inexpensive molecular screening for coronary artery disease.
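
    The perfusion estimate quoted above suggests a simple exponential wash-in fit to the PA intensity in a region of interest over the 40-second sequence. The sketch below uses synthetic data; the model form, frame timing, and parameter values are assumptions rather than the study's actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def washin(t, amp, k, base):
    """Mono-exponential wash-in; k plays the role of the perfusion rate [1/s]."""
    return amp * (1.0 - np.exp(-k * t)) + base

t = np.arange(0, 40, 0.05)          # 20 Hz laser -> 0.05 s frame spacing, 40 s run
rng = np.random.default_rng(0)
signal = washin(t, 1.0, 0.23, 0.1) + rng.normal(0, 0.01, t.size)  # synthetic ROI mean
(amp, k, base), _ = curve_fit(washin, t, signal, p0=(1.0, 0.1, 0.0))
```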

  6. Coded excitation plane wave imaging for shear wave motion detection.

    PubMed

    Song, Pengfei; Urban, Matthew W; Manduca, Armando; Greenleaf, James F; Chen, Shigao

    2015-07-01

    Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and the large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave SNR compared with conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2 to 4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (body mass index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue.
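
    For the phase-encoded case, detection amounts to matched filtering (pulse compression) of the received line with the transmitted code. The sketch below uses a Barker-13 code and a synthetic noisy line purely for illustration; the study's actual code lengths, carriers, and filters are not reproduced here.

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def pulse_compress(rx_line, code):
    """Matched filter (pulse compression): correlate the received line with the
    transmitted code to restore axial resolution after a long coded transmit."""
    return np.convolve(rx_line, code[::-1], mode='same')

# Toy received line: the code buried in noise at two depths (illustration only)
rng = np.random.default_rng(1)
rx = rng.normal(0, 0.2, 2000)
rx[400:413] += barker13
rx[1200:1213] += 0.3 * barker13
compressed = pulse_compress(rx, barker13)
```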

  7. 2D and 3D Refraction Based X-ray Imaging Suitable for Clinical and Pathological Diagnosis

    SciTech Connect

    Ando, Masami; Bando, Hiroko; Ueno, Ei

    2007-01-19

    The first observation of micro papillary (MP) breast cancer by x-ray dark-field imaging (XDFI) and the first observation of the 3D x-ray internal structure of another breast cancer, ductal carcinoma in-situ (DCIS), are reported. The specimen size for the sheet-shaped MP was 26 mm x 22 mm x 2.8 mm, and that for the rod-shaped DCIS was 3.6 mm in diameter and 4.7 mm in height. The experiment was performed at the Photon Factory, KEK: High Energy Accelerator Research Organization. We achieved a high-contrast x-ray image by adopting a thickness-controlled transmission-type angular analyzer that allows only refraction components from the object for 2D imaging. This provides a high-contrast image of cancer-cell nests, cancer cells and stroma. For x-ray 3D imaging, a new algorithm due to the refraction for x-ray CT was created. The angular information was acquired by x-ray optics diffraction-enhanced imaging (DEI). The number of data was 900 for each reconstruction. A reconstructed CT image may include ductus lactiferi, micro calcification and the breast gland. This modality has the possibility to open up a new clinical and pathological diagnosis using x-ray, offering more precise inspection and detection of early signs of breast cancer.

  8. Magnetic resonance image reconstruction using trained geometric directions in 2D redundant wavelets domain and non-convex optimization.

    PubMed

    Ning, Bende; Qu, Xiaobo; Guo, Di; Hu, Changwei; Chen, Zhong

    2013-11-01

    Reducing scanning time is of great importance for MRI. Compressed sensing has shown promising results by undersampling the k-space data to speed up imaging. Sparsity of an image plays an important role in compressed sensing MRI to reduce image artifacts. Recently, the method of patch-based directional wavelets (PBDW), which trains geometric directions from undersampled data, has been proposed. It preserves image edges better than conventional sparsifying transforms. However, obvious artifacts appear in smooth regions when the data are highly undersampled. In addition, the original PBDW-based method does not show obvious improvement for radial and fully 2D random sampling patterns. In this paper, the PBDW-based MRI reconstruction is improved in two respects: 1) an efficient non-convex minimization algorithm is modified to enhance image quality; 2) PBDW is extended into the shift-invariant discrete wavelet domain to enhance the ability of the transform to sparsify piecewise smooth image features. Numerical simulation results on in vivo magnetic resonance images demonstrate that the proposed method outperforms the original PBDW in terms of removing artifacts and preserving edges.
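
    To make the reconstruction setting concrete, the sketch below shows a generic compressed sensing MRI loop: iterative soft-thresholding of ordinary decimated wavelet coefficients alternated with k-space data consistency. It is not the PBDW algorithm; PBDW would replace the fixed wavelet with trained geometric directions, a shift-invariant transform, and a non-convex penalty. Parameter values are placeholders.

```python
import numpy as np
import pywt

def soft(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def cs_mri_ist(kspace, mask, lam=0.01, n_iter=50, wavelet='db4', level=3):
    """Iterative soft-thresholding with a fixed decimated wavelet prior and
    k-space data consistency. Illustrative only: PBDW would use trained
    geometric directions, a shift-invariant transform and a non-convex penalty."""
    img = np.fft.ifft2(kspace * mask)
    for _ in range(n_iter):
        coeffs = pywt.wavedec2(np.real(img), wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        coeffs = pywt.array_to_coeffs(soft(arr, lam), slices, output_format='wavedec2')
        img = pywt.waverec2(coeffs, wavelet)[:kspace.shape[0], :kspace.shape[1]]
        k = np.fft.fft2(img)
        k[mask] = kspace[mask]          # keep the acquired k-space samples
        img = np.fft.ifft2(k)
    return np.abs(img)
```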

  9. 3D information from 2D images recorded in the European Modular Cultivation System on the ISS

    NASA Astrophysics Data System (ADS)

    Solheim, B. G. B.

    2009-12-01

    The European Modular Cultivation System (EMCS) on the ISS allows long-term biological experiments, e.g. on plants. Video cameras provide near real-time 2D images from these experiments. A method to obtain 3D coordinates and stereoscopic images from these 2D images has been developed and is described in this paper. The procedure was developed to enhance the data output of the MULTIGEN-1 experiment in 2007. One of the main objectives of the experiment was to study growth movements of the Arabidopsis plants and the effect of gravity on these. 3D data were important during parts of the experiment and the paper presents the method developed to acquire 3D data, the accuracy of the data, limitations to the technique and ways to improve the accuracy. Sequences of 3D data obtained from the MULTIGEN-1 experiment are used to illustrate the potential of this newfound capability of the EMCS. In the experiment setup, a positional depth accuracy of about ±0.4 mm for relative object distances and an absolute depth accuracy of about ±1.4 mm for time dependent phenomena was reached. The ability to both view biological specimens in 3D as well as obtaining quantitative 3D data added greatly to the scientific output of the MULTIGEN-1 experiment. The uses of the technique to other researchers and their experiments are discussed.
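
    Extracting 3D coordinates from a pair of 2D views ultimately reduces to triangulation; for an idealized parallel-camera geometry the depth follows from the disparity as in the sketch below. The focal length and baseline here are placeholders, and the actual method relies on calibration of the real EMCS camera arrangement.

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px, baseline_mm):
    """Parallel-camera stereo: Z = f * B / d, where d is the horizontal disparity
    in pixels. Focal length and baseline are placeholders; the actual EMCS
    geometry is calibrated in the paper."""
    disparity = np.asarray(x_left, float) - np.asarray(x_right, float)
    return focal_px * baseline_mm / disparity
```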

  10. 3D/2D model-to-image registration applied to TIPS surgery.

    PubMed

    Jomier, Julien; Bullitt, Elizabeth; Van Horn, Mark; Pathak, Chetna; Aylward, Stephen R

    2006-01-01

    We have developed a novel model-to-image registration technique which aligns a 3-dimensional model of vasculature with two semiorthogonal fluoroscopic projections. Our vascular registration method is used to intra-operatively initialize the alignment of a catheter and a preoperative vascular model in the context of image-guided TIPS (Transjugular, Intrahepatic, Portosystemic Shunt formation) surgery. Registration optimization is driven by the intensity information from the projection pairs at sample points along the centerlines of the model. Our algorithm shows speed, accuracy and consistency given clinical data.

  11. Vlasov simulation of 2D Modulational Instability of Ion Acoustic Waves and Prospects for Modeling such instabilities in Laser Propagation Codes

    NASA Astrophysics Data System (ADS)

    Berger, Richard; Chapman, T.; Banks, J. W.; Brunner, S.

    2015-11-01

    We present 2D+2V Vlasov simulations of ion acoustic waves (IAWs) driven by an external traveling-wave potential, ϕ0(x, t), with frequency, ω, and wavenumber, k, obeying the kinetic dispersion relation. Both electrons and ions are treated kinetically. Simulations with ϕ0(x, t) localized transverse to the propagation direction model IAWs driven in a laser speckle. The waves bow with a positive or negative curvature of the wave fronts that depends on the sign of the nonlinear frequency shift ΔωNL, which is in turn determined by the magnitude of ZTe/Ti, where Z is the charge state and Te, Ti are the electron and ion temperatures. These kinetic effects can cause modulational and self-focusing instabilities that transfer wave energy to kinetic energy. Linear dispersion properties of IAWs are used in laser propagation codes that predict the amount of light reflected by stimulated Brillouin scattering. At high enough amplitudes, the linear dispersion is invalid and these kinetic effects should be incorporated. Resolving the spatial and time scales of these instabilities directly is computationally prohibitive. We report progress on including kinetic models in laser propagation codes. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344 and funded by the Laboratory Research and Development Program at LLNL under project tracking code 15.

  12. Membrane-mirror-based display for viewing 2D and 3D images

    NASA Astrophysics Data System (ADS)

    McKay, Stuart; Mason, Steven; Mair, Leslie S.; Waddell, Peter; Fraser, Simon M.

    1999-05-01

    Stretchable Membrane Mirrors (SMMs) have been developed at the University of Strathclyde as a cheap, lightweight and variable focal length alternative to conventional fixed- curvature glass based optics. A SMM uses a thin sheet of aluminized polyester film which is stretched over a specially shaped frame, forming an airtight cavity behind the membrane. Removal of air from that cavity causes the resulting air pressure difference to force the membrane back into a concave shape. Controlling the pressure difference acting over the membrane now controls the curvature or f/No. of the mirror. Mirrors from 0.15-m to 1.2-m in diameter have been constructed at the University of Strathclyde. The use of lenses and mirrors to project real images in space is perhaps one of the simplest forms of 3D display. When using conventional optics however, there are severe financial restrictions on what size of image forming element may be used, hence the appeal of a SMM. The mirrors have been used both as image forming elements and directional screens in volumetric, stereoscopic and large format simulator displays. It was found that the use of these specular reflecting surfaces greatly enhances the perceived image quality of the resulting magnified display.

  13. Learning-based roof style classification in 2D satellite images

    NASA Astrophysics Data System (ADS)

    Zang, Andi; Zhang, Xi; Chen, Xin; Agam, Gady

    2015-05-01

    Accurately recognizing building roof style leads to much more realistic 3D building modeling and rendering. In this paper, we propose a novel system for image-based roof style classification using machine learning techniques. Our system is capable of accurately recognizing four individual roof styles as well as complex roofs composed of multiple parts. We make several novel contributions in this paper. First, we propose an algorithm that segments a complex roof into parts, which enables our system to recognize the entire roof based on recognition of each part. Second, to better characterize a roof image, we design a new feature extracted from a roof edge image. We demonstrate that this feature performs much better than recognition based on the Histogram of Oriented Gradients (HOG), the Scale-Invariant Feature Transform (SIFT) and Local Binary Patterns (LBP). Finally, to generate a classifier, we propose a learning scheme that trains the classifier using both synthetic and real roof images. Experimental results show that our classifier performs well on several test collections.

  14. A GPU Simulation Tool for Training and Optimisation in 2D Digital X-Ray Imaging.

    PubMed

    Gallio, Elena; Rampado, Osvaldo; Gianaria, Elena; Bianchi, Silvio Diego; Ropolo, Roberto

    2015-01-01

    Conventional radiology is performed by means of digital detectors, with various types of technology and different performance in terms of efficiency and image quality. Following the arrival of a new digital detector in a radiology department, all the staff involved should adapt the procedure parameters to the properties of the detector, in order to achieve an optimal result in terms of correct diagnostic information and minimum radiation risks for the patient. The aim of this study was to develop and validate a software capable of simulating a digital X-ray imaging system, using graphics processing unit computing. All radiological image components were implemented in this application: an X-ray tube with primary beam, a virtual patient, noise, scatter radiation, a grid and a digital detector. Three different digital detectors (two digital radiography and a computed radiography systems) were implemented. In order to validate the software, we carried out a quantitative comparison of geometrical and anthropomorphic phantom simulated images with those acquired. In terms of average pixel values, the maximum differences were below 15%, while the noise values were in agreement with a maximum difference of 20%. The relative trends of contrast to noise ratio versus beam energy and intensity were well simulated. Total calculation times were below 3 seconds for clinical images with pixel size of actual dimensions less than 0.2 mm. The application proved to be efficient and realistic. Short calculation times and the accuracy of the results obtained make this software a useful tool for training operators and dose optimisation studies.

  15. A GPU Simulation Tool for Training and Optimisation in 2D Digital X-Ray Imaging

    PubMed Central

    Gallio, Elena; Rampado, Osvaldo; Gianaria, Elena; Bianchi, Silvio Diego; Ropolo, Roberto

    2015-01-01

    Conventional radiology is performed by means of digital detectors, with various types of technology and different performance in terms of efficiency and image quality. Following the arrival of a new digital detector in a radiology department, all the staff involved should adapt the procedure parameters to the properties of the detector, in order to achieve an optimal result in terms of correct diagnostic information and minimum radiation risks for the patient. The aim of this study was to develop and validate a software capable of simulating a digital X-ray imaging system, using graphics processing unit computing. All radiological image components were implemented in this application: an X-ray tube with primary beam, a virtual patient, noise, scatter radiation, a grid and a digital detector. Three different digital detectors (two digital radiography and a computed radiography systems) were implemented. In order to validate the software, we carried out a quantitative comparison of geometrical and anthropomorphic phantom simulated images with those acquired. In terms of average pixel values, the maximum differences were below 15%, while the noise values were in agreement with a maximum difference of 20%. The relative trends of contrast to noise ratio versus beam energy and intensity were well simulated. Total calculation times were below 3 seconds for clinical images with pixel size of actual dimensions less than 0.2 mm. The application proved to be efficient and realistic. Short calculation times and the accuracy of the results obtained make this software a useful tool for training operators and dose optimisation studies. PMID:26545097

  16. Efficient image compression scheme based on differential coding

    NASA Astrophysics Data System (ADS)

    Zhu, Li; Wang, Guoyou; Liu, Ying

    2007-11-01

    Embedded zerotree wavelet (EZW) and Set Partitioning in Hierarchical Trees (SPIHT) coding, introduced by J. M. Shapiro and Amir Said, are very effective and widely used in many fields. In this study, a brief explanation of the principles of SPIHT is first provided, followed by several experimentally motivated improvements to the SPIHT algorithm. 1) To reduce redundancy among coefficients in the wavelet domain, we propose a differential coding method applied during encoding. 2) Based on the distribution of coefficients within each subband, we adjust the sorting pass and optimize the differential coding, in order to further reduce redundant coding in each subband. 3) Coding results at a given threshold show that differential coding yields a higher compression rate and a markedly better reconstructed image: at 0.5 bpp (bits per pixel), the PSNR (peak signal-to-noise ratio) of the reconstructed image exceeds that of standard SPIHT by 0.2-0.4 dB.
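
    The redundancy-reduction idea in item 1 and the PSNR figure of merit can be sketched as follows; this only illustrates differencing coefficients along a scan order and the PSNR computation, not the modified SPIHT sorting and refinement passes.

```python
import numpy as np

def differential_encode(coeffs_in_scan_order):
    """Replace each significant wavelet coefficient by its difference from the
    previous one in the scan order (the redundancy-reduction idea only, not the
    modified SPIHT sorting/refinement passes)."""
    c = np.asarray(coeffs_in_scan_order, dtype=float)
    diffs = np.empty_like(c)
    diffs[0] = c[0]
    diffs[1:] = c[1:] - c[:-1]
    return diffs

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```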

  17. Rank minimization code aperture design for spectrally selective compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2013-03-01

    A new code aperture design framework for multiframe code aperture snapshot spectral imaging (CASSI) system is presented. It aims at the optimization of code aperture sets such that a group of compressive spectral measurements is constructed, each with information from a specific subset of bands. A matrix representation of CASSI is introduced that permits the optimization of spectrally selective code aperture sets. Furthermore, each code aperture set forms a matrix such that rank minimization is used to reduce the number of CASSI shots needed. Conditions for the code apertures are identified such that a restricted isometry property in the CASSI compressive measurements is satisfied with higher probability. Simulations show higher quality of spectral image reconstruction than that attained by systems using Hadamard or random code aperture sets.

  18. Heterogeneity of Particle Deposition by Pixel Analysis of 2D Gamma Scintigraphy Images

    PubMed Central

    Xie, Miao; Zeman, Kirby; Hurd, Harry; Donaldson, Scott

    2015-01-01

    Abstract Background: Heterogeneity of inhaled particle deposition in airways disease may be a sensitive indicator of physiologic changes in the lungs. Using planar gamma scintigraphy, we developed new methods to locate and quantify regions of high (hot) and low (cold) particle deposition in the lungs. Methods: Initial deposition and 24 hour retention images were obtained from healthy (n=31) adult subjects and patients with mild cystic fibrosis lung disease (CF) (n=14) following inhalation of radiolabeled particles (Tc99m-sulfur colloid, 5.4 μm MMAD) under controlled breathing conditions. The initial deposition image of the right lung was normalized to (i.e., same median pixel value), and then divided by, a transmission (Tc99m) image in the same individual to obtain a pixel-by-pixel ratio image. Hot spots were defined where pixel values in the deposition image were greater than 2X those of the transmission, and cold spots as pixels where the deposition image was less than 0.5X of the transmission. The number ratio (NR) of the hot and cold pixels to total lung pixels, and the sum ratio (SR) of total counts in hot pixels to total lung counts were compared between healthy and CF subjects. Other traditional measures of regional particle deposition, nC/P and skew of the pixel count histogram distribution, were also compared. Results: The NR of cold spots was greater in mild CF, 0.221±0.047(CF) vs. 0.186±0.038 (healthy) (p<0.005) and was significantly correlated with FEV1 %pred in the patients (R=−0.70). nC/P (central to peripheral count ratio), skew of the count histogram, and hot NR or SR were not different between the healthy and mild CF patients. Conclusions: These methods may provide more sensitive measures of airway function and localization of deposition that might be useful for assessing treatment efficacy in these patients. PMID:25393109
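
    The hot/cold spot metrics are defined explicitly above, so a direct sketch is possible: normalize the deposition image to the transmission image by matching median lung pixel values, divide pixel-by-pixel, and threshold the ratio at 2x and 0.5x. Function and variable names are assumptions.

```python
import numpy as np

def deposition_heterogeneity(deposition, transmission, lung_mask):
    """Hot/cold spot metrics: scale the deposition image to the same median lung
    pixel value as the transmission image, divide pixel-by-pixel, then threshold
    the ratio at 2x (hot) and 0.5x (cold)."""
    dep = deposition * (np.median(transmission[lung_mask]) /
                        np.median(deposition[lung_mask]))
    ratio = np.zeros_like(dep, dtype=float)
    ratio[lung_mask] = dep[lung_mask] / transmission[lung_mask]

    hot = lung_mask & (ratio > 2.0)
    cold = lung_mask & (ratio < 0.5)
    n_lung = lung_mask.sum()
    return {
        'hot_NR': hot.sum() / n_lung,                       # number ratio, hot
        'cold_NR': cold.sum() / n_lung,                     # number ratio, cold
        'hot_SR': dep[hot].sum() / dep[lung_mask].sum(),    # sum ratio, hot
    }
```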

  19. Pre-stack depth migration for improved imaging under seafloor canyons: 2D case study of Browse Basin, Australia*

    NASA Astrophysics Data System (ADS)

    Debenham, Helen; Westlake, Shane

    2014-06-01

    In the Browse Basin, as in many areas of the world, complex seafloor topography can cause problems with seismic imaging. This is related to complex ray paths, and sharp lateral changes in velocity. This paper compares ways in which 2D Kirchhoff imaging can be improved below seafloor canyons, using both time and depth domain processing. In the time domain, to improve on standard pre-stack time migration (PSTM) we apply removable seafloor static time shifts in order to reduce the push down effect under seafloor canyons before migration. This allows for better event continuity in the seismic imaging. However this approach does not fully solve the problem, still giving sub-optimal imaging, leaving amplitude shadows and structural distortion. Only depth domain processing with a migration algorithm that honours the paths of the seismic energy as well as a detailed velocity model can provide improved imaging under these seafloor canyons, and give confidence in the structural components of the exploration targets in this area. We therefore performed depth velocity model building followed by pre-stack depth migration (PSDM), the result of which provided a step change improvement in the imaging, and provided new insights into the area.

  20. A universal and ultrasensitive vectorial nanomechanical sensor for imaging 2D force fields

    NASA Astrophysics Data System (ADS)

    de Lépinay, Laure Mercier; Pigeau, Benjamin; Besga, Benjamin; Vincent, Pascal; Poncharal, Philippe; Arcizet, Olivier

    2016-10-01

    The miniaturization of force probes into nanomechanical oscillators enables ultrasensitive investigations of forces on dimensions smaller than their characteristic length scales. It also unravels the vectorial character of the force field and how its topology impacts the measurement. Here we present an ultrasensitive method for imaging two-dimensional vectorial force fields by optomechanically following the bidimensional Brownian motion of a singly clamped nanowire. This approach relies on angular and spectral tomography of its quasi-frequency-degenerated transverse mechanical polarizations: immersing the nanoresonator in a vectorial force field not only shifts its eigenfrequencies but also rotates the orientation of the eigenmodes, as a nanocompass. This universal method is employed to map a tunable electrostatic force field whose spatial gradients can even dominate the intrinsic nanowire properties. Enabling vectorial force field imaging with demonstrated sensitivities of attonewton variations over the nanoprobe Brownian trajectory will have a strong impact on scientific exploration at the nanoscale.

  1. Label free biochemical 2D and 3D imaging using secondary ion mass spectrometry.

    PubMed

    Fletcher, John S; Vickerman, John C; Winograd, Nicholas

    2011-10-01

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) provides a method for the detection of native and exogenous compounds in biological samples on a cellular scale. Through the development of novel ion beams, the amount of molecular signal available from the sample surface has been increased. Through the introduction of polyatomic ion beams, particularly C(60), ToF-SIMS can now be used to monitor molecular signals as a function of depth as the sample is eroded, thus providing the ability to generate 3D molecular images. Here we describe how this new capability has led to the development of novel instrumentation for 3D molecular imaging, while also highlighting the importance of sample preparation, and discuss the challenges that still need to be overcome to maximise the impact of the technique.

  2. Region-Based Feature Interpretation for Recognizing 3D Models in 2D images

    DTIC Science & Technology

    1991-06-01

    Likewise, if two model lines are collinear or are connected at their endpoints, they must do the same in the image (again, within some bounds, to account...not well defined. Is a flowerpot part of the plant object? The answer depends on the vision task, and even then may be ambiguous or allow overlapping...However, not all have been tried, either in psychological tests or in vision systems. Proximity: Features are close to each other. Edge Connectivity

  3. Distributed computing architecture for image-based wavefront sensing and 2D FFTs

    NASA Astrophysics Data System (ADS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-06-01

    Image-based wavefront sensing provides significant advantages over interferometric-based wavefront sensors such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore, applications utilizing the image-based approach gain substantial benefits using specialized high-performance computing architectures. The development and testing of these computing architectures are essential to missions such as James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and the Spherical Primary Optical Telescope (SPOT). The algorithms implemented on these specialized computing architectures make use of numerous two-dimensional Fast Fourier Transforms (FFTs) which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented with an emphasis on a 64 Node cluster of digital signal processors (DSPs) and multiple DSP field programmable gate arrays (FPGAs), offering a novel application of low-diameter graph theory. Timing results and performance analysis are presented. The solutions offered could be applied to other computationally complex all-to-all communication problems.
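
    The all-to-all communication arises because a 2D FFT factorizes into row FFTs, a global transpose, and column FFTs; when each node owns a block of rows, the transpose is the all-to-all exchange. A single-process sketch of that factorization, verified against the library routine, is shown below; the message-passing itself is not modeled.

```python
import numpy as np

def fft2_rowcol(x):
    """2-D FFT as 1-D FFTs over rows, a transpose, and 1-D FFTs over the former
    columns. On a cluster where each node holds a block of rows, the transpose
    is the all-to-all exchange."""
    x = np.fft.fft(x, axis=1)    # per-row FFTs (node-local)
    x = x.T                      # global transpose = all-to-all communication
    x = np.fft.fft(x, axis=1)    # per-row FFTs of the transposed data
    return x.T

a = np.random.default_rng(0).standard_normal((64, 64))
assert np.allclose(fft2_rowcol(a), np.fft.fft2(a))   # agrees with the library call
```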

  4. Double-channel, frequency-steered acoustic transducer with 2-D imaging capabilities.

    PubMed

    Baravelli, Emanuele; Senesi, Matteo; Ruzzene, Massimo; De Marchi, Luca; Speciale, Nicolò

    2011-07-01

    A frequency-steerable acoustic transducer (FSAT) is employed for imaging of damage in plates through guided wave inspection. The FSAT is a shaped array with a spatial distribution that defines a spiral in wavenumber space. Its resulting frequency-dependent directional properties allow beam steering to be performed by a single two-channel device, which can be used for the imaging of a two-dimensional half-plane. Ad hoc signal processing algorithms are developed and applied to the localization of acoustic sources and scatterers when FSAT arrays are used as part of pitch-catch and pulse-echo configurations. Localization schemes rely on the spectrogram analysis of received signals upon dispersion compensation through frequency warping and the application of the frequency-angle map characteristic of FSAT. The effectiveness of FSAT designs and associated imaging schemes are demonstrated through numerical simulations and experiments. Preliminary experimental validation is performed by forming a discrete array through the points of the measurement grid of a scanning laser Doppler vibrometer. The presented results demonstrate the frequency-dependent directionality of the spiral FSAT and suggest its application for frequency-selective acoustic sensors, for the localization of broadband acoustic events, or for the directional generation of Lamb waves for active interrogation of structural health.

  5. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing the image-based approach. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and the Spherical Primary Optical Telescope (SPOT). These specialized computing architectures must perform numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis are presented. The solutions offered could be applied to other all-to-all communication and computationally complex scientific problems.

  6. Practicing the Code of Ethics, finding the image of God.

    PubMed

    Hoglund, Barbara A

    2013-01-01

    The Code of Ethics for Nurses gives a professional obligation to practice in a compassionate and respectful way that is unaffected by the attributes of the patient. This article explores the concept "made in the image of God" and the complexities inherent in caring for those perceived as exhibiting distorted images of God. While the Code provides a professional standard consistent with a biblical worldview, human nature impacts the ability to consistently act congruently with the Code. Strategies and nursing interventions that support development of practice from a biblical worldview and the Code of Ethics for Nurses are presented.

  7. 2-D Precise Radiation Mapping of Sedimentary Core Using Imaging Plate

    NASA Astrophysics Data System (ADS)

    Sugihara, M.; Tsuchiya, N.

    2006-12-01

    The imaging plate (IP) is a storage film coated with a photostimulable phosphor (BaFBr:Eu2+); the latent images produced by irradiation of the imaging plate are read out by superficial scanning with stimulation light and reconstructed as two-dimensional dot images on a computer display. It has excellent performance for radiation detection, and its advantages include ease of use, high position resolution (up to 25 μm), a large detection area (up to 35 × 43 cm2), high detection sensitivity with a high signal-to-noise ratio, an extremely wide dynamic range of dose, sensitivity to several kinds of radiation, and an erasing capability for reuse (Hareyama et al., 2000). In this study, in order to develop a nondestructive, precise and large-area evaluation method for sedimentary structure, autoradiography using IP is applied to marine sediments. An imaging plate (BAS-MS2040, Fujifilm Co. Ltd., 20 × 40 cm2) was cut into five rectangular pieces (4 × 40 cm2). Whole-round marine sedimentary cores were divided into two halves for duplicate analysis and covered with plastic wrap. The rectangular IPs were placed along the center line of the flat side of each half round. Exposure was carried out at low temperature for 48 hours in a shielded box. The latent images produced by irradiation of the IP were read out using the BAS-2500 imaging analyzer (Fujifilm Co. Ltd.). The radiation dose recorded by the IP is output as a PSL value, the dose unit specific to the IP system. Position resolution was set to 50 μm. Marine sedimentary cores including a volcanic ash layer were measured using the IP and a Natural Gamma Logger (NGL), a measuring instrument for marine sediments in practical use, to compare their measuring abilities. As a result of the experiment, it becomes clear that a high dose distribution is found at the volcanic ash layer with the IP, whereas it cannot be found with the NGL. The content of radiation sources in the volcanic ash layer is supposed to be high compared with other layers

  8. Automatic Evaluation of Scan Adequacy and Dysplasia Metrics in 2-D Ultrasound Images of the Neonatal Hip.

    PubMed

    Quader, Niamul; Hodgson, Antony J; Mulpuri, Kishore; Schaeffer, Emily; Abugharbieh, Rafeef

    2017-03-21

    Ultrasound (US) imaging of an infant's hip joint is widely used for early detection of developmental dysplasia of the hip. In current US-based diagnosis of developmental dysplasia of the hip, trained clinicians acquire US images and, if they judge them to be adequate (i.e., to contain relevant hip joint structures), analyze them manually to extract clinically useful dysplasia metrics. However, both the scan adequacy classification and dysplasia metrics extraction steps exhibit significant variability within and between both clinicians and institutions, which can result in significant over- and undertreatment rates. To reduce the subjectivity resulting from this variability, we propose a computational image analysis technique that automatically identifies adequate images and subsequently extracts dysplasia metrics from these 2-D US images. Our automatic method uses local phase symmetry-based image measures to robustly identify intensity-invariant geometric features of bone/cartilage boundaries from the US images. Using the extracted geometric features, we trained a random forest classifier to classify images as adequate or inadequate, and in the adequate images we used a subset of the geometric features to calculate key dysplasia metrics. We validated our method on a data set of 693 US scans collected from 35 infants. Our approach produces excellent agreement with clinician adequacy classifications (area under the receiver operating characteristic curve = 0.985) and in reducing variability in the measured developmental dysplasia of the hip metrics (p < 0.05). The automatically computed dysplasia metrics appear to be slightly biased toward higher Graf categories than the manually estimated metrics, which could potentially reduce missed early diagnoses.
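
    The adequacy-classification stage described above could be prototyped as below with a random forest on per-image geometric features and an ROC AUC evaluation; the feature vectors and labels here are random placeholders standing in for the phase-symmetry boundary features and clinician labels, and the forest size is an arbitrary choice.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholders: X would hold per-image geometric features from the local
# phase-symmetry bone/cartilage boundaries, y the clinician adequacy labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((693, 12))
y = rng.integers(0, 2, 693)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```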

  9. Validation of TITAN2D flow model code for pyroclastic flows and debris avalanches at Soufrière Hills Volcano, Montserrat, BWI

    NASA Astrophysics Data System (ADS)

    Widiwijayanti, C.; Voight, B.; Hidayat, D.; Patra, A.; Pitman, E.

    2004-12-01

    Soufrière Hills Volcano (SHV), Montserrat, has experienced numerous episodes of dome collapses since 1996. They range from relatively small rockfalls to major dome collapses, several >10 × 10^6 m^3, and one >100 × 10^6 m^3 (Calder, Luckett, Sparks and Voight 2002; Voight et al. 2002). The hazard implications for such events are significant at both local and regional scales, and include pyroclastic surges, explosions, and tsunami. Problems arise in forecasting and hazards mitigation, particularly in zoning for populated areas. Determining the likely extent of flow deposits is important for hazard zonation. For this, detailed mapping (topography of source areas and paths, material properties, structure, track roughness and erosion) has an important role, giving clues on locations of future collapse and runout paths. Here we present an application of a numerical computation model of geophysical mass flow using the TITAN2D code (Patra et al. 2004; Pitman et al. 2004), to simulate dome collapses at SHV. The majority of collapse-type pyroclastic flows at SHV are consistent with an initiation by gravitational collapse of oversteepened flanks of the dome. If gravity controls the energy for such processes, then the flow tracks can be predicted on the basis of topography, and friction influences runout. TITAN2D is written to simulate this type of volcanic flow, and the SHV database is used to validate the code and provide calibrated data on friction properties. The topographic DEM was successively updated by adding flow deposit thicknesses for previous collapses. Simulation results were compared to observed flow parameters, including flow path, deposit volume, duration, velocity, and runout distance of individual flows, providing calibration data on internal and bed friction, and demonstrating the validity and limitations of such modeling for practical volcanic hazard assessment.

  10. 2D and 3D GPR imaging of structural ceilings in historic and existing constructions

    NASA Astrophysics Data System (ADS)

    Colla, Camilla

    2014-05-01

    GPR applications in civil engineering are to date quite diversified. With respect to civil constructions and monumental buildings, detection of voids, cavities, layering in structural elements, variation of geometry, of moisture content, of materials, areas of decay, defects, cracks have been reported in timber, concrete and masonry elements. Nonetheless, many more fields of investigation remain unexplored. This contribution gives an account of a variety of examples of structural ceilings investigation by GPR radar in reflection mode, either as 2D or 3D data acquisition and visualisation. Ceilings have a pre-eminent role in buildings as they contribute to a good structural behaviour of the construction. Primarily, the following functions can be listed for ceilings: a) they carry vertical dead and live loads on floors and distribute such loads to the vertical walls; b) they oppose to external horizontal forces such as wind loads and earthquakes helping to transfer such forces from the loaded element to the other walls; c) they contribute to create the box skeleton and behaviour of a building, connecting the different load bearing walls and reducing the slenderness and flexural instability of such walls. Therefore, knowing how ceilings are made in specific buildings is of paramount importance for architects and structural engineers. According to the type of building and age of construction, ceilings may present very different solutions and materials. Moreover, in existing constructions, ceilings may have been substituted, modified or strengthened due to material decay or to change of use of the building. These alterations may often go unrecorded in technical documentation or technical drawings may be unavailable. In many cases, the position, orientation and number of the load carrying elements in ceilings may be hidden or not be in sight, due for example to the presence of false ceilings or to technical plants. GPR radar can constitute a very useful tool for

  11. Coded Aperture Imaging for Fluorescent X-rays-Biomedical Applications

    SciTech Connect

    Haboub, Abdel; MacDowell, Alastair; Marchesini, Stefano; Parkinson, Dilworth

    2013-06-01

    Employing a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector allows for imaging of fluorescent x-rays (6-25 keV) emitted from samples irradiated with x-rays. Coded apertures encode the angular direction of x-rays and allow for a large numerical-aperture x-ray imaging system. An algorithm to generate the self-supporting Non Two Holes Touching (NTHT) coded aperture pattern was developed. Algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of modeling and confirmed by experiments. Samples were irradiated by monochromatic synchrotron x-ray radiation, and fluorescent x-rays from several different test metal samples were imaged through the newly developed coded aperture imaging system. By choice of the exciting energy, the different metals were speciated.

  12. A Stochastic Hill Climbing Approach for Simultaneous 2D Alignment and Clustering of Cryogenic Electron Microscopy Images.

    PubMed

    Reboul, Cyril F; Bonnet, Frederic; Elmlund, Dominika; Elmlund, Hans

    2016-06-07

    A critical step in the analysis of novel cryogenic electron microscopy (cryo-EM) single-particle datasets is the identification of homogeneous subsets of images. Methods for solving this problem are important for data quality assessment, ab initio 3D reconstruction, and analysis of population diversity due to the heterogeneous nature of macromolecules. Here we formulate a stochastic algorithm for identification of homogeneous subsets of images. The purpose of the method is to generate improved 2D class averages that can be used to produce a reliable 3D starting model in a rapid and unbiased fashion. We show that our method overcomes inherent limitations of widely used clustering approaches and proceed to test the approach on six publicly available experimental cryo-EM datasets. We conclude that, in each instance, ab initio 3D reconstructions of quality suitable for initialization of high-resolution refinement are produced from the cluster centers.

  13. 2D and 3D Terahertz Imaging and X-Rays CT for Sigillography Study

    NASA Astrophysics Data System (ADS)

    Fabre, M.; Durand, R.; Bassel, L.; Recur, B.; Balacey, H.; Bou Sleiman, J.; Perraud, J.-B.; Mounaix, P.

    2017-04-01

    Seals are part of our cultural heritage but the study of these objects is limited because of their fragility. Terahertz and X-Ray imaging are used to analyze a collection of wax seals from the fourteenth to eighteenth centuries. In this work, both techniques are compared in order to discuss their advantages and limits and their complementarity for conservation state study of the samples. Thanks to 3D analysis and reconstructions, defects and fractures are detected with an estimation of their depth position. The path from the parchment tongue inside the seals is also detected.

  14. Application of Compressed Sensing to 2-D Ultrasonic Propagation Imaging System data

    SciTech Connect

    Mascarenas, David D.; Farrar, Charles R.; Chong, See Yenn; Lee, J.R.; Park, Gyu Hae; Flynn, Eric B.

    2012-06-29

    The Ultrasonic Propagation Imaging (UPI) System is a unique, non-contact, laser-based ultrasonic excitation and measurement system developed for structural health monitoring applications. The UPI system imparts laser-induced ultrasonic excitations at user-defined locations on a structure of interest. The response of these excitations is then measured by piezoelectric transducers. By using appropriate data reconstruction techniques, a time-evolving image of the response can be generated. A representative measurement of a plate might contain 800x800 spatial data measurement locations and each measurement location might be sampled at 500 instances in time. The result is a total of 640,000 measurement locations and 320,000,000 unique measurements. This is clearly a very large set of data to collect, store in memory and process. The value of these ultrasonic response images for structural health monitoring applications makes tackling these challenges worthwhile. Recently compressed sensing has presented itself as a candidate solution for directly collecting relevant information from sparse, high-dimensional measurements. The main idea behind compressed sensing is that by directly collecting a relatively small number of coefficients it is possible to reconstruct the original measurement. The coefficients are obtained from linear combinations of (what would have been the original direct) measurements. Often compressed sensing research is simulated by generating compressed coefficients from conventionally collected measurements. The simulation approach is necessary because the direct collection of compressed coefficients often requires compressed sensing analog front-ends that are currently not commercially available. The ability of the UPI system to make measurements at user-defined locations presents a unique capability on which compressed measurement techniques may be directly applied. The application of compressed sensing techniques on this data holds the potential to
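
    The core compressed sensing idea referenced above, acquiring a small number of random linear combinations and recovering the full signal from them, can be sketched on a toy 1-D example. The measurement matrix, sparsity level, and use of orthogonal matching pursuit are illustrative assumptions; real UPI data would additionally require a sparsifying transform.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 1024, 256, 20                 # signal length, measurements, sparsity (toy)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# "Compressed" acquisition: each coefficient is a random linear combination of
# what would have been the direct measurements.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(Phi, y)
x_hat = omp.coef_                       # sparse recovery from m << n coefficients
```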

  15. 2D and 3D Terahertz Imaging and X-Rays CT for Sigillography Study

    NASA Astrophysics Data System (ADS)

    Fabre, M.; Durand, R.; Bassel, L.; Recur, B.; Balacey, H.; Bou Sleiman, J.; Perraud, J.-B.; Mounaix, P.

    2017-01-01

    Seals are part of our cultural heritage but the study of these objects is limited because of their fragility. Terahertz and X-Ray imaging are used to analyze a collection of wax seals from the fourteenth to eighteenth centuries. In this work, both techniques are compared in order to discuss their advantages and limits and their complementarity for conservation state study of the samples. Thanks to 3D analysis and reconstructions, defects and fractures are detected with an estimation of their depth position. The path from the parchment tongue inside the seals is also detected.

  16. High-accuracy 2D digital image correlation measurements using low-cost imaging lenses: implementation of a generalized compensation method

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Yu, Liping; Wu, Dafang

    2014-02-01

    The ideal pinhole imaging model commonly assumed for an ordinary two-dimensional digital image correlation (2D-DIC) system is neither perfect nor stable because of the existence of small out-of-plane motion of the test sample surface that occurred after loading, small out-of-plane motion of the sensor target due to temperature variation of a camera and unavoidable geometric distortion of an imaging lens. In certain cases, these disadvantages can lead to significant errors in the measured displacements and strains. Although a high-quality bilateral telecentric lens has been strongly recommended to be used in the 2D-DIC system as an essential optical component to achieve high-accuracy measurement, it is not generally applicable due to its fixed field of view, limited depth of focus and high cost. To minimize the errors associated with the imperfectness and instability of a common 2D-DIC system using a low-cost imaging lens, a generalized compensation method using a non-deformable reference sample is proposed in this work. With the proposed method, the displacement of the reference sample rigidly attached behind the test sample is first measured using 2D-DIC, and then it is fitted using a parametric model. The fitted parametric model is then used to correct the displacements of the deformed sample to remove the influences of these unfavorable factors. The validity of the proposed compensation method is first verified using out-of-plane translation, out-of-plane rotation, in-plane translation tests and their combinations. Uniaxial tensile tests of an aluminum specimen were also performed to quantitatively examine the strain accuracy of the proposed compensation method. Experiments show that the proposed compensation method is an easy-to-implement yet effective technique for achieving high-accuracy deformation measurement using an ordinary 2D-DIC system.
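
    The compensation step could be as simple as fitting a low-order parametric surface to the reference-sample displacement field and subtracting its prediction from the test-sample field; the first-order model below is a hedged sketch, since the paper's exact parametric model is not specified here.

```python
import numpy as np

def fit_plane_field(x, y, u):
    """Least-squares fit of u(x, y) ~ a + b*x + c*y to the reference-sample
    displacements (a first-order parametric model chosen for illustration)."""
    A = np.column_stack([np.ones_like(x), x, y])
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)
    return coef

def compensate(x, y, u_test, coef):
    """Subtract the fitted pseudo-displacement from the test-sample field."""
    return u_test - (coef[0] + coef[1] * x + coef[2] * y)
```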

  17. Serial grouping of 2D-image regions with object-based attention in humans

    PubMed Central

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-01-01

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas. DOI: http://dx.doi.org/10.7554/eLife.14320.001 PMID:27291188

  18. Performance of a 2D image-based anthropometric measurement and clothing sizing system.

    PubMed

    Meunier, P; Yin, S

    2000-10-01

    Two-dimensional, image-based anthropometric measurement systems offer an interesting alternative to traditional and three-dimensional methods in applications such as clothing sizing. These automated systems are attractive because of their low cost and the speed with which they can measure size and determine the best-fitting garment. Although these systems have appeal in this type of application, not much is known about the accuracy and precision of the measurements they take. In this paper, the performance of one such system was assessed. The accuracy of the system was analyzed using a database of 349 subjects (male and female) who were also measured with traditional anthropometric tools and techniques, and the precision was estimated through repeated measurements of both a plastic mannequin and a human subject. The results of the system were compared with those of trained anthropometrists, and put in perspective relative to clothing sizing requirements and short-term body changes. It was concluded that image-based systems are capable of providing anthropometric measurements that are quite comparable to traditional measurement methods (performed by skilled measurers), both in terms of accuracy and repeatability.

  19. Tangential 2-D Edge Imaging for GPI and Edge/Impurity Modeling

    SciTech Connect

    Dr. Ricardo Maqueda; Dr. Fred M. Levinton

    2011-12-23

    Nova Photonics, Inc. has a collaborative effort at the National Spherical Torus Experiment (NSTX). This collaboration, based on fast imaging of visible phenomena, has provided key insights on edge turbulence, intermittency, and edge phenomena such as edge localized modes (ELMs) and multi-faceted axisymmetric radiation from the edge (MARFE). Studies have been performed in all these areas. The edge turbulence/intermittency studies make use of the Gas Puff Imaging diagnostic developed by the Principal Investigator (Ricardo Maqueda) together with colleagues from PPPL. This effort is part of the International Tokamak Physics Activity (ITPA) edge, scrape-off layer and divertor group joint activity (DSOL-15: Inter-machine comparison of blob characteristics). The edge turbulence/blob study has been extended from the current location near the midplane of the device to the lower divertor region of NSTX. The goal of this effort was to study turbulence-born blobs in the vicinity of the X-point region and their circuit closure on divertor sheaths or high-density regions in the divertor. In the area of ELMs and MARFEs, we have studied and characterized the mode structure and evolution of the ELM types observed in NSTX, as well as the observed interaction between MARFEs and ELMs. This interaction could have substantial implications for future devices where radiative divertor regions are required to maintain detachment from the divertor plasma-facing components.

  20. MR image compression using a wavelet transform coding algorithm.

    PubMed

    Angelidis, P A

    1994-01-01

    We present here a technique for MR image compression. It is based on a transform coding scheme using the wavelet transform and vector quantization. Experimental results show that the method offers high compression ratios with low degradation of the image quality. The technique is expected to be particularly useful wherever storing and transmitting large numbers of images is necessary.
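
    A minimal sketch of a wavelet transform coding scheme in the spirit of the one described, assuming the PyWavelets library and using uniform scalar quantization of the detail coefficients as a simplified stand-in for the vector quantization used in the paper:

```python
# Decompose, coarsely quantize the detail bands, reconstruct, and report PSNR.
import numpy as np
import pywt

def compress(image, wavelet="db4", levels=3, q_step=20.0):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    quantized = [coeffs[0]]                       # keep the approximation band untouched
    for detail in coeffs[1:]:
        quantized.append(tuple(np.round(band / q_step) * q_step for band in detail))
    return pywt.waverec2(quantized, wavelet)

image = np.random.rand(256, 256) * 255            # placeholder for an MR slice
reconstructed = compress(image)
psnr = 10 * np.log10(255.0**2 / np.mean((image - reconstructed[:256, :256])**2))
print(f"PSNR: {psnr:.1f} dB")
```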

  1. Intensifying the response of distributed optical fibre sensors using 2D and 3D image restoration

    PubMed Central

    Soto, Marcelo A.; Ramírez, Jaime A.; Thévenaz, Luc

    2016-01-01

    Distributed optical fibre sensors possess the unique capability of measuring the spatial and temporal map of environmental quantities, which can be of great interest for several field applications. Although existing methods for performance enhancement have enabled important progress in the field, they do not take full advantage of all the information present in the measured data, still leaving room for substantial improvement over the state-of-the-art. Here we propose and experimentally demonstrate an approach for performance enhancement that exploits the high level of similarity and redundancy contained in the multidimensional information measured by distributed fibre sensors. Exploiting conventional image and video processing, an unprecedented boost in signal-to-noise ratio and measurement contrast is experimentally demonstrated. The method can be applied to any white-noise-limited distributed fibre sensor and can remarkably provide a 100-fold improvement in sensor performance with no hardware modification. PMID:26927698

  2. [EOS imaging acquisition system : 2D/3D diagnostics of the skeleton].

    PubMed

    Tarhan, T; Froemel, D; Meurer, A

    2015-12-01

    The application spectrum of the EOS imaging acquisition system is versatile. It is especially useful in the diagnostics and planning of corrective surgical procedures in complex orthopedic cases. The application is indicated when assessing deformities and malpositions of the spine, pelvis and lower extremities. It can also be used in the assessment and planning of hip and knee arthroplasty. For the first time, physicians have the opportunity to conduct examinations of the whole body under weight-bearing conditions in order to anticipate the effects of a planned surgical procedure on the skeletal system as a whole, and therefore on the posture of the patient. Compared to conventional radiographic examination techniques, such as x-ray or computed tomography, the patient is exposed to much less radiation; the use of this technique in pediatric patients is therefore well justified.

  3. New generation CMOS 2D imager evaluation and qualification for semiconductor inspection applications

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Hart, Darcy

    2013-09-01

    The semiconductor fabrication defect inspection industry is driven by inspection resolution and throughput. As fabrication technology nodes advance to the 2X-1X nm range, critical macro defect sizes approach the typical CMOS camera pixel size, so single-pixel defect detection becomes increasingly essential and is fundamentally constrained by camera performance. A new evaluation model is presented here to specifically describe camera performance for semiconductor machine vision applications, especially targeting low-contrast, high-speed applications. Current mainline cameras and high-end OEM cameras are evaluated with this model. Camera performance is clearly differentiated among CMOS technology generations and vendors, which will facilitate application-driven camera selection and operation optimization. The new challenges for CMOS detectors in semiconductor inspection applications are also discussed.

  4. Impact of lens distortions on strain measurements obtained with 2D digital image correlation

    NASA Astrophysics Data System (ADS)

    Lava, P.; Van Paepegem, W.; Coppieters, S.; De Baere, I.; Wang, Y.; Debruyne, D.

    2013-05-01

    The determination of strain fields based on displacements obtained via digital image correlation (DIC) at the micro-strain level (≤1000 μm/m) is still a cumbersome task. In particular when high-strain gradients are involved, e.g. in composite materials with multidirectional fibre reinforcement, uncertainties in the experimental setup and errors in the derivation of the displacement fields can substantially hamper the strain identification process. In this contribution, the aim is to investigate the impact of lens distortions on strain measurements. To this purpose, we first perform pure rigid body motion experiments, revealing the importance of precise correction of lens distortions. Next, a uni-axial tensile test on a textile composite with spatially varying high strain gradients is performed, resulting in very accurately determined strains along the fibers of the material.

  5. Icarus: A 2D direct simulation Monte Carlo (DSMC) code for parallel computers. User's manual - V.3.0

    SciTech Connect

    Bartel, T.; Plimpton, S.; Johannes, J.; Payne, J.

    1996-10-01

    Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird and models from free-molecular to continuum flowfields in either cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modelled. A new trace species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas phase chemistry is modelled using steric factors derived from Arrhenius reaction rates. Surface chemistry is modelled with surface reaction probabilities. The electron number density is either a fixed external generated field or determined using a local charge neutrality assumption. Ion chemistry is modelled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can either be externally input or internally generated using a Langmuir-Tonks model. The Icarus software package includes the grid generation, parallel processor decomposition, postprocessing, and restart software. The commercial graphics package, Tecplot, is used for graphics display. The majority of the software packages are written in standard Fortran.

  6. Hybrid coded aperture and Compton imaging using an active mask

    NASA Astrophysics Data System (ADS)

    Schultz, L. J.; Wallace, M. S.; Galassi, M. C.; Hoover, A. S.; Mocko, M.; Palmer, D. M.; Tornga, S. R.; Kippen, R. M.; Hynes, M. V.; Toolin, M. J.; Harris, B.; McElroy, J. E.; Wakeford, D.; Lanza, R. C.; Horn, B. K. P.; Wehe, D. K.

    2009-09-01

    The trimodal imager (TMI) images gamma-ray sources from a mobile platform using both coded aperture (CA) and Compton imaging (CI) modalities. In this paper we will discuss development and performance of image reconstruction algorithms for the TMI. In order to develop algorithms in parallel with detector hardware we are using a GEANT4 [J. Allison, K. Amako, J. Apostolakis, H. Araujo, P.A. Dubois, M. Asai, G. Barrand, R. Capra, S. Chauvie, R. Chytracek, G. Cirrone, G. Cooperman, G. Cosmo, G. Cuttone, G. Daquino, et al., IEEE Trans. Nucl. Sci. NS-53 (1) (2006) 270] based simulation package to produce realistic data sets for code development. The simulation code incorporates detailed detector modeling, contributions from natural background radiation, and validation of simulation results against measured data. Maximum likelihood algorithms for both imaging methods are discussed, as well as a hybrid imaging algorithm wherein CA and CI information is fused to generate a higher fidelity reconstruction.

  7. Fast 2-D ultrasound strain imaging: the benefits of using a GPU.

    PubMed

    Idzenga, Tim; Gaburov, Evghenii; Vermin, Willem; Menssen, Jan; de Korte, Chris

    2014-01-01

    Deformation of tissue can be accurately estimated from radio-frequency ultrasound data using a 2-dimensional normalized cross correlation (NCC)-based algorithm. This procedure, however, is very computationally time-consuming. A major time reduction can be achieved by parallelizing the numerous computations of NCC. In this paper, two approaches for parallelization have been investigated: the OpenMP interface on a multi-CPU system and Compute Unified Device Architecture (CUDA) on a graphics processing unit (GPU). The performance of the OpenMP and GPU approaches were compared with a conventional Matlab implementation of NCC. The OpenMP approach with 8 threads achieved a maximum speed-up factor of 132 on the computing of NCC, whereas the GPU approach on an Nvidia Tesla K20 achieved a maximum speed-up factor of 376. Neither parallelization approach resulted in a significant loss in image quality of the elastograms. Parallelization of the NCC computations using the GPU, therefore, significantly reduces the computation time and increases the frame rate for motion estimation.
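
    A schematic sketch of the 2-D NCC tracking that is being parallelized: the exhaustive search loop below is the part each OpenMP thread or CUDA block would take a slice of. Window sizes, search range and data are illustrative only:

```python
import numpy as np

def ncc(template, window):
    """2D normalized cross-correlation coefficient between two equally sized patches."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return (t * w).sum() / denom if denom > 0 else 0.0

def track_displacement(pre, post, y0, x0, patch=(32, 8), search=8):
    """Exhaustive NCC search around (y0, x0); this double loop is what the
    OpenMP/CUDA implementations parallelize across kernel positions."""
    ph, pw = patch
    template = pre[y0:y0 + ph, x0:x0 + pw]
    best, best_dy, best_dx = -1.0, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            window = post[y0 + dy:y0 + dy + ph, x0 + dx:x0 + dx + pw]
            score = ncc(template, window)
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best

pre = np.random.randn(512, 128)                   # placeholder RF frames
post = np.roll(pre, shift=3, axis=0)              # simulate a 3-sample axial shift
print(track_displacement(pre, post, 100, 50))     # expect (3, 0, ~1.0)
```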

  8. On non-invasive 2D and 3D Chromatic White Light image sensors for age determination of latent fingerprints.

    PubMed

    Merkel, Ronny; Gruhn, Stefan; Dittmann, Jana; Vielhauer, Claus; Bräutigam, Anja

    2012-10-10

    The feasibility of 2D-intensity and 3D-topography images from a non-invasive Chromatic White Light (CWL) sensor for the age determination of latent fingerprints is investigated. The proposed method might provide the means to solve the so far unresolved issue of determining a fingerprint's age in forensics. Conducting numerous experiments for an indoor crime scene using selected surfaces, different influences on the aging of fingerprints are investigated and the resulting aging variability is determined in terms of inter-person, intra-person, inter-finger and intra-finger variation. The main influence factors are shown to be the sweat composition, temperature, humidity, wind, UV-radiation, surface type, contamination of the finger with water-containing substances, resolution and measured area size, whereas contact time, contact pressure and smearing of the print seem to be of minor importance. Such influences lead to a certain experimental variability in inter-person and intra-person variation, which is higher than the inter-finger and intra-finger variation. Comparing the aging behavior of 17 different features using 1490 time series with a total of 41,520 fingerprint images, the great potential of the CWL technique in combination with the binary pixel feature from prior work is shown. Performing three different experiments for the classification of fingerprints into the two time classes [0, 5 h] and [5, 24 h], a maximum classification performance of 79.29% (kappa=0.46) is achieved for the general case, which is further improved for special cases. The statistical significance of the two best-performing features (both binary pixel versions based on 2D-intensity images) is shown and a feature fusion is performed, highlighting the strong dependency of the features on each other. It is concluded that such a method might be combined with additional capturing devices, such as microscopes or spectroscopes, into a very promising age estimation scheme.

  9. Perceptually lossless coding of digital monochrome ultrasound images

    NASA Astrophysics Data System (ADS)

    Wu, David; Tan, Damian M.; Griffiths, Tania; Wu, Hong Ren

    2005-07-01

    A preliminary investigation of encoding monochrome ultrasound images with a novel perceptually lossless coder is presented. Based on the JPEG 2000 coding framework, the proposed coder employs a vision model to identify and remove visually insignificant/irrelevant information. Current simulation results have shown coding performance gains over the JPEG compliant LOCO lossless and JPEG 2000 lossless coders without any perceivable distortion.

  10. Normal and shear strain imaging using 2D deformation tracking on beam steered linear array datasets

    PubMed Central

    Xu, Haiyan; Varghese, Tomy

    2013-01-01

    Purpose: Previous publications have reported on the use of one-dimensional cross-correlation analysis with beam-steered echo signals. However, this approach fails to accurately track displacements at larger depths (>4.5 cm) due to a lower signal-to-noise ratio. In this paper, the authors present the use of adaptive parallelogram-shaped two-dimensional processing blocks for deformation tracking. Methods: Beam-steered datasets were acquired using a VFX 9L4 linear array transducer operated at a 6 MHz center frequency for steered angles from −15° to 15° in increments of 1°, on both uniformly elastic and single-inclusion tissue-mimicking phantoms. Echo signals were acquired to a depth of 65 mm with the focus set at 40 mm, corresponding to the center of the phantom. Estimated angular displacements along and perpendicular to the beam direction are used to compute axial and lateral displacement vectors using a least-squares approach. Normal and shear strain tensor components are then estimated from these displacement vectors. Results: The results demonstrate that parallelogram-shaped two-dimensional deformation tracking significantly improves spatial resolution (a factor of 7.79 along the beam direction), signal-to-noise ratio (5 dB improvement), and contrast-to-noise ratio (8–14 dB improvement) associated with strain imaging using beam steering on linear array transducers. Conclusions: Parallelogram-shaped two-dimensional deformation tracking is demonstrated on beam-steered radiofrequency data, enabling its use in the estimation of normal and shear strain components. PMID:23298118
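
    The least-squares step can be sketched as follows, under the simplifying assumption that the along-beam displacement at steering angle θ is the projection u_ax·cosθ + u_lat·sinθ of a single local displacement vector; the angles mirror the ±15° acquisition, but the displacement values and noise level are illustrative:

```python
import numpy as np

# Along-beam displacement at steering angle theta projects the true displacement as
# d(theta) = u_ax*cos(theta) + u_lat*sin(theta); many angles give an overdetermined fit.
angles_deg = np.arange(-15, 16)                  # -15..15 degrees in 1 degree steps
theta = np.deg2rad(angles_deg)

u_true = np.array([0.12, 0.03])                  # mm, axial and lateral (illustrative)
d_measured = np.cos(theta) * u_true[0] + np.sin(theta) * u_true[1]
d_measured += 0.005 * np.random.randn(theta.size)   # tracking noise

A = np.column_stack([np.cos(theta), np.sin(theta)])
u_est, *_ = np.linalg.lstsq(A, d_measured, rcond=None)
print("estimated axial/lateral displacement:", u_est)
```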

  11. Registration of dynamic multiview 2D ultrasound and late gadolinium enhanced images of the heart: Application to hypertrophic cardiomyopathy characterization.

    PubMed

    Betancur, Julián; Simon, Antoine; Halbert, Edgar; Tavard, François; Carré, François; Hernández, Alfredo; Donal, Erwan; Schnell, Frédéric; Garreau, Mireille

    2016-02-01

    Describing and analyzing heart multiphysics requires the acquisition and fusion of multisensor cardiac images. Multisensor image fusion enables a combined analysis of these heterogeneous modalities. We propose to register intra-patient multiview 2D+t ultrasound (US) images with multiview late gadolinium-enhanced (LGE) images acquired during cardiac magnetic resonance imaging (MRI), in order to fuse mechanical and tissue-state information. The proposed procedure registers both US and LGE to cine MRI. The correction of slice misalignment and the rigid registration of multiview LGE and cine MRI were studied in order to select the most appropriate similarity measure; mutual information performed best both for LGE slice misalignment correction and for LGE-cine registration. Concerning US registration, dynamic endocardial contours resulting from speckle tracking echocardiography were exploited in a geometry-based dynamic registration. We propose the use of an adapted dynamic time warping procedure to synchronize cardiac dynamics in multiview US and cine MRI. The registration of US and LGE MRI was evaluated on a dataset of patients with hypertrophic cardiomyopathy. A visual assessment of 330 left ventricular regions from US images of 28 patients resulted in 92.7% of regions successfully aligned with cardiac structures in LGE. Successfully aligned regions were then used to evaluate the ability of strain indicators to predict the presence of fibrosis. Longitudinal peak-strain and peak-delay of aligned left ventricular regions were computed from the corresponding regional strain curves from US. The Mann-Whitney test showed that the expected values of these indicators differ between the populations of regions with and without fibrosis (p < 0.01). ROC curves further showed that the presence of fibrosis is one factor amongst others that modifies longitudinal peak-strain and peak-delay.

  12. The cone penetration test and 2D imaging resistivity as tools to simulate the distribution of hydrocarbons in soil

    NASA Astrophysics Data System (ADS)

    Pérez-Corona, M.; García, J. A.; Taller, G.; Polgár, D.; Bustos, E.; Plank, Z.

    2016-02-01

    The purpose of geophysical electrical surveys is to determine the subsurface resistivity distribution by making measurements on the ground surface. From these measurements, the true resistivity of the subsurface can be estimated. The ground resistivity is related to various geological parameters, such as the mineral and fluid content, porosity and degree of water saturation in the rock. Electrical resistivity surveys have been used for many decades in hydrogeological, mining and geotechnical investigations. More recently, they have been used for environmental surveys. To obtain a more accurate subsurface model than is possible with a simple 1-D model, a more complex model must be used. In a 2-D model, the resistivity values are allowed to vary in one horizontal direction (usually referred to as the x direction) but are assumed to be constant in the other horizontal (the y) direction. A more realistic model would be a fully 3-D model where the resistivity values are allowed to change in all three directions. In this research, a simulation of the cone penetration test and 2D imaging resistivity are used as tools to simulate the distribution of hydrocarbons in soil.

  13. Constraining Polarized Foregrounds for EoR Experiments I: 2D Power Spectra from the PAPER-32 Imaging Array

    NASA Astrophysics Data System (ADS)

    Kohn, S. A.; Aguirre, J. E.; Nunhokee, C. D.; Bernardi, G.; Pober, J. C.; Ali, Z. S.; Bradley, R. F.; Carilli, C. L.; DeBoer, D. R.; Gugliucci, N. E.; Jacobs, D. C.; Klima, P.; MacMahon, D. H. E.; Manley, J. R.; Moore, D. F.; Parsons, A. R.; Stefan, I. I.; Walbrugh, W. P.

    2016-06-01

    Current-generation low-frequency interferometers constructed with the objective of detecting the high-redshift 21 cm background aim to generate power spectra of the brightness temperature contrast of neutral hydrogen in the primordial intergalactic medium. Two-dimensional (2D) power spectra (power in Fourier modes parallel and perpendicular to the line of sight) formed from interferometric visibilities have been shown to delineate a boundary between spectrally smooth foregrounds (known as the wedge) and spectrally structured 21 cm background emission (the EoR window). However, polarized foregrounds are known to possess spectral structure due to Faraday rotation, which can leak into the EoR window. In this work we create and analyze 2D power spectra from the PAPER-32 imaging array in Stokes I, Q, U, and V. These allow us to observe and diagnose systematic effects in our calibration at high signal-to-noise within the Fourier space most relevant to EoR experiments. We observe well-defined windows in the Stokes visibilities, with the Stokes Q, U, and V power spectra sharing a wedge shape similar to that seen in Stokes I. With modest polarization calibration, we see no evidence that polarization calibration errors move power outside the wedge in any Stokes visibility, down to the noise levels attained. Deeper integrations will be required to confirm that this behavior persists to the depth required for EoR detection.

  14. Leaf Area Index Estimation in Vineyards from Uav Hyperspectral Data, 2d Image Mosaics and 3d Canopy Surface Models

    NASA Astrophysics Data System (ADS)

    Kalisperakis, I.; Stentoumis, Ch.; Grammatikopoulos, L.; Karantzalos, K.

    2015-08-01

    The indirect estimation of leaf area index (LAI) at large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper, we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics and (iii) 3D crop surface models. The computed canopy levels have been used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r2 > 73%) with the in-situ, ground-truth LAI measurements. As expected, the lowest correlations were derived from the calculated greenness levels of the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models; for the latter, accurate detection of canopy, soil and other materials between the vine rows is required. All approaches tend to overestimate LAI in cases with sparse, weak or unhealthy plants and canopy.

  15. Experimental validation of equations for 2D DIC uncertainty quantification.

    SciTech Connect

    Reu, Phillip L.; Miller, Timothy J.

    2010-03-01

    Uncertainty quantification (UQ) equations have been derived for predicting matching uncertainty in two-dimensional image correlation a priori. These equations include terms that represent the image noise and image contrast. Researchers at the University of South Carolina have extended previous 1D work to calculate matching errors in 2D. These 2D equations have been coded into a Sandia National Laboratories UQ software package to predict the uncertainty for DIC images. This paper presents those equations and the resulting error surfaces for trial speckle images. Comparison of the UQ results with experimentally subpixel-shifted images is also discussed.

  16. Coded Excitation Plane Wave Imaging for Shear Wave Motion Detection

    PubMed Central

    Song, Pengfei; Urban, Matthew W.; Manduca, Armando; Greenleaf, James F.; Chen, Shigao

    2015-01-01

    Plane wave imaging has greatly advanced the field of shear wave elastography thanks to its ultrafast imaging frame rate and large field-of-view (FOV). However, plane wave imaging also has decreased penetration due to the lack of transmit focusing, which makes it challenging to use plane waves for shear wave detection in deep tissues and in obese patients. This study investigated the feasibility of implementing coded excitation in plane wave imaging for shear wave detection, with the hypothesis that coded ultrasound signals can provide superior detection penetration and shear wave signal-to-noise ratio (SNR) compared to conventional ultrasound signals. Both phase encoding (Barker code) and frequency encoding (chirp code) methods were studied. A first phantom experiment showed an approximate penetration gain of 2-4 cm for the coded pulses. Two subsequent phantom studies showed that all coded pulses outperformed the conventional short imaging pulse by providing superior sensitivity to small motion and robustness to weak ultrasound signals. Finally, an in vivo liver case study on an obese subject (Body Mass Index = 40) demonstrated the feasibility of using the proposed method for in vivo applications, and showed that all coded pulses could provide higher-SNR shear wave signals than the conventional short pulse. These findings indicate that by using coded excitation shear wave detection, one can benefit from the ultrafast imaging frame rate and large FOV provided by plane wave imaging while preserving good penetration and shear wave signal quality, which is essential for obtaining robust shear elasticity measurements of tissue. PMID:26168181
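
    The two coding families mentioned can be sketched as follows: a Barker-13 phase-coded pulse and a linear chirp are generated, and a matched filter (correlation with the transmitted code) compresses the long coded echo back to a short pulse. Only the Barker pulse is demodulated in this tiny demo; frequencies, code lengths and noise levels are illustrative, not those of the paper:

```python
import numpy as np
from scipy.signal import correlate

fs, f0 = 50e6, 6e6                               # sampling and center frequency (Hz)
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Phase-encoded pulse: each Barker chip is one carrier cycle with 0 or 180 degree phase.
samples_per_chip = int(fs / f0)
t_chip = np.arange(samples_per_chip) / fs
chip = np.sin(2 * np.pi * f0 * t_chip)
barker_pulse = np.concatenate([c * chip for c in barker13])

# Frequency-encoded pulse: linear chirp sweeping 4-8 MHz over 5 microseconds.
t = np.arange(0, 5e-6, 1 / fs)
chirp_pulse = np.sin(2 * np.pi * (4e6 * t + 0.5 * (8e6 - 4e6) / 5e-6 * t**2))

# On receive, matched filtering compresses the long coded pulse back to a short echo.
echo = np.concatenate([np.zeros(300), barker_pulse, np.zeros(300)])
echo += 0.2 * np.random.randn(echo.size)
compressed = correlate(echo, barker_pulse, mode="same")
print("peak at sample:", np.argmax(np.abs(compressed)))
```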

  17. 2D image of local density and magnetic fluctuations from line-integrated interferometry-polarimetry measurements

    NASA Astrophysics Data System (ADS)

    Lin, L.; Ding, W. X.; Brower, D. L.

    2014-11-01

    Combined polarimetry-interferometry capability permits simultaneous measurement of line-integrated density and the Faraday effect with fast time response (~1 μs) and high sensitivity. Faraday effect fluctuations with a phase shift of order 0.05° associated with global tearing modes are resolved with an uncertainty of ~0.01°. For physics investigations, local density fluctuations are obtained by inverting the line-integrated interferometry data. The local magnetic and current density fluctuations are then reconstructed using a parameterized fit of the polarimetry data. Reconstructed 2D images of density and magnetic field fluctuations in a poloidal cross section exhibit significantly different spatial structure. Combined with their relative phase, the magnetic-fluctuation-induced particle transport flux and its spatial distribution are resolved.

  18. Interferometry based multispectral photon-limited 2D and 3D integral image encryption employing the Hartley transform.

    PubMed

    Muniraj, Inbarasan; Guo, Changliang; Lee, Byung-Geun; Sheridan, John T

    2015-06-15

    We present a method of securing multispectral 3D photon-counted integral imaging (PCII) using classical Hartley transform (HT) based encryption implemented with optical interferometry. This method has the simultaneous advantages of minimizing complexity, by eliminating the need for holographic recording, and addressing the phase sensitivity problem encountered when using digital cameras. These advantages, together with single-channel multispectral 3D data compactness and the inherent properties of the classical photon-counting detection model (i.e., sparse sensing and the capability for nonlinear transformation), permit better authentication of the retrieved 3D scene at various depth cues. Furthermore, the proposed technique works for both spatially and temporally incoherent illumination. To validate the proposed technique, simulations were carried out for both the 2D and 3D cases. Experimental data were processed and the results support the feasibility of the encryption method.
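
    A small sketch of the discrete Hartley transform that underlies the encryption scheme, computed here from the FFT; the optical interferometric implementation and the photon-counting model are not reproduced, and the data below is a placeholder:

```python
import numpy as np

def hartley2d(img):
    """2D discrete Hartley transform computed from the FFT: H = Re(F) - Im(F)."""
    F = np.fft.fft2(img)
    return F.real - F.imag

img = np.random.rand(64, 64)                      # placeholder for one spectral channel
H = hartley2d(img)

# For real input the DHT is (up to a 1/(M*N) factor) its own inverse, which is one
# reason it is attractive for optical implementations: the same transform recovers the data.
recovered = hartley2d(H) / img.size
print("max reconstruction error:", np.abs(recovered - img).max())
```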

  19. Mapping near-field environments of plasmonic and 2D materials with photo-induced force imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tumkur, Thejaswi U.; Doiron, Chloe; Yang, Xiao; Li, Bo; Swearer, Dayne F.; Cerjan, Benjamin W.; Nordlander, Peter; Halas, Naomi J.; Ajayan, Pulickel M.; Ringe, Emilie; Thomann, Isabell

    2016-09-01

    We demonstrate the ability to map photo-induced gradient forces in materials, using a setup akin to atomic force microscopy. This technique allows for the simultaneous characterization of topographical features and optical near-fields in materials, with a high spatio-temporal resolution. We show that the near-field gradient forces can be translated onto electric fields, enabling the mapping of plasmonic hot-spots in gold nanostructures, and the resolution of sub-10 nm features in photocatalytic materials. We further show that the dispersion-sensitive nature of near-field gradient forces can be used to image and distinguish atomically thin layers of 2-D materials, with high contrast.

  20. JPEG backward compatible coding of omnidirectional images

    NASA Astrophysics Data System (ADS)

    Řeřábek, Martin; Upenik, Evgeniy; Ebrahimi, Touradj

    2016-09-01

    Omnidirectional image and video, also known as 360 image and 360 video, are gaining in popularity with the recent growth in availability of cameras and displays that can cope with such type of content. As omnidirectional visual content represents a larger set of information about the scene, it typically requires a much larger volume of information. Efficient compression of such content is therefore important. In this paper, we review the state of the art in compression of omnidirectional visual content, and propose a novel approach to encode omnidirectional images in such a way that they are still viewable on legacy JPEG decoders.

  1. Code-modulated interferometric imaging system using phased arrays

    NASA Astrophysics Data System (ADS)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
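
    The orthogonal-code property described above can be illustrated with Walsh-Hadamard codes: the chip-wise product of two rows is another zero-mean row, so square-law detection followed by demultiplexing with the product code recovers the cross-term. This toy sketch uses real ±1 codes and therefore only recovers the real part of the correlation; the actual system uses two-bit phase codes to obtain complex visibilities:

```python
import numpy as np
from scipy.linalg import hadamard

N = 64                                            # code length (chips)
H = hadamard(N)                                   # Walsh-Hadamard codes, rows are +/-1
c1, c2 = H[5], H[9]

# Product of two orthogonal Walsh codes is another (zero-mean) Walsh code.
c12 = c1 * c2
assert np.array_equal(c12, H[5 ^ 9])

# Two antenna signals (complex baseband), each tagged with its own code, then
# power-combined as a traditional phased array would do before digitization.
s1, s2 = 1.0 + 0.3j, 0.7 - 0.5j
combined = s1 * c1 + s2 * c2

# Square-law detection followed by demultiplexing with the *product* code
# recovers the cross-term, i.e. twice the real part of s1 * conj(s2).
power = np.abs(combined) ** 2
visibility_re = (power * c12).mean() / 2.0
print(visibility_re, (s1 * np.conj(s2)).real)     # should match
```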

  2. Estimation and application of 2-D scattering matrices for sparse array imaging of simulated damage in composite panels

    NASA Astrophysics Data System (ADS)

    Williams, Westin B.; Michaels, Thomas E.; Michaels, Jennifer E.

    2017-02-01

    Reliable detection of damage in composites is critically important for failure prevention in the aerospace industry since these materials are more frequently being used in high stress applications. Structural health monitoring (SHM) via guided wave sensors mounted on or embedded within a composite structure can help detect and localize damage in real-time while potentially reducing overall maintenance costs. One approach to guided wave SHM is sparse array imaging via the minimum variance algorithm, and it has been shown in prior work that incorporating expected scattering from defects of interest can improve the quality of damage localization and characterization. For this study, simulated damage in the form of attached magnets was used for estimating scattering from recorded wavefield data. Data were recorded on a circle centered at the damage location from multiple incident directions before and after the magnets were attached. Baseline subtraction is used to estimate scattering patterns for each incident direction, and these patterns are combined and interpolated to form a full 2-D scattering matrix. This matrix is then incorporated into the minimum variance imaging algorithm, and the efficacy of this scattering estimation methodology is evaluated by comparing the resulting sparse array images to those generated using simpler scattering assumptions.

  3. An explicit shape-constrained MRF-based contour evolution method for 2-D medical image segmentation.

    PubMed

    Chittajallu, Deepak R; Paragios, Nikos; Kakadiaris, Ioannis A

    2014-01-01

    Image segmentation is, in general, an ill-posed problem and additional constraints need to be imposed in order to achieve the desired segmentation result. While segmenting organs in medical images, which is the topic of this paper, a significant amount of prior knowledge about the shape, appearance, and location of the organs is available that can be used to constrain the solution space of the segmentation problem. Among the various types of prior information, the incorporation of prior information about shape, in particular, is very challenging. In this paper, we present an explicit shape-constrained MAP-MRF-based contour evolution method for the segmentation of organs in 2-D medical images. Specifically, we represent the segmentation contour explicitly as a chain of control points. We then cast the segmentation problem as a contour evolution problem, wherein the evolution of the contour is performed by iteratively solving a MAP-MRF labeling problem. The evolution of the contour is governed by three types of prior information, namely: (i) appearance prior, (ii) boundary-edgeness prior, and (iii) shape prior, each of which is incorporated as clique potentials into the MAP-MRF problem. We use the master-slave dual decomposition framework to solve the MAP-MRF labeling problem in each iteration. In our experiments, we demonstrate the application of the proposed method to the challenging problem of heart segmentation in non-contrast computed tomography data.

  4. Transverse Strains in Muscle Fascicles during Voluntary Contraction: A 2D Frequency Decomposition of B-Mode Ultrasound Images

    PubMed Central

    Wakeling, James M.

    2014-01-01

    When skeletal muscle fibres shorten, they must increase in their transverse dimensions in order to maintain a constant volume. In pennate muscle, this transverse expansion results in the fibres rotating to greater pennation angle, with a consequent reduction in their contractile velocity in a process known as gearing. Understanding the nature and extent of this transverse expansion is necessary to understand the mechanisms driving the changes in internal geometry of whole muscles during contraction. Current methodologies allow the fascicle lengths, orientations, and curvatures to be quantified, but not the transverse expansion. The purpose of this study was to develop and validate techniques for quantifying transverse strain in skeletal muscle fascicles during contraction from B-mode ultrasound images. Images were acquired from the medial and lateral gastrocnemii during cyclic contractions, enhanced using multiscale vessel enhancement filtering and the spatial frequencies resolved using 2D discrete Fourier transforms. The frequency information was resolved into the fascicle orientations that were validated against manually digitized values. The transverse fascicle strains were calculated from their wavelengths within the images. These methods showed that the transverse strain increases while the longitudinal fascicle length decreases; however, the extent of these strains was smaller than expected. PMID:25328509
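
    A toy sketch of the frequency-decomposition step: the dominant spatial wavelength of a striped (fascicle-like) pattern is read off the 2D Fourier spectrum before and after a transverse expansion, and the transverse strain follows from the change in wavelength. The synthetic images and pixel size are illustrative only:

```python
import numpy as np

def dominant_wavelength(image, pixel_size_mm=0.1):
    """Dominant spatial wavelength (e.g. fascicle spacing) from the 2D Fourier spectrum."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    ny, nx = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size_mm))
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size_mm))
    iy, ix = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    f_mag = np.hypot(fy[iy], fx[ix])              # cycles per mm of the dominant component
    return 1.0 / f_mag if f_mag > 0 else np.inf

# Synthetic "fascicle" stripes before and after a transverse expansion (~6.7% larger spacing).
y = np.arange(256)[:, None] * 0.1                 # mm (25.6 mm field of view)
before = np.sin(2 * np.pi * y / 1.6) * np.ones((1, 256))        # 1.6 mm spacing
after = np.sin(2 * np.pi * y / (25.6 / 15)) * np.ones((1, 256))  # expanded spacing

w0, w1 = dominant_wavelength(before), dominant_wavelength(after)
transverse_strain = (w1 - w0) / w0
print(f"transverse strain estimate: {transverse_strain:.3f}")
```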

  5. Automatic Measurement of Thalamic Diameter in 2D Fetal Ultrasound Brain Images using Shape Prior Constrained Regularized Level Sets.

    PubMed

    Sridar, Pradeeba; Kumar, Ashnil; Li, Changyang; Woo, Joyce; Quinton, Ann; Benzie, Ron; Peek, Michael; Feng, Dagan; Ramarathnam, Krishna Kumar; Nanan, Ralph; Kim, Jinman

    2016-06-20

    We derived an automated algorithm for accurately measuring the thalamic diameter from 2D fetal ultrasound (US) brain images. The algorithm overcomes the inherent limitations of the US image modality: non-uniform density, missing boundaries, and strong speckle noise. We introduced a 'guitar' structure that represents the negative space surrounding the thalamic regions. The guitar acts as a landmark for deriving the widest points of the thalamus even when its boundaries are not identifiable. We augmented a generalized level-set framework with a shape prior and constraints derived from statistical shape models of the guitars; this framework was used to segment US images and measure the thalamic diameter. Our segmentation method achieved a higher mean Dice similarity coefficient, Hausdorff distance, specificity and reduced contour leakage when compared to other well-established methods. The automatic thalamic diameter measurement had an inter-observer variability of -0.56±2.29 millimeters compared to manual measurement by an expert sonographer. Our method was capable of automatically estimating the thalamic diameter, with the measurement accuracy on par with clinical assessment. Our method can be used as part of computer-assisted screening tools that automatically measure the biometrics of the fetal thalamus; these biometrics are linked to neuro-developmental outcomes.

  6. Aggregating local image descriptors into compact codes.

    PubMed

    Jégou, Hervé; Perronnin, Florent; Douze, Matthijs; Sánchez, Jorge; Pérez, Patrick; Schmid, Cordelia

    2012-09-01

    This paper addresses the problem of large-scale image search. Three constraints have to be taken into account: search accuracy, efficiency, and memory usage. We first present and evaluate different ways of aggregating local image descriptors into a vector and show that the Fisher kernel achieves better performance than the reference bag-of-visual words approach for any given vector dimension. We then jointly optimize dimensionality reduction and indexing in order to obtain a precise vector comparison as well as a compact representation. The evaluation shows that the image representation can be reduced to a few dozen bytes while preserving high accuracy. Searching a 100 million image data set takes about 250 ms on one processor core.

  7. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method, from CSI-based modified JPEG or standard JPEG, under a given target bit rate, utilizing the so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.

  8. 2-D multi-frequency imaging of a tumor inclusion in homogeneous breast phantom using harmonic motion doppler imaging method.

    PubMed

    Kamali Tafreshi, Azadeh; Top, Can; Gencer, Nevzat

    2017-02-02

    Harmonic Motion Microwave Doppler Imaging (HMMDI) is a novel imaging modality for imaging the coupled electrical and mechanical properties of body tissues. In this paper, we used two experimental systems with different receiver configurations to obtain HMMDI images from tissue-mimicking phantoms at multiple vibration frequencies between 15 Hz and 35 Hz. In the first system, we used a spectrum analyzer to obtain the Doppler data in the frequency domain, while in the second, we used a homodyne receiver designed to acquire time-domain data. The developed phantoms mimic the elastic and dielectric properties of breast fat tissue and include a 14 mm × 9 mm cylindrical inclusion representing a tumor. A focused ultrasound probe was mechanically scanned in two lateral dimensions to generate HMMDI images of the phantoms. The inclusions were resolved inside the fat phantom using both experimental setups. Image resolution increased with increasing vibration frequency. The sensitivity of the designed receiver was higher compared to the spectrum analyzer measurements. The results also showed that time-domain data acquisition should be used to fully exploit the potential of the HMMDI method.

  9. Effect of image processing version on detection of non-calcification cancers in 2D digital mammography imaging

    NASA Astrophysics Data System (ADS)

    Warren, L. M.; Cooke, J.; Given-Wilson, R. M.; Wallis, M. G.; Halling-Brown, M.; Mackenzie, A.; Chakraborty, D. P.; Bosmans, H.; Dance, D. R.; Young, K. C.

    2013-03-01

    Image processing (IP) is the last step in the digital mammography imaging chain before interpretation by a radiologist. Each manufacturer has their own IP algorithm(s) and the appearance of an image after IP can vary greatly depending upon the algorithm and version used. It is unclear whether these differences can affect cancer detection. This work investigates the effect of IP on the detection of non-calcification cancers by expert observers. Digital mammography images for 190 patients were collected from two screening sites using Hologic amorphous selenium detectors. Eighty of these cases contained non-calcification cancers. The images were processed using three versions of IP from Hologic - default (full enhancement), low contrast (intermediate enhancement) and pseudo screen-film (no enhancement). Seven experienced observers inspected the images and marked the location of regions suspected to be non-calcification cancers assigning a score for likelihood of malignancy. This data was analysed using JAFROC analysis. The observers also scored the clinical interpretation of the entire case using the BSBR classification scale. This was analysed using ROC analysis. The breast density in the region surrounding each cancer and the number of times each cancer was detected were calculated. IP did not have a significant effect on the radiologists' judgment of the likelihood of malignancy of individual lesions or their clinical interpretation of the entire case. No correlation was found between number of times each cancer was detected and the density of breast tissue surrounding that cancer.

  10. Efficient coding of wavelet trees and its applications in image coding

    NASA Astrophysics Data System (ADS)

    Zhu, Bin; Yang, En-hui; Tewfik, Ahmed H.; Kieffer, John C.

    1996-02-01

    We propose in this paper a novel lossless tree coding algorithm. The technique is a direct extension of the bisection method, the simplest case of the complexity reduction method recently proposed by Kieffer and Yang, which has been used for lossless coding of data strings. A reduction rule is used to obtain the irreducible representation of a tree, and this irreducible tree is entropy-coded instead of the input tree itself. The reduction is reversible, and the original tree can be fully recovered from its irreducible representation. More specifically, we search for equivalent subtrees from top to bottom. When equivalent subtrees are found, a special symbol is appended to the value of the root node of the first equivalent subtree, the root node of the second subtree is assigned the index which points to the first subtree, and all other nodes in the second subtree are removed. This procedure is repeated until the tree cannot be reduced further, yielding the irreducible tree, or irreducible representation, of the original tree. The proposed method can effectively remove the redundancy in an image and results in more efficient compression. It is proved that as the tree size approaches infinity, the proposed method offers optimal compression performance; in practice it is generally more efficient than direct coding of the input tree. The proposed method can be directly applied to code wavelet trees in non-iterative wavelet-based image coding schemes. A modified method is also proposed for coding wavelet zerotrees in embedded zerotree wavelet (EZW) image coding. Although its coding efficiency is slightly reduced, the modified version maintains exact control of the bit rate and the scalability of the bit stream in EZW coding.
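
    A minimal sketch of the reduction idea (the exact rule, special symbol and traversal order of the paper are not reproduced): repeated subtrees are replaced by a reference to their first occurrence, and only the resulting irreducible structure would be entropy-coded. Trees are nested tuples (node_value, child, child, ...); names are illustrative only:

```python
def reduce_tree(tree):
    seen = {}          # canonical subtree -> reference index
    refs = []          # first-occurrence subtrees, kept so the reduction stays reversible

    def walk(node):
        if not isinstance(node, tuple):           # leaf
            return node
        key = repr(node)
        if key in seen:                           # equivalent subtree found earlier:
            return ("REF", seen[key])             # keep only a pointer to it
        children = tuple(walk(c) for c in node[1:])
        seen[key] = len(refs)
        refs.append(node)
        return (node[0],) + children

    return walk(tree), refs

tree = ("a", ("b", 1, 2), ("c", ("b", 1, 2), 3))
irreducible, table = reduce_tree(tree)
print(irreducible)    # ('a', ('b', 1, 2), ('c', ('REF', 0), 3))
```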

  11. Analysis for simplified optics coma effection on spectral image inversion of coded aperture spectral imager

    NASA Astrophysics Data System (ADS)

    Liu, Yangyang; Lv, Qunbo; Li, Weiyan; Xiangli, Bin

    2015-09-01

    Push-broom coded aperture spectral imaging (PCASI), a novel spectral imaging technology developed in recent years, has the advantages of high throughput, high SNR and high stability. This coded aperture spectral imaging technique utilizes fixed code templates and a push-broom mode, which enables high-precision reconstruction of spatial and spectral information. However, during optical lens design, manufacturing and alignment, minor coma errors inevitably exist, and even minor coma errors can reduce image quality. In this paper, we simulate the influence of the system's optical coma error on the quality of the reconstructed image, analyze the variation of the coded aperture under different degrees of coma, and derive a curve relating image quality to coma magnitude for a 255×255 code template, which provides an important reference for the design and development of push-broom coded aperture spectrometers.

  12. OpenHVSR: imaging the subsurface 2D/3D elastic properties through multiple HVSR modeling and inversion

    NASA Astrophysics Data System (ADS)

    Bignardi, S.; Mantovani, A.; Abu Zeid, N.

    2016-08-01

    OpenHVSR is a computer program developed in the Matlab environment, designed for the simultaneous modeling and inversion of large Horizontal-to-Vertical Spectral Ratio (HVSR or H/V) datasets in order to construct 2D/3D subsurface models (topography included). The program is designed to provide a highly interactive experience to the user while remaining intuitive to use. It implements several effective and established tools already present in the code ModelHVSR by Herak (2008), and many novel features, such as: confidence evaluation of lateral heterogeneity; evaluation of the frequency-dependent impact of single parameters on the misfit function; relaxation of Vp/Vs bounds to allow for water table inclusion; a new cost function formulation that includes a slope-dependent term for fast matching of peaks, which greatly enhances convergence when inverting low-quality HVSR curves; and the capability for the user to edit the subsurface model at any time during the inversion and to test the changes before acceptance. In what follows, we present many features of the program and show its capabilities on both simulated and real data. We aim to supply to the scientific and professional community a powerful tool capable of handling large sets of HVSR curves, to retrieve the most from their microtremor data within a reduced amount of time, while allowing the experienced scientist the flexibility to integrate into the model their own geological knowledge of the sites under investigation. This is especially desirable now that microtremor testing has become routinely used. After testing the code over different datasets, both simulated and real, we decided to make it available in an open-source format. The program is available by contacting the authors.
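
    For readers unfamiliar with the measured quantity, a basic H/V spectral ratio can be sketched from three-component ambient-noise recordings as below. OpenHVSR itself is a Matlab package that forward-models and inverts HVSR curves from layered elastic models; this Python snippet, with illustrative smoothing and synthetic data, only shows how the observed curve is typically obtained:

```python
import numpy as np

def hvsr(north, east, vertical, fs, smooth=11):
    """Basic H/V spectral ratio from three-component ambient noise."""
    freqs = np.fft.rfftfreq(north.size, d=1.0 / fs)
    spec = lambda x: np.abs(np.fft.rfft(x * np.hanning(x.size)))
    h = np.sqrt(spec(north)**2 + spec(east)**2) / np.sqrt(2.0)   # mean horizontal spectrum
    v = spec(vertical)
    kernel = np.ones(smooth) / smooth                            # simple moving-average smoothing
    ratio = np.convolve(h, kernel, "same") / np.convolve(v, kernel, "same")
    return freqs, ratio

fs = 100.0                                        # Hz
t = np.arange(0, 600, 1.0 / fs)                   # 10 minutes of synthetic noise
rng = np.random.default_rng(1)
vertical = rng.standard_normal(t.size)
north = rng.standard_normal(t.size) + 2.0 * np.sin(2 * np.pi * 1.5 * t)  # fake 1.5 Hz resonance
east = rng.standard_normal(t.size) + 2.0 * np.sin(2 * np.pi * 1.5 * t)

freqs, ratio = hvsr(north, east, vertical, fs)
mask = freqs > 0.2
print("H/V peak frequency: %.2f Hz" % freqs[mask][np.argmax(ratio[mask])])
```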

  13. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  14. Adaptive coded aperture imaging: progress and potential future applications

    NASA Astrophysics Data System (ADS)

    Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.

    2011-09-01

    Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), a wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade-space analysis of key design parameters of coded apertures and review potential applications as replacements for traditional imaging optics. Results are presented, based on last year's work, of our investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally we discuss the potential application of coded apertures for replacing the objective lenses of night vision goggles (NVGs).

  15. Coded aperture imaging for fluorescent x-rays

    SciTech Connect

    Haboub, A.; MacDowell, A. A.; Marchesini, S.; Parkinson, D. Y.

    2014-06-15

    We employ a coded aperture pattern in front of a pixelated charge-coupled device detector to image fluorescent x-rays (6–25 keV) from samples irradiated with synchrotron radiation. Coded apertures encode the angular direction of x-rays and, given a known source plane, allow for a large numerical aperture x-ray imaging system. An algorithm was developed to design and fabricate the free-standing No-Two-Holes-Touching aperture pattern. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of a ray tracing technique and confirmed by experiments on standard samples.

  16. Imaging The Genetic Code of a Virus

    NASA Astrophysics Data System (ADS)

    Graham, Jenna; Link, Justin

    2013-03-01

    Atomic Force Microscopy (AFM) has allowed scientists to explore physical characteristics of nano-scale materials. However, the challenges that come with such an investigation are rarely expressed. In this research project, a method was developed to image the well-studied DNA of the virus lambda phage. By testing and integrating several sample preparations described in the literature, a quality image of lambda phage DNA can be obtained. In our experiment, we developed a technique using the Veeco Autoprobe CP AFM and a mica substrate with an appropriate adsorption buffer of HEPES and NiCl2. This presentation will focus on the development of a procedure to image lambda phage DNA at Xavier University. Supported by the John A. Hauck Foundation and Xavier University.

  17. A Cylindrical, Inner Volume Selecting 2D-T2-Prep Improves GRAPPA-Accelerated Image Quality in MRA of the Right Coronary Artery

    PubMed Central

    Coristine, Andrew J.; Yerly, Jerome; Stuber, Matthias

    2016-01-01

    Background Two-dimensional (2D) spatially selective radiofrequency (RF) pulses may be used to excite restricted volumes. By incorporating a "pencil beam" 2D pulse into a T2-Prep, one may create a "2D-T2-Prep" that combines T2-weighting with an intrinsic outer volume suppression. This may particularly benefit parallel imaging techniques, where artefacts typically originate from residual foldover signal. By suppressing foldover signal with a 2D-T2-Prep, image quality may therefore improve. We present numerical simulations, phantom and in vivo validations to address this hypothesis. Methods A 2D-T2-Prep and a conventional T2-Prep were used with GRAPPA-accelerated MRI (R = 1.6). The techniques were first compared in numerical phantoms, where per pixel maps of SNR (SNRmulti), noise, and g-factor were predicted for idealized sequences. Physical phantoms, with compartments doped to mimic blood, myocardium, fat, and coronary vasculature, were scanned with both T2-Preparation techniques to determine the actual SNRmulti and vessel sharpness. For in vivo experiments, the right coronary artery (RCA) was imaged in 10 healthy adults, using accelerations of R = 1,3, and 6, and vessel sharpness was measured for each. Results In both simulations and phantom experiments, the 2D-T2-Prep improved SNR relative to the conventional T2-Prep, by an amount that depended on both the acceleration factor and the degree of outer volume suppression. For in vivo images of the RCA, vessel sharpness improved most at higher acceleration factors, demonstrating that the 2D-T2-Prep especially benefits accelerated coronary MRA. Conclusion Suppressing outer volume signal with a 2D-T2-Prep improves image quality particularly well in GRAPPA-accelerated acquisitions in simulations, phantoms, and volunteers, demonstrating that it should be considered when performing accelerated coronary MRA. PMID:27736866

  18. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proven to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., code lengths usually shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR generates a one-bit binary code for each dimension and simultaneously ranks the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR achieves performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
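
    As a rough illustration of the per-dimension bit generation and bit ranking described above, the following Python sketch thresholds each feature dimension, scores every candidate bit with a hypothetical ambiguity cost (a stand-in for the paper's actual cost function, which is not reproduced here), and keeps the minimum-cost bits:

    ```python
    import numpy as np

    def short_binary_codes(X, n_bits):
        """Illustrative per-dimension bit generation and ranking (not the exact MCR cost)."""
        thresholds = np.median(X, axis=0)           # one threshold per dimension
        bits = (X > thresholds).astype(np.uint8)    # one candidate bit per dimension
        # Hypothetical cost: fraction of points falling within a narrow margin of the
        # threshold (ambiguous assignments); a lower cost marks a more reliable bit.
        margin = 0.1 * X.std(axis=0)
        cost = np.mean(np.abs(X - thresholds) < margin, axis=0)
        keep = np.argsort(cost)[:n_bits]            # top-ranked (minimum-cost) dimensions
        return bits[:, keep], keep

    X = np.random.randn(1000, 512)                  # toy 512-D descriptors
    codes, selected_dims = short_binary_codes(X, n_bits=64)
    print(codes.shape)                              # (1000, 64) -> 64-bit codes
    ```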

  19. Integral equation analysis and optimization of 2D layered nanolithography masks by complex images Green's function technique in TM polarization.

    PubMed

    Haghtalab, Mohammad; Faraji-Dana, Reza

    2012-05-01

    Analysis and optimization of diffraction effects in nanolithography through multilayered media with a fast and accurate field-theoretical approach is presented. The scattered field through an arbitrary two-dimensional (2D) mask pattern in multilayered media illuminated by a TM-polarized incident wave is determined by using an electric field integral equation formulation. In this formulation the electric field is represented in terms of complex images Green's functions. The method of moments is then employed to solve the resulting integral equation. In this way an accurate and computationally efficient approximate method is achieved. The accuracy of the proposed method is vindicated through comparison with direct numerical integration results. Moreover, the comparison is made between the results obtained by the proposed method and those obtained by the full-wave finite-element method. The ray tracing method is combined with the proposed method to describe the imaging process in the lithography. The simulated annealing algorithm is then employed to solve the inverse problem, i.e., to design an optimized mask pattern to improve the resolution. Two binary mask patterns under normal incident coherent illumination are designed by this method, where it is shown that the subresolution features improve the critical dimension significantly.
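
    The mask-optimization step above relies on simulated annealing over a binary mask; the sketch below shows only the generic annealing loop, with a placeholder cost function standing in for the complex-images/method-of-moments field solver and the resolution metric used in the paper:

    ```python
    import numpy as np

    def anneal_mask(cost, shape=(32, 32), t0=1.0, t_min=1e-3, alpha=0.95,
                    iters_per_t=200, seed=0):
        """Generic simulated annealing for a binary mask; `cost` is problem-specific."""
        rng = np.random.default_rng(seed)
        mask = rng.integers(0, 2, size=shape)
        best, best_c = mask.copy(), cost(mask)
        cur_c, t = best_c, t0
        while t > t_min:
            for _ in range(iters_per_t):
                i, j = rng.integers(shape[0]), rng.integers(shape[1])
                mask[i, j] ^= 1                     # propose: flip one mask pixel
                c = cost(mask)
                if c < cur_c or rng.random() < np.exp((cur_c - c) / t):
                    cur_c = c                       # accept the move
                    if c < best_c:
                        best, best_c = mask.copy(), c
                else:
                    mask[i, j] ^= 1                 # reject: undo the flip
            t *= alpha                              # cool down
        return best, best_c

    # Toy usage with a placeholder cost; a real run would score the simulated aerial image.
    mask, c = anneal_mask(lambda m: abs(int(m.sum()) - 300))
    ```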

  20. WASI-2D: A software tool for regionally optimized analysis of imaging spectrometer data from deep and shallow waters

    NASA Astrophysics Data System (ADS)

    Gege, Peter

    2014-01-01

    An image processing software has been developed which allows quantitative analysis of multi- and hyperspectral data from oceanic, coastal and inland waters. It has been implemented into the Water Colour Simulator WASI, which is a tool for the simulation and analysis of optical properties and light field parameters of deep and shallow waters. The new module WASI-2D can import atmospherically corrected images from airborne sensors and satellite instruments in various data formats and units like remote sensing reflectance or radiance. It can be easily adapted by the user to different sensors and to optical properties of the studied area. Data analysis is done by inverse modelling using established analytical models. The bio-optical model of the water column accounts for gelbstoff (coloured dissolved organic matter, CDOM), detritus, and mixtures of up to 6 phytoplankton classes and 2 spectrally different types of suspended matter. The reflectance of the sea floor is treated as sum of up to 6 substrate types. An analytic model of downwelling irradiance allows wavelength dependent modelling of sun glint and sky glint at the water surface. The provided database covers the spectral range from 350 to 1000 nm in 1 nm intervals. It can be exchanged easily to represent the optical properties of water constituents, bottom types and the atmosphere of the studied area.

  1. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    NASA Astrophysics Data System (ADS)

    Afik, Eldad

    2015-09-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, for example, fluid dynamics in microfluidic devices, bacteria taxis, cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate exceeding 94% with only 1% false detections.
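
    The algorithm is described as an offspring of the circle Hough transform; the minimal sketch below shows the classic accumulator-voting form of that transform (not the author's improved, classification-based variant), assuming ring edge pixels have already been extracted:

    ```python
    import numpy as np

    def circle_hough(edge_points, radii, shape):
        """Classic circle Hough transform: vote for candidate centres at each radius.
        `edge_points` is an iterable of (row, col) ring/edge pixels."""
        acc = np.zeros((len(radii),) + tuple(shape))
        thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
        for k, r in enumerate(radii):
            dr = np.round(r * np.sin(thetas)).astype(int)
            dc = np.round(r * np.cos(thetas)).astype(int)
            for y, x in edge_points:
                rows, cols = y - dr, x - dc         # candidate centres at radius r
                ok = (rows >= 0) & (rows < shape[0]) & (cols >= 0) & (cols < shape[1])
                np.add.at(acc[k], (rows[ok], cols[ok]), 1)
        k, y, x = np.unravel_index(np.argmax(acc), acc.shape)
        return (y, x), radii[k]                     # strongest ring: centre and radius
    ```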

  2. Robust and highly performant ring detection algorithm for 3d particle tracking using 2d microscope imaging

    PubMed Central

    Afik, Eldad

    2015-01-01

    Three-dimensional particle tracking is an essential tool in studying dynamics under the microscope, for example, fluid dynamics in microfluidic devices, bacteria taxis, cellular trafficking. The 3d position can be determined using 2d imaging alone by measuring the diffraction rings generated by an out-of-focus fluorescent particle, imaged on a single camera. Here I present a ring detection algorithm exhibiting a high detection rate, which is robust to the challenges arising from ring occlusion, inclusions and overlaps, and allows resolving particles even when near to each other. It is capable of real-time analysis thanks to its high performance and low memory footprint. The proposed algorithm, an offspring of the circle Hough transform, addresses the need to efficiently trace the trajectories of many particles concurrently, when their number is not necessarily fixed, by solving a classification problem, and overcomes the challenges of finding local maxima in the complex parameter space which results from ring clusters and noise. Several algorithmic concepts introduced here can be advantageous in other cases, particularly when dealing with noisy and sparse data. The implementation is based on open-source and cross-platform software packages only, making it easy to distribute and modify. It is implemented in a microfluidic experiment allowing real-time multi-particle tracking at 70 Hz, achieving a detection rate exceeding 94% with only 1% false detections. PMID:26329642

  3. Imaging Agonist-Induced D2/D3 Receptor Desensitization and Internalization In Vivo with PET/fMRI.

    PubMed

    Sander, Christin Y; Hooker, Jacob M; Catana, Ciprian; Rosen, Bruce R; Mandeville, Joseph B

    2016-04-01

    This study investigated the dynamics of dopamine receptor desensitization and internalization, thereby proposing a new technique for non-invasive, in vivo measurements of receptor adaptations. The D2/D3 agonist quinpirole, which induces receptor internalization in vitro, was administered at graded doses in non-human primates while imaging with simultaneous positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). A pronounced temporal divergence between receptor occupancy and fMRI signal was observed: occupancy remained elevated while fMRI responded transiently. Analogous experiments with an antagonist (prochlorperazine) and a lower-affinity agonist (ropinirole) exhibited reduced temporal dissociation between occupancy and function, consistent with a mechanism of desensitization and internalization that depends upon drug efficacy and affinity. We postulated a model that incorporates internalization into a neurovascular-coupling relationship. This model yielded in vivo desensitization/internalization rates (0.2/min for quinpirole) consistent with published in vitro measurements. Overall, these results suggest that simultaneous PET/fMRI enables characterization of dynamic neuroreceptor adaptations in vivo, and may offer a first non-invasive method for assessing receptor desensitization and internalization.

  4. Quantification of tracer plume transport parameters in 2D saturated porous media by cross-borehole ERT imaging

    NASA Astrophysics Data System (ADS)

    Lekmine, G.; Auradou, H.; Pessel, M.; Rayner, J. L.

    2017-04-01

    Cross-borehole ERT imaging was tested to quantify the average velocity and transport parameters of tracer plumes in saturated porous media. Seven tracer tests were performed at different flow rates and monitored by either a vertical or horizontal dipole-dipole ERT sequence. These sequences were tested to reconstruct the shape and temporally follow the spread of the tracer plumes through a background regularization procedure. Data sets were inverted with the same inversion parameters and 2D model sections of resistivity ratios were converted to tracer concentrations. Both array types provided an accurate estimation of the average pore velocity vz. The total mass Mtot recovered was always overestimated by the horizontal dipole-dipole and underestimated by the vertical dipole-dipole. The vertical dipole-dipole was, however, reliable for quantifying the longitudinal dispersivity λz, while the horizontal dipole-dipole returned a better estimate of the transverse component λx. λ and Mtot were mainly influenced by the 2D distribution of the cumulated electrical sensitivity and the Shadow Effects induced by the third dimension. The size reduction of the edge of the plume was also related to the inability of the inversion process to reconstruct sharp resistivity contrasts at the interface. Smoothing was counterbalanced by a non-realistic rise of the ERT concentrations around the centre of mass, returning overpredicted total masses. A sensitivity analysis on the cementation factor m and the porosity ϕ demonstrated that a change of 8% in one of these parameters induced non-negligible variations of 30% and 40% in the dispersion coefficients and mass recovery.
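
    One common petrophysical route for the resistivity-ratio-to-concentration conversion mentioned above (not necessarily the exact relations used in this study) combines Archie's law with a pore-fluid conductivity that is linear in tracer concentration:

    $$ \sigma_b = \sigma_w\,\phi^{\,m}, \qquad \sigma_w(c) = \sigma_{w,0}\,(1 + \beta c) \;\;\Longrightarrow\;\; c \;\approx\; \frac{1}{\beta}\left(\frac{\rho_b(0)}{\rho_b(t)} - 1\right), $$

    where σb and ρb are the bulk conductivity and resistivity, σw the pore-fluid conductivity, ϕ the porosity, m the cementation factor, and β the assumed linear sensitivity of fluid conductivity to tracer concentration c.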

  5. Development of fast patient position verification software using 2D-3D image registration and its clinical experience

    PubMed Central

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-01-01

    To improve treatment workflow, we developed a graphics processing unit (GPU)-based patient positional verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313
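
    Of the similarity metrics listed above, the zero-mean normalized cross-correlation is the simplest to state; the sketch below (illustrative only, with hypothetical variable names) compares a digitally reconstructed radiograph against an FPD image:

    ```python
    import numpy as np

    def zncc(a, b):
        """Zero-mean normalized cross-correlation between two equally sized images."""
        a = a.astype(float).ravel()
        b = b.astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    drr = np.random.rand(256, 256)                  # hypothetical DRR rendered from the CT
    fpd = drr + 0.05 * np.random.randn(256, 256)    # hypothetical measured FPD image
    print(zncc(drr, fpd))                           # close to 1 for well-aligned images
    ```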

  6. Robust initialization of 2D-3D image registration using the projection-slice theorem and phase correlation

    SciTech Connect

    Bom, M. J. van der; Bartels, L. W.; Gounis, M. J.; Homan, R.; Timmer, J.; Viergever, M. A.; Pluim, J. P. W.

    2010-04-15

    Purpose: The image registration literature comprises many methods for 2D-3D registration for which accuracy has been established in a variety of applications. However, clinical application is limited by a small capture range. Initial offsets outside the capture range of a registration method will not converge to a successful registration. Previously reported capture ranges, defined as the 95% success range, are in the order of 4-11 mm mean target registration error. In this article, a relatively computationally inexpensive and robust estimation method is proposed with the objective to enlarge the capture range. Methods: The method uses the projection-slice theorem in combination with phase correlation in order to estimate the transform parameters, which provides an initialization of the subsequent registration procedure. Results: The feasibility of the method was evaluated by experiments using digitally reconstructed radiographs generated from in vivo 3D-RX data. With these experiments it was shown that the projection-slice theorem provides successful estimates of the rotational transform parameters for perspective projections and in case of translational offsets. The method was further tested on ex vivo ovine x-ray data. In 95% of the cases, the method yielded successful estimates for initial mean target registration errors up to 19.5 mm. Finally, the method was evaluated as an initialization method for an intensity-based 2D-3D registration method. The uninitialized and initialized registration experiments had success rates of 28.8% and 68.6%, respectively. Conclusions: The authors have shown that the initialization method based on the projection-slice theorem and phase correlation yields adequate initializations for existing registration methods, thereby substantially enlarging the capture range of these methods.
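
    The initialization combines the projection-slice theorem with phase correlation; the sketch below shows only the standard FFT-based phase-correlation building block for estimating a whole-pixel translation between two 2D images (the projection-slice step is omitted):

    ```python
    import numpy as np

    def phase_correlation(img_a, img_b):
        """Return (dy, dx) such that img_b is approximately img_a shifted by (dy, dx)."""
        FA, FB = np.fft.fft2(img_a), np.fft.fft2(img_b)
        cross_power = np.conj(FA) * FB
        cross_power /= np.abs(cross_power) + 1e-12  # keep only the phase
        corr = np.abs(np.fft.ifft2(cross_power))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > img_a.shape[0] // 2:                # map wrapped peaks to signed shifts
            dy -= img_a.shape[0]
        if dx > img_a.shape[1] // 2:
            dx -= img_a.shape[1]
        return int(dy), int(dx)

    a = np.random.rand(128, 128)
    b = np.roll(a, shift=(5, -3), axis=(0, 1))      # b is a shifted by (5, -3)
    print(phase_correlation(a, b))                  # -> (5, -3)
    ```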

  7. Development of fast patient position verification software using 2D-3D image registration and its clinical experience.

    PubMed

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-09-01

    To improve treatment workflow, we developed a graphics processing unit (GPU)-based patient positional verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy.

  8. Hybrid Compton camera/coded aperture imaging system

    DOEpatents

    Mihailescu, Lucian [Livermore, CA; Vetter, Kai M [Alameda, CA

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  9. A model of PSF estimation for coded mask infrared imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Ao; Jin, Jie; Wang, Qing; Yang, Jingyu; Sun, Yi

    2014-11-01

    The point spread function (PSF) of an imaging system with a coded mask is generally acquired by practical measurement with a calibration light source. Because the thermal radiation of coded masks is more severe in infrared than in visible imaging systems, which buries the modulation effects of the mask pattern, it is difficult to estimate and evaluate the performance of the mask pattern from measured results. To tackle this problem, a model for infrared imaging systems with masks is presented in this paper. The model is composed of two functional components: coded mask imaging with ideally focused lenses, and the imperfect imaging of practical lenses. Ignoring the thermal radiation, the system's PSF can then be represented by a convolution of the diffraction pattern of the mask with the PSF of the practical lenses. To evaluate the performance of different mask patterns, a set of criteria is designed according to different imaging and recovery methods. Furthermore, imaging results with inclined plane waves are analyzed to obtain the variation of the PSF within the field of view. The influence of the mask cell size is also analyzed to control the diffraction pattern. Numerical results show that mask patterns for direct imaging systems should have more random structure, while more periodic structure is needed in systems with image reconstruction. By adjusting the combination of random and periodic arrangement, the desired diffraction pattern can be achieved.
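
    In symbols, the composition stated above (with the mask's thermal self-emission ignored) is simply a convolution of the two component responses; the notation here is ours, not the paper's:

    $$ h_{\mathrm{sys}}(x,y) \;\approx\; \big(h_{\mathrm{mask}} * h_{\mathrm{lens}}\big)(x,y) \;=\; \iint h_{\mathrm{mask}}(u,v)\,h_{\mathrm{lens}}(x-u,\,y-v)\,\mathrm{d}u\,\mathrm{d}v, $$

    where h_mask is the diffraction pattern of the coded mask under ideal focusing and h_lens is the PSF of the practical lenses.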

  10. IMPROVEMENTS IN CODED APERTURE THERMAL NEUTRON IMAGING.

    SciTech Connect

    VANIER,P.E.

    2003-08-03

    A new thermal neutron imaging system has been constructed, based on a 20-cm x 17-cm He-3 position-sensitive detector with spatial resolution better than 1 mm. New compact custom-designed position-decoding electronics are employed, as well as high-precision cadmium masks with Modified Uniformly Redundant Array patterns. Fast Fourier Transform algorithms are incorporated into the deconvolution software to provide rapid conversion of shadowgrams into real images. The system demonstrates the principles for locating sources of thermal neutrons by a stand-off technique, as well as visualizing the shapes of nearby sources. The data acquisition time could potentially be reduced two orders of magnitude by building larger detectors.
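
    The FFT-accelerated deconvolution mentioned above amounts to a cyclic cross-correlation of the recorded shadowgram with the mask's decoding array; a minimal sketch (construction of the MURA mask and its decoding array is omitted) might look like this:

    ```python
    import numpy as np

    def decode_coded_aperture(shadowgram, decoder):
        """Cyclic cross-correlation of the shadowgram with the decoding array via FFTs.
        For MURA-family masks the decoder G is derived from the mask pattern so that
        the periodic correlation of mask and G approximates a delta function."""
        S = np.fft.fft2(shadowgram)
        G = np.fft.fft2(decoder, s=shadowgram.shape)
        return np.real(np.fft.ifft2(S * np.conj(G)))   # reconstructed source image
    ```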

  11. Robust 3D–2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    PubMed Central

    Otake, Yoshito; Wang, Adam S; Stayman, J Webster; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A Jay; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2016-01-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with `success' defined as PDE <5 mm) using 1,718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the

  12. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Webster Stayman, J.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A. Jay; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2013-12-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1,718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial

  13. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation.

    PubMed

    Otake, Yoshito; Wang, Adam S; Webster Stayman, J; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A Jay; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2013-12-07

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with 'success' defined as PDE <5 mm) using 1,718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the

  14. Spectrum simulation of rough and nanostructured targets from their 2D and 3D image by Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Schiettekatte, François; Chicoine, Martin

    2016-03-01

    Corteo is a program that implements Monte Carlo (MC) method to simulate ion beam analysis (IBA) spectra of several techniques by following the ions trajectory until a sufficiently large fraction of them reach the detector to generate a spectrum. Hence, it fully accounts for effects such as multiple scattering (MS). Here, a version of Corteo is presented where the target can be a 2D or 3D image. This image can be derived from micrographs where the different compounds are identified, therefore bringing extra information into the solution of an IBA spectrum, and potentially significantly constraining the solution. The image intrinsically includes many details such as the actual surface or interfacial roughness, or actual nanostructures shape and distribution. This can for example lead to the unambiguous identification of structures stoichiometry in a layer, or at least to better constraints on their composition. Because MC computes in details the trajectory of the ions, it simulates accurately many of its aspects such as ions coming back into the target after leaving it (re-entry), as well as going through a variety of nanostructures shapes and orientations. We show how, for example, as the ions angle of incidence becomes shallower than the inclination distribution of a rough surface, this process tends to make the effective roughness smaller in a comparable 1D simulation (i.e. narrower thickness distribution in a comparable slab simulation). Also, in ordered nanostructures, target re-entry can lead to replications of a peak in a spectrum. In addition, bitmap description of the target can be used to simulate depth profiles such as those resulting from ion implantation, diffusion, and intermixing. Other improvements to Corteo include the possibility to interpolate the cross-section in angle-energy tables, and the generation of energy-depth maps.

  15. Efficient block error concealment code for image and video transmission

    NASA Astrophysics Data System (ADS)

    Min, Jungki; Chan, Andrew K.

    1999-05-01

    Image and video compression standards such as JPEG, MPEG, and H.263 are highly sensitive to errors during transmission. Among the typical error propagation mechanisms in video compression schemes, loss of block synchronization produces the worst image degradation. Even a single-bit error in block synchronization may cause data to be placed in the wrong positions through spatial shifts. Our proposed efficient block error concealment code (EBECC) virtually guarantees block synchronization, and it improves coding efficiency several hundredfold over the error resilient entropy code (EREC), proposed by N. G. Kingsbury and D. W. Redmill, depending on the image format and size. In addition, the EBECC produces slightly better resolution in the reconstructed images or video frames than the EREC. Another important advantage of the EBECC is that it does not require redundancy, in contrast to the EREC, which requires 2-3 percent redundancy. Our preliminary results show the EBECC is 240 times faster than the EREC for encoding and 330 times faster for decoding, based on the CIF format of the H.263 video coding standard. The EBECC can be used with most popular image and video compression schemes such as JPEG, MPEG, and H.263. It is especially useful in wireless networks, in which the percentage of image and video data is high.

  16. Barker-coded excitation in ophthalmological ultrasound imaging

    PubMed Central

    Zhou, Sheng; Wang, Xiao-Chun; Yang, Jun; Ji, Jian-Jun; Wang, Yan-Qun

    2014-01-01

    High-frequency ultrasound is an attractive means to obtain fine-resolution images of biological tissues for ophthalmologic imaging. To address the tradeoff between axial resolution and detection depth inherent in conventional single-pulse excitation, this study develops a new method that uses 13-bit Barker-coded excitation and a mismatched filter for high-frequency ophthalmologic imaging. A novel imaging platform has been designed after trying out various encoding methods. The simulation and experimental results show that the mismatched filter achieves a much higher output mainlobe-to-sidelobe ratio, 9.7 times that of the matched filter. The coded excitation method has significant advantages over the single-pulse excitation system in terms of a lower mechanical index (MI), a higher resolution, and a deeper detection depth, which improve the quality of ophthalmic tissue imaging. Therefore, this method has great value in scientific applications and the medical market.
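
    A minimal numerical illustration of the pulse compression underlying the method: after matched filtering, the 13-bit Barker sequence has a theoretical mainlobe-to-peak-sidelobe ratio of 13:1 (about 22.3 dB). The sketch shows the matched-filter baseline only; the paper's mismatched filter, which suppresses sidelobes further, is not reproduced here.

    ```python
    import numpy as np

    # 13-bit Barker code used as the coded excitation
    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

    rng = np.random.default_rng(0)
    rx = 0.05 * rng.standard_normal(1024)           # toy received trace (noise only)
    rx[300:300 + barker13.size] += barker13         # bury the coded echo at sample 300

    # Matched filtering compresses the 13-sample code into a single mainlobe.
    compressed = np.correlate(rx, barker13, mode="same")
    print(int(np.argmax(np.abs(compressed))))       # peak near the embedded echo
    ```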

  17. Analysis of LAPAN-IPB image lossless compression using differential pulse code modulation and huffman coding

    NASA Astrophysics Data System (ADS)

    Hakim, P. R.; Permala, R.

    2017-01-01

    The LAPAN-A3/IPB satellite is the latest Indonesian experimental microsatellite with remote sensing and earth surveillance missions. The satellite has three optical payloads: a multispectral push-broom imager, a digital matrix camera and a video camera. To increase data transmission efficiency, the multispectral imager data can be compressed using either a lossy or a lossless compression method. This paper analyzes the Differential Pulse Code Modulation (DPCM) method and Huffman coding that are used in LAPAN-IPB satellite image lossless compression. Based on several simulations and analyses, the current LAPAN-IPB lossless compression algorithm has moderate performance. Several aspects of the current configuration can be improved: the type of DPCM code used, the type of Huffman entropy-coding scheme, and the use of a sub-image compression method. The key result of this research is that at least two neighboring pixels should be used in the DPCM calculation to increase compression performance. Meanwhile, varying Huffman tables with a sub-image approach could also increase performance if the on-board computer can support a more complicated algorithm. These results can be used as references in designing the Payload Data Handling System (PDHS) for the upcoming LAPAN-A4 satellite.
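
    A toy version of the DPCM stage reflecting the key result above (a two-neighbour predictor), followed by a first-order entropy estimate as a stand-in for the Huffman coder; this is an illustrative sketch, not the on-board LAPAN-IPB implementation:

    ```python
    import numpy as np

    def dpcm_residuals(img):
        """DPCM with a two-neighbour predictor: mean of the left and upper pixels."""
        img = img.astype(np.int32)
        pred = np.zeros_like(img)
        pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) // 2   # (left + up) / 2
        pred[0, 1:] = img[0, :-1]                           # first row: left neighbour only
        pred[1:, 0] = img[:-1, 0]                           # first column: upper neighbour only
        return img - pred                                   # residuals to be entropy-coded

    def entropy_bits_per_pixel(residuals):
        """First-order entropy of the residuals: a lower bound on the Huffman code length."""
        _, counts = np.unique(residuals, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # toy image
    print(entropy_bits_per_pixel(dpcm_residuals(img)))
    ```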

  18. Image coding by way of wavelets

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1993-01-01

    The application of two wavelet transforms to image compression is discussed. It is noted that the Haar transform, with proper bit allocation, has performance that is visually superior to an algorithm based on a Daubechies filter and to the discrete cosine transform based Joint Photographic Experts Group (JPEG) algorithm at compression ratios exceeding 20:1. In terms of the root-mean-square error, the performance of the Haar transform method is basically comparable to that of the JPEG algorithm. The implementation of the Haar transform can be achieved in integer arithmetic, making it very suitable for applications requiring real-time performance.
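
    The integer-arithmetic property noted above is easy to see in a one-level 2D Haar decomposition, sketched here with unnormalized sums and differences (the scaling can be folded into quantization); this is an illustration, not the paper's bit-allocation scheme:

    ```python
    import numpy as np

    def haar2d_level(img):
        """One level of the 2D Haar transform using only integer sums and differences."""
        a = img.astype(np.int32)
        a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]  # crop to even dimensions
        s = a[:, 0::2] + a[:, 1::2]                          # horizontal sums
        d = a[:, 0::2] - a[:, 1::2]                          # horizontal differences
        ll, lh = s[0::2, :] + s[1::2, :], s[0::2, :] - s[1::2, :]
        hl, hh = d[0::2, :] + d[1::2, :], d[0::2, :] - d[1::2, :]
        return ll, lh, hl, hh                                # approximation + 3 detail bands
    ```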

  19. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.

  20. Coded access optical sensor (CAOS) imager and applications

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2016-04-01

    Starting in 2001, we proposed and extensively demonstrated (using a DMD: Digital Micromirror Device) an agile pixel Spatial Light Modulator (SLM)-based optical imager based on single-pixel photo-detection (also called a single-pixel camera) that is suited for operation with both coherent and incoherent light across broad spectral bands. This imager design operates with the agile pixels programmed in a limited-SNR, staring, time-multiplexed mode, in which image irradiance (i.e., intensity) data are acquired one agile pixel at a time across the SLM plane where the incident image radiation is present. Motivated by modern advances in RF wireless, wired optical communications and electronic signal processing technologies, and using our prior-art SLM-based optical imager design, a new imager design called the Coded Access Optical Sensor (CAOS) is described using a surprisingly simple approach; it has the ability to alleviate some of the key fundamental limitations of prior imagers. The agile pixel in the CAOS imager can operate in different time-frequency coding modes such as Frequency Division Multiple Access (FDMA), Code-Division Multiple Access (CDMA), and Time Division Multiple Access (TDMA). Data from a first CAOS camera demonstration are described along with novel designs of CAOS-based optical instruments for various applications.

  1. Resolution scalable image coding with reversible cellular automata.

    PubMed

    Cappellari, Lorenzo; Milani, Simone; Cruz-Reyes, Carlos; Calvagno, Giancarlo

    2011-05-01

    In a resolution scalable image coding algorithm, a multiresolution representation of the data is often obtained using a linear filter bank. Reversible cellular automata have been recently proposed as simpler, nonlinear filter banks that produce a similar representation. The original image is decomposed into four subbands, such that one of them retains most of the features of the original image at a reduced scale. In this paper, we discuss the utilization of reversible cellular automata and arithmetic coding for scalable compression of binary and grayscale images. In the binary case, the proposed algorithm that uses simple local rules compares well with the JBIG compression standard, in particular for images where the foreground is made of a simple connected region. For complex images, more efficient local rules based upon the lifting principle have been designed. They provide compression performances very close to or even better than JBIG, depending upon the image characteristics. In the grayscale case, and in particular for smooth images such as depth maps, the proposed algorithm outperforms both the JBIG and the JPEG2000 standards under most coding conditions.

  2. A Novel Assessment of Various Bio-Imaging Methods for Lung Tumor Detection and Treatment by using 4-D and 2-D CT Images

    PubMed Central

    Judice A., Antony; Geetha, Dr. K. Parimala

    2013-01-01

    Lung cancer is known as one of the most difficult cancers to cure, and the number of deaths it causes is generally increasing. Detection of lung cancer in its early stage can be helpful for medical treatment to limit the danger, but it is a challenging problem due to the cancer cell structure. Interpretation of medical images is often difficult and time consuming, even for experienced physicians. The aid of image analysis based on machine learning can make this process easier. This paper describes a fully automatic decision support system for lung cancer diagnosis from CT lung images. Most traditional medical diagnosis systems are founded on huge quantities of training data and take long processing times. However, when only a very small volume of data is available, traditional diagnosis systems suffer from defects such as larger errors and time complexity. Focused on the solution to this problem, a medical diagnosis system based on a Hidden Markov Model (HMM) is presented. We describe a preprocessing stage involving noise removal techniques to help solve this problem: the images obtained after scanning the lung CT are preprocessed by mean square error filtering and histogram analysis. Second, the lung areas are separated from the image by a segmentation process (thresholding and region-growing techniques). Finally, an HMM is developed for the classification of cancer nodules. Results are checked for 2D and 4D CT images. This automation process reduces the time complexity and increases the diagnosis confidence.

  3. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos both in their spectral characteristics and in the shape domain of their panchromatic imagery. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we predict the current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band together with the immediately previous band when we apply the HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are validated on three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102

  4. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    PubMed

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D micro-blocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographers Expert Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  5. Pyramidal Image-Processing Code For Hexagonal Grid

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1990-01-01

    Algorithm based on processing of information on intensities of picture elements arranged in regular hexagonal grid. Called "image pyramid" because image information at each processing level arranged in hexagonal grid having one-seventh number of picture elements of next lower processing level, each picture element derived from hexagonal set of seven nearest-neighbor picture elements in next lower level. At lowest level, fine-resolution of elements of original image. Designed to have some properties of image-coding scheme of primate visual cortex.

  6. New coding concept for fast ultrasound imaging using pulse trains

    NASA Astrophysics Data System (ADS)

    Misaridis, Thanasis; Jensen, Joergen A.

    2002-04-01

    Frame rate in ultrasound imaging can be increased by simultaneous transmission of multiple beams using coded waveforms. However, the achievable degree of orthogonality among coded waveforms is limited in ultrasound, and the image quality degrades unacceptably due to interbeam interference. In this paper, an alternative combined time-space coding approach is undertaken. In the new method all transducer elements are excited with short pulses and the high time-bandwidth (TB) product waveforms are generated acoustically. Each element transmits a short pulse spherical wave with a constant transmit delay from element to element, long enough to assure no pulse overlapping for all depths in the image. Frequency shift keying is used for per element coding. The received signals from a point scatterer are staggered pulse trains which are beamformed for all beam directions and further processed with a bank of matched filters (one for each beam direction). Filtering compresses the pulse train to a single pulse at the scatterer position with a number of spike axial sidelobes. Cancellation of the ambiguity spikes is done by applying additional phase modulation from one emission to the next and summing every two successive images. Simulation results presented for QLFM and Costas spatial encoding schemes show that the proposed method can yield images with range sidelobes down to -45 dB using only two emissions.

  7. Simulating Dynamic Stall in a 2D VAWT: Modeling strategy, verification and validation with Particle Image Velocimetry data

    NASA Astrophysics Data System (ADS)

    Simão Ferreira, C. J.; Bijl, H.; van Bussel, G.; van Kuik, G.

    2007-07-01

    The implementation of wind energy conversion systems in the built environment has renewed the interest in, and research on, Vertical Axis Wind Turbines (VAWT), which in this application present several advantages over Horizontal Axis Wind Turbines (HAWT). The VAWT has an inherently unsteady aerodynamic behavior due to the variation of angle of attack with the angle of rotation, perceived velocity and, consequently, Reynolds number. The phenomenon of dynamic stall is thus an intrinsic effect of the operation of a Vertical Axis Wind Turbine at low tip speed ratios, having a significant impact on both loads and power. The complexity of the unsteady aerodynamics of the VAWT makes it extremely attractive to analyze using Computational Fluid Dynamics (CFD) models, in which an approximation of the continuity and momentum equations of the Navier-Stokes set is solved. The complexity of the problem and the need for new design approaches for VAWT for the built environment have driven the authors of this work to focus the research on CFD modeling of VAWT on: comparing the results between commonly used turbulence models, namely URANS (Spalart-Allmaras and k-epsilon) and large eddy models (Large Eddy Simulation and Detached Eddy Simulation); verifying the sensitivity of the model to its grid refinement (space and time); and evaluating the suitability of using Particle Image Velocimetry (PIV) experimental data for model validation. The 2D model created represents the middle section of a single-bladed VAWT with infinite aspect ratio. The model simulates the experimental work of flow field measurement using Particle Image Velocimetry by Simão Ferreira et al. for a single-bladed VAWT. The results show the suitability of the PIV data for the validation of the model, the need for accurate simulation of the large eddies, and the sensitivity of the model to grid refinement.

  8. Watching Silica's Dance: Imaging the Structure and Dynamics of the Atomic (Re-) Arrangements in 2D Glass

    NASA Astrophysics Data System (ADS)

    Muller, David

    2014-03-01

    Even though glasses are almost ubiquitous--in our windows, on our iPhones, even on our faces--they are also mysterious. Because glasses are notoriously difficult to study, basic questions like: ``How are the atoms arranged? Where and how do glasses break?'' are still under contention. We use aberration corrected transmission electron microscopy (TEM) to image the atoms in a new two-dimensional phase of silica glass - freestanding it becomes the world's thinnest pane of glass at only 3-atoms thick, and take a unique look into these questions. Using atom-by-atom imaging and spectroscopy, we are able to reconstruct the full structure and bonding of this 2D glass and identify it as a bi-tetrahedral layer of SiO2. Our images also strikingly resemble Zachariasen's original cartoon models of glasses, drawn in 1932. As such, our work realizes an 80-year-old vision for easily understandable glassy systems and introduces promising methods to test theoretical predictions against experimental data. We image atoms in the disordered solid and track their motions in response to local strain. We directly obtain ring statistics and pair distribution functions that span short-, medium-, and long-range order, and test these against long-standing theoretical predictions of glass structure and dynamics. We use the electron beam to excite atomic rearrangements, producing surprisingly rich and beautiful videos of how a glass bends and breaks, as well as the exchange of atoms at a solid/liquid interface. Detailed analyses of these videos reveal a complex dance of elastic and plastic deformations, phase transitions, and their interplay. These examples illustrate the wide-ranging and fundamental materials physics that can now be studied at atomic-resolution via transmission electron microscopy of two-dimensional glasses. Work in collaboration with: S. Kurasch, U. Kaiser, R. Hovden, Q. Mao, J. Kotakoski, J. S. Alden, A. Shekhawat, A. A. Alemi, J. P. Sethna, P. L. McEuen, A.V. Krasheninnikov

  9. Individual Recognition in Domestic Cattle (Bos taurus): Evidence from 2D-Images of Heads from Different Breeds

    PubMed Central

    Coulon, Marjorie; Deputte, Bertrand L.; Heyman, Yvan; Baudoin, Claude

    2009-01-01

    Background In order to maintain cohesion of groups, social animals need to process social information efficiently. Visual individual recognition, which is distinguished from mere visual discrimination, has been studied in only few mammalian species. In addition, most previous studies used either a small number of subjects or a few various views as test stimuli. Dairy cattle, as a domestic species allow the testing of a good sample size and provide a large variety of test stimuli due to the morphological diversity of breeds. Hence cattle are a suitable model for studying individual visual recognition. This study demonstrates that cattle display visual individual recognition and shows the effect of both familiarity and coat diversity in discrimination. Methodology/Principal Findings We tested whether 8 Prim'Holstein heifers could recognize 2D-images of heads of one cow (face, profiles, ¾ views) from those of other cows. Experiments were based on a simultaneous discrimination paradigm through instrumental conditioning using food rewards. In Experiment 1, all images represented familiar cows (belonging to the same social group) from the Prim'Holstein breed. In Experiments 2, 3 and 4, images were from unfamiliar (unknown) individuals either from the same breed or other breeds. All heifers displayed individual recognition of familiar and unfamiliar individuals from their own breed. Subjects reached criterion sooner when recognizing a familiar individual than when recognizing an unfamiliar one (Exp 1: 3.1±0.7 vs. Exp 2: 5.2±1.2 sessions; Z = 1.99, N = 8, P = 0.046). In addition almost all subjects recognized unknown individuals from different breeds, however with greater difficulty. Conclusions/Significance Our results demonstrated that cattle have efficient individual recognition based on categorization capacities. Social familiarity improved their performance. The recognition of individuals with very different coat characteristics from the subjects was

  10. MR imaging features of idiopathic thoracic spinal cord herniations using combined 3D-fiesta and 2D-PC Cine techniques.

    PubMed

    Ferré, J C; Carsin-Nicol, B; Hamlat, A; Carsin, M; Morandi, X

    2005-03-01

    Idiopathic thoracic spinal cord herniation (TISCH) is a rare cause of surgically treatable progressive myelopathy. The authors report 3 cases of TISCH, diagnosed based on conventional T1- and T2-weighted Spin-Echo (SE) MR images in one case, and on T1- and T2-weighted SE images combined with 3D-FIESTA (Fast Imaging Employing Steady-state Acquisition) and 2D-Phase-Contrast Cine MR imaging in 2 cases. Conventional MRI findings usually provided the diagnosis. 3D-FIESTA images confirmed it, showing the herniated cord in the ventral epidural space. Moreover, in combination with the 2D-Phase-Contrast cine technique, it was a sensitive method for the detection of associated pre- or postoperative cerebrospinal fluid space abnormalities.

  11. Noninvasive real-time 2D imaging of temperature distribution during the plastic pellet cooling process by using electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Hirose, Yusuke; Sapkota, Achyut; Sugawara, Michiko; Takei, Masahiro

    2016-01-01

    This study introduces a concept for imaging a 2D temperature distribution noninvasively and in real time by combining the electrical capacitance tomography (ECT) technique with a permittivity-temperature calibration equation for the plastic pellet cooling process. The concept has two steps: the calculation of relative permittivity from the capacitance measured among the many electrodes by the ECT technique, and the imaging of the temperature distribution from the relative permittivity by the permittivity-temperature calibration equation. An ECT sensor with 12 electrodes was designed to image the cross-sectional temperature distribution during the cooling of polymethyl methacrylate pellets. The images of the temperature distribution were successfully reconstructed from the relative permittivity distribution at every time step during the process. The images reasonably capture the temperature diffusion in 2D space and time, to within time-dependent temperature deviations of 0.0065 and 0.0175, as compared to an analytical thermal conductance simulation and thermocouple measurements.
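
    The second step above reduces to applying the calibration equation pixel-wise to the reconstructed permittivity image; the sketch below assumes a linear permittivity-temperature relation with hypothetical coefficients (the paper's actual calibration equation is not reproduced):

    ```python
    import numpy as np

    def temperature_from_permittivity(eps_r, a, b):
        """Invert an assumed linear calibration eps_r = a + b * T pixel-wise.
        The coefficients a and b are hypothetical placeholders for a lab calibration."""
        return (np.asarray(eps_r, dtype=float) - a) / b

    eps_map = np.full((32, 32), 2.6)                 # toy reconstructed permittivity image
    print(temperature_from_permittivity(eps_map, a=2.5, b=1.0e-3)[0, 0])   # 100.0 (a.u.)
    ```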

  12. A rapid and efficient 2D/3D nuclear segmentation method for analysis of early mouse embryo and stem cell image data.

    PubMed

    Lou, Xinghua; Kang, Minjung; Xenopoulos, Panagiotis; Muñoz-Descalzo, Silvia; Hadjantonakis, Anna-Katerina

    2014-03-11

    Segmentation is a fundamental problem that dominates the success of microscopic image analysis. In almost 25 years of cell detection software development, there is still no single piece of commercial software that works well in practice when applied to early mouse embryo or stem cell image data. To address this need, we developed MINS (modular interactive nuclear segmentation) as a MATLAB/C++-based segmentation tool tailored for counting cells and fluorescent intensity measurements of 2D and 3D image data. Our aim was to develop a tool that is accurate and efficient yet straightforward and user friendly. The MINS pipeline comprises three major cascaded modules: detection, segmentation, and cell position classification. An extensive evaluation of MINS on both 2D and 3D images, and comparison to related tools, reveals improvements in segmentation accuracy and usability. Thus, its accuracy and ease of use will allow MINS to be implemented for routine single-cell-level image analyses.

  13. Sparse matrix beamforming and image reconstruction for 2-D HIFU monitoring using harmonic motion imaging for focused ultrasound (HMIFU) with in vitro validation.

    PubMed

    Hou, Gary Y; Provost, Jean; Grondin, Julien; Wang, Shutao; Marquet, Fabrice; Bunting, Ethan; Konofagou, Elisa E

    2014-11-01

Harmonic motion imaging for focused ultrasound (HMIFU) utilizes an amplitude-modulated HIFU beam to induce a localized focal oscillatory motion that is simultaneously estimated and imaged. The objective of this study is to develop and show the feasibility of a novel fast beamforming algorithm for image reconstruction using GPU-based sparse-matrix operation with real-time feedback. In this study, the algorithm was implemented onto a fully integrated, clinically relevant HMIFU system. A single divergent transmit beam was used while fast beamforming was implemented using a GPU-based delay-and-sum method and a sparse-matrix operation. Axial HMI displacements were then estimated from the RF signals using a 1-D normalized cross-correlation method and streamed to a graphic user interface with frame rates up to 15 Hz, a 100-fold increase compared to conventional CPU-based processing. The real-time feedback rate does not require interrupting the HIFU treatment. Results in phantom experiments showed reproducible HMI images, and monitoring of 22 in vitro HIFU treatments using the new 2-D system showed a consistent average focal displacement decrease of 46.7 ± 14.6% during lesion formation. Complementary focal temperature monitoring indicated average rates of displacement increase and decrease with focal temperature of 0.84 ± 1.15%/°C and 2.03 ± 0.93%/°C, respectively. These results reinforce the capability of HMIFU to estimate and monitor stiffness-related changes in real time. Current ongoing studies include clinical translation of the presented system for monitoring HIFU treatment in breast and pancreatic tumor applications.
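
    The key computational idea here, expressing delay-and-sum beamforming as a single sparse matrix so the same operator can be reapplied to every incoming frame, can be sketched on the CPU with NumPy/SciPy. The array geometry, sampling rate, and pixel grid below are illustrative assumptions rather than the parameters of the authors' 64-element system, and the GPU offloading and cross-correlation displacement estimation are omitted.

```python
import numpy as np
from scipy import sparse

# Illustrative parameters (not the authors' system): 64-element array,
# 20 MHz sampling, 1540 m/s sound speed, a small grid of image pixels.
c, fs = 1540.0, 20e6
n_elem, n_samp = 64, 2048
elem_x = (np.arange(n_elem) - (n_elem - 1) / 2) * 0.3e-3          # element positions [m]
px, pz = np.meshgrid(np.linspace(-5e-3, 5e-3, 32),
                     np.linspace(10e-3, 30e-3, 64))
pix_x, pix_z = px.ravel(), pz.ravel()
n_pix = pix_x.size

# Delay-and-sum as one sparse matrix: each (pixel, element) pair selects the RF
# sample at its round-trip delay; stacking channels gives a single operator of
# shape (n_pix, n_elem * n_samp) that is reused for every frame.
rows, cols, vals = [], [], []
for e in range(n_elem):
    d_tx = np.sqrt(pix_x**2 + pix_z**2)                 # diverging wave, approximated
    d_rx = np.sqrt((pix_x - elem_x[e])**2 + pix_z**2)   # return path to element e
    idx = np.round((d_tx + d_rx) / c * fs).astype(int)
    valid = idx < n_samp
    rows.append(np.flatnonzero(valid))
    cols.append(e * n_samp + idx[valid])
    vals.append(np.ones(valid.sum()))
B = sparse.csr_matrix(
    (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
    shape=(n_pix, n_elem * n_samp))

rf = np.random.randn(n_elem, n_samp)              # one frame of channel RF data
beamformed = (B @ rf.ravel()).reshape(px.shape)   # delay-and-sum image in one product
```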

  14. Sparse matrix beamforming and image reconstruction for real-time 2D HIFU monitoring using Harmonic Motion Imaging for Focused Ultrasound (HMIFU) with in vitro validation

    PubMed Central

    Hou, Gary Y.; Provost, Jean; Grondin, Julien; Wang, Shutao; Marquet, Fabrice; Bunting, Ethan; Konofagou, Elisa E.

    2015-01-01

Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a recently developed High-Intensity Focused Ultrasound (HIFU) treatment monitoring method. HMIFU utilizes an Amplitude-Modulated (fAM = 25 Hz) HIFU beam to induce a localized focal oscillatory motion, which is simultaneously estimated and imaged by a confocally aligned imaging transducer. The feasibility of HMIFU has previously been shown in silico, in vitro, and in vivo for 1-D and 2-D monitoring of HIFU treatment. The objective of this study is to develop and show the feasibility of a novel fast beamforming algorithm for image reconstruction using GPU-based sparse-matrix operation with real-time feedback. In this study, the algorithm was implemented onto a fully integrated, clinically relevant HMIFU system composed of a 93-element HIFU transducer (fcenter = 4.5 MHz) and a coaxially aligned 64-element phased array (fcenter = 2.5 MHz) for displacement excitation and motion estimation, respectively. A single divergent transmit beam was used while fast beamforming was implemented using a GPU-based delay-and-sum method and a sparse-matrix operation. Axial HMI displacements were then estimated from the RF signals using a 1-D normalized cross-correlation method and streamed to a graphic user interface. The present work developed and implemented sparse-matrix beamforming on a fully integrated, clinically relevant system, which can stream displacement images at up to 15 Hz using GPU-based processing, a 100-fold increase in streaming rate compared to conventional CPU-based beamforming and reconstruction. The achieved feedback rate is currently the fastest among the acoustic-radiation-force-based HIFU imaging techniques, and the only approach that does not require interrupting the HIFU treatment. Results in phantom experiments showed reproducible displacement imaging, and monitoring of twenty-two in vitro HIFU treatments using the new 2D system showed a

  15. Effects of x-ray and CT image enhancements on the robustness and accuracy of a rigid 3D/2D image registration.

    PubMed

    Kim, Jinkoo; Yin, Fang-Fang; Zhao, Yang; Kim, Jae Ho

    2005-04-01

A rigid body three-dimensional/two-dimensional (3D/2D) registration method has been implemented using mutual information, gradient ascent, and 3D texturemap-based digitally reconstructed radiographs. Nine combinations of commonly used x-ray and computed tomography (CT) image enhancement methods, including window leveling, histogram equalization, and adaptive histogram equalization, were examined to assess their effects on the accuracy and robustness of the registration method. From a set of experiments using an anthropomorphic chest phantom, we were able to draw several conclusions. First, the CT and x-ray preprocessing combination with the widest attraction range was the one that linearly stretched the histograms onto the entire display range of both CT and x-ray images. The average attraction ranges of this combination were 71.3 mm and 61.3 deg in the translation and rotation dimensions, respectively, and the average errors were 0.12 deg and 0.47 mm. Second, the combination of the CT image containing tissue and bone information and the x-ray images with adaptive histogram equalization also showed subvoxel accuracy, and was the best in the translation dimensions. However, its attraction ranges were the smallest among the examined combinations (on average 36 mm and 19 deg). Last, the bone-only information in the CT image did not show convergence to the correct registration.
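
    The preprocessing reported to give the widest attraction range, a linear stretch of the histogram onto the entire display range applied to both the CT-derived DRR and the x-ray image, amounts to the following sketch; the 8-bit display range is an assumption.

```python
import numpy as np

def stretch_to_full_range(img, out_max=255.0):
    """Linearly stretch an image histogram onto the full display range, the
    preprocessing reported here to give the widest registration attraction range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                        # constant image: nothing to stretch
        return np.zeros_like(img)
    return (img - lo) / (hi - lo) * out_max

# Example: apply the same stretch to a DRR and an x-ray image before
# computing mutual information between them.
drr   = stretch_to_full_range(np.random.rand(256, 256) * 0.3 + 0.1)
x_ray = stretch_to_full_range(np.random.randint(200, 3500, (256, 256)))
```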

  16. Optimised Post-Exposure Image Sharpening Code for L3-CCD Detectors

    SciTech Connect

    Harding, Leon K.; Butler, Raymond F.; Redfern, R. Michael; Sheehan, Brendan J.; McDonald, James

    2008-02-22

As light from celestial bodies traverses Earth's atmosphere, the wavefronts are distorted by atmospheric turbulence, thereby lowering the angular resolution of ground-based imaging. Rapid time-series imaging enables Post-Exposure Image Sharpening (PEIS) techniques, which employ shift-and-add frame registration to remove the tip-tilt component of the wavefront error, as well as telescope wobble, thus benefiting all observations. Further resolution gains are possible by selecting only frames with the best instantaneous seeing, a technique sometimes called 'Lucky Imaging'. We implemented these techniques in the 1990s, with the TRIFFID imaging photon-counting camera and its associated data reduction software. The software was originally written for time-tagged photon-list data formats, recorded by detectors such as the MAMA. This paper describes our deep re-structuring of the software to handle the 2-d FITS images produced by Low Light Level CCD (L3-CCD) cameras, which have sufficient time-series resolution (>30 Hz) for PEIS. As before, our code can perform straight frame co-addition, use composite reference stars, perform PEIS under several different algorithms to determine the tip/tilt shifts, store 'quality' and shift information for each frame, perform frame selection, and generate exposure maps for photometric correction. In addition, new code modules apply all 'static' calibrations (bias subtraction, dark subtraction and flat-fielding) to the frames immediately prior to the other algorithms. A unique feature of our PEIS/Lucky Imaging code is the use of bidirectional Wiener filtering. Coupled with the far higher sensitivity of the L3-CCD over the previous TRIFFID detectors, much fainter reference stars and much narrower time windows can be used.
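
    A stripped-down illustration of the shift-and-add and frame-selection ('Lucky Imaging') steps is sketched below. It registers frames by the position of the brightest pixel and ranks them by peak brightness, which is a simplification for illustration only, not the TRIFFID/L3-CCD pipeline's algorithms.

```python
import numpy as np

def shift_and_add(frames, keep_fraction=0.3):
    """Minimal PEIS/lucky-imaging sketch: rank frames by the peak brightness of
    the reference star, keep the sharpest fraction, register each kept frame to
    the first one by the shift of its brightest pixel, and co-add."""
    frames = np.asarray(frames, dtype=np.float64)
    quality = frames.max(axis=(1, 2))                  # crude instantaneous-seeing proxy
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(quality)[::-1][:n_keep]

    ref_y, ref_x = np.unravel_index(frames[best[0]].argmax(), frames[best[0]].shape)
    stack = np.zeros_like(frames[0])
    for i in best:
        y, x = np.unravel_index(frames[i].argmax(), frames[i].shape)
        stack += np.roll(frames[i], (ref_y - y, ref_x - x), axis=(0, 1))
    return stack / n_keep

# Example: 100 noisy frames of a point source jittered by tip-tilt
rng = np.random.default_rng(0)
frames = np.zeros((100, 64, 64))
for f in frames:
    y, x = 32 + rng.integers(-3, 4, size=2)
    f[y, x] = 50.0
frames += rng.normal(0, 1, frames.shape)
sharpened = shift_and_add(frames)
```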

  17. Digital Image Analysis for DETECHIP(®) Code Determination.

    PubMed

    Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby

    2012-08-01

DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods.
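
    The core measurement, mean red-green-blue values of each sensor spot in a scanned image, can be reproduced outside ImageJ with a short Python sketch; the spot coordinates and the +/-/0 coding rule shown are hypothetical illustrations, not the published macro.

```python
import numpy as np
from PIL import Image  # Pillow, assumed available

def mean_rgb(image_path, boxes):
    """Measure mean red-green-blue values inside rectangular regions of a scanned
    sensor-array image; 'boxes' are (left, upper, right, lower) pixel coordinates
    of each spot (hypothetical layout, set per scan)."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float64)
    results = []
    for left, upper, right, lower in boxes:
        region = img[upper:lower, left:right]
        results.append(region.reshape(-1, 3).mean(axis=0))   # [mean R, mean G, mean B]
    return np.array(results)

# Example (illustrative rule, not the published one): compare spot colours before
# and after analyte exposure and derive a +/-/0 code from the sign of the change.
# before = mean_rgb("detechip_blank.jpg", boxes)
# after  = mean_rgb("detechip_analyte.jpg", boxes)
# code = np.sign(np.round(after - before))
```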

  18. Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Schafer, S.; Stayman, J. W.; Zbijewski, W.; Kleinszig, G.; Graumann, R.; Khanna, A. J.; Siewerdsen, J. H.

    2012-02-01

Localization of target vertebrae is an essential step in minimally invasive spine surgery, with conventional methods relying on "level counting" - i.e., manual counting of vertebrae under fluoroscopy starting from readily identifiable anatomy (e.g., the sacrum). The approach requires an undesirable amount of radiation and time, and is prone to counting errors due to the similar appearance of vertebrae in projection images; wrong-level surgery occurs in 1 of every ~3000 cases. This paper proposes a method to automatically localize target vertebrae in x-ray projections using 3D-2D registration between preoperative CT (in which vertebrae are preoperatively labeled) and intraoperative fluoroscopy. The registration uses an intensity-based approach with a gradient-based similarity metric and the CMA-ES algorithm for optimization. Digitally reconstructed radiographs (DRRs) and a robust similarity metric are computed on the GPU to accelerate the process. Evaluation in clinical CT data included 5,000 PA and LAT projections randomly perturbed to simulate human variability in setup of a mobile intraoperative C-arm. The method demonstrated 100% success for the PA view (projection error: 0.42 mm) and 99.8% success for the LAT view (projection error: 0.37 mm). The initial implementation on GPU provided automatic target localization within about 3 s, with further improvement underway via multi-GPU processing. The ability to automatically label vertebrae in fluoroscopy promises to streamline surgical workflow, improve patient safety, and reduce wrong-site surgeries, especially in large patients for whom manual methods are time consuming and error prone.
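
    One plausible form of the gradient-based similarity metric driving such a 3D-2D registration is the normalized correlation of image gradients between the DRR and the fluoroscopic image, sketched below; the exact metric and the CMA-ES optimization loop of the paper are not reproduced.

```python
import numpy as np

def gradient_correlation(fixed, moving, eps=1e-12):
    """One plausible gradient-based similarity metric: the mean of the normalized
    cross-correlations of the horizontal and vertical image gradients, evaluated
    between a fluoroscopic image and a DRR.  An illustrative choice, not
    necessarily the exact metric used in the paper."""
    score = 0.0
    for axis in (0, 1):
        gf = np.gradient(fixed.astype(np.float64), axis=axis)
        gm = np.gradient(moving.astype(np.float64), axis=axis)
        gf -= gf.mean()
        gm -= gm.mean()
        score += (gf * gm).sum() / (np.linalg.norm(gf) * np.linalg.norm(gm) + eps)
    return score / 2.0

# In a registration loop this score would be recomputed for the DRR rendered at
# each candidate pose and maximized by a black-box optimizer such as CMA-ES.
fluoro = np.random.rand(128, 128)
drr = fluoro + 0.05 * np.random.rand(128, 128)
print(gradient_correlation(fluoro, drr))
```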

  19. Hydrodynamic study of freely swimming shark fish propulsion for marine vehicles using 2D particle image velocimetry.

    PubMed

    Babu, Mannam Naga Praveen; Mallikarjuna, J M; Krishnankutty, P

Two-dimensional velocity fields around a freely swimming freshwater black shark fish in the longitudinal (XZ) plane and transverse (YZ) plane are measured using digital particle image velocimetry (DPIV). By transferring momentum to the fluid, fish generate thrust. Thrust is generated not only by the caudal fin, but also by the pectoral and anal fins, whose contributions depend on the fish's morphology and swimming movements. These fins also act as roll and pitch stabilizers for the swimming fish. In this paper, studies are performed on the flow induced by the fins of a freely swimming undulatory carangiform-swimming fish (freshwater black shark, L = 26 cm) using an experimental hydrodynamic approach based on a quantitative flow visualization technique. We used 2D PIV to visualize the water flow pattern in the wake of the caudal, pectoral and anal fins of the swimming fish at speeds of 0.5-1.5 body lengths per second. The kinematic analysis and pressure distribution of the carangiform fish are presented here. The fish body and fin undulations create circular flow patterns (vortices) that travel along with the body waves and change the flow around the tail to increase the swimming efficiency. The wake of the different fins of the swimming fish consists of two counter-rotating vortices about the mean path of fish motion. These wakes resemble a reverse von Kármán vortex street, which is a thrust-producing wake. The velocity vectors around a C-start maneuvering fish (in which a straight-swimming fish bends into a C-shape) are also discussed in this paper. Studying flows around flapping fins will contribute to the design of bioinspired propulsors for marine vehicles.

  20. 2D segmentation of intervertebral discs and its degree of degeneration from T2-weighted magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Castro-Mateos, Isaac; Pozo, José Maria; Lazary, Aron; Frangi, Alejandro F.

    2014-03-01

Low back pain (LBP) is a disorder suffered by a large population around the world. A key factor causing this illness is Intervertebral Disc (IVD) degeneration, whose early diagnosis could help in preventing this widespread condition. Clinicians base their diagnosis on visual inspection of 2D slices of Magnetic Resonance (MR) images, which is subject to large interobserver variability. In this work, an automatic classification method is presented, which provides the Pfirrmann degree of degeneration from a mid-sagittal MR slice. The proposed method utilizes Active Contour Models, with a new geometrical energy, to achieve an initial segmentation, which is further improved using fuzzy C-means. Then, IVDs are classified according to their degree of degeneration. This classification is attained by employing AdaBoost on five specific features: the mean and variance of the nucleus probability map, computed with two different approaches, and the eccentricity of the ellipse fitted to the IVD contour. The classification method was evaluated using a cohort of 150 intervertebral discs assessed by three experts, resulting in a mean specificity (93%) and sensitivity (83%) similar to those of each individual expert with respect to the majority vote. The segmentation accuracy was evaluated using the Dice Similarity Index (DSI) and the Root Mean Square Error (RMSE) of the point-to-contour distance. The mean DSI ± 2 standard deviations was 91.7% ± 5.6%, the mean RMSE was 0.82 mm, and the 95th percentile was 1.36 mm. These results were found to be accurate when compared to the state of the art.

  1. Sparse representation-based image restoration via nonlocal supervised coding

    NASA Astrophysics Data System (ADS)

    Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng

    2016-10-01

Sparse representation (SR) and nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain faithful restoration results when they are used independently. To improve the performance, a nonlocal supervised coding strategy-based NLT for image restoration is proposed in this paper. The novel method has three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed, which integrates supervised weight learning and nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
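
    The numerical scheme named in the abstract builds on iterative shrinkage thresholding; a generic, textbook ISTA for the underlying sparse-coding subproblem is sketched below as a point of reference (dictionary, regularization weight, and data are synthetic assumptions, not the paper's formulation).

```python
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Generic iterative shrinkage-thresholding (ISTA) for the sparse-coding
    subproblem  min_a 0.5*||y - D a||_2^2 + lam*||a||_1 .
    A textbook baseline, not the exact numerical scheme of the paper."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return a

# Example: recover a sparse code from a random overcomplete dictionary
rng = np.random.default_rng(1)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = rng.normal(size=5)
y = D @ a_true + 0.01 * rng.normal(size=64)
a_hat = ista(D, y, lam=0.05)
```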

  2. Image coding based on energy-sorted wavelet packets

    NASA Astrophysics Data System (ADS)

    Kong, Lin-Wen; Lay, Kuen-Tsair

    1995-04-01

The discrete wavelet transform performs multiresolution analysis, which effectively decomposes a digital image into components with different degrees of detail. In practice, it is usually implemented in the form of filter banks. If the filter banks are cascaded and both the low-pass and the high-pass components are further decomposed, a wavelet packet is obtained. The coefficients of the wavelet packet effectively represent subimages at different resolution levels. In the energy-sorted wavelet-packet decomposition, all subimages in the packet are sorted according to their energies. The most important subimages, as measured by the energy, are preserved and coded. By investigating the histogram of each subimage, it is found that the pixel values are well modelled by the Laplacian distribution. Therefore, Laplacian quantization is applied to quantize the subimages. Experimental results show that the image coding scheme based on wavelet packets achieves a high compression ratio while preserving satisfactory image quality.
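
    The energy-sorting step can be reproduced with PyWavelets (assumed available): decompose the image into a full wavelet packet, compute each leaf subimage's energy, and keep the most energetic ones. Quantization and entropy coding are omitted, and the wavelet and depth are arbitrary choices.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def energy_sorted_subimages(image, wavelet="db4", level=2):
    """Decompose an image into a full wavelet packet and return its leaf
    subimages sorted by decreasing energy, mirroring the energy-sorting step."""
    wp = pywt.WaveletPacket2D(data=image, wavelet=wavelet, maxlevel=level)
    leaves = wp.get_level(level)
    ranked = [(float((node.data ** 2).sum()), node.path, node.data) for node in leaves]
    return sorted(ranked, key=lambda t: t[0], reverse=True)

# Example: keep only the most energetic subimages of a toy image
image = np.random.rand(128, 128)
ranked = energy_sorted_subimages(image)
kept = ranked[: len(ranked) // 4]        # preserve the top 25% of subimages by energy
```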

  3. Study on scalable coding algorithm for medical image.

    PubMed

    Hongxin, Chen; Zhengguang, Liu; Hongwei, Zhang

    2005-01-01

According to the characteristics of medical images and the wavelet transform, a scalable coding algorithm is presented, which can be used for image transmission over networks. The wavelet transform makes up for the weaknesses of the DCT and is similar to the human visual system. The second-generation wavelet transform, the lifting scheme, can be computed entirely in integer form: it is divided into several steps, each realized by integer-to-integer calculation. The lifting scheme simplifies the computation and increases transform precision. According to the properties of the wavelet subbands, wavelet coefficients are organized in order of their importance, so the code stream is formed progressively and is scalable in resolution. Experimental results show that the algorithm can be used effectively for medical image compression and is suitable for remote browsing.
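
    The integer-to-integer lifting idea can be illustrated with the simplest reversible case, the Haar/S-transform, in which a predict step and an update step are carried out entirely in integer arithmetic; the paper's actual lifting filters are not specified here, so this is only a minimal sketch.

```python
import numpy as np

def s_transform_forward(x):
    """Reversible integer Haar (S-transform) via lifting: split into even/odd
    samples, predict, then update, all with integer-to-integer arithmetic."""
    even, odd = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    detail = odd - even                    # predict step
    approx = even + (detail >> 1)          # update step (floor division by 2)
    return approx, detail

def s_transform_inverse(approx, detail):
    """Exact inverse: undo the update, then the predict, then interleave."""
    even = approx - (detail >> 1)
    odd = detail + even
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

# Perfect reconstruction on integer data, the property that makes the scheme
# suitable for lossless and progressive medical image coding.
x = np.random.randint(0, 256, size=16)
a, d = s_transform_forward(x)
assert np.array_equal(s_transform_inverse(a, d), x)
```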

  4. Validity of computational hemodynamics in human arteries based on 3D time-of-flight MR angiography and 2D electrocardiogram gated phase contrast images

    NASA Astrophysics Data System (ADS)

    Yu, Huidan (Whitney); Chen, Xi; Chen, Rou; Wang, Zhiqiang; Lin, Chen; Kralik, Stephen; Zhao, Ye

    2015-11-01

    In this work, we demonstrate the validity of 4-D patient-specific computational hemodynamics (PSCH) based on 3-D time-of-flight (TOF) MR angiography (MRA) and 2-D electrocardiogram (ECG) gated phase contrast (PC) images. The mesoscale lattice Boltzmann method (LBM) is employed to segment morphological arterial geometry from TOF MRA, to extract velocity profiles from ECG PC images, and to simulate fluid dynamics on a unified GPU accelerated computational platform. Two healthy volunteers are recruited to participate in the study. For each volunteer, a 3-D high resolution TOF MRA image and 10 2-D ECG gated PC images are acquired to provide the morphological geometry and the time-varying flow velocity profiles for necessary inputs of the PSCH. Validation results will be presented through comparisons of LBM vs. 4D Flow Software for flow rates and LBM simulation vs. MRA measurement for blood flow velocity maps. Indiana University Health (IUH) Values Fund.

  5. Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect

    SciTech Connect

    Frary, R.; Louie, J.; Pullammanappallil, S.; Eisses, A.

    2016-08-01

    Roxanna Frary, John N. Louie, Sathish Pullammanappallil, Amy Eisses, 2011, Preliminary 3d depth migration of a network of 2d seismic lines for fault imaging at a Pyramid Lake, Nevada geothermal prospect: presented at American Geophysical Union Fall Meeting, San Francisco, Dec. 5-9, abstract T13G-07.

  6. LineCast: line-based distributed coding and transmission for broadcasting satellite images.

    PubMed

    Wu, Feng; Peng, Xiulian; Xu, Jizheng

    2014-03-01

In this paper, we propose a novel coding and transmission scheme, called LineCast, for broadcasting satellite images to a large number of receivers. The proposed LineCast is well matched to the line-scanning cameras that are widely adopted in orbiting satellites to capture high-resolution images. On the sender side, each captured line is immediately compressed by a transform-domain scalar modulo quantization. Without syndrome coding, the transmission power is directly allocated to quantized coefficients by scaling the coefficients according to their distributions. Finally, the scaled coefficients are transmitted over a dense constellation. This line-based distributed scheme features low delay, low memory cost, and low complexity. On the receiver side, our proposed line-based prediction is used to generate side information from previously decoded lines, which fully utilizes the correlation among lines. The quantized coefficients are decoded by a linear least-squares estimator from the received data. The image line is then reconstructed by scalar modulo dequantization using the generated side information. Since there is neither syndrome coding nor channel coding, the proposed LineCast allows a large number of receivers to reach qualities matching their channel conditions. Our theoretical analysis shows that the proposed LineCast can achieve Shannon's optimum performance by using a high-dimensional modulo-lattice quantization. Experiments on satellite images demonstrate that it achieves up to 1.9 dB gain over the state-of-the-art 2D broadcasting scheme and a gain of more than 5 dB over JPEG 2000 with forward error correction.
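
    The central mechanism, scalar modulo quantization resolved at the decoder by line-based side information, can be sketched in a few lines; the fine quantization, power scaling, and constellation mapping of LineCast are omitted, and the step size and prediction error below are toy values.

```python
import numpy as np

def modulo_quantize(x, step):
    """Scalar modulo folding: keep only the position of x within one lattice cell
    of width 'step' (centered representative in [-step/2, step/2))."""
    return x - step * np.round(x / step)

def modulo_dequantize(m, side_info, step):
    """Resolve the modulo ambiguity with side information: pick the lattice shift
    that brings the folded value m closest to the decoder's prediction."""
    return m + step * np.round((side_info - m) / step)

# Toy example: the decoder's line-based prediction is close to the true value,
# so the coarse folded residual is enough to reconstruct it exactly.
step = 8.0
x = np.array([40.3, -17.6, 3.2])            # transform coefficients of one line
side = x + np.random.uniform(-2, 2, 3)      # prediction from previously decoded lines
m = modulo_quantize(x, step)
x_hat = modulo_dequantize(m, side, step)    # equals x when |prediction error| < step/2
```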

  7. Computer-aided 2D and 3D quantification of human stem cell fate from in vitro samples using Volocity high performance image analysis software.

    PubMed

    Piltti, Katja M; Haus, Daniel L; Do, Eileen; Perez, Harvey; Anderson, A J; Cummings, B J

    2011-11-01

Accurate automated cell fate analysis of immunostained human stem cells from 2- and 3-dimensional (2D-3D) images would improve efficiency in the field of stem cell research. Development of an accurate and precise tool that reduces variability and the time needed for human stem cell fate analysis will improve productivity and interpretability of the data across research groups. In this study, we have created protocols for the high-performance image analysis software Volocity® to classify and quantify cytoplasmic and nuclear cell fate markers from 2D-3D images of human neural stem cells after in vitro differentiation. To enhance 3D image capture efficiency, we optimized the image acquisition settings of an Olympus FV10i® confocal laser scanning microscope to match our quantification protocols and improve cell fate classification. The methods developed in this study will allow for more time-efficient and accurate software-based, operator-validated stem cell fate classification and quantification from 2D and 3D images, and yielded the highest correspondence (≥94.4%) with human-recognized objects.

  8. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    SciTech Connect

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  9. Fractal image coding based on replaced domain pools

    NASA Astrophysics Data System (ADS)

    Harada, Masaki; Fujii, Toshiaki; Kimoto, Tadahiko; Tanimoto, Masayuki

    1998-01-01

Fractal image coding based on iterated function systems has been attracting much interest because of the possibility of drastic data compression. It performs compression by using the self-similarity contained in an image. In conventional schemes, under the assumption of self-similarity in the image, each range block is mapped from the larger domain block that is considered the most suitable to approximate it. However, even if the exact self-similarity of an image is found at the encoder, it hardly holds at the decoder, because the domain pool of the encoder is different from that of the decoder. In this paper, we propose a fractal image coding scheme using domain pools replaced with decoded or transformed values to reduce the difference between the domain pool of the encoder and that of the decoder. The proposed scheme performs two-stage encoding. The domain pool is replaced with decoded non-contractive blocks first and then with transformed values for contractive blocks. It is expected that the proposed scheme reduces the errors of contractive blocks in the reconstructed image while those of non-contractive blocks are kept unchanged. The experimental results show the effectiveness of the proposed scheme.
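
    The conventional range-domain matching that the scheme builds on can be sketched as a least-squares search for a contractive scale-and-offset map; the paper's two-stage domain-pool replacement is not implemented here, so this is only a baseline illustration.

```python
import numpy as np

def best_domain_match(range_block, domain_blocks, s_max=0.9):
    """For one range block, find the domain block and the affine map (scale s,
    offset o) minimizing ||s*D + o - R||^2 by least squares, clipping |s| below 1
    to keep the map contractive.  A baseline matcher for conventional fractal
    coding; the paper's domain-pool replacement is not shown."""
    R = range_block.ravel().astype(np.float64)
    best = (np.inf, None, 0.0, 0.0)
    for k, D_blk in enumerate(domain_blocks):
        D = D_blk.ravel().astype(np.float64)
        var = ((D - D.mean()) ** 2).sum()
        s = 0.0 if var == 0 else ((D - D.mean()) * (R - R.mean())).sum() / var
        s = np.clip(s, -s_max, s_max)
        o = R.mean() - s * D.mean()
        err = ((s * D + o - R) ** 2).sum()
        if err < best[0]:
            best = (err, k, s, o)
    return best   # (error, domain index, scale, offset)

# Example: domain blocks are 16x16 regions subsampled to the 8x8 range-block size
img = np.random.rand(64, 64)
domains = [img[i:i+16:2, j:j+16:2] for i in range(0, 48, 8) for j in range(0, 48, 8)]
err, idx, s, o = best_domain_match(img[0:8, 0:8], domains)
```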

  10. Assessment of liver fibrosis with 2-D shear wave elastography in comparison to transient elastography and acoustic radiation force impulse imaging in patients with chronic liver disease.

    PubMed

    Gerber, Ludmila; Kasper, Daniela; Fitting, Daniel; Knop, Viola; Vermehren, Annika; Sprinzl, Kathrin; Hansmann, Martin L; Herrmann, Eva; Bojunga, Joerg; Albert, Joerg; Sarrazin, Christoph; Zeuzem, Stefan; Friedrich-Rust, Mireen

    2015-09-01

    Two-dimensional shear wave elastography (2-D SWE) is an ultrasound-based elastography method integrated into a conventional ultrasound machine. It can evaluate larger regions of interest and, therefore, might be better at determining the overall fibrosis distribution. The aim of this prospective study was to compare 2-D SWE with the two best evaluated liver elastography methods, transient elastography and acoustic radiation force impulse (point SWE using acoustic radiation force impulse) imaging, in the same population group. The study included 132 patients with chronic hepatopathies, in which liver stiffness was evaluated using transient elastography, acoustic radiation force impulse imaging and 2-D SWE. The reference methods were liver biopsy for the assessment of liver fibrosis (n = 101) and magnetic resonance imaging/computed tomography for the diagnosis of liver cirrhosis (n = 31). No significant difference in diagnostic accuracy, assessed as the area under the receiver operating characteristic curve (AUROC), was found between the three elastography methods (2-D SWE, transient elastography, acoustic radiation force impulse imaging) for the diagnosis of significant and advanced fibrosis and liver cirrhosis in the "per protocol" (AUROCs for fibrosis stages ≥2: 0.90, 0.95 and 0.91; for fibrosis stage [F] ≥3: 0.93, 0.95 and 0.94; for F = 4: 0.92, 0.96 and 0.92) and "intention to diagnose" cohort (AUROCs for F ≥2: 0.87, 0.92 and 0.91; for F ≥3: 0.91, 0.93 and 0.94; for F = 4: 0.88, 0.90 and 0.89). Therefore, 2-D SWE, ARFI imaging and transient elastography seem to be comparably good methods for non-invasive assessment of liver fibrosis.

  11. Comparative noise performance of a coded aperture spectral imager

    NASA Astrophysics Data System (ADS)

    Piper, Jonathan; Yuen, Peter; Godfree, Peter; Ding, Mengjia; Soori, Umair; Selvagumar, Senthurran; James, David

    2016-10-01

    Novel types of spectral sensors using coded apertures may offer various advantages over conventional designs, especially the possibility of compressive measurements that could exceed the expected spatial, temporal or spectral resolution of the system. However, the nature of the measurement process imposes certain limitations, especially on the noise performance of the sensor. This paper considers a particular type of coded-aperture spectral imager and uses analytical and numerical modelling to compare its expected noise performance with conventional hyperspectral sensors. It is shown that conventional sensors may have an advantage in conditions where signal levels are high, such as bright light or slow scanning, but that coded-aperture sensors may be advantageous in low-signal conditions.

  12. Towards the clinical integration of an image-guided navigation system for percutaneous liver tumor ablation using freehand 2D ultrasound images.

    PubMed

    Spinczyk, Dominik

    2015-01-01

Primary and metastatic liver tumors constitute a significant challenge for contemporary medicine. Several improvements are currently being developed and implemented to advance image navigation systems for percutaneous liver focal lesion ablation in clinical applications at the diagnosis, planning and intervention stages. First, the automatic generation of an anatomically accurate parametric model of the preoperative patient liver was proposed, in addition to a method to visually evaluate and manually correct it. Second, a marker was designed to facilitate rigid registration between the model of the preoperative patient liver and the patient during treatment. A specific approach was implemented and tested for rigid mapping by continuously tracking a set of uniquely identified markers and by accounting for breathing motion, facilitating the determination of the optimal breathing phase for needle insertion into the liver tissue. Third, to overcome the challenge of tracking the absolute position of the planned target point, an intra-operative ultrasound (US) system was integrated based on the Public Software Library for UltraSound and the OpenIGTLink protocol, which tracks breathing motion in a 2D time sequence of US images. Additionally, to improve the visibility of liver focal lesions, an approach to determine spatio-temporal correspondence between the US sequence and a 4D computed tomography (CT) examination was developed, implemented and tested. The proposed anatomical-model processing and rigid registration approach, and the implemented US tracking and fusion method, were tested on 20 anonymized CT datasets and in 10 clinical cases, respectively. The presented methodology can be used with older 2D US systems, which are still widely used in clinical practice.

  13. Intraoperative evaluation of device placement in spine surgery using known-component 3D-2D image registration.

    PubMed

    Uneri, A; De Silva, T; Goerres, J; Jacobson, M W; Ketcha, M D; Reaungamornrat, S; Kleinszig, G; Vogt, S; Khanna, A J; Osgood, G M; Wolinsky, J-P; Siewerdsen, J H

    2017-04-21

Intraoperative x-ray radiography/fluoroscopy is commonly used to assess the placement of surgical devices in the operating room (e.g. spine pedicle screws), but qualitative interpretation can fail to reliably detect suboptimal delivery and/or breach of adjacent critical structures. We present a 3D-2D image registration method wherein intraoperative radiographs are leveraged in combination with prior knowledge of the patient and surgical components for quantitative assessment of device placement and more rigorous quality assurance (QA) of the surgical product. The algorithm is based on known-component registration (KC-Reg) in which patient-specific preoperative CT and parametric component models are used. The registration performs optimization of gradient similarity, removes the need for offline geometric calibration of the C-arm, and simultaneously solves for multiple component bodies, thereby allowing QA in a single step (e.g. spinal construct with 4-20 screws). Performance was tested in a spine phantom, and first clinical results are reported for QA of transpedicle screws delivered in a patient undergoing thoracolumbar spine surgery. Simultaneous registration of ten pedicle screws (five contralateral pairs) demonstrated mean target registration error (TRE) of 1.1 ± 0.1 mm at the screw tip and 0.7 ± 0.4° in angulation when a prior geometric calibration was used. The calibration-free formulation, with the aid of component collision constraints, achieved TRE of 1.4 ± 0.6 mm. In all cases, a statistically significant improvement (p < 0.05) was observed for the simultaneous solutions in comparison to previously reported sequential solution of individual components. Initial application in clinical data in spine surgery demonstrated TRE of 2.7 ± 2.6 mm and 1.5 ± 0.8°. The KC-Reg algorithm offers an independent check and quantitative QA of the surgical product using radiographic/fluoroscopic views

  14. Wavelet-based zerotree coding of aerospace images

    NASA Astrophysics Data System (ADS)

    Franques, Victoria T.; Jain, Vijay K.

    1996-06-01

This paper presents a wavelet-based image coding method achieving high levels of compression. A multi-resolution subband decomposition system is constructed using Quadrature Mirror Filters. Symmetric extension and windowing of the multi-scaled subbands are incorporated to minimize boundary effects. Next, the Embedded Zerotree Wavelet (EZW) coding algorithm is used for data compression. Elimination of the isolated zero symbol, for certain subbands, leads to an improved EZW algorithm. Further compression is obtained with an adaptive arithmetic coder. We achieve a PSNR of 26.91 dB at 0.018 bits/pixel, 35.59 dB at 0.149 bits/pixel, and 43.05 dB at 0.892 bits/pixel for the aerospace image, Refuel.

  15. Improved zerotree coding algorithm for wavelet image compression

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Li, Yunsong; Wu, Chengke

    2000-12-01

A listless minimum zerotree coding algorithm based on the fast lifting wavelet transform, with lower memory requirements and higher compression performance, is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands of a wavelet-transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest-frequency subband. A new listless significance-map coding algorithm based on the minimum zerotree is also proposed, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.

  16. Peak transform for efficient image representation and coding.

    PubMed

    He, Zhihai

    2007-07-01

    In this work, we introduce a nonlinear geometric transform, called peak transform (PT), for efficient image representation and coding. The proposed PT is able to convert high-frequency signals into low-frequency ones, making them much easier to be compressed. Coupled with wavelet transform and subband decomposition, the PT is able to significantly reduce signal energy in high-frequency subbands and achieve a significant transform coding gain. This has important applications in efficient data representation and compression. To maximize the transform coding gain, we develop a dynamic programming solution for optimum PT design. Based on PT, we design an image encoder, called the PT encoder, for efficient image compression. Our extensive experimental results demonstrate that, in wavelet-based subband decomposition, the signal energy in high-frequency subbands can be reduced by up to 60% if a PT is applied. The PT image encoder outperforms state-of-the-art JPEG2000 and H.264 (INTRA) encoders by up to 2-3 dB in peak signal-to-noise ratio (PSNR), especially for images with a significant amount of high-frequency components. Our experimental results also show that the proposed PT is able to efficiently capture and preserve high-frequency image features (e.g., edges) and yields significantly improved visual quality. We believe that the concept explored in this work, designing a nonlinear transform to convert hard-to-compress signals into easy ones, is very useful. We hope this work would motivate more research work along this direction.

  17. Computational radiology and imaging with the MCNP Monte Carlo code

    SciTech Connect

    Estes, G.P.; Taylor, W.M.

    1995-05-01

MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper discusses MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  18. 157km BOTDA with pulse coding and image processing

    NASA Astrophysics Data System (ADS)

    Qian, Xianyang; Wang, Zinan; Wang, Song; Xue, Naitian; Sun, Wei; Zhang, Li; Zhang, Bin; Rao, Yunjiang

    2016-05-01

A repeater-less Brillouin optical time-domain analyzer (BOTDA) with a 157.68 km sensing range is demonstrated, using the combination of random fiber laser Raman pumping and low-noise laser-diode Raman pumping. With optical pulse coding (OPC) and Non-Local Means (NLM) image processing, temperature sensing with ±0.70 °C uncertainty and 8 m spatial resolution is experimentally demonstrated. The image processing approach has been proved to be compatible with OPC, and it further increases the figure-of-merit (FoM) of the system by 57%.
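
    As an illustration of the image-processing idea, a BOTDA measurement can be treated as a 2D image (fiber position versus frequency offset) and denoised with non-local means, here via scikit-image (assumed available); the synthetic data and filter parameters are illustrative, not those of the reported system.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma  # scikit-image, assumed available

# A BOTDA measurement viewed as a 2D image: Brillouin gain vs. fiber position
# (rows) and pump-probe frequency offset (columns).  Data are synthetic.
positions = np.linspace(0, 157.68e3, 400)              # fiber position [m]
freqs = np.linspace(10.6e9, 10.95e9, 120)              # frequency offset [Hz]
bfs = 10.75e9 + 5e6 * np.sin(positions / 2e4)          # synthetic Brillouin frequency shift
gain = np.exp(-((freqs[None, :] - bfs[:, None]) / 30e6) ** 2)
noisy = gain + 0.2 * np.random.randn(*gain.shape)

# Non-local means denoising of the gain "image" before spectral fitting.
sigma = estimate_sigma(noisy)
denoised = denoise_nl_means(noisy, h=0.8 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)

# The Brillouin frequency shift (and hence temperature) is then extracted per
# position, e.g. from the peak of each denoised spectrum.
bfs_est = freqs[np.argmax(denoised, axis=1)]
```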

  19. MTF characterization in 2D and 3D for a high resolution, large field of view flat panel imager for cone beam CT

    NASA Astrophysics Data System (ADS)

    Shah, Jainil; Mann, Steve D.; Tornai, Martin P.; Richmond, Michelle; Zentai, George

    2014-03-01

The 2D and 3D modulation transfer functions (MTFs) of a custom-made, large 40x30 cm2 area, 600-micron CsI-TFT based flat panel imager with 127-micron pixellation, along with its micro-fiber scintillator structure, were characterized in detail using various techniques. The larger area detector yields a reconstructed FOV of 25 cm diameter with an 80 cm SID in CT mode. The MTFs were determined with 1x1 (intrinsic) binning. The 2D MTFs were determined using a 50.8-micron tungsten wire and a solid lead edge, and the 3D MTF was measured using a custom-made phantom consisting of three nearly orthogonal 50.8-micron tungsten wires suspended in an acrylic cubic frame. The 2D projection data were reconstructed using an iterative OSC algorithm with 16 subsets and 5 iterations. As additional verification of the resolution, along with scatter, the Catphan® phantom was also imaged and reconstructed with identical parameters. The measured 2D MTF was ~4% using the wire technique and ~1% using the edge technique at the 3.94 lp/mm Nyquist cut-off frequency. The average 3D MTF measured along the wires was ~8% at the Nyquist frequency. At 50% MTF, the resolutions were 1.2 and 2.1 lp/mm in 2D and 3D, respectively. In the Catphan® phantom, the 1.7 lp/mm bars were easily observed. Lastly, the 3D MTF measured on the three wires had an observed 5.9% RMSD, indicating that the resolution of the imaging system is uniform and spatially independent. This high-performance detector is integrated into a dedicated breast SPECT-CT imaging system.