An efficient direct method for image registration of flat objects
NASA Astrophysics Data System (ADS)
Nikolaev, Dmitry; Tihonkih, Dmitrii; Makovetskii, Artyom; Voronin, Sergei
2017-09-01
Image alignment of rigid surfaces is a rapidly developing area of research and has many practical applications. Alignment methods can be roughly divided into two types: feature-based methods and direct methods. The well-known SURF and SIFT algorithms are examples of feature-based methods. Direct methods are those that exploit pixel intensities without resorting to image features; image-based deformation is a general direct approach to aligning images of deformable objects in 3D space. Nevertheless, it is poorly suited to the registration of images of 3D rigid objects, since the underlying structure cannot be evaluated directly. In this article, we propose a model that is suitable for image alignment of rigid flat objects under various illumination models. The brightness consistency assumption is used to reconstruct the optimal geometrical transformation. Computer simulation results are provided to illustrate the performance of the proposed algorithm for computing a correspondence between the pixels of two images.
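To make the brightness-consistency idea concrete, here is a minimal sketch (not the authors' algorithm) of direct alignment restricted to pure translation; the function name and the Gauss-Newton update are illustrative assumptions, and a full implementation would use the paper's rigid/projective transformation model and illumination handling.

```python
# Minimal sketch of direct, intensity-based alignment under brightness
# consistency, limited to a 2-parameter translation. Names are illustrative.
import numpy as np

def estimate_translation(ref, mov, n_iter=50):
    """Gauss-Newton minimization of sum((mov(x + t) - ref(x))^2) over t."""
    t = np.zeros(2)                                  # (dy, dx) estimate
    gy, gx = np.gradient(mov.astype(float))          # image gradients
    ys, xs = np.mgrid[0:ref.shape[0], 0:ref.shape[1]]
    for _ in range(n_iter):
        # warp mov by the current t (nearest-neighbor sampling, crude but simple)
        yi = np.clip(np.round(ys + t[0]).astype(int), 0, ref.shape[0] - 1)
        xi = np.clip(np.round(xs + t[1]).astype(int), 0, ref.shape[1] - 1)
        r = mov[yi, xi].astype(float) - ref.astype(float)   # brightness residual
        J = np.stack([gy[yi, xi].ravel(), gx[yi, xi].ravel()], axis=1)
        dt, *_ = np.linalg.lstsq(J, -r.ravel(), rcond=None) # Gauss-Newton step
        t += dt
        if np.linalg.norm(dt) < 1e-3:
            break
    return t
```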
Alam, Md Ashraful; Piao, Mei-Lan; Bang, Le Thanh; Kim, Nam
2013-10-01
Viewing-zone control of integral imaging (II) displays using a directional projection and elemental image (EI) resizing method is proposed. Directional projection of EIs with the same size as the microlens pitch causes an EI mismatch at the EI plane. In this method, EIs are generated computationally using a newly introduced algorithm, the directional elemental image generation and resizing algorithm, which considers the directional projection geometry of each pixel and resizes the EIs to prevent the mismatch. Generated EIs are projected as a collimated projection beam with a predefined directional angle, either horizontally or vertically. The proposed II display system allows reconstruction of a 3D image within a predefined viewing zone that is determined by the directional projection angle.
Determination of skeleton and sign map for phase obtaining from a single ESPI image
NASA Astrophysics Data System (ADS)
Yang, Xia; Yu, Qifeng; Fu, Sihua
2009-06-01
A robust method of determining the sign map and skeletons for ESPI images is introduced in this paper. ESPI images have high speckle noise, which makes it difficult to obtain the fringe information, especially from a single image. To overcome the effects of high speckle noise, local directional computing windows are designed according to the fringe directions. Then, by calculating the gradients of the filtered image in directional windows, the sign map and good skeletons can be determined robustly. Based on the sign map, single-image phase-extracting methods such as the quadrature transform can be improved, and based on the skeletons, fringe phases can be obtained directly by normalization methods. Experiments show that this new method is robust and effective for extracting phase from a single ESPI fringe image.
Cardiac-gated parametric images from 82 Rb PET from dynamic frames and direct 4D reconstruction.
Germino, Mary; Carson, Richard E
2018-02-01
Cardiac perfusion PET data can be reconstructed as a dynamic sequence and kinetic modeling performed to quantify myocardial blood flow, or reconstructed as static gated images to quantify function. Parametric images from dynamic PET are conventionally not gated, to allow use of all events with lower noise. An alternative method for dynamic PET is to incorporate the kinetic model into the reconstruction algorithm itself, bypassing the generation of a time series of emission images and directly producing parametric images. So-called "direct reconstruction" can produce parametric images with lower noise than the conventional method because the noise distribution is more easily modeled in projection space than in image space. In this work, we develop direct reconstruction of cardiac-gated parametric images for 82 Rb PET with an extension of the Parametric Motion compensation OSEM List mode Algorithm for Resolution-recovery reconstruction for the one tissue model (PMOLAR-1T). PMOLAR-1T was extended to accommodate model terms to account for spillover from the left and right ventricles into the myocardium. The algorithm was evaluated on a 4D simulated 82 Rb dataset, including a perfusion defect, as well as a human 82 Rb list mode acquisition. The simulated list mode was subsampled into replicates, each with counts comparable to one gate of a gated acquisition. Parametric images were produced by the indirect (separate reconstructions and modeling) and direct methods for each of eight low-count and eight normal-count replicates of the simulated data, and each of eight cardiac gates for the human data. For the direct method, two initialization schemes were tested: uniform initialization, and initialization with the filtered iteration 1 result of the indirect method. For the human dataset, event-by-event respiratory motion compensation was included. The indirect and direct methods were compared for the simulated dataset in terms of bias and coefficient of variation as a function of iteration. Convergence of direct reconstruction was slow with uniform initialization; lower bias was achieved in fewer iterations by initializing with the filtered indirect iteration 1 images. For most parameters and regions evaluated, the direct method achieved the same or lower absolute bias at matched iteration as the indirect method, with 23%-65% lower noise. Additionally, the direct method gave better contrast between the perfusion defect and surrounding normal tissue than the indirect method. Gated parametric images from the human dataset had comparable relative performance of indirect and direct, in terms of mean parameter values per iteration. Changes in myocardial wall thickness and blood pool size across gates were readily visible in the gated parametric images, with higher contrast between myocardium and left ventricle blood pool in parametric images than gated SUV images. Direct reconstruction can produce parametric images with less noise than the indirect method, opening the potential utility of gated parametric imaging for perfusion PET. © 2017 American Association of Physicists in Medicine.
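For reference, a minimal sketch of the one-tissue compartment model with left- and right-ventricle spillover terms of the kind the extended PMOLAR-1T accommodates; the spillover parameterization, uniform time grid, and function names below are common conventions assumed for illustration, not necessarily the paper's exact formulation.

```python
# Sketch of a 1-tissue compartment TAC with LV/RV spillover (illustrative).
import numpy as np

def one_tissue_tac(t, cp, K1, k2):
    """C_T(t) = K1 * [exp(-k2 t) convolved with C_p(t)] on a uniform grid t."""
    dt = t[1] - t[0]
    return K1 * np.convolve(cp, np.exp(-k2 * t))[:len(t)] * dt

def measured_myocardial_tac(t, cp, c_lv, c_rv, K1, k2, f_lv, f_rv):
    """Myocardial model TAC including spillover fractions from LV and RV blood pools."""
    c_t = one_tissue_tac(t, cp, K1, k2)
    return (1.0 - f_lv - f_rv) * c_t + f_lv * c_lv + f_rv * c_rv
```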
Nonlinear PET parametric image reconstruction with MRI information using kernel method
NASA Astrophysics Data System (ADS)
Gong, Kuang; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2017-03-01
Positron Emission Tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neurology. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information. Previously we have used kernel learning to embed MR information in static PET reconstruction and direct Patlak reconstruction. Here we extend this method to direct reconstruction of nonlinear parameters in a compartment model by using the alternating direction of multiplier method (ADMM) algorithm. Simulation studies show that the proposed method can produce superior parametric images compared with existing methods.
Markov random field model-based edge-directed image interpolation.
Li, Min; Nguyen, Truong Q
2008-07-01
This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistical approach. In contrast to explicit edge direction estimation, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through the Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state from the state space. To lower the computational complexity of MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capturing the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture wide surroundings. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror in its upper part, so that luminosity over 360 degrees of the surroundings can be captured in one image. We apply the light field method, which is one technique of Image-Based Rendering (IBR), for generating arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that many view directions can be collected in the light field. Thus our method allows the user to explore a wide scene and achieves a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior environment with an omni-directional camera and successfully generated arbitrary viewpoint images for a virtual tour of the environment.
Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai
2016-04-01
We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model and a residual compensation model. First, the method of image distortion correction is proposed. Image data required by image distortion correction comes from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, shape deformation features of the disparity distribution are discussed, and the method of disparity distortion correction is proposed. A polynomial fitting method is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two models, i.e., the initial vision model and the residual compensation model. We derive the initial vision model by analyzing the direct mapping relationship between object and image points. The residual compensation model is derived based on the residual analysis of the initial vision model. The results show that with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates; however, the traditional pinhole camera model has lower precision for Y and Z coordinates than our model. The method proposed in this paper is very helpful for the micro-gripping system based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
Zhang, Xin; Cui, Jintian; Wang, Weisheng; Lin, Chao
2017-01-01
To address the problem of image texture feature extraction, a direction measure statistic that is based on the directionality of image texture is constructed, and a new method of texture feature extraction, which is based on the direction measure and a gray level co-occurrence matrix (GLCM) fusion algorithm, is proposed in this paper. This method applies the GLCM to extract the texture feature value of an image and integrates the weight factor that is introduced by the direction measure to obtain the final texture feature of an image. A set of classification experiments on high-resolution remote sensing images was performed using a support vector machine (SVM) classifier with the direction measure and gray level co-occurrence matrix fusion algorithm. Both qualitative and quantitative approaches were applied to assess the classification results. The experimental results demonstrated that texture feature extraction based on the fusion algorithm achieved better image recognition and significantly improved classification accuracy. PMID:28640181
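A minimal sketch of the GLCM stage using scikit-image is given below; the direction-measure weight is only a placeholder, since the abstract does not specify how the weight factor is computed (the functions are spelled greycomatrix/greycoprops in older scikit-image releases).

```python
# GLCM texture features in four directions with scikit-image (illustrative).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(gray_u8, weight=1.0):
    # Co-occurrence matrices at distance 1 for 0, 45, 90 and 135 degrees
    glcm = graycomatrix(gray_u8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    feats = {p: graycoprops(glcm, p).mean()          # average over directions
             for p in ("contrast", "correlation", "energy", "homogeneity")}
    # The paper fuses these with a direction-measure weight factor; here we
    # simply scale the feature vector as a stand-in for that fusion step.
    return {k: weight * v for k, v in feats.items()}
```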
Directional sinogram interpolation for sparse angular acquisition in cone-beam computed tomography.
Zhang, Hua; Sonke, Jan-Jakob
2013-01-01
Cone-beam (CB) computed tomography (CT) is widely used in the field of medical imaging for guidance. Inspired by Bertram's directional interpolation (BDI) methods, directional sinogram interpolation (DSI) was implemented to generate more CB projections by optimized (iterative) double-orientation estimation in sinogram space and directional interpolation. A new CBCT was subsequently reconstructed with the Feldkamp algorithm using both the original and interpolated CB projections. The proposed method was evaluated on both phantom and clinical data, and image quality was assessed by the correlation ratio (CR) between the interpolated image and a gold standard obtained from fully measured projections. Additionally, streak artifact reduction and image blur were assessed. In a CBCT reconstructed from 40 acquired projections over an arc of 360 degrees, streak artifacts dropped 20.7% and 6.7% in a thorax phantom when our method was compared to the linear interpolation (LI) and BDI methods. Meanwhile, image blur was assessed with a head-and-neck phantom, where the image blur of DSI was 20.1% and 24.3% less than that of LI and BDI. When our method was compared to the LI and BDI methods, CR increased by 4.4% and 3.1%. Streak artifacts of sparsely acquired CBCT were decreased by our method, and interpolation-induced image blur was kept below that of the other interpolation methods.
Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing
2015-01-01
This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion. PMID:25569749
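A hedged sketch of the described correction idea follows: the Stefan-Boltzmann law E = sigma*T^4 is used so that each block of the corrected VIS image carries roughly the same regional thermal energy as the IR image. The block size and the assumption that the IR image is already calibrated to temperature are illustrative choices, not details taken from the paper.

```python
# Block-wise energy matching of a VIS image to an IR temperature map (sketch).
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def energy_matched_fusion(vis, ir_temperature_K, block=16):
    vis = vis.astype(float)
    fused = vis.copy()
    h, w = vis.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            v = vis[y:y+block, x:x+block]
            t = ir_temperature_K[y:y+block, x:x+block]
            target = SIGMA * np.mean(t ** 4)        # regional thermal energy (IR)
            current = np.mean(v) + 1e-12            # regional energy proxy (VIS)
            # scale the block so its mean matches the IR energy while the
            # high-resolution VIS structure within the block is preserved
            fused[y:y+block, x:x+block] = v * (target / current)
    return fused
```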
NASA Astrophysics Data System (ADS)
Germino, Mary; Gallezot, Jean-Dominque; Yan, Jianhua; Carson, Richard E.
2017-07-01
Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images, then fitting a kinetic model to each voxel time activity curve. Alternatively, ‘direct reconstruction’ incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K1 and distribution volume (VT = K1/k2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K1 and VT with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); VT CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional VT by 51% on average in the [11C]UCB-J dataset. Direct reconstruction of dynamic brain PET with event-by-event motion correction is achievable and dramatically more robust to noise in VT images than the indirect method.
Cheng, Yuhua; Deng, Yiming; Cao, Jing; Xiong, Xin; Bai, Libing; Li, Zhaojun
2013-01-01
In this article, the state-of-the-art multi-wave and hybrid imaging techniques in the field of nondestructive evaluation and structural health monitoring were comprehensively reviewed. A new direction for assessment and health monitoring of various structures by capitalizing on the advantages of those imaging methods was discussed. Although sharing similar system configurations, the imaging physics and principles of multi-wave phenomena and hybrid imaging methods are inherently different. After a brief introduction of nondestructive evaluation (NDE), structural health monitoring (SHM) and their related challenges, several recent advances that have significantly extended imaging methods from laboratory development into practical applications were summarized, followed by conclusions and discussion on future directions. PMID:24287536
Image Mosaic Method Based on SIFT Features of Line Segment
Zhu, Jun; Ren, Mingwu
2014-01-01
This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) feature of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and so on between two images in the panoramic image mosaic process. This method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with the SIFT feature, and matches those directed segments to acquire rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish image mosaicking. Results from experiments on four pairs of images show that our method is robust to changes in resolution, lighting, rotation, and scaling. PMID:24511326
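For orientation, the sketch below shows a standard point-based SIFT matching and RANSAC homography pipeline in OpenCV; it is not the paper's directed-line-segment descriptor, only the surrounding matching/RANSAC machinery that the abstract refers to, with illustrative parameter values.

```python
# Standard SIFT + ratio test + RANSAC homography (illustrative pipeline).
import cv2
import numpy as np

def estimate_homography(img1, img2):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC removes wrong pairs before the final homography estimate
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```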
Apparatus and method for a light direction sensor
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2011-01-01
The present invention provides a light direction sensor for determining the direction of a light source. The system includes an image sensor; a spacer attached to the image sensor; and a pattern mask attached to said spacer. The pattern mask has a slit pattern such that, as light passes through it, a diffraction pattern is cast onto the image sensor. The method operates by receiving a beam of light onto a patterned mask, wherein the patterned mask has a plurality of slit segments. The beam of light is then diffused onto an image sensor and the direction of the light source is determined.
Eye gazing direction inspection based on image processing technique
NASA Astrophysics Data System (ADS)
Hao, Qun; Song, Yong
2005-02-01
According to research results in neural biology, human eyes obtain high resolution only at the center of the field of view. In our research on a virtual reality (VR) helmet, we aim to detect the gazing direction of human eyes in real time and feed it back to the control system to improve the resolution of the graphics at the center of the field of view. Given current display instruments, this method can balance the field of view of the virtual scene with its resolution and greatly improve the immersion of the virtual system. Therefore, detecting the gazing direction of human eyes rapidly and exactly is the basis for realizing the design scheme of this novel VR helmet. In this paper, the conventional method of gazing direction detection based on the Purkinje spot is introduced first. To overcome the disadvantages of the Purkinje-spot-based method, this paper proposes a method based on image processing to detect and determine the gazing direction. The locations of the pupils and the shapes of the eye sockets change with the gazing direction. By analyzing the images of the eyes captured by the cameras with the aid of these changes, the gazing direction of human eyes can finally be determined. In this paper, experiments have been done to validate the efficiency of this method by analyzing the images. The algorithm detects the gazing direction directly from normal eye images and eliminates the need for special hardware. Experimental results show that the method is easy to implement and has high precision.
NASA Astrophysics Data System (ADS)
Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing
2016-03-01
Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection, drug development and delivery, etc. To image those parameters, there are indirect methods, which are easier to implement but tend to provide images with low signal-to-noise ratio, and direct methods, which model all the measurement noises together and are statistically more efficient. The direct reconstruction methods in dynamic FMT have attracted a lot of attention recently. However, the coupling of tomographic image reconstruction and the nonlinearity of kinetic parameter estimation due to the compartment modeling has imposed a huge computational burden on the direct reconstruction of the kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: the dynamic FMT image reconstruction and the node-wise nonlinear least squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be the first step toward combined dynamic PET and FMT imaging in the future.
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.
2016-04-01
Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the possibility of OCT detecting such changes using a computer texture analysis method based on Haralick texture features, fractal dimension, and the complex directional field method applied to different tissues. These features were used to identify spatial characteristics that differentiate healthy tissue from various skin cancers in cross-sectional OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; in this paper, an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images was used. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field calculated by the local gradient methodology to increase the quality of the diagnostic assessment. The complex directional field (as well as the "classical" directional field) can help describe an image as a set of directions. Considering the fact that malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, which suggests the possible existence of principal directions in OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy patients. The problem of distinguishing melanoma from nevi is addressed in this work owing to the large quantity of experimental data (143 OCT images including tumors such as basal cell carcinoma (BCC) and malignant melanoma (MM), as well as nevi). We obtained a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select regions of interest automatically.
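A minimal sketch of the box-counting fractal dimension feature mentioned above, applied to a binary (e.g., thresholded or edge) image; the box sizes and the preprocessing are illustrative choices rather than the authors' settings.

```python
# Box-counting estimate of fractal dimension for a 2D binary image (sketch).
import numpy as np

def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    h, w = binary_img.shape
    for s in sizes:
        # count boxes of side s that contain at least one foreground pixel
        n = 0
        for y in range(0, h, s):
            for x in range(0, w, s):
                if binary_img[y:y+s, x:x+s].any():
                    n += 1
        counts.append(n)
    # slope of log(count) versus log(1/size) estimates the fractal dimension
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]
```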
Rapid enumeration of viable bacteria by image analysis
NASA Technical Reports Server (NTRS)
Singh, A.; Pyle, B. H.; McFeters, G. A.
1989-01-01
A direct viable counting method for enumerating viable bacteria was modified and made compatible with image analysis. A comparison was made between viable cell counts determined by the spread plate method and direct viable counts obtained using epifluorescence microscopy either manually or by automatic image analysis. Cultures of Escherichia coli, Salmonella typhimurium, Vibrio cholerae, Yersinia enterocolitica and Pseudomonas aeruginosa were incubated at 35 degrees C in a dilute nutrient medium containing nalidixic acid. Filtered samples were stained for epifluorescence microscopy and analysed manually as well as by image analysis. Cells enlarged after incubation were considered viable. The viable cell counts determined using image analysis were higher than those obtained by either the direct manual count of viable cells or spread plate methods. The volume of sample filtered or the number of cells in the original sample did not influence the efficiency of the method. However, the optimal concentration of nalidixic acid (2.5-20 micrograms ml-1) and length of incubation (4-8 h) varied with the culture tested. The results of this study showed that under optimal conditions, the modification of the direct viable count method in combination with image analysis microscopy provided an efficient and quantitative technique for counting viable bacteria in a short time.
Higuchi Dimension of Digital Images
Ahammer, Helmut
2011-01-01
There exist several methods for calculating the fractal dimension of objects represented as 2D digital images. For example, box counting, Minkowski dilation, or Fourier analysis can be employed. However, there appear to be some limitations. It is not possible to calculate only the fractal dimension of an irregular region of interest in an image or to perform the calculations in a particular direction along a line at an arbitrary angle through the image. The calculations must be made for the whole image. In this paper, a new method to overcome these limitations is proposed. 2D images are appropriately prepared in order to apply 1D signal analyses, originally developed to investigate nonlinear time series. The Higuchi dimension of these 1D signals is calculated using Higuchi's algorithm, and it is shown that both regions of interest and directional dependencies can be evaluated independently of the whole picture. A thorough validation of the proposed technique and a comparison of the new method to the Fourier dimension, a common two-dimensional method for digital images, are given. The main result is that Higuchi's algorithm allows a direction-dependent as well as a direction-independent analysis. Actual values for the fractal dimensions are reliable and an effective treatment of regions of interest is possible. Moreover, the proposed method is not restricted to Higuchi's algorithm, as any 1D method of analysis can be applied. PMID:21931854
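A compact sketch of Higuchi's algorithm for a 1D signal, such as a row, column, or arbitrary line of gray values extracted from an image as described above; kmax is an illustrative choice.

```python
# Higuchi fractal dimension of a 1D signal (sketch).
import numpy as np

def higuchi_dimension(x, kmax=8):
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):                       # offset m = 0 .. k-1
            n = (N - 1 - m) // k                 # number of steps of size k
            if n < 1:
                continue
            idx = m + np.arange(n + 1) * k
            length = np.abs(np.diff(x[idx])).sum()
            # normalization factor from Higuchi's definition
            Lk.append(length * (N - 1) / (n * k) / k)
        L.append(np.mean(Lk))
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(L), 1)
    return slope    # estimated fractal dimension (between 1 and 2 for curves)
```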
Mutual conversion between B-mode image and acoustic impedance image
NASA Astrophysics Data System (ADS)
Chean, Tan Wei; Hozumi, Naohiro; Yoshida, Sachiko; Kobayashi, Kazuto; Ogura, Yuki
2017-07-01
To study the acoustic properties of a B-mode image, two analysis methods are proposed in this report. The first method is the conversion of an acoustic impedance image into a B-mode image (Z to B). The time domain reflectometry theory and the transmission line model were used as references in the calculation. The second method is the direct conversion of a B-mode image into an acoustic impedance image (B to Z). The theoretical background of the second method is similar to that of the first method; however, the calculation is in the opposite direction. Significant scatter, refraction, and attenuation were assumed not to take place during the propagation of an ultrasonic wave. Hence, they were ignored in both calculations. In this study, rat cerebellar tissue and human cheek skin were used to determine the feasibility of the first and second methods, respectively. Good results were obtained, and hence both methods show possible applications in the study of the acoustic properties of B-mode images.
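One plausible reading of the B-to-Z direction, written out as a sketch: if scattering, refraction, and attenuation are neglected, each interface reflection r relates neighboring impedances by r = (Z2 - Z1)/(Z2 + Z1), so impedance can be accumulated along a scan line from a known starting value. The mapping from B-mode amplitude to a signed reflection coefficient and the starting impedance are assumptions for illustration.

```python
# Accumulate acoustic impedance along a scan line from reflection coefficients.
import numpy as np

def impedance_profile(reflection_coeffs, z0=1.5e6):    # z0 ~ soft tissue, Rayl
    z = [z0]
    for r in reflection_coeffs:                         # one value per depth sample
        r = np.clip(r, -0.99, 0.99)
        z.append(z[-1] * (1.0 + r) / (1.0 - r))         # invert r = (Z2-Z1)/(Z2+Z1)
    return np.array(z)
```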
Performance Analysis and Experimental Validation of the Direct Strain Imaging Method
Athanasios Iliopoulos; John G. Michopoulos; John C. Hermanson
2013-01-01
Direct Strain Imaging accomplishes full field measurement of the strain tensor on the surface of a deforming body, by utilizing arbitrarily oriented engineering strain measurements originating from digital imaging. In this paper an evaluation of the method's performance with respect to its operating parameter space is presented along with a preliminary...
Tie Points Extraction for SAR Images Based on Differential Constraints
NASA Astrophysics Data System (ADS)
Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.
2018-04-01
Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TPs extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in azimuth direction and large in range direction. Image pyramids are built firstly, and then corresponding layers of pyramids are matched from the top to the bottom. In the process, the similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which appends strong constraints in azimuth direction and weak constraints in range direction. Matching points in the lower pyramid images are predicted with the local bilinear transformation model in range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.
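A minimal sketch of the NCC similarity used for matching, computed over a rectangular window whose long side is parallel to the azimuth direction; the window shape in the usage comment is an illustrative choice.

```python
# Normalized cross correlation between two image windows (sketch).
import numpy as np

def ncc(win_a, win_b):
    a = win_a.astype(float).ravel()
    b = win_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

# Example: compare a 64 (azimuth) x 16 (range) window at the same location in
# a master and a slave SAR amplitude image:
#   score = ncc(master[y:y+64, x:x+16], slave[y:y+64, x:x+16])
```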
Circular Data Images for Directional Data
NASA Technical Reports Server (NTRS)
Morpet, William J.
2004-01-01
Directional data includes vectors, points on a unit sphere, axis orientation, angular direction, and circular or periodic data. The theoretical statistics for circular data (random points on a unit circle) or spherical data (random points on a unit sphere) are a recent development. An overview of existing graphical methods for the display of directional data is given. Cross-over occurs when periodic data are measured on a scale for the measurement of linear variables. For example, if angle is represented by a linear color gradient changing uniformly from dark blue at -180 degrees to bright red at +180 degrees, the color image will be discontinuous at +180 degrees and -180 degrees, which are the same location. The resultant color would depend on the direction of approach to the cross-over point. A new graphical method for imaging directional data is described, which affords high resolution without color discontinuity from "cross-over". It is called the circular data image. The circular data image uses a circular color scale in which colors repeat periodically. Some examples of the circular data image include direction of earth winds on a global scale, rocket motor internal flow, earth global magnetic field direction, and rocket motor nozzle vector direction vs. time.
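A hedged sketch of the circular-color-scale idea using matplotlib's cyclic colormaps ('twilight' or 'hsv'), so that -180 and +180 degrees map to the same color and no cross-over discontinuity appears; the synthetic data and figure styling are illustrative.

```python
# Display angular data with a cyclic colormap to avoid cross-over (sketch).
import numpy as np
import matplotlib.pyplot as plt

angles_deg = np.random.uniform(-180, 180, size=(64, 64))   # synthetic directions
plt.imshow(angles_deg, cmap="twilight", vmin=-180, vmax=180)
plt.colorbar(label="direction (degrees)")
plt.title("circular data image with a cyclic color scale")
plt.show()
```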
3-D ultrasound volume reconstruction using the direct frame interpolation method.
Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin
2010-11-01
A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.
Edge directed image interpolation with Bamberger pyramids
NASA Astrophysics Data System (ADS)
Rosiles, Jose Gerardo
2005-08-01
Image interpolation is a standard feature in digital image editing software, digital camera systems and printers. Classical methods for resizing produce blurred images with unacceptable quality. Bamberger pyramids and filter banks have been successfully used for texture and image analysis. They provide excellent multiresolution and directional selectivity. In this paper we present an edge-directed image interpolation algorithm which takes advantage of the simultaneous spatial-directional edge localization at the subband level. The proposed algorithm outperforms classical schemes such as bilinear and bicubic interpolation from both a visual and a numerical point of view.
Compressed domain indexing of losslessly compressed images
NASA Astrophysics Data System (ADS)
Schaefer, Gerald
2001-12-01
Image retrieval and image compression have been pursued separately in the past. Only little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
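An illustrative sketch of the first idea: the residuals of a simple lossless predictor already form a texture-like description that can be indexed. The median edge detector (MED) predictor below is the one used in JPEG-LS; using its residual histogram as the index is a simplified stand-in for the paper's descriptor, and the bin settings are assumptions.

```python
# MED-predictor residual histogram as a simple compressed-domain index (sketch).
import numpy as np

def med_residual_histogram(img, bins=64):
    img = img.astype(int)
    x = img[1:, 1:]
    a = img[1:, :-1]      # left neighbor
    b = img[:-1, 1:]      # neighbor above
    c = img[:-1, :-1]     # upper-left neighbor
    pred = np.where(c >= np.maximum(a, b), np.minimum(a, b),
           np.where(c <= np.minimum(a, b), np.maximum(a, b), a + b - c))
    res = x - pred                                 # prediction residuals
    hist, _ = np.histogram(res, bins=bins, range=(-255, 255), density=True)
    return hist                                    # compare histograms to retrieve
```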
Rapid 3D bioprinting from medical images: an application to bone scaffolding
NASA Astrophysics Data System (ADS)
Lee, Daniel Z.; Peng, Matthew W.; Shinde, Rohit; Khalid, Arbab; Hong, Abigail; Pennacchi, Sara; Dawit, Abel; Sipzner, Daniel; Udupa, Jayaram K.; Rajapakse, Chamith S.
2018-03-01
Bioprinting of tissue has applications throughout medicine. Recent advances in medical imaging allow the generation of 3-dimensional models that can then be 3D printed. However, the conventional method of converting medical images to 3D printable G-Code instructions has several limitations, namely significant processing time for large, high-resolution images, and the loss of microstructural surface information from surface resolution and subsequent reslicing. We have overcome these issues by creating a JAVA program that skips the intermediate triangulation and reslicing steps and directly converts binary DICOM images into G-Code. In this study, we tested the two methods of G-Code generation on the application of synthetic bone graft scaffold generation. We imaged human cadaveric proximal femurs at an isotropic resolution of 0.03 mm using a high-resolution peripheral quantitative computed tomography (HR-pQCT) scanner. These images, in the Digital Imaging and Communications in Medicine (DICOM) format, were then processed through two methods. In each method, slices and regions of print were selected, filtered to generate a smoothed image, and thresholded. In the conventional method, these processed images are converted to the STereoLithography (STL) format and then resliced to generate G-Code. In the new, direct method, these processed images are run through our JAVA program and directly converted to G-Code. File size, processing time, and print time were measured for each. We found that the new method produced a significant reduction in G-Code file size as well as processing time (92.23% reduction). This allows for more rapid 3D printing from medical images.
NASA Astrophysics Data System (ADS)
Yu, Fei; Hui, Mei; Zhao, Yue-jin
2009-08-01
An image block matching algorithm based on motion vectors of correlated pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique that obtains information about the relative motion among frames of dynamic image sequences by means of digital image processing. In this method, the matching parameters are calculated from the vectors projected in the oblique direction. These matching parameters contain information about the vectors in the transverse and vertical directions of the image blocks at the same time, so better matching information can be obtained after performing correlation operations in the oblique direction. An iterative weighted least-squares method is used to eliminate block-matching errors; the weights are related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image can be obtained by weighted least squares from the estimates of the blocks chosen evenly across the image. Then, the shaking image can be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by applying simulated annealing to the block-matching search. An image processing system based on a DSP was used to test this algorithm. The core processor in the DSP system is a TI TMS320C6416, and a CCD camera with a resolution of 720×576 pixels provided the input video signal. Experimental results show that the algorithm runs on the real-time processing system and achieves accurate matching precision.
[An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].
Xu, Yonghong; Gao, Shangce; Hao, Xiaofei
2016-04-01
Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional spectral quaternion interpolation. Firstly, we decomposed diffusion tensors, with the direction of the tensors represented by quaternions. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of the tensors, but also preserve the tensor anisotropy at the same time. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
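A hedged sketch of spectral-style tensor interpolation (not the paper's exact improved method): eigendecompose the two tensors, interpolate the orientations with quaternion slerp and the eigenvalues linearly, then recompose; the eigenvalue weighting and sign handling are illustrative choices.

```python
# Spectral-style interpolation between two diffusion tensors (sketch).
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_tensor(D1, D2, t):
    w1, V1 = np.linalg.eigh(D1)            # eigenvalues (ascending), eigenvectors
    w2, V2 = np.linalg.eigh(D2)
    # make both eigenvector frames proper rotations (det = +1)
    if np.linalg.det(V1) < 0:
        V1[:, 0] *= -1
    if np.linalg.det(V2) < 0:
        V2[:, 0] *= -1
    rots = Rotation.from_matrix(np.stack([V1, V2]))
    R = Slerp([0.0, 1.0], rots)([t]).as_matrix()[0]   # interpolated orientation
    w = (1 - t) * w1 + t * w2                         # interpolated eigenvalues
    return R @ np.diag(w) @ R.T
```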
Vessel Enhancement and Segmentation of 4D CT Lung Image Using Stick Tensor Voting
NASA Astrophysics Data System (ADS)
Cong, Tan; Hao, Yang; Jingli, Shi; Xuan, Yang
2016-12-01
Vessel enhancement and segmentation play a significant role in medical image analysis. This paper proposes a novel vessel enhancement and segmentation method for 4D CT lung images using a stick tensor voting algorithm, which focuses on addressing the vessel distortion issue of the vessel enhancement diffusion (VED) method. Furthermore, the enhanced results are easily segmented using level-set segmentation. In our method, vessels are first filtered using Frangi's filter to reduce intrapulmonary noise and extract rough blood vessels. Second, the stick tensor voting algorithm is employed to estimate the correct direction along the vessel. Then the estimated direction along the vessel is used as the anisotropic diffusion direction in the VED algorithm, which makes the intensity diffusion of points located at the vessel wall consistent with the vessel directions and enhances the tubular features of vessels. Finally, vessels can be extracted from the enhanced image by applying the level-set segmentation method. A number of experimental results show that our method outperforms the traditional VED method in vessel enhancement and yields satisfactory segmented vessels.
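A minimal sketch of the first stage described above, Frangi vesselness filtering with scikit-image; the sigma range, threshold, and bright-vessel assumption are illustrative, and the subsequent stick tensor voting and VED stages are not reproduced here.

```python
# Rough vessel extraction with the Frangi vesselness filter (sketch).
import numpy as np
from skimage.filters import frangi

def rough_vessels(ct_slice, threshold=0.05):
    # black_ridges=False enhances bright tubular structures on a darker
    # background (an assumption about the input contrast)
    vesselness = frangi(ct_slice.astype(float), sigmas=range(1, 6),
                        black_ridges=False)
    return vesselness, vesselness > threshold   # enhanced map and rough mask
```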
Kim, Minsoo; Jung, Na Young; Park, Chang Kyu; Chang, Won Seok; Jung, Hyun Ho; Chang, Jin Woo
2018-06-01
Stereotactic procedures are image guided, often using magnetic resonance (MR) images limited by image distortion, which may influence targets for stereotactic procedures. The aim of this work was to assess methods of identifying target coordinates for stereotactic procedures with MR in multiple phase-encoding directions. In 30 patients undergoing deep brain stimulation, we acquired 5 image sets: stereotactic brain computed tomography (CT), T2-weighted images (T2WI), and T1WI in both right-to-left (RL) and anterior-to-posterior (AP) phase-encoding directions. Using CT coordinates as a reference, we analyzed anterior commissure and posterior commissure coordinates to identify any distortion relating to phase-encoding direction. Compared with CT coordinates, RL-directed images had more positive x-axis values (0.51 mm in T1WI, 0.58 mm in T2WI). AP-directed images had more negative y-axis values (0.44 mm in T1WI, 0.59 mm in T2WI). We adopted 2 methods to predict CT coordinates with MR image sets: parallel translation and selective choice of axes according to phase-encoding direction. Both were equally effective at predicting CT coordinates using only MR; however, the latter may be easier to use in clinical settings. Acquiring MR in multiple phase-encoding directions and selecting axes according to the phase-encoding direction allows identification of more accurate coordinates for stereotactic procedures. © 2018 S. Karger AG, Basel.
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the total generalized variation (TGV) regularizer of second-order. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
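One plausible reading of the two-stage model described above, written out for concreteness; the exact weights, constraints, and discretizations are not given in the abstract and are assumptions here.

```latex
% Kernel estimation: sparse (L1) plus piecewise-smooth (squared L2 on the
% derivative) regularization of the kernel k, given the blurred image y and a
% latent sharp estimate x:
\hat{k} = \arg\min_{k \ge 0,\; \sum k = 1}
  \tfrac{1}{2}\,\lVert k * x - y \rVert_2^2
  + \lambda_1 \lVert k \rVert_1
  + \lambda_2 \lVert \nabla k \rVert_2^2

% Non-blind deconvolution: L1 data fidelity with a second-order TGV prior,
% solved with ADMM:
\hat{x} = \arg\min_{x} \lVert \hat{k} * x - y \rVert_1
  + \mathrm{TGV}^{2}_{\alpha}(x)
```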
High-resolution electron microscopy and its applications.
Li, F H
1987-12-01
A review of research on high-resolution electron microscopy (HREM) carried out at the Institute of Physics, the Chinese Academy of Sciences, is presented. Apart from the direct observation of crystal and quasicrystal defects for some alloys, oxides, minerals, etc., and the structure determination for some minute crystals, an approximate image-contrast theory named pseudo-weak-phase object approximation (PWPOA), which shows the image contrast change with crystal thickness, is described. Within the framework of PWPOA, the image contrast of lithium ions in the crystal of R-Li2Ti3O7 has been observed. The usefulness of diffraction analysis techniques such as the direct method and Patterson method in HREM is discussed. Image deconvolution and resolution enhancement for weak-phase objects by use of the direct method are illustrated. In addition, preliminary results of image restoration for thick crystals are given.
Directly imaging steeply-dipping fault zones in geothermal fields with multicomponent seismic data
Chen, Ting; Huang, Lianjie
2015-07-30
For characterizing geothermal systems, it is important to have clear images of steeply-dipping fault zones because they may confine the boundaries of geothermal reservoirs and influence hydrothermal flow. Elastic reverse-time migration (ERTM) is the most promising tool for subsurface imaging with multicomponent seismic data. However, conventional ERTM usually generates significant artifacts caused by the cross correlation of undesired wavefields and the polarity reversal of shear waves. In addition, it is difficult for conventional ERTM to directly image steeply-dipping fault zones. We develop a new ERTM imaging method in this paper to reduce these artifacts and directly image steeply-dipping fault zones. In our new ERTM method, forward-propagated source wavefields and backward-propagated receiver wavefields are decomposed into compressional (P) and shear (S) components. Furthermore, each component of these wavefields is separated into left- and right-going, or downgoing and upgoing waves. The cross correlation imaging condition is applied to the separated wavefields along opposite propagation directions. For converted waves (P-to-S or S-to-P), the polarity correction is applied to the separated wavefields based on the analysis of Poynting vectors. Numerical imaging examples of synthetic seismic data demonstrate that our new ERTM method produces high-resolution images of steeply-dipping fault zones.
Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.
Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman
2010-08-07
We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
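A minimal sketch of the anato-functional joint entropy used as the prior: estimate the joint histogram of MR intensities and parametric PET values, then compute H = -sum(p log p); the bin count is an illustrative choice.

```python
# Joint entropy of MR and parametric PET values from a 2D histogram (sketch).
import numpy as np

def joint_entropy(mr_values, pet_values, bins=64):
    hist, _, _ = np.histogram2d(mr_values.ravel(), pet_values.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                        # ignore empty bins
    return float(-(p * np.log(p)).sum())
```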
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
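For reference, a sketch of the (indirect) Patlak analysis referred to above: for a nearly irreversible tracer, C_T(t)/C_p(t) is linear in the normalized integral of the plasma input at late times, with slope Ki and intercept V0; the frame-selection threshold and the assumption of a positive plasma curve are illustrative.

```python
# Patlak graphical analysis of a tissue time-activity curve (sketch).
import numpy as np

def patlak_fit(ct, cp, t, t_star=20.0):
    """ct, cp: tissue and plasma TACs at frame mid-times t (minutes); cp > 0 assumed."""
    # cumulative trapezoidal integral of the plasma input
    cum_cp = np.concatenate(([0.0],
                             np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    x = cum_cp / cp                     # Patlak abscissa ("normalized time")
    y = ct / cp                         # Patlak ordinate
    late = t >= t_star                  # use only the linear (late) portion
    Ki, V0 = np.polyfit(x[late], y[late], 1)
    return Ki, V0
```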
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has been a long-standing problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures. PMID:26007744
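A hedged sketch of the IHS fusion step mentioned above: move the multispectral image into an intensity-hue-saturation-like space, replace the intensity channel with the higher-resolution intensity estimate, and convert back; scikit-image's HSV conversion is used here as a stand-in for the IHS transform, and the resizing step is an assumption.

```python
# Intensity-substitution fusion of a low-resolution color image with a
# high-resolution intensity estimate (sketch, HSV used in place of IHS).
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb
from skimage.transform import resize

def ihs_fuse(ms_rgb_lr, intensity_hr):
    hsv = rgb2hsv(resize(ms_rgb_lr, intensity_hr.shape + (3,),
                         anti_aliasing=True))
    hsv[..., 2] = np.clip(intensity_hr, 0.0, 1.0)   # swap in the HR intensity
    return hsv2rgb(hsv)
```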
An edge-directed interpolation method for fetal spine MR images.
Yu, Shaode; Zhang, Rui; Wu, Shibin; Hu, Jiani; Xie, Yaoqin
2013-10-10
Fetal spinal magnetic resonance imaging (MRI) is a prenatal routine for proper assessment of fetus development, especially when suspected spinal malformations occur while ultrasound fails to provide details. Limited by hardware, fetal spine MR images suffer from low resolution. High-resolution MR images can directly enhance readability and improve diagnosis accuracy. Image interpolation for higher resolution is required in clinical situations, yet many methods fail to preserve edge structures. Edges carry important structural information about objects in visual scenes, which doctors use to detect suspicious regions, classify malformations and make correct diagnoses. Effective interpolation with well-preserved edge structures is still challenging. In this paper, we propose an edge-directed interpolation (EDI) method and apply it to a group of fetal spine MR images to evaluate its feasibility and performance. This method takes edge information from the Canny edge detector to guide further pixel modification. First, low-resolution (LR) images of the fetal spine are interpolated into high-resolution (HR) images by the targeted factor using the bilinear method. Then edge information from the LR and HR images is used in a twofold strategy to sharpen or soften edge structures. Finally, an HR image with well-preserved edge structures is generated. The HR images obtained from the proposed method are validated and compared with those from four other EDI methods. Performance is evaluated using six metrics, and subjective analysis of visual quality is based on regions of interest (ROI). All five EDI methods are able to generate HR images with enriched details. In the quantitative analysis of the six metrics, the proposed method outperforms the other four in signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structure similarity index (SSIM), feature similarity index (FSIM) and mutual information (MI), with seconds-level time consumption (TC). Visual analysis of the ROIs shows that the proposed method maintains better consistency of edge structures with the original images. The proposed method classifies edge orientations into four categories and preserves structures well. It generates convincing HR images with fine details and is suitable for real-time situations. The iterative curvature-based interpolation (ICBI) method may produce crisper edges, while the other three methods are sensitive to noise and artifacts.
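The pipeline described above (bilinear upsampling, Canny edge extraction, then edge-guided modification) can be sketched as follows. This is a simplified stand-in for the paper's twofold sharpen/soften strategy, assuming scikit-image and SciPy; the dilation radius and unsharp amount are illustrative parameters, not the authors' values.

```python
import numpy as np
from skimage.transform import resize
from skimage.feature import canny
from skimage.filters import gaussian
from scipy.ndimage import binary_dilation

def edge_guided_upsample(lr, factor=2, sigma=1.0, amount=0.8):
    """Simplified edge-guided interpolation: bilinear upsampling followed by
    unsharp masking applied only near Canny edges (a stand-in for the paper's
    twofold sharpen/soften strategy)."""
    hr_shape = (lr.shape[0] * factor, lr.shape[1] * factor)
    hr = resize(lr, hr_shape, order=1, anti_aliasing=False)     # bilinear upscale

    # edge map from the LR image, upsampled and thickened into an edge neighbourhood
    edges_lr = canny(lr, sigma=1.0)
    edges_hr = resize(edges_lr.astype(float), hr_shape, order=0) > 0.5
    edge_zone = binary_dilation(edges_hr, iterations=factor)

    # sharpen only within the edge zone, keep smooth regions untouched
    blurred = gaussian(hr, sigma=sigma)
    sharpened = hr + amount * (hr - blurred)
    return np.where(edge_zone, sharpened, hr)
```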
Oriented modulation for watermarking in direct binary search halftone images.
Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der
2012-09-01
In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding while maintaining high image quality. The technique is capable of embedding watermarks with pixel depths up to 3 bits without causing prominent degradation of the image quality. To achieve high image quality, parallel-oriented, high-efficiency direct binary search (DBS) halftoning is selected for integration with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and a naïve Bayes classifier is used to analyze the extracted features and ultimately decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and achieves the processing efficiency and robustness required for printing applications.
Accessing High Spatial Resolution in Astronomy Using Interference Methods
ERIC Educational Resources Information Center
Carbonel, Cyril; Grasset, Sébastien; Maysonnave, Jean
2018-01-01
In astronomy, methods such as direct imaging or interferometry-based techniques (Michelson stellar interferometry for example) are used for observations. A particular advantage of interferometry is that it permits greater spatial resolution compared to direct imaging with a single telescope, which is limited by diffraction owing to the aperture of…
Deblurring adaptive optics retinal images using deep convolutional neural networks.
Fei, Xiao; Zhao, Junlei; Zhao, Haoxin; Yun, Dai; Zhang, Yudong
2017-12-01
Adaptive optics (AO) can be used to compensate for ocular aberrations to achieve near diffraction-limited, high-resolution retinal images. However, many factors, such as the limited aberration measurement and correction accuracy of AO, intraocular scatter and imaging noise, degrade the quality of retinal images. Image post-processing is an indispensable and economical way to compensate for the limitations of the AO retinal imaging procedure. In this paper, we proposed, for the first time, a deep learning method to restore degraded retinal images. The method directly learned an end-to-end mapping between the blurred and restored retinal images. The mapping was represented as a deep convolutional neural network that was trained to output high-quality images directly from blurry inputs without any preprocessing. This network was validated on synthetically generated retinal images as well as real AO retinal images. The assessment of the restored retinal images demonstrated that the image quality had been significantly improved.
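A minimal sketch of the end-to-end mapping idea is given below in PyTorch. The layer sizes, the residual connection and the training loop are illustrative assumptions, not the authors' architecture or training protocol.

```python
import torch
import torch.nn as nn

class RetinalDeblurCNN(nn.Module):
    """A small blurred-to-restored CNN in the spirit of the paper
    (layer sizes are illustrative, not the authors' architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, blurred):
        # residual learning (an assumption here): predict the correction to the input
        return blurred + self.net(blurred)

# training step sketch: minimise pixel-wise MSE between restored and sharp images
model = RetinalDeblurCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

blurred = torch.rand(8, 1, 64, 64)   # synthetic mini-batch (batch, channel, H, W)
sharp = torch.rand(8, 1, 64, 64)
restored = model(blurred)
loss = loss_fn(restored, sharp)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```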
Processing the image gradient field using a topographic primal sketch approach.
Gambaruto, A M
2015-03-01
The spatial derivatives of the image intensity provide topographic information that may be used to identify and segment objects. The accurate computation of the derivatives is often hampered in medical images by the presence of noise and a limited resolution. This paper focuses on accurate computation of spatial derivatives and their subsequent use to process an image gradient field directly, from which an image with improved characteristics can be reconstructed. The improvements include noise reduction, contrast enhancement, thinning object contours and the preservation of edges. Processing the gradient field directly instead of the image is shown to have numerous benefits. The approach is developed such that the steps are modular, allowing the overall method to be improved and possibly tailored to different applications. As presented, the approach relies on a topographic representation and primal sketch of an image. Comparisons with existing image processing methods on a synthetic image and different medical images show improved results and accuracy in segmentation. Here, the focus is on objects with low spatial resolution, which is often the case in medical images. The methods developed show the importance of improved accuracy in derivative calculation and the potential in processing the image gradient field directly. Copyright © 2015 John Wiley & Sons, Ltd.
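A common way to turn a processed gradient field back into an image, consistent with the idea described above, is to solve a Poisson equation. The sketch below uses an FFT-based solver under a periodic-boundary assumption; it illustrates the reconstruction step generically and is not the paper's topographic primal sketch method.

```python
import numpy as np

def reconstruct_from_gradient(gx, gy):
    """Recover an image from a (possibly processed) gradient field by solving the
    Poisson equation with an FFT solver. Periodic boundaries are assumed and the
    mean intensity (lost with the gradient) is set to zero.
    gx, gy are backward differences along x (axis 1) and y (axis 0)."""
    H, W = gx.shape
    # forward-difference divergence -> the standard 5-point Laplacian of the image
    div = (np.roll(gx, -1, axis=1) - gx) + (np.roll(gy, -1, axis=0) - gy)

    # eigenvalues of the periodic 5-point Laplacian in the Fourier domain
    fy = np.fft.fftfreq(H).reshape(-1, 1)
    fx = np.fft.fftfreq(W).reshape(1, -1)
    denom = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
    denom[0, 0] = 1.0                       # avoid 0/0 at the DC term

    U = np.fft.fft2(div) / denom
    U[0, 0] = 0.0                           # undetermined mean set to zero
    return np.real(np.fft.ifft2(U))

# example: mild contrast enhancement by amplifying the gradient field
img = np.random.rand(128, 128)
gx = img - np.roll(img, 1, axis=1)          # backward differences
gy = img - np.roll(img, 1, axis=0)
enhanced = reconstruct_from_gradient(1.5 * gx, 1.5 * gy)
```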
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, T; Dong, X; Petrongolo, M
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
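The formulation described (a least-squares data term weighted by the inverse variance-covariance matrix of the decomposed images, plus smoothness regularization) can be sketched as follows. This is a loose illustration only: a spatially constant covariance, a plain gradient-descent solver and all parameter values are assumptions, not the authors' implementation.

```python
import numpy as np

def iterative_decomposition(z_high, z_low, A, cov, beta=0.1, n_iter=200, step=0.1):
    """Image-domain DECT decomposition sketch: covariance-weighted least-squares
    fidelity to the direct decomposition plus a quadratic smoothness penalty,
    minimised by plain gradient descent.
    A   : 2x2 matrix mapping the two material images to (high, low) CT numbers
    cov : 2x2 variance-covariance matrix of the direct decomposition (assumed
          spatially constant here for simplicity)."""
    z = np.stack([z_high, z_low])                 # (2, H, W)
    A_inv = np.linalg.inv(A)
    x0 = np.einsum('mk,kij->mij', A_inv, z)       # direct (noisy) decomposition
    W = np.linalg.inv(cov)                        # penalty weight

    x = x0.copy()
    for _ in range(n_iter):
        # gradient of the covariance-weighted least-squares fidelity term
        g_fid = 2.0 * np.einsum('mk,kij->mij', W, x - x0)
        # smoothness penalty pushes each pixel toward its 4-neighbour average
        lap = (np.roll(x, 1, 1) + np.roll(x, -1, 1) +
               np.roll(x, 1, 2) + np.roll(x, -1, 2) - 4 * x)
        x -= step * (g_fid - 2.0 * beta * lap)
    return x
```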
Open-loop measurement of data sampling point for SPM
NASA Astrophysics Data System (ADS)
Wang, Yueyu; Zhao, Xuezeng
2006-03-01
The scanning probe microscope (SPM) provides "three-dimensional images" with nanometer-level resolution, and some SPMs can be used as metrology tools. However, SPM images are commonly distorted by the non-ideal properties of the piezoelectric scanner, which reduces metrological accuracy and data repeatability. To overcome this limitation, an "open-loop sampling" method is presented. In this method, the positional values of the sampling points in all three directions on the surface of the sample are measured by the position sensor and recorded in the SPM image file, which replaces the image file from a conventional SPM. Because the positions in the X and Y directions are measured at the same time as the height information in the Z direction, the image distortion caused by scanner positioning error can be reduced by a suitable image processing algorithm.
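One simple way to use the recorded (X, Y, Z) sampling positions is to resample the scattered height data onto a regular lateral grid, which removes the distortion introduced by assuming an ideal raster. A possible sketch, assuming SciPy's scattered-data interpolation (grid size and interpolation order are arbitrary choices):

```python
import numpy as np
from scipy.interpolate import griddata

def regrid_open_loop_scan(x_meas, y_meas, z_meas, nx=256, ny=256):
    """Resample open-loop SPM data (measured X, Y, Z positions of every sampling
    point) onto a regular grid, removing the distortion that arises when the
    scanner is assumed to follow an ideal raster."""
    xi = np.linspace(x_meas.min(), x_meas.max(), nx)
    yi = np.linspace(y_meas.min(), y_meas.max(), ny)
    XI, YI = np.meshgrid(xi, yi)
    # interpolate the scattered height samples at the measured lateral positions
    ZI = griddata((x_meas, y_meas), z_meas, (XI, YI), method='cubic')
    return XI, YI, ZI
```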
Stem Cell Monitoring with a Direct or Indirect Labeling Method.
Kim, Min Hwan; Lee, Yong Jin; Kang, Joo Hyun
2016-12-01
Molecular imaging techniques allow monitoring of transplanted cells in the same individuals over time, from early localization to survival, migration, and differentiation. Generally, there are two methods of stem cell labeling: direct and indirect. The direct labeling method introduces a labeling agent into the cell, which is stably incorporated or attached to the cells prior to transplantation. Direct labeling of cells with radionuclides is a simple method with relatively few adverse events related to genetic responses. However, it only allows short-term tracking of the distribution of transplanted cells, because the imaging signal decreases with radiodecay according to the physical half-life, or becomes more diffuse with cell division and dispersion. The indirect labeling method is based on the expression of a reporter gene transduced into the cell before transplantation, which is then visualized upon the injection of an appropriate probe or substrate. In this review, various imaging strategies to monitor the survival and behavior of transplanted stem cells are covered. Taken together, the direct and indirect labeling methods may provide new insights into the role of in vivo stem cell monitoring, from bench to bedside.
Enhanced image fusion using directional contrast rules in fuzzy transform domain.
Nandal, Amita; Rosales, Hamurabi Gamboa
2016-01-01
In this paper, a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed back into original-size blocks using the inverse FTR. These inverse-transformed blocks are further fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is compared both visually and quantitatively with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.
Zeng, Yiliang; Lan, Jinhui; Ran, Bin; Wang, Qi; Gao, Jing
2015-01-01
Due to the rapid development of motor vehicle Driver Assistance Systems (DAS), the safety problems associated with automatic driving have become a hot issue in Intelligent Transportation. The traffic sign is one of the most important tools used to reinforce traffic rules. However, traffic sign image degradation in computer vision is unavoidable during vehicle movement. In order to quickly and accurately recognize traffic signs in motion-blurred images in DAS, a new image restoration algorithm based on border deformation detection in the spatial domain is proposed in this paper. The border of a traffic sign is extracted using color information, and the width of the border is then measured in all directions. From the measured widths and the corresponding directions, both the motion direction and the blur scale of the image can be determined, and this information can be used to restore the motion-blurred image. Finally, a gray mean grads (GMG) ratio is presented to evaluate the image restoration quality. Compared to traditional restoration approaches based on the blind deconvolution and Lucy-Richardson methods, our method can effectively restore motion-blurred images and improve the correct recognition rate. Our experiments show that the proposed method is able to restore traffic sign information accurately and efficiently. PMID:25849350
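Once the blur direction and scale have been estimated from the border widths, restoration amounts to deconvolving with a linear motion PSF. The sketch below builds such a PSF and applies a plain frequency-domain Wiener filter; it is a generic stand-in for the restoration step, not the authors' algorithm, and the noise-to-signal ratio is an assumed constant.

```python
import numpy as np

def motion_psf(length, angle_deg, size=31):
    """Linear motion-blur PSF of a given length (pixels) and direction, as would
    be inferred from the measured widening of the sign border in that direction."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 4 * int(length) + 1):
        i = int(round(c + t * np.sin(theta)))
        j = int(round(c + t * np.cos(theta)))
        psf[i, j] = 1.0
    return psf / psf.sum()

def wiener_deblur(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener restoration with a constant noise-to-signal ratio."""
    # embed the PSF in a full-size array with its centre wrapped to the origin
    pad = np.zeros_like(blurred, dtype=float)
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)       # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```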
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies.
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-07
Dynamic PET myocardial perfusion imaging (MPI) used in conjunction with tracer kinetic modeling enables the quantification of absolute myocardial blood flow (MBF). However, MBF maps computed using the traditional indirect method (i.e. post-reconstruction voxel-wise fitting of kinetic model to PET time-activity-curves-TACs) suffer from poor signal-to-noise ratio (SNR). Direct reconstruction of kinetic parameters from raw PET projection data has been shown to offer parametric images with higher SNR compared to the indirect method. The aim of this study was to extend and evaluate the performance of a direct parametric reconstruction method using in vivo dynamic PET MPI data for the purpose of quantifying MBF. Dynamic PET MPI studies were performed on two healthy pigs using a Siemens Biograph mMR scanner. List-mode PET data for each animal were acquired following a bolus injection of ~7-8 mCi of 18F-flurpiridaz, a myocardial perfusion agent. Fully-3D dynamic PET sinograms were obtained by sorting the coincidence events into 16 temporal frames covering ~5 min after radiotracer administration. Additionally, eight independent noise realizations of both scans-each containing 1/8th of the total number of events-were generated from the original list-mode data. Dynamic sinograms were then used to compute parametric maps using the conventional indirect method and the proposed direct method. For both methods, a one-tissue compartment model accounting for spillover from the left and right ventricle blood-pools was used to describe the kinetics of 18F-flurpiridaz. An image-derived arterial input function obtained from a TAC taken in the left ventricle cavity was used for tracer kinetic analysis. For the indirect method, frame-by-frame images were estimated using two fully-3D reconstruction techniques: the standard ordered subset expectation maximization (OSEM) reconstruction algorithm on one side, and the one-step late maximum a posteriori (OSL-MAP) algorithm on the other side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml · min⁻¹ · ml⁻¹), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level.
In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM and OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
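For reference, the kinetic model shared by both methods, a one-tissue compartment model with left- and right-ventricle spillover terms, and its indirect (post-reconstruction) voxel-wise fit can be sketched as follows. This is an illustrative NumPy/SciPy sketch with assumed variable names, initial values and bounds, not the authors' direct MAP reconstruction.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_tissue_tac(t, K1, k2, f_lv, f_rv, cp, c_lv, c_rv):
    """One-tissue-compartment TAC with left/right-ventricle spillover terms.
    cp, c_lv, c_rv are the arterial input and blood-pool curves sampled at t."""
    dt = np.gradient(t)
    # discrete convolution of Cp with the impulse response K1*exp(-k2*t)
    ct = K1 * np.array([np.sum(cp[:i + 1] *
                               np.exp(-k2 * (t[i] - t[:i + 1])) * dt[:i + 1])
                        for i in range(len(t))])
    return (1.0 - f_lv - f_rv) * ct + f_lv * c_lv + f_rv * c_rv

def fit_voxel(t, tac, cp, c_lv, c_rv):
    """Indirect estimate: least-squares fit of the model to one voxel TAC."""
    model = lambda t_, K1, k2, f_lv, f_rv: one_tissue_tac(
        t_, K1, k2, f_lv, f_rv, cp, c_lv, c_rv)
    p0 = [0.5, 0.1, 0.1, 0.1]                    # assumed starting values
    bounds = ([0, 0, 0, 0], [5, 5, 1, 1])        # assumed physiological bounds
    popt, _ = curve_fit(model, t, tac, p0=p0, bounds=bounds)
    return dict(zip(['K1', 'k2', 'f_lv', 'f_rv'], popt))
```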
Novel Method for Vessel Cross-Sectional Shear Wave Imaging.
He, Qiong; Li, Guo-Yang; Lee, Fu-Feng; Zhang, Qihao; Cao, Yanping; Luo, Jianwen
2017-07-01
Many studies have investigated the applications of shear wave imaging (SWI) to vascular elastography, mainly on the longitudinal section of vessels. It is important to investigate SWI in the arterial cross section when evaluating anisotropy of the vessel wall or complete plaque composition. Here, we proposed a novel method based on the coordinate transformation and directional filter in the polar coordinate system to achieve vessel cross-sectional shear wave imaging. In particular, ultrasound radiofrequency data were transformed from the Cartesian to the polar coordinate system; the radial displacements were then estimated directly. Directional filtering was performed along the circumferential direction to filter out the reflected waves. The feasibility of the proposed vessel cross-sectional shear wave imaging method was investigated through phantom experiments and ex vivo and in vivo studies. Our results indicated that the dispersion relation of the shear wave (i.e., the guided circumferential wave) within the vessel can be measured via the present method, and the elastic modulus of the vessel can be determined. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
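The coordinate-transformation step, resampling a cross-sectional frame from Cartesian to polar coordinates about the vessel centre so that radial displacements and circumferential filtering can be handled along array axes, can be sketched as below (assuming SciPy; sampling densities are arbitrary choices).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, center, n_r=128, n_theta=360, r_max=None):
    """Resample a cross-sectional frame from Cartesian (y, x) to polar (r, theta)
    coordinates about the vessel centre, so that displacement estimation and
    directional filtering can run along the radial/circumferential axes."""
    cy, cx = center
    if r_max is None:
        r_max = min(cy, cx, image.shape[0] - cy, image.shape[1] - cx) - 1
    r = np.linspace(0, r_max, n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta, indexing='ij')        # (n_r, n_theta)
    ys = cy + R * np.sin(T)
    xs = cx + R * np.cos(T)
    # bilinear interpolation of the image at the polar sample locations
    return map_coordinates(image, [ys, xs], order=1, mode='nearest')
```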
Research on fusion algorithm of polarization image in tetrolet domain
NASA Astrophysics Data System (ADS)
Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing
2015-12-01
Tetrolets are Haar-type wavelets whose supports are tetrominoes, which are shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. First, the magnitude-of-polarization image and the angle-of-polarization image are decomposed into low-frequency and high-frequency coefficients at multiple scales and directions using the tetrolet transform. For the low-frequency coefficients, the average fusion rule is used. For the directional high-frequency coefficients, the better coefficients are selected for fusion by a region spectrum entropy algorithm, according to the edge distribution differences in the high-frequency sub-band images. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method can detect image features more effectively and that the fused image has better subjective visual quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guanglei, E-mail: guangleizhang@bjtu.edu.cn; Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044; Pu, Huangsheng
2015-02-23
Images of pharmacokinetic parameters (also known as parametric images) in dynamic fluorescence molecular tomography (FMT) can provide three-dimensional metabolic information for biological studies and drug development. However, the ill-posed nature of FMT and the high temporal variation of fluorophore concentration together make it difficult to obtain accurate parametric images in small animals in vivo. In this letter, we present a method to directly reconstruct the parametric images from the boundary measurements based on a hybrid FMT/X-ray computed tomography (XCT) system. This method can not only utilize structural priors obtained from the XCT system to mitigate the ill-posedness of FMT but also make full use of the temporal correlations of boundary measurements to model the high temporal variation of fluorophore concentration. The results of numerical simulation and mouse experiment demonstrate that the proposed method leads to significant improvements in the reconstruction quality of parametric images.
Vadnjal, Ana Laura; Etchepareborda, Pablo; Federico, Alejandro; Kaufmann, Guillermo H
2013-03-20
We present a method to determine micro- and nanoscale in-plane displacements based on the phase singularities generated by applying directional wavelet transforms to speckle pattern images. The spatial distribution of the phase singularities obtained by the wavelet transform forms a network characterized by two quasi-orthogonal directions. The displacement value is determined by identifying the intersection points of the network before and after the displacement produced by the tested object. The performance of this method is evaluated using simulated speckle patterns and experimental data. The proposed approach is compared with the optical vortex metrology and digital image correlation methods in terms of performance and noise robustness, and the advantages and limitations associated with each method are also discussed.
USDA-ARS?s Scientific Manuscript database
Segmentation is the first step in image analysis to subdivide an image into meaningful regions. The segmentation result directly affects the subsequent image analysis. The objective of the research was to develop an automatic adjustable algorithm for segmentation of color images, using linear suppor...
Digital processing of radiographic images from PACS to publishing.
Christian, M E; Davidson, H C; Wiggins, R H; Berges, G; Cannon, G; Jackson, G; Chapman, B; Harnsberger, H R
2001-03-01
Several studies have addressed the implications of filmless radiologic imaging on telemedicine, diagnostic ability, and electronic teaching files. However, many publishers still require authors to submit hard-copy images for publication of articles and textbooks. This study compares the quality of digital images directly exported from picture archive and communications systems (PACS) to images digitized from radiographic film. The authors evaluated the quality of publication-grade glossy photographs produced from digital radiographic images using 3 different methods: (1) film images digitized using a desktop scanner and then printed, (2) digital images obtained directly from PACS then printed, and (3) digital images obtained from PACS and processed to improve sharpness prior to printing. Twenty images were printed using each of the 3 different methods and rated for quality by 7 radiologists. The results were analyzed for statistically significant differences among the image sets. Subjective evaluations of the filmless images found them to be of equal or better quality than the digitized images. Direct electronic transfer of PACS images reduces the number of steps involved in creating publication-quality images as well as providing the means to produce high-quality radiographic images in a digital environment.
Wang, Jin; Zhang, Chen; Wang, Yuanyuan
2017-05-30
In photoacoustic tomography (PAT), total variation (TV) based iteration algorithms are reported to perform well in PAT image reconstruction. However, the classical TV-based algorithm fails to preserve the edges and texture details of the image because it is not sensitive to the direction of the image. Therefore, it is of great significance to develop a new PAT reconstruction algorithm that effectively overcomes this drawback of TV. In this paper, a directional total variation with adaptive directivity (DDTV) model-based PAT image reconstruction algorithm, which weights and sums the image gradients based on the spatially varying directivity pattern of the image, is proposed to overcome the shortcomings of TV. The orientation field of the image is adaptively estimated through a gradient-based approach. The image gradients are weighted at every pixel based on both its anisotropic direction and another parameter, which evaluates the reliability of the estimated orientation field. An efficient algorithm is derived to solve the iterative problem associated with DDTV, with the directivity of the image adaptively updated at each iteration step. Several texture images with various directivity patterns are chosen as the phantoms for the numerical simulations. The 180-, 90- and 30-view circular scans are conducted. The results show that the DDTV-based PAT reconstruction algorithm outperforms the filtered back-projection (FBP) and TV algorithms in the quality of the reconstructed images, with the peak signal-to-noise ratios (PSNR) exceeding those of TV and FBP by about 10 and 18 dB, respectively, for all cases. The Shepp-Logan phantom is studied with further discussion of multimode scanning, convergence speed, robustness and universality aspects. In vitro experiments are performed for both sparse-view circular scanning and linear scanning. The results further prove the effectiveness of the DDTV, which shows better results than TV, with sharper image edges and clearer texture details. Both the numerical simulations and in vitro experiments confirm that DDTV provides a significant quality improvement of PAT reconstructed images for various directivity patterns.
Signal-to-noise ratio comparison of encoding methods for hyperpolarized noble gas MRI
NASA Technical Reports Server (NTRS)
Zhao, L.; Venkatesh, A. K.; Albert, M. S.; Panych, L. P.
2001-01-01
Some non-Fourier encoding methods such as wavelet and direct encoding use spatially localized bases. The spatial localization feature of these methods enables optimized encoding for improved spatial and temporal resolution during dynamically adaptive MR imaging. These spatially localized bases, however, have inherently reduced image signal-to-noise ratio compared with Fourier or Hadamard encoding for proton imaging. Hyperpolarized noble gases, on the other hand, have quite different MR properties compared to protons, primarily the nonrenewability of the signal. It could be expected, therefore, that the characteristics of image SNR with respect to encoding method will also be very different for hyperpolarized noble gas MRI compared to proton MRI. In this article, hyperpolarized noble gas image SNRs of different encoding methods are compared theoretically using a matrix description of the encoding process. It is shown that image SNR for hyperpolarized noble gas imaging is maximized for any orthonormal encoding method. Methods are then proposed for designing RF pulses to achieve normalized encoding profiles using Fourier, Hadamard, wavelet, and direct encoding methods for hyperpolarized noble gases. Theoretical results are confirmed with hyperpolarized noble gas MRI experiments. Copyright 2001 Academic Press.
Kaseno, Kenichi; Hisazaki, Kaori; Nakamura, Kohki; Ikeda, Etsuko; Hasegawa, Kanae; Aoyama, Daisetsu; Shiomi, Yuichiro; Ikeda, Hiroyuki; Morishita, Tetsuji; Ishida, Kentaro; Amaya, Naoki; Uzui, Hiroyasu; Tada, Hiroshi
2018-04-14
Intracardiac echocardiographic (ICE) imaging might be useful for integrating three-dimensional computed tomographic (CT) images for left atrial (LA) catheter navigation during atrial fibrillation (AF) ablation. However, the optimal CT image integration method using ICE has not been established. This study included 52 AF patients who underwent successful circumferential pulmonary vein isolation (CPVI). In all patients, CT image integration was performed after the CPVI with the following two methods: (1) using ICE images of the LA derived from the right atrium and right ventricular outflow tract (RA-merge) and (2) using ICE images of the LA directly derived from the LA added to the image for the RA-merge (LA-merge). The accuracy of these two methods was assessed by the distances between the integrated CT image and ICE image (ICE-to-CT distance), and between the CT image and actual ablated sites for the CPVI (CT-to-ABL distance). The mean ICE-to-CT distance was comparable between the two methods (RA-merge = 1.6 ± 0.5 mm, LA-merge = 1.7 ± 0.4 mm; p = 0.33). However, the mean CT-to-ABL distance was shorter for the LA-merge (2.1 ± 0.6 mm) than RA-merge (2.5 ± 0.8 mm; p < 0.01). The LA, especially the left-sided PVs and LA roof, was more sharply delineated by direct LA imaging, and whereas the greatest CT-to-ABL distance was observed at the roof portion of the left superior PV (3.7 ± 2.8 mm) after the RA-merge, it improved to 2.6 ± 1.9 mm after the LA-merge (p < 0.01). Additional ICE images of the LA directly acquired from the LA might lead to a greater accuracy of the CT image integration for the CPVI.
Multi-view 3D echocardiography compounding based on feature consistency
NASA Astrophysics Data System (ADS)
Yao, Cheng; Simpson, John M.; Schaeffter, Tobias; Penney, Graeme P.
2011-09-01
Echocardiography (echo) is a widely available method to obtain images of the heart; however, echo can suffer due to the presence of artefacts, high noise and a restricted field of view. One method to overcome these limitations is to use multiple images, using the 'best' parts from each image to produce a higher quality 'compounded' image. This paper describes our compounding algorithm which specifically aims to reduce the effect of echo artefacts as well as improving the signal-to-noise ratio, contrast and extending the field of view. Our method weights image information based on a local feature coherence/consistency between all the overlapping images. Validation has been carried out using phantom, volunteer and patient datasets consisting of up to ten multi-view 3D images. Multiple sets of phantom images were acquired, some directly from the phantom surface, and others by imaging through hard and soft tissue mimicking material to degrade the image quality. Our compounding method is compared to the original, uncompounded echocardiography images, and to two basic statistical compounding methods (mean and maximum). Results show that our method is able to take a set of ten images, degraded by soft and hard tissue artefacts, and produce a compounded image of equivalent quality to images acquired directly from the phantom. Our method on phantom, volunteer and patient data achieves almost the same signal-to-noise improvement as the mean method, while simultaneously almost achieving the same contrast improvement as the maximum method. We show a statistically significant improvement in image quality by using an increased number of images (ten compared to five), and visual inspection studies by three clinicians showed very strong preference for our compounded volumes in terms of overall high image quality, large field of view, high endocardial border definition and low cavity noise.
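A rough sketch of coherence-weighted compounding is shown below. Here the per-voxel weight is approximated by the local correlation of each view with the ensemble mean, which is only a stand-in for the paper's pairwise feature-consistency measure; the patch size and clipping rule are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def compound_volumes(volumes, patch=5, eps=1e-6):
    """Feature-consistency-style compounding sketch: each overlapping volume is
    weighted voxel-wise by how well its local structure agrees with the other
    views, approximated here by local correlation with the ensemble mean."""
    volumes = np.asarray(volumes, dtype=float)          # (n_views, Z, Y, X)
    mean_all = volumes.mean(axis=0)

    weights = []
    for v in volumes:
        # local (patch-wise) covariance and variances via box filtering
        m_v = uniform_filter(v, patch)
        m_r = uniform_filter(mean_all, patch)
        cov = uniform_filter(v * mean_all, patch) - m_v * m_r
        var_v = uniform_filter(v * v, patch) - m_v ** 2
        var_r = uniform_filter(mean_all * mean_all, patch) - m_r ** 2
        coherence = np.clip(cov / np.sqrt(np.maximum(var_v * var_r, eps)), 0, 1)
        weights.append(coherence)
    weights = np.asarray(weights) + eps
    return (weights * volumes).sum(axis=0) / weights.sum(axis=0)
```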
Direct-to-digital holography reduction of reference hologram noise and Fourier space smearing
Voelkl, Edgar
2006-06-27
Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
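The core of the described procedure, averaging several reference complex image waves and dividing the object wave by the result, is simple enough to sketch directly (NumPy, with hypothetical variable names):

```python
import numpy as np

def reduced_noise_reference(reference_waves):
    """Average several reconstructed reference complex image waves to suppress
    reference-hologram noise (the complex mean preserves the common carrier
    phase while averaging down uncorrelated noise)."""
    return np.mean(np.asarray(reference_waves), axis=0)

def correct_object_wave(object_wave, reference_waves, eps=1e-12):
    """Divide the object complex image wave by the reduced-noise reference wave,
    as in the described smearing-reduction procedure."""
    ref = reduced_noise_reference(reference_waves)
    return object_wave / (ref + eps)
```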
Direct Estimation of Kinetic Parametric Images for Dynamic PET
Wang, Guobao; Qi, Jinyi
2013-01-01
Dynamic positron emission tomography (PET) can monitor the spatiotemporal distribution of a radiotracer in vivo. The spatiotemporal information can be used to estimate parametric images of radiotracer kinetics that are of physiological and biochemical interest. Direct estimation of parametric images from raw projection data allows accurate noise modeling and has been shown to offer better image quality than conventional indirect methods, which reconstruct a sequence of PET images first and then perform tracer kinetic modeling pixel-by-pixel. Direct reconstruction of parametric images has gained increasing interest with the advances in computing hardware. Many direct reconstruction algorithms have been developed for different kinetic models. In this paper we review the recent progress in the development of direct reconstruction algorithms for parametric image estimation. Algorithms for linear and nonlinear kinetic models are described and their properties are discussed. PMID:24396500
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time-activity curves of an NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
Image based method for aberration measurement of lithographic tools
NASA Astrophysics Data System (ADS)
Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa
2018-01-01
Information about the lens aberrations of lithographic tools is important, as they directly affect the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Image-based measurement techniques have been widely used because of their lower cost and easier implementation. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not yield a simple and intuitive relationship between the lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which requires measuring only two intensity distribution images. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.
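Given a linear matrix relation between the measured intensity images and the Zernike coefficients, the retrieval step is a standard least-squares solve. The sketch below assumes such a sensitivity matrix S is available from the linearised imaging model; it does not derive the two formulations themselves.

```python
import numpy as np

def solve_zernike_coefficients(S, images):
    """Recover aberration (Zernike) coefficients by linear least squares.
    S      : sensitivity matrix relating coefficients to stacked image pixels
             (assumed to come from the system's linearised imaging model)
    images : the two measured intensity distributions."""
    b = np.concatenate([im.ravel() for im in images])   # stack both measurements
    coeffs, residuals, rank, _ = np.linalg.lstsq(S, b, rcond=None)
    return coeffs
```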
Acousto-optic tunable filter chromatic aberration analysis and reduction with auto-focus system
NASA Astrophysics Data System (ADS)
Wang, Yaoli; Chen, Yuanyuan
2018-07-01
An acousto-optic tunable filter (AOTF) exhibits optical band broadening and sidelobes as a result of the coupling between the acoustic wave and optical waves of different wavelengths. These features were analysed by wave-vector phase matching between the optical and acoustic waves. A crossed-line test board was imaged by an AOTF multi-spectral imaging system, showing image blurring in the diffraction direction, produced by the greater bandwidth and sidelobes in that direction, and image sharpness in the orthogonal direction. Applying the secondary-imaging principle and considering the wavelength-dependent refractive index, the focal length varies over the broad wavelength range. An automatic focusing method is therefore proposed for use in AOTF multi-spectral imaging systems. A new image-sharpness evaluation method, based on an improved Structural Similarity Index Measure (SSIM), is also proposed, tailored to the characteristics of the AOTF imaging system. Compared with the traditional gradient operator, the new evaluation function discriminates image quality equally well and can thus achieve automatic focusing for different multispectral images.
Lipman, Samantha L; Rouze, Ned C; Palmeri, Mark L; Nightingale, Kathryn R
2016-08-01
Shear waves propagating through interfaces where there is a change in stiffness cause reflected waves that can lead to artifacts in shear wave speed (SWS) reconstructions. Two-dimensional (2-D) directional filters are commonly used to reduce in-plane reflected waves; however, SWS artifacts arise from both in- and out-of-imaging-plane reflected waves. Herein, we introduce 3-D shear wave reconstruction methods as an extension of the previous 2-D estimation methods and quantify the reduction in image artifacts through the use of volumetric SWS monitoring and 4-D-directional filters. A Gaussian acoustic radiation force impulse excitation was simulated in phantoms with Young's modulus ( E ) of 3 kPa and a 5-mm spherical lesion with E = 6, 12, or 18.75 kPa. The 2-D-, 3-D-, and 4-D-directional filters were applied to the displacement profiles to reduce in-and out-of-plane reflected wave artifacts. Contrast-to-noise ratio and SWS bias within the lesion were calculated for each reconstructed SWS image to evaluate the image quality. For 2-D SWS image reconstructions, the 3-D-directional filters showed greater improvements in image quality than the 2-D filters, and the 4-D-directional filters showed marginal improvement over the 3-D filters. Although 4-D-directional filters can further reduce the impact of large magnitude out-of-plane reflection artifacts in SWS images, computational overhead and transducer costs to acquire 3-D data may outweigh the modest improvements in image quality. The 4-D-directional filters have the largest impact in reducing reflection artifacts in 3-D SWS volumes.
NASA Astrophysics Data System (ADS)
Rykaczewski, Konrad; Landin, Trevan; Walker, Marlon L.; Scott, John Henry J.; Varanasi, Kripa K.
2012-11-01
Nanostructured surfaces with special wetting properties have the potential to transform a number of industries, including power generation, water desalination, gas and oil production, and microelectronics thermal management. Predicting the wetting properties of these surfaces requires detailed knowledge of the geometry and the composition of the contact volume linking the droplet to the underlying substrate. Surprisingly, a general nano-to-microscale method for direct imaging of such interfaces has previously not been developed. Here we introduce a three-dimensional imaging method which resolves this one-hundred-year-old metrology gap in wetting research. Specifically, we demonstrate direct nano-to-microscale imaging of complex fluidic interfaces using cryofixation in combination with cryo-FIB/SEM. We show that application of this method yields previously unattainable quantitative information about the interfacial geometry of water condensed on silicon nanowire forests with hydrophilic and hydrophobic surface termination in the presence or absence of an intermediate water-repelling oil. We also discuss imaging artifacts and the advantages of secondary and backscatter electron imaging, Energy Dispersive Spectrometry (EDS), and three-dimensional FIB/SEM tomography.
A Q-Ising model application for linear-time image segmentation
NASA Astrophysics Data System (ADS)
Bentrem, Frank W.
2010-10-01
A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems (i.e., as a square lattice of numerical values). A major drawback of using Potts model methods for image segmentation is that conventional methods run in exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as sonar imagery), real-time processing requires much greater efficiency. This article describes an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and runs in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
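To illustrate the Potts (Q-Ising) energy view of segmentation, the sketch below runs a greedy iterated-conditional-modes relaxation with Q = 4 labels; each sweep is linear in the number of pixels. It is a generic stand-in, not the article's specific linear-time four-model technique, and the data term and coupling constant are assumptions.

```python
import numpy as np

def potts_icm_segment(image, q=4, beta=1.5, n_iter=10):
    """Q-state Potts segmentation by iterated conditional modes (ICM): each pixel
    takes the label minimising a data term (distance to the class mean) plus a
    smoothness term counting disagreeing 4-neighbours."""
    # initial labels and class means from intensity quantiles
    edges = np.quantile(image, np.linspace(0, 1, q + 1))
    labels = np.clip(np.digitize(image, edges[1:-1]), 0, q - 1)

    for _ in range(n_iter):
        means = np.array([image[labels == k].mean() if np.any(labels == k)
                          else image.mean() for k in range(q)])
        # count of 4-neighbours currently holding each label
        neigh = np.zeros((q,) + image.shape)
        for k in range(q):
            same = (labels == k).astype(float)
            neigh[k] = (np.roll(same, 1, 0) + np.roll(same, -1, 0) +
                        np.roll(same, 1, 1) + np.roll(same, -1, 1))
        # energy of assigning each label to each pixel
        data = (image[None] - means[:, None, None]) ** 2
        energy = data + beta * (4.0 - neigh)
        labels = energy.argmin(axis=0)
    return labels
```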
Kotasidis, F A; Mehranian, A; Zaidi, H
2016-05-07
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
[Evaluation of Image Quality of Readout Segmented EPI with Readout Partial Fourier Technique].
Yoshimura, Yuuki; Suzuki, Daisuke; Miyahara, Kanae
Readout segmented EPI (readout segmentation of long variable echo-trains: RESOLVE) segments k-space in the readout direction. Using the partial Fourier method in the readout direction shortens the imaging time; however, the effect of the reduced data sampling on image quality is a concern. We varied the readout partial Fourier setting in each segment and examined the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and distortion ratio to assess the changes in image quality caused by the different data sampling. As the number of sampled segments decreased, SNR and CNR decreased, whereas the distortion ratio did not change. Image quality with the minimum number of sampled segments differs greatly from that with full data sampling, and caution is required when using it.
Comparison of different methods for gender estimation from face image of various poses
NASA Astrophysics Data System (ADS)
Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko
2003-04-01
Recently, gender estimation from face images has been studied mainly for frontal facial images. However, such facial images cannot always be obtained in application systems for security, surveillance, and marketing research. To build such systems, a method is required that estimates gender from facial images of various poses. In this paper, three classifiers for appearance-based gender estimation using four directional features (FDF) are compared: linear discriminant analysis (LDA), Support Vector Machines (SVMs), and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with the viewing direction varying +/-45 degrees horizontally and +/-30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, SVM with a Gaussian kernel showed the best performance (86.0%) over the facial images from all 35 viewpoints. These results suggest that SVM with a Gaussian kernel is robust to changes in viewpoint when estimating gender. Furthermore, the estimation rate at each of the 35 viewpoints was quite close to the average estimation rate, which suggests that learning face images from multiple directions as one class is a reasonable way to estimate gender within the range of viewpoints examined.
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment, their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
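As an illustration of the three approaches compared above, the sketch below simulates a half-count image from a full-count Poisson image by (a) binomial thinning of the acquired counts (the Poisson resampling idea), (b) redrawing from a Poisson distribution with half the measured counts as mean, and (c) redrawing from a Gaussian. The flood-like test image is synthetic; this is not the comment's Matlab code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "flood source" acquisition: true mean of 50 counts per pixel.
full = rng.poisson(lam=50.0, size=(256, 256))

# (a) Poisson resampling: keep each detected count with probability 0.5
#     (binomial thinning of a Poisson variable is again Poisson).
half_resampled = rng.binomial(full, 0.5)

# (b) Poisson redrawing: draw new counts with mean = measured counts / 2.
half_poisson_redraw = rng.poisson(full / 2.0)

# (c) Gaussian redrawing: mean = counts/2, variance = counts/2, then round.
half_gauss_redraw = np.rint(rng.normal(full / 2.0, np.sqrt(full / 2.0))).clip(min=0)

for name, img in [("resampled", half_resampled),
                  ("poisson redraw", half_poisson_redraw),
                  ("gaussian redraw", half_gauss_redraw)]:
    print(f"{name:16s} mean ratio {img.mean() / full.mean():.3f}  "
          f"variance ratio {img.var() / full.var():.3f}")
```

For ideal Poisson statistics both the mean ratio and the variance ratio should be close to 0.5, which is the kind of check the comment performs with higher-order moments as well.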
Vectorial point spread function and optical transfer function in oblique plane imaging.
Kim, Jeongmin; Li, Tongcang; Wang, Yuan; Zhang, Xiang
2014-05-05
Oblique plane imaging, using remote focusing with a tilted mirror, enables direct two-dimensional (2D) imaging of any inclined plane of interest in three-dimensional (3D) specimens. It can image real-time dynamics of a living sample that changes rapidly or evolves its structure along arbitrary orientations. It also allows direct observations of any tilted target plane in an object of which orientational information is inaccessible during sample preparation. In this work, we study the optical resolution of this innovative wide-field imaging method. Using the vectorial diffraction theory, we formulate the vectorial point spread function (PSF) of direct oblique plane imaging. The anisotropic lateral resolving power caused by light clipping from the tilted mirror is theoretically analyzed for all oblique angles. We show that the 2D PSF in oblique plane imaging is conceptually different from the inclined 2D slice of the 3D PSF in conventional lateral imaging. Vectorial optical transfer function (OTF) of oblique plane imaging is also calculated by the fast Fourier transform (FFT) method to study effects of oblique angles on frequency responses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, H; Dolly, S; Zhao, T
Purpose: A prototype reconstruction algorithm that can provide direct electron density (ED) images from single-energy CT scans is currently being developed by Siemens Healthcare GmbH. This feature can eliminate the need for a kV-specific calibration curve in radiation treatment planning. An added benefit is that beam-hardening artifacts are also reduced on direct-ED images due to the underlying material decomposition. This study quantitatively analyzes the reduction of beam-hardening artifacts on direct-ED images and suggests additional clinical usages. Methods: HU and direct-ED images were reconstructed for a head phantom scanned on a Siemens Definition AS CT scanner at five tube potentials of 70 kV, 80 kV, 100 kV, 120 kV and 140 kV, respectively. From these images, the mean, standard deviation (SD), and local NPS were calculated for regions of interest (ROI) of the same locations and sizes. A complete analysis of beam-hardening artifact reduction and image quality improvement was conducted. Results: With increasing tube potential, ROI means and SDs decrease on both HU and direct-ED images. The mean value differences between HU and direct-ED images are up to 8%, with an absolute value of 2.9. Compared to the HU images, the SDs are lower on direct-ED images, and the differences are up to 26%. Interestingly, the local NPS calculated from direct-ED images shows consistent values in the low spatial frequency domain for images acquired at all tube potential settings, while it varies dramatically on HU images. This also confirms the beam-hardening artifact reduction on ED images. Conclusions: The low SDs on direct-ED images and relatively consistent NPS values in the low spatial frequency domain indicate a reduction of beam-hardening artifacts. The direct-ED image has the potential to assist in more accurate organ contouring, and is a better fit for the desired purpose of CT simulations for radiotherapy.
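The local noise power spectrum used above is commonly estimated from uniform ROIs by detrending each ROI and averaging squared Fourier magnitudes; the short sketch below shows one standard formulation. The pixel spacing, ROI size, and zeroth-order detrending are assumptions for illustration, not the study's exact protocol.

```python
import numpy as np

def local_nps(rois, pixel_spacing_mm=0.5):
    """Estimate a 2D noise power spectrum from a stack of uniform ROIs.

    rois: array of shape (n_rois, N, N) taken from homogeneous regions.
    Returns the NPS in units of (input value)^2 * mm^2.
    """
    rois = np.asarray(rois, dtype=float)
    n, N, _ = rois.shape
    # Remove the mean (zeroth-order detrend) from each ROI so only noise remains.
    noise = rois - rois.mean(axis=(1, 2), keepdims=True)
    # Average the squared DFT magnitude over the ROIs.
    dft2 = np.abs(np.fft.fftshift(np.fft.fft2(noise), axes=(1, 2))) ** 2
    return dft2.mean(axis=0) * (pixel_spacing_mm ** 2) / (N * N)

# Example with synthetic white noise: the resulting NPS should be roughly flat.
rng = np.random.default_rng(1)
rois = rng.normal(0.0, 10.0, size=(32, 64, 64))
nps = local_nps(rois)
print(nps.shape, nps.mean())
```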
Fast and robust brain tumor segmentation using level set method with multiple image information.
Lok, Ka Hei; Shi, Lin; Zhu, Xianlun; Wang, Defeng
2017-01-01
Brain tumor segmentation is a challenging task because of the variation in intensity, which is caused by the inhomogeneous content of tumor tissue and the choice of imaging modality. In 2010, Zhang developed the Selective Binary Gaussian Filtering Regularizing Level Set (SBGFRLS) model, which combined the merits of edge-based and region-based segmentation. The aim of this work is to improve the SBGFRLS method by modifying the signed pressure force (SPF) term with multiple image information and to demonstrate the effectiveness of the proposed method on clinical images. In the original SBGFRLS model, the contour evolution direction mainly depends on the SPF. By introducing a directional term into the SPF, the model can control the evolution direction. The SPF is computed from statistics of the region enclosed by the contour, and this concept can be extended to jointly incorporate multiple image information. The new SPF term is expected to provide a solution to the blurred-edge problem in brain tumor segmentation. The proposed method is validated on clinical images including pre- and post-contrast magnetic resonance images. Accuracy and robustness are assessed using sensitivity, specificity, the Dice similarity coefficient and the Jaccard similarity index. Experimental results show improvement, in particular an increase of sensitivity at the same specificity, in segmenting all types of tumors except for the diffuse tumor. The novel brain tumor segmentation method is clinically oriented, with a fast, robust and accurate implementation and minimal user interaction. The method effectively segmented homogeneously enhanced, non-enhanced, heterogeneously enhanced, and ring-enhanced tumors under MR imaging. Though the method is limited in identifying edema and diffuse tumors, several possible solutions are suggested to turn the curve evolution into a fully functional clinical diagnosis tool.
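For context, the baseline SPF in SBGFRLS-type level sets is built from the mean intensities inside and outside the current contour; a minimal sketch of that baseline term and a simplified evolution step is given below. The paper's modification (a directional term and joint use of multiple images) is not reproduced, and the level-set sign convention and parameters here are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sbgfrls_spf(image, phi):
    """Baseline signed pressure force of SBGFRLS-type level sets.

    spf(x) = (I(x) - (c1 + c2)/2) / max|I(x) - (c1 + c2)/2|, where c1 and c2
    are the mean intensities inside (phi > 0) and outside the contour.
    """
    img = image.astype(float)
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else img.mean()
    c2 = img[~inside].mean() if (~inside).any() else img.mean()
    diff = img - 0.5 * (c1 + c2)
    return diff / (np.abs(diff).max() + 1e-12)

def evolve_step(image, phi, alpha=20.0, dt=1.0, sigma=1.0):
    """One simplified update: move the level set along the SPF-weighted
    gradient magnitude, then regularize phi by Gaussian filtering."""
    spf = sbgfrls_spf(image, phi)
    gy, gx = np.gradient(phi)
    phi = phi + dt * alpha * spf * np.hypot(gx, gy)
    return gaussian_filter(phi, sigma)
```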
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method for visible and infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it more suitable for decomposing and fusing non-linear signals. NSDFB provides directional filtering at each decomposition level to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two dedicated fusion rules are used for the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, prominent target information and rich detail information, making them better suited to human visual characteristics and machine perception.
van den Bergh, F
2018-03-01
The slanted-edge method of spatial frequency response (SFR) measurement is usually applied to grayscale images under the assumption that any distortion of the expected straight edge is negligible. By decoupling the edge orientation and position estimation step from the edge spread function construction step, it is shown in this paper that the slanted-edge method can be extended to allow it to be applied to images suffering from significant geometric distortion, such as produced by equiangular fisheye lenses. This same decoupling also allows the slanted-edge method to be applied directly to Bayer-mosaicked images so that the SFR of the color filter array subsets can be measured directly without the unwanted influence of demosaicking artifacts. Numerical simulation results are presented to demonstrate the efficacy of the proposed deferred slanted-edge method in relation to existing methods.
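As a concrete illustration of the classical slanted-edge pipeline that the paper extends (edge estimation, supersampled edge-spread function, derivative, Fourier transform), here is a minimal grayscale sketch. It assumes a single near-vertical edge and a simple least-squares edge fit; the paper's decoupled handling of geometric distortion and Bayer-mosaicked data is not reproduced.

```python
import numpy as np

def slanted_edge_sfr(roi, oversample=4):
    """Estimate the SFR/MTF from an ROI containing one near-vertical slanted
    edge (dark on the left, bright on the right)."""
    roi = roi.astype(float)
    h, w = roi.shape
    # 1. Locate the edge in every row from the centroid of the horizontal derivative.
    deriv = np.diff(roi, axis=1)
    cols = np.arange(w - 1) + 0.5
    centroids = (deriv * cols).sum(axis=1) / (deriv.sum(axis=1) + 1e-12)
    # 2. Fit a straight line to the per-row edge positions.
    rows = np.arange(h)
    slope, intercept = np.polyfit(rows, centroids, 1)
    # 3. Project all pixels onto the edge-normal coordinate and bin them into a
    #    supersampled edge spread function (ESF).
    x = np.arange(w)[None, :] - (slope * rows[:, None] + intercept)
    bins = np.round(x * oversample).astype(int)
    bins -= bins.min()
    esf = np.bincount(bins.ravel(), weights=roi.ravel()) / np.maximum(
        np.bincount(bins.ravel()), 1)
    # 4. Differentiate to the line spread function, window, and Fourier transform.
    lsf = np.diff(esf) * np.hanning(len(esf) - 1)
    sfr = np.abs(np.fft.rfft(lsf))
    sfr /= sfr[0] + 1e-12
    freqs = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)  # cycles per pixel
    return freqs, sfr
```

The decoupling discussed in the abstract replaces step 2 with an edge model that tolerates curvature from lens distortion, while steps 3 and 4 remain essentially unchanged.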
Evaluation of Deep Learning Based Stereo Matching Methods: from Ground to Aerial Images
NASA Astrophysics Data System (ADS)
Liu, J.; Ji, S.; Zhang, C.; Qin, Z.
2018-05-01
Dense stereo matching has been extensively studied in photogrammetry and computer vision. In this paper we evaluate the application of deep learning based stereo methods, which have emerged since 2016 and spread rapidly, to aerial stereo pairs rather than the ground images commonly used in the computer vision community. Two popular methods are evaluated: one learns the matching cost with a convolutional neural network (known as MC-CNN); the other produces a disparity map in an end-to-end manner by utilizing both geometry and context (known as GC-Net). First, we evaluate the performance of the deep learning based methods on aerial stereo images by direct model reuse: the models pre-trained on the KITTI 2012, KITTI 2015 and Driving datasets are applied directly to three aerial datasets. We also give the results of direct training on the target aerial datasets. Second, the deep learning based methods are compared to a classic stereo matching method, Semi-Global Matching (SGM), and a photogrammetric software package, SURE, on the same aerial datasets. Third, a transfer learning strategy is introduced for aerial image matching, based on the assumption that a few target samples are available for model fine-tuning. The experiments showed that the conventional methods and the deep learning based methods performed similarly, and that the latter have greater potential to be explored.
Iterative image-domain decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, Tianye; Dong, Xue; Petrongolo, Michael
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces the clinical value of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with a similar formulation to the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom. The proposed method achieves a lower electron density measurement error than direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
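To make the contrast with direct matrix inversion concrete, the sketch below shows per-pixel two-material decomposition by inversion and a few iterations of a regularized least-squares refinement with a simple quadratic smoothness penalty solved by gradient descent. The decomposition matrix, penalty weight, and step size are illustrative assumptions; the paper's variance-covariance weighting, edge predetection, and conjugate-gradient solver are not reproduced.

```python
import numpy as np

# Forward model per pixel: [high_kVp, low_kVp] = A @ [material_1, material_2].
A = np.array([[0.6, 0.4],
              [0.3, 0.7]])  # illustrative decomposition matrix

def direct_decomposition(high, low):
    """Per-pixel matrix inversion (amplifies the noise in the CT images)."""
    meas = np.stack([high, low], axis=-1)          # shape (H, W, 2)
    return meas @ np.linalg.inv(A).T               # (H, W, 2) material images

def regularized_decomposition(high, low, beta=0.2, n_iter=200, step=0.2):
    """Iterative image-domain decomposition: least-squares data term plus a
    quadratic penalty on differences between 4-neighbour pixels."""
    meas = np.stack([high, low], axis=-1)
    x = direct_decomposition(high, low)            # start from the inversion
    for _ in range(n_iter):
        residual = x @ A.T - meas                  # data mismatch per pixel
        grad = residual @ A                        # A^T (A x - m) per pixel
        # Smoothness gradient: negative 4-neighbour Laplacian of each material image.
        lap = (-4 * x
               + np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1))
        x = x - step * (grad - beta * lap)
    return x
```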
In, Myung-Ho; Posnansky, Oleg; Speck, Oliver
2016-05-01
To accurately correct diffusion-encoding direction-dependent eddy-current-induced geometric distortions in diffusion-weighted echo-planar imaging (DW-EPI) and to minimize the calibration time at 7 Tesla (T). A point spread function (PSF) mapping based eddy-current calibration method is newly presented to determine eddy-current-induced geometric distortions even including nonlinear eddy-current effects within the readout acquisition window. To evaluate the temporal stability of eddy-current maps, calibration was performed four times within 3 months. Furthermore, spatial variations of measured eddy-current maps versus their linear superposition were investigated to enable correction in DW-EPIs with arbitrary diffusion directions without direct calibration. For comparison, an image-based eddy-current correction method was additionally applied. Finally, this method was combined with a PSF-based susceptibility-induced distortion correction approach proposed previously to correct both susceptibility and eddy-current-induced distortions in DW-EPIs. Very fast eddy-current calibration in a three-dimensional volume is possible with the proposed method. The measured eddy-current maps are very stable over time and very similar maps can be obtained by linear superposition of principal-axes eddy-current maps. High resolution in vivo brain results demonstrate that the proposed method allows more efficient eddy-current correction than the image-based method. The combination of both PSF-based approaches allows distortion-free images, which permit reliable analysis in diffusion tensor imaging applications at 7T. © 2015 Wiley Periodicals, Inc.
Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy
NASA Astrophysics Data System (ADS)
Rukundo, Olivier
2018-04-01
This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, identical for each pixel in that particular group. After these calculations, the groups of identical pixels are overlapped successively in the horizontal and vertical directions to obtain a preliminary enhanced image. The final enhanced image is obtained by halving the sum of the original and preliminary enhanced image pixels. Quantitative and qualitative experiments were conducted, focusing on pairwise comparisons between original and enhanced images. The final enhanced images generally have the best diagnostic quality and give more detail on the visibility of vessels and structures in capsule endoscopy images.
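One reading of the description above is that half-unit bilinear weights over a group of four adjacent pixels reduce to the 2x2 group mean, and that overlapping the shifted groups yields a lightly smoothed preliminary image that is then blended with the original. The sketch below implements that interpretation per RGB channel; it is an approximation under these assumptions, not the paper's reference implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def half_unit_bilinear_enhance(rgb):
    """Process an 8-bit RGB capsule-endoscopy-like image channel by channel:
    average over 2x2 neighbourhoods (half-unit bilinear weights applied to a
    group of four adjacent pixels), then blend with the original image as
    final = (original + preliminary) / 2."""
    img = rgb.astype(float)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        chan = img[..., c]
        # Mean over overlapping 2x2 groups of adjacent pixels, evaluated at
        # every pixel (covers the horizontal and vertical overlap directions).
        preliminary = uniform_filter(chan, size=2, mode="nearest")
        out[..., c] = 0.5 * (chan + preliminary)
    return np.clip(out, 0, 255).astype(np.uint8)
```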
Novel methods for estimating 3D distributions of radioactive isotopes in materials
NASA Astrophysics Data System (ADS)
Iwamoto, Y.; Kataoka, J.; Kishimoto, A.; Nishiyama, T.; Taya, T.; Okochi, H.; Ogata, H.; Yamamoto, S.
2016-09-01
In recent years, various gamma-ray visualization techniques, or gamma cameras, have been proposed. These techniques are extremely effective for identifying "hot spots" or regions where radioactive isotopes are accumulated. Examples of such would be nuclear-disaster-affected areas such as Fukushima or the vicinity of nuclear reactors. However, the images acquired with a gamma camera do not include distance information between radioactive isotopes and the camera, and hence are "degenerated" in the direction of the isotopes. Moreover, depth information in the images is lost when the isotopes are embedded in materials, such as water, sand, and concrete. Here, we propose two methods of obtaining depth information of radioactive isotopes embedded in materials by comparing (1) their spectra and (2) images of incident gamma rays scattered by the materials and direct gamma rays. In the first method, the spectra of radioactive isotopes and the ratios of scattered to direct gamma rays are obtained. We verify experimentally that the ratio increases with increasing depth, as predicted by simulations. Although the method using energy spectra has been studied for a long time, an advantage of our method is the use of low-energy (50-150 keV) photons as scattered gamma rays. In the second method, the spatial extent of images obtained for direct and scattered gamma rays is compared. By performing detailed Monte Carlo simulations using Geant4, we verify that the spatial extent of the position where gamma rays are scattered increases with increasing depth. To demonstrate this, we are developing various gamma cameras to compare low-energy (scattered) gamma-ray images with fully photo-absorbed gamma-ray images. We also demonstrate that the 3D reconstruction of isotopes/hotspots is possible with our proposed methods. These methods have potential applications in the medical fields, and in severe environments such as the nuclear-disaster-affected areas in Fukushima.
SWT voting-based color reduction for text detection in natural scene images
NASA Astrophysics Data System (ADS)
Ikica, Andrej; Peer, Peter
2013-12-01
In this article, we propose a novel stroke width transform (SWT) voting-based color reduction method for detecting text in natural scene images. Unlike other text detection approaches, which mostly rely on either text structure or color, the proposed method combines both by supervising the text-oriented color reduction process with additional SWT information. SWT pixels mapped to color space vote in favor of the color they correspond to. Colors receiving a high SWT vote most likely belong to text areas and are blocked from being mean-shifted away. The literature does not explicitly address the SWT search direction issue; thus, we propose an adaptive sub-block method for determining the correct SWT direction. Both the SWT voting-based color reduction and the SWT direction determination methods are evaluated on binary (text/non-text) images obtained from a challenging Computer Vision Lab optical character recognition database. The SWT voting-based color reduction method outperforms the state-of-the-art text-oriented color reduction approach.
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
NASA Astrophysics Data System (ADS)
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is increasing interest in three-dimensional synthetic aperture radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes a collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses the hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
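For reference, the baseline SL0 algorithm that the modification builds on can be sketched in a few lines: a sequence of decreasing sigma values, a gradient step on a smooth surrogate of the l0 norm, and a projection back onto the measurement constraint. The Gaussian surrogate and steepest-descent step shown here are exactly the parts the paper replaces (with a hyperbolic-tangent surrogate and a Newton direction); all parameter values are illustrative.

```python
import numpy as np

def sl0(A, y, sigma_decrease=0.7, sigma_min=1e-3, inner_iters=3, mu=2.0):
    """Baseline smoothed-l0 (SL0) sparse recovery: approximately solve
    min ||x||_0 subject to A x = y by maximizing a smooth Gaussian surrogate
    of the l0 norm with a gradually decreasing width sigma."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                          # minimum-l2-norm feasible start
    sigma = 2.0 * np.abs(x).max()
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # Steepest-descent step on the Gaussian surrogate.
            delta = x * np.exp(-x ** 2 / (2.0 * sigma ** 2))
            x = x - mu * delta
            # Project back onto the affine constraint {x : A x = y}.
            x = x - A_pinv @ (A @ x - y)
        sigma *= sigma_decrease
    return x

# Tiny demo: recover a 3-sparse vector from 20 random measurements.
rng = np.random.default_rng(0)
n, m = 50, 20
x_true = np.zeros(n)
x_true[[3, 17, 40]] = [1.0, -2.0, 0.5]
A = rng.standard_normal((m, n))
x_hat = sl0(A, A @ x_true)
print(np.round(x_hat[[3, 17, 40]], 2))
```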
An ITK framework for deterministic global optimization for medical image registration
NASA Astrophysics Data System (ADS)
Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.
2006-03-01
Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
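The combination of a DIRECT global search with Powell refinement can be illustrated outside ITK; the sketch below assumes SciPy >= 1.9, which provides scipy.optimize.direct, and uses a toy multimodal cost function standing in for a negated mutual-information metric over rigid-transform parameters.

```python
import numpy as np
from scipy.optimize import direct, minimize

# Toy "similarity metric": a multimodal 3-parameter cost standing in for the
# negated mutual information of a rigid registration (tx, ty, theta).
def cost(p):
    tx, ty, theta = p
    return (0.05 * (tx ** 2 + ty ** 2 + theta ** 2)
            + np.sin(3 * tx) * np.cos(2 * ty) + 0.5 * np.cos(4 * theta))

bounds = [(-10.0, 10.0), (-10.0, 10.0), (-np.pi, np.pi)]

# 1. Global exploration with DIRECT (Dividing Rectangles).
global_res = direct(cost, bounds, maxfun=2000)

# 2. Local refinement of the DIRECT result with Powell's method.
local_res = minimize(cost, global_res.x, method="Powell")

print("DIRECT estimate :", np.round(global_res.x, 3), global_res.fun)
print("After Powell    :", np.round(local_res.x, 3), local_res.fun)
```

The two-stage design mirrors the idea in the abstract: DIRECT widens the capture range, while Powell sharpens the final estimate at low additional cost.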
Imaging Human Brain Perfusion with Inhaled Hyperpolarized 129Xe MR Imaging.
Rao, Madhwesha R; Stewart, Neil J; Griffiths, Paul D; Norquay, Graham; Wild, Jim M
2018-02-01
Purpose: To evaluate the feasibility of directly imaging perfusion of human brain tissue by using magnetic resonance (MR) imaging with inhaled hyperpolarized xenon 129 (129Xe). Materials and Methods: In vivo imaging with 129Xe was performed in three healthy participants. The combination of a high-yield spin-exchange optical pumping 129Xe polarizer, custom-built radiofrequency coils, and an optimized gradient-echo MR imaging protocol was used to achieve signal sensitivity sufficient to directly image hyperpolarized 129Xe dissolved in the human brain. Conventional T1-weighted proton (hydrogen 1 [1H]) images and perfusion images obtained by using arterial spin labeling were acquired for comparison. Results: Images of 129Xe uptake were obtained with a signal-to-noise ratio of 31 ± 9 and demonstrated structural similarities to the gray matter distribution on conventional T1-weighted 1H images and to perfusion images from arterial spin labeling. Conclusion: Hyperpolarized 129Xe MR imaging is an injection-free means of imaging the perfusion of cerebral tissue. The proposed method images the uptake of inhaled xenon gas to the extravascular brain tissue compartment across the intact blood-brain barrier. This level of sensitivity is not readily available with contemporary MR imaging methods. © RSNA, 2017.
NASA Astrophysics Data System (ADS)
Maass, Bolko
2016-12-01
This paper describes an efficient and easily implemented algorithmic approach to extracting an approximation to an image's dominant projected illumination direction, based on intermediary results from a segmentation-based crater detection algorithm (CDA), at a computational cost that is negligible in comparison to that of the prior stages of the CDA. Most contemporary CDAs built for spacecraft navigation use this illumination direction as a means of improving performance or even require it to function at all. Deducing the illumination vector from the image alone reduces the reliance on external information such as the accurate knowledge of the spacecraft inertial state, accurate time base and solar system ephemerides. Therefore, a method such as the one described in this paper is a prerequisite for true "Lost in Space" operation of a purely segmentation-based crater detecting and matching method for spacecraft navigation. The proposed method is verified using ray-traced lunar elevation model data, asteroid image data, and in a laboratory setting with a camera in the loop.
Sepehrband, Farshid; Choupan, Jeiran; Caruyer, Emmanuel; Kurniawan, Nyoman D; Gal, Yaniv; Tieng, Quang M; McMahon, Katie L; Vegh, Viktor; Reutens, David C; Yang, Zhengyi
2014-01-01
We describe and evaluate a pre-processing method based on a periodic spiral sampling of diffusion-gradient directions for high angular resolution diffusion magnetic resonance imaging. Our pre-processing method incorporates prior knowledge about the acquired diffusion-weighted signal, facilitating noise reduction. Periodic spiral sampling of the gradient direction encodings results in an acquired signal in each voxel that is pseudo-periodic, with characteristics that allow separation of the low-frequency signal from high-frequency noise. Consequently, it enhances local reconstruction of the orientation distribution function used to define fiber tracks in the brain. Denoising with periodic spiral sampling was tested using synthetic data and in vivo human brain images. Both the signal-to-noise ratio and the accuracy of local reconstruction of fiber tracks were significantly improved using our method.
NASA Astrophysics Data System (ADS)
Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.
2015-03-01
Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array along its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
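The reconstruction step described above, which treats the elevational profiles collected at each rotation angle as parallel projections and inverts them with the standard inverse Radon transform, can be sketched with scikit-image. The synthetic data layout (one elevational cross-section simulated and reprojected over 0-180 degrees) is an assumption for illustration only.

```python
import numpy as np
from skimage.transform import radon, iradon

# Synthetic elevational cross-section at one in-plane location (y-z plane).
section = np.zeros((128, 128))
section[40:60, 50:90] = 1.0  # a small absorber inside the reconstruction circle

# Simulate the combined linear + rotational scan: each rotation angle of the
# linear array contributes one parallel projection of the elevational plane.
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(section, theta=angles)

# Standard inverse Radon transform (filtered back-projection) recovers the
# elevational plane with a resolution set by the in-plane focus rather than
# the weak elevational focus of the cylindrical acoustic lens.
reconstruction = iradon(sinogram, theta=angles)
print(reconstruction.shape)
```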
High throughput analysis of samples in flowing liquid
Ambrose, W. Patrick; Grace, W. Kevin; Goodwin, Peter M.; Jett, James H.; Orden, Alan Van; Keller, Richard A.
2001-01-01
Apparatus and method enable imaging multiple fluorescent sample particles in a single flow channel. A flow channel defines a flow direction for samples in a flow stream and has a viewing plane perpendicular to the flow direction. A laser beam is formed as a ribbon having a width effective to cover the viewing plane. Imaging optics are arranged to view the viewing plane to form an image of the fluorescent sample particles in the flow stream, and a camera records the image formed by the imaging optics.
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges
2013-10-01
Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for that time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with that parameter. The authors compared the parametric images reconstructed using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded a significant relative reduction in standard deviation of 12%-29% and 32%-70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly compared to the conventional CG method.
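The preconditioning idea above scales each gradient component by parameter/sensitivity. A generic, hedged illustration of that scaling on a simple linear Poisson model (not the cardiac kinetic model, and plain preconditioned gradient ascent rather than the authors' PCG) is given below.

```python
import numpy as np

def precond_gradient_ascent(A, y, n_iter=100):
    """Maximize the Poisson log-likelihood of y ~ Poisson(A @ theta) with
    gradient ascent preconditioned by diag(theta / sensitivity), where
    sensitivity = A^T 1 (the parameter-over-sensitivity ratio idea)."""
    theta = np.ones(A.shape[1])
    sensitivity = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        ybar = A @ theta
        grad = A.T @ (y / np.maximum(ybar, 1e-12) - 1.0)   # dLL/dtheta
        theta = np.maximum(theta + (theta / sensitivity) * grad, 1e-12)
    return theta

# Small synthetic test: 200 "detector bins", 10 parameters.
rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, size=(200, 10))
theta_true = rng.uniform(1.0, 5.0, size=10)
y = rng.poisson(A @ theta_true)
print(np.round(precond_gradient_ascent(A, y), 2))
```

With this particular preconditioner the update coincides with the familiar MLEM multiplicative step, which is one way to see why it accelerates convergence relative to an unpreconditioned gradient method.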
Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan
2016-01-01
Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading image quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. In particular, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average Dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transverse direction and 7% in the axial direction. PMID:28603407
Patient-specific lean body mass can be estimated from limited-coverage computed tomography images.
Devriese, Joke; Beels, Laurence; Maes, Alex; van de Wiele, Christophe; Pottel, Hans
2018-06-01
In PET/CT, quantitative evaluation of tumour metabolic activity is possible through standardized uptake values, usually normalized for body weight (BW) or lean body mass (LBM). Patient-specific LBM can be estimated from whole-body (WB) CT images. As most clinical indications only warrant PET/CT examinations covering head to midthigh, the aim of this study was to develop a simple and reliable method to estimate LBM from limited-coverage (LC) CT images and test its validity. Head-to-toe PET/CT examinations were retrospectively retrieved and semiautomatically segmented into tissue types based on thresholding of CT Hounsfield units. LC was obtained by omitting image slices. Image segmentation was validated on the WB CT examinations by comparing CT-estimated BW with actual BW, and LBM estimated from LC images were compared with LBM estimated from WB images. A direct method and an indirect method were developed and validated on an independent data set. Comparing LBM estimated from LC examinations with estimates from WB examinations (LBMWB) showed a significant but limited bias of 1.2 kg (direct method) and nonsignificant bias of 0.05 kg (indirect method). This study demonstrates that LBM can be estimated from LC CT images with no significant difference from LBMWB.
Securing Digital Images Integrity using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Hajji, Tarik; Itahriouan, Zakaria; Ouazzani Jamil, Mohammed
2018-05-01
Digital image signature is a technique used to protect image integrity, and it can serve several imaging applications in smart cities. The objective of this work is to propose two methods to protect digital image integrity. We describe two approaches that use artificial neural networks (ANN) to digitally sign an image: the first is "Direct Signature without learning" and the second is "Direct Signature with learning". This paper presents the theory of the proposed approaches and an experimental study to test their effectiveness.
Parham, Christopher A; Zhong, Zhong; Pisano, Etta; Connor, Jr., Dean M
2015-03-03
Systems and methods for detecting an image of an object using a multi-beam imaging system from an x-ray beam having a polychromatic energy distribution are disclosed. According to one aspect, a method can include generating a first X-ray beam having a polychromatic energy distribution. Further, the method can include positioning a plurality of monochromator crystals in a predetermined position to directly intercept the first X-ray beam such that a plurality of second X-ray beams having predetermined energy levels are produced. Further, an object can be positioned in the path of the second X-ray beams for transmission of the second X-ray beams through the object and emission from the object as transmitted X-ray beams. The transmitted X-ray beams can each be directed at an angle of incidence upon one or more crystal analyzers. Further, an image of the object can be detected from the beams diffracted from the analyzer crystals.
Directional imaging of the retinal cone mosaic
NASA Astrophysics Data System (ADS)
Vohnsen, Brian; Iglesias, Ignacio; Artal, Pablo
2004-05-01
We describe a near-IR scanning laser ophthalmoscope that allows the retinal cone mosaic to be imaged in the human eye in vivo without the use of wave-front correction techniques. The method takes advantage of the highly directional quality of cone photoreceptors that permits efficient coupling of light to individual cones and subsequent detection of most directional components of the backscattered light produced by the light-guiding effect of the cones. We discuss details of the system and describe cone-mosaic images obtained under different conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Presles, Benoît, E-mail: benoit.presles@creatis.insa-lyon.fr; Rit, Simon; Sarrut, David
2014-12-15
Purpose: The aim of the present work is to propose and evaluate registration algorithms for three-dimensional (3D) transabdominal (TA) ultrasound (US) images to set up postprostatectomy patients during radiation therapy. Methods: Three registration methods have been developed and evaluated to register a reference 3D-TA-US image acquired during the planning CT session and a 3D-TA-US image acquired before each treatment session. The first method (method A) uses only gray value information, whereas the second one (method B) uses only gradient information. The third one (method C) combines both sets of information. All methods restrict the comparison to a region of interest computed from the dilated reference positioning volume drawn on the reference image and use mutual information as a similarity measure. The considered geometric transformations are translations, optimized using the adaptive stochastic gradient descent algorithm. Validation has been carried out using manual registration of the same set of image pairs by three operators. Sixty-two treatment US images of seven patients irradiated after a prostatectomy have been registered to their corresponding reference US image. The reference registration has been defined as the average of the manual registration values. The registration error has been calculated by subtracting the reference registration from the algorithm result. For each session, the method has been considered a failure if the registration error was above both the interoperator variability of the session and a global threshold of 3.0 mm. Results: None of the proposed registration algorithms shows a systematic bias. Method B leads to the best results, with mean errors of −0.6, 0.7, and −0.2 mm in the left-right (LR), superior-inferior (SI), and anterior-posterior (AP) directions, respectively. With this method, the standard deviations of the mean error are 1.7, 2.4, and 2.6 mm in the LR, SI, and AP directions, respectively. The latter are below the interoperator registration variabilities, which are 2.5, 2.5, and 3.5 mm in the LR, SI, and AP directions, respectively. Failures occur in 5%, 18%, and 10% of cases in the LR, SI, and AP directions, respectively, and 69% of the sessions have no failure. Conclusions: The results of the best proposed registration algorithm for 3D-TA-US images in postprostatectomy treatment show no bias and are in the same variability range as manual registration. As the algorithm requires a short computation time, it could be used in clinical practice provided that a visual review is performed.
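A translation-only, mutual-information-driven registration of a treatment US image to the reference US image can be sketched with SimpleITK as below. The file names, sampling settings, and the use of plain gradient descent in place of the adaptive stochastic variant are assumptions; the ROI masking from the dilated positioning volume is omitted, so this is not the authors' implementation.

```python
import SimpleITK as sitk

# Hypothetical file names for the planning-session (reference) and
# treatment-session 3D transabdominal ultrasound images.
fixed = sitk.ReadImage("reference_us.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("treatment_us.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.2)
reg.SetInterpolator(sitk.sitkLinear)
# Translation-only transform, as in the evaluated methods.
reg.SetInitialTransform(sitk.TranslationTransform(3), inPlace=False)
# Plain gradient descent stands in for the adaptive stochastic variant.
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()

transform = reg.Execute(fixed, moving)
print("Estimated shift (mm):", transform.GetParameters())
```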
Chew, Avenell L.; Sampson, Danuta M.; Kashani, Irwin; Chen, Fred K.
2017-01-01
Purpose: We compared cone density measurements derived from the center of gaze-directed single images with reconstructed wide-field montages using the rtx1 adaptive optics (AO) retinal camera. Methods: A total of 29 eyes from 29 healthy subjects were imaged with the rtx1 camera. Of the 20 overlapping AO images acquired, 12 (at 3.2°, 5°, and 7°) were used for calculating gaze-directed cone densities. Wide-field AO montages were reconstructed, and cone densities were measured at the corresponding 12 loci as determined by field projection relative to the foveal center aligned to the foveal dip on optical coherence tomography. Limits of agreement in cone density measurement between single AO images and wide-field AO montages were calculated. Results: Cone density measurements failed in one or more gaze directions or retinal loci in up to 58% and 33% of the subjects using single AO images or wide-field AO montages, respectively. Although there were no significant overall differences between cone densities derived from single AO images and wide-field AO montages at any of the 12 gazes and locations (P = 0.01-0.65), the limits of agreement between the two methods ranged from as narrow as −2200 to +2600 to as wide as −4200 to +3800 cones/mm2. Conclusions: Cone density measurement using the rtx1 AO camera is feasible with both methods. Local variation in image quality and altered visibility of cones after generating montages may contribute to the discrepancies. Translational Relevance: Cone densities from single AO images are not interchangeable with wide-field montage-derived measurements. PMID:29285417
Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel
2015-01-01
Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured intensity and the calculated intensity, and the constraint function for an infinite light source model. After the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we clip the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch that range into the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of 3D face and reflective surface models. Experimental results on the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect than the existing techniques.
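The final histogram step above (clipping a small fraction at both ends of the gray-level histogram and stretching the remaining range to the display range) can be sketched as follows; the clipping percentiles and 8-bit output range are illustrative assumptions.

```python
import numpy as np

def clip_and_stretch(gray, low_pct=1.0, high_pct=99.0, out_max=255):
    """Clip the gray-level histogram at both ends and stretch the remaining
    range of gray levels into the dynamic range of the display device."""
    img = gray.astype(float)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    img = np.clip(img, lo, hi)
    stretched = (img - lo) / max(hi - lo, 1e-12) * out_max
    return stretched.astype(np.uint8)
```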
The Extraction of Terrace in the Loess Plateau Based on radial method
NASA Astrophysics Data System (ADS)
Liu, W.; Li, F.
2016-12-01
The terraces of the Loess Plateau are a typical kind of artificial landform and an important measure for soil and water conservation; locating and automatically extracting them would simplify land use investigation. Existing methods of terrace extraction mainly include visual interpretation and automatic extraction. The manual method is used in land use investigation, but it is time-consuming and laborious. Researchers have therefore proposed automatic extraction methods. For example, the Fourier transform method can recognize terraces and find their accurate positions in the frequency-domain image, but it is strongly affected by linear objects oriented in the same direction as the terraces; texture analysis is simple and widely applied in image processing, but it cannot recognize terrace edges; object-oriented classification is a newer approach to image classification, but when applied to terrace extraction, fragmented polygons are the most serious problem and their geological meaning is difficult to interpret. To locate the terraces, we use high-resolution remote sensing imagery and analyze the gray values of the pixels that each radial passes through. The recognition process is as follows: first, the approximate positions of peak points are determined from DEM analysis or by manual selection; second, radials are cast in all directions from each peak point; finally, the gray values of the pixels along each radial are extracted and their variation is analyzed to determine whether a terrace is present. To obtain the accurate position of a terrace, the discontinuity of terraces, their extension direction, ridge width, the image processing algorithm, the illumination of the remote sensing image, and other influencing factors were fully considered when designing the algorithms.
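The radial sampling step described above, casting rays in all directions from a candidate peak and reading the gray values of the pixels they pass through, can be sketched as follows; the number of rays, ray length, and sampling step are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def radial_profiles(image, peak_rc, n_rays=36, length=200, step=1.0):
    """Sample gray-value profiles along rays cast from a candidate peak point.

    image: 2D grayscale array; peak_rc: (row, col) of the candidate peak.
    Returns an array of shape (n_rays, n_samples) of interpolated gray values.
    """
    r0, c0 = peak_rc
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    dists = np.arange(0.0, length, step)
    rows = r0 + np.outer(np.sin(angles), dists)
    cols = c0 + np.outer(np.cos(angles), dists)
    # Bilinear interpolation of the gray values along each ray.
    return map_coordinates(image.astype(float), [rows, cols],
                           order=1, mode="nearest")

# Terrace candidates could then be flagged from the profiles, e.g. by
# quasi-periodic oscillation (alternating ridges and terrace surfaces).
```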
An atlas-based multimodal registration method for 2D images with discrepancy structures.
Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng
2018-06-04
An atlas-based multimodal registration method for two-dimensional images with discrepancy structures is proposed in this paper. An atlas is utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. Registration performance was measured by the sum of squared intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: a schematic diagram of the atlas-based multimodal registration method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sasahara, M; Arimura, H; Hirose, T
Purpose: The current image-guided radiotherapy (IGRT) procedure is bone-based patient positioning, followed by subjective manual correction using cone beam computed tomography (CBCT). This procedure can cause misalignment of the patient positioning. Automatic target-based patient positioning systems achieve better reproducibility of the patient setup. The aim of this study was to develop an automatic target-based patient positioning framework for IGRT with CBCT images in prostate cancer treatment. Methods: Seventy-three CBCT images of 10 patients and 24 planning CT images with digital imaging and communications in medicine for radiotherapy (DICOM-RT) structures were used for this study. Our proposed framework starts from the generation of probabilistic atlases of bone and prostate from the 24 planning CT images and the prostate contours made during treatment planning. Next, the gray-scale histograms of CBCT values within the CTV regions in the planning CT images were obtained as the occurrence probability of the CBCT values. Then, the CBCT images were registered to the atlases using a rigid registration with mutual information. Finally, prostate regions were estimated by applying Bayesian inference to the CBCT images with the probabilistic atlases and the CBCT value occurrence probability. The proposed framework was evaluated by calculating the Euclidean distance between the two centroids of the prostate regions determined by our method and the ground truths of manual delineations by a radiation oncologist and a medical physicist on the CBCT images of the 10 patients. Results: The average Euclidean distance between the centroids of the extracted prostate regions determined by our proposed method and the ground truths was 4.4 mm. The average errors were 1.8 mm in the anteroposterior direction, 0.6 mm in the lateral direction and 2.1 mm in the craniocaudal direction. Conclusion: Our proposed framework based on probabilistic atlases and Bayesian inference may be feasible for automatically determining prostate regions on CBCT images.
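The Bayesian step above combines a spatial prior (the probabilistic prostate atlas after rigid alignment) with an intensity likelihood learned from the CBCT-value histogram. A minimal voxel-wise sketch is below; the background-likelihood model, histogram binning, and posterior threshold are crude illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def prostate_posterior(cbct, atlas_prior, value_hist, bin_edges, threshold=0.5):
    """Voxel-wise posterior P(prostate | CBCT value, location).

    cbct: CBCT volume; atlas_prior: probabilistic atlas registered to the
    CBCT (values in [0, 1]); value_hist / bin_edges: normalized histogram of
    CBCT values inside planning CTVs, used as the intensity likelihood.
    """
    bins = np.clip(np.digitize(cbct, bin_edges) - 1, 0, len(value_hist) - 1)
    likelihood = value_hist[bins]                       # P(value | prostate)
    # Assume a flat background likelihood equal to the mean histogram height.
    evidence = likelihood * atlas_prior + (1.0 - atlas_prior) * value_hist.mean()
    posterior = likelihood * atlas_prior / np.maximum(evidence, 1e-12)
    mask = posterior > threshold
    centroid = np.array(np.nonzero(mask)).mean(axis=1) if mask.any() else None
    return posterior, centroid
```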
System and method for magnetic current density imaging at ultra low magnetic fields
Espy, Michelle A.; George, John Stevens; Kraus, Robert Henry; Magnelind, Per; Matlashov, Andrei Nikolaevich; Tucker, Don; Turovets, Sergei; Volegov, Petr Lvovich
2016-02-09
Preferred systems can include an electrical impedance tomography apparatus electrically connectable to an object; an ultra low field magnetic resonance imaging apparatus including a plurality of field directions and disposable about the object; a controller connected to the ultra low field magnetic resonance imaging apparatus and configured to implement a sequencing of one or more ultra low magnetic fields substantially along one or more of the plurality of field directions; and a display connected to the controller, and wherein the controller is further configured to reconstruct a displayable image of an electrical current density in the object. Preferred methods, apparatuses, and computer program products are also disclosed.
Dindaroğlu, Furkan; Kutlu, Pınar; Duran, Gökhan Serhat; Görgülü, Serkan; Aslan, Erhan
2016-05-01
To evaluate the accuracy of three-dimensional (3D) stereophotogrammetry by comparing it with the direct anthropometry and digital photogrammetry methods. The reliability of 3D stereophotogrammetry was also examined. Six profile and four frontal parameters were directly measured on the faces of 80 participants. The same measurements were repeated using two-dimensional (2D) photogrammetry and 3D stereophotogrammetry (3dMDflex System, 3dMD, Atlanta, Ga) to obtain images of the subjects. Another observer made the same measurements for images obtained with 3D stereophotogrammetry, and interobserver reproducibility was evaluated for 3D images. Both observers remeasured the 3D images 1 month later, and intraobserver reproducibility was evaluated. Statistical analysis was conducted using the paired samples t-test, intraclass correlation coefficient, and Bland-Altman limits of agreement. The highest mean difference was 0.30 mm between direct measurement and photogrammetry, 0.21 mm between direct measurement and 3D stereophotogrammetry, and 0.5 mm between photogrammetry and 3D stereophotogrammetry. The lowest agreement value was 0.965 in the Sn-Pro parameter between the photogrammetry and 3D stereophotogrammetry methods. Agreement between the two observers varied from 0.90 (Ch-Ch) to 0.99 (Sn-Me) in linear measurements. For intraobserver agreement, the highest difference between means was 0.33 mm for observer 1 and 1.42 mm for observer 2. Measurements obtained using 3D stereophotogrammetry indicate that it may be an accurate and reliable imaging method for use in orthodontics.
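For reference, the Bland-Altman limits of agreement used in such method-comparison studies reduce to the mean difference ± 1.96 standard deviations of the differences. A minimal sketch with illustrative numbers (not the study's measurements):

```python
import numpy as np

def bland_altman(a, b):
    """Return the bias (mean difference) and 95% limits of agreement between two methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

direct = np.array([31.2, 28.4, 35.0, 30.1])    # e.g. direct anthropometry (mm), illustrative
stereo = np.array([31.0, 28.9, 34.6, 30.3])    # e.g. 3D stereophotogrammetry (mm), illustrative
print(bland_altman(direct, stereo))
```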
[Medical Image Registration Method Based on a Semantic Model with Directional Visual Words].
Jin, Yufei; Ma, Meng; Yang, Xin
2016-04-01
Medical image registration is very challenging due to the variety of imaging modalities, image quality, wide inter-patient variability, and intra-patient variability with disease progression, together with strict requirements for robustness. Inspired by semantic models, especially the recent tremendous progress in computer vision tasks under the bag-of-visual-words framework, we set up a novel semantic model to match medical images. Since most medical images have poor contrast, a small dynamic range, and involve only intensities, traditional visual word models do not perform very well. To benefit from the advantages of related work, we propose a novel visual word model, named directional visual words, which performs better on medical images. We then apply this model to medical image registration. In our experiments, the critical anatomical structures were first manually specified by experts. We then adopted the directional visual words, a coarse-to-fine spatial pyramid search strategy, and the k-means algorithm to locate the positions of the key structures accurately. Finally, the corresponding images are registered using the areas around these positions. Experiments performed on real cardiac images show that our method achieves high registration accuracy in specific areas.
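A loose illustration of building a "directional" visual vocabulary (an assumption on our part, not the paper's exact pipeline): local patches are described by gradient-orientation histograms and clustered with k-means into visual words.

```python
import numpy as np
from sklearn.cluster import KMeans

def orientation_descriptors(img, patch=16, bins=8):
    """Magnitude-weighted gradient-orientation histograms of non-overlapping patches."""
    gy, gx = np.gradient(img.astype(float))
    mag, ang = np.hypot(gx, gy), np.mod(np.arctan2(gy, gx), np.pi)
    descs = []
    for y in range(0, img.shape[0] - patch, patch):
        for x in range(0, img.shape[1] - patch, patch):
            h, _ = np.histogram(ang[y:y+patch, x:x+patch], bins=bins, range=(0, np.pi),
                                weights=mag[y:y+patch, x:x+patch])
            descs.append(h / (np.linalg.norm(h) + 1e-8))
    return np.array(descs)

rng = np.random.default_rng(1)
img = rng.random((128, 128))                               # toy image
vocab = KMeans(n_clusters=32, n_init=10, random_state=0).fit(orientation_descriptors(img))
```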
Special raster scanning for reduction of charging effects in scanning electron microscopy.
Suzuki, Kazuhiko; Oho, Eisaku
2014-01-01
A special raster scanning (SRS) method for reduction of charging effects is developed for SEM. Both a conventional fast scan (horizontal direction) and an unusual scan (vertical direction) are adopted for acquiring raw data consisting of many sub-images. These data are converted to a proper SEM image using digital image processing techniques. In terms of image sharpness and reduction of charging effects, SRS is compared with the conventional fast scan (with frame averaging) and the conventional slow scan. Experimental results show the effectiveness of SRS images. Through a successful combination of the proposed scanning method and low-accelerating-voltage (LV) SEMs, it is expected that higher-quality SEM images can be acquired more easily thanks to the considerable reduction of charging effects, while maintaining resolution. © 2013 Wiley Periodicals, Inc.
Fast digital zooming system using directionally adaptive image interpolation and restoration.
Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki
2014-01-01
This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.
First Images from the Focusing Optics X-Ray Solar Imager
NASA Astrophysics Data System (ADS)
Krucker, Säm; Christe, Steven; Glesener, Lindsay; Ishikawa, Shin-nosuke; Ramsey, Brian; Takahashi, Tadayuki; Watanabe, Shin; Saito, Shinya; Gubarev, Mikhail; Kilaru, Kiranmayee; Tajima, Hiroyasu; Tanaka, Takaaki; Turin, Paul; McBride, Stephen; Glaser, David; Fermin, Jose; White, Stephen; Lin, Robert
2014-10-01
The Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket payload flew for the first time on 2012 November 2, producing the first focused images of the Sun above 5 keV. To enable hard X-ray (HXR) imaging spectroscopy via direct focusing, FOXSI makes use of grazing-incidence replicated optics combined with fine-pitch solid-state detectors. On its first flight, FOXSI observed several targets that included active regions, the quiet Sun, and a GOES-class B2.7 microflare. This Letter provides an introduction to the FOXSI instrument and presents its first solar image. These data demonstrate the superiority in sensitivity and dynamic range that is achievable with a direct HXR imager with respect to previous, indirect imaging methods, and illustrate the technological readiness for a spaceborne mission to observe HXRs from solar flares via direct focusing optics.
Microscopy imaging device with advanced imaging properties
Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei
2015-11-24
Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm², and to direct epi-fluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 µm resolution for an image of the field of view.
Development of UAV Photogrammetry Method by Using Small Number of Vertical Images
NASA Astrophysics Data System (ADS)
Kunii, Y.
2018-05-01
This new and efficient photogrammetric method for unmanned aerial vehicles (UAVs) requires only a few images taken in the vertical direction at different altitudes. The method includes an original relative orientation procedure which can be applied to images captured along the vertical direction. The final orientation determines the absolute orientation for every parameter and is used for calculating the 3D coordinates of every measurement point. The measurement accuracy was checked at the UAV test site of the Japan Society for Photogrammetry and Remote Sensing. Five vertical images were taken at 70 to 90 m altitude. The 3D coordinates of the measurement points were calculated. The plane and height accuracies were ±0.093 m and ±0.166 m, respectively. These values are of higher accuracy than the results of the traditional photogrammetric method. The proposed method can measure 3D positions efficiently and would be a useful tool for construction and disaster sites and for other field surveying purposes.
Parham, Christopher; Zhong, Zhong; Pisano, Etta; Connor, Dean; Chapman, Leroy D.
2010-06-22
Systems and methods for detecting an image of an object using an X-ray beam having a polychromatic energy distribution are disclosed. According to one aspect, a method can include detecting an image of an object. The method can include generating a first X-ray beam having a polychromatic energy distribution. Further, the method can include positioning a single monochromator crystal in a predetermined position to directly intercept the first X-ray beam such that a second X-ray beam having a predetermined energy level is produced. Further, an object can be positioned in the path of the second X-ray beam for transmission of the second X-ray beam through the object and emission from the object as a transmitted X-ray beam. The transmitted X-ray beam can be directed at an angle of incidence upon a crystal analyzer. Further, an image of the object can be detected from a beam diffracted from the analyzer crystal.
Method for observing phase objects without halos and directional shadows
NASA Astrophysics Data System (ADS)
Suzuki, Yoshimasa; Kajitani, Kazuo; Ohde, Hisashi
2015-03-01
A new microscopy method for observing phase objects without halos and directional shadows is proposed. The key optical element is an annular aperture at the front focal plane of a condenser, with a larger diameter than those used in standard phase contrast microscopy. The light flux passing through the annular aperture is changed by the specimen's surface profile, then passes through an objective and contributes to image formation. This paper presents the essential conditions for realizing the method. Images of colonies formed by induced pluripotent stem (iPS) cells obtained using this method are compared with the conventional phase contrast method and the bright-field method at small illumination NA to identify differences among these techniques. The outlines of the iPS cells are clearly visible with this method, whereas they are not clearly visible due to halos when using the phase contrast method or due to weak contrast when using the bright-field method. Other images obtained using this method are also presented to demonstrate its capabilities: a mouse ovum and a superimposition of several different images of mouse iPS cells.
Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S
2017-02-01
B-mode ultrasound images are degraded by an inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation; therefore, reduction of speckle noise is an essential task that improves the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The proposed method is used to design two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter banks, including fan-shaped, diamond-shaped and checkerboard-shaped filters. The quadratic measure of the error function between the passband and stopband of the filter is used as the objective function. First, the low-pass analysis filter is designed, and the PR condition is then expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the corresponding synthesis filter is designed using the eigenfilter design method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity and frequency selectivity in comparison to existing design methods. The proposed method is validated on synthetic and real ultrasound data, showing improvement in the quality of ultrasound images and efficient suppression of speckle noise compared to existing methods.
Wegleitner, Eric J.; Isermann, Daniel A.
2017-01-01
Many biologists use digital images for estimating ages of fish, but the use of images could lead to differences in age estimates and precision because image capture can produce changes in light and clarity compared to directly viewing structures through a microscope. We used sectioned sagittal otoliths from 132 Largemouth Bass Micropterus salmoides and sectioned dorsal spines and otoliths from 157 Walleyes Sander vitreus to determine whether age estimates and among-reader precision were similar when annuli were enumerated directly through a microscope or from digital images. Agreement of ages between viewing methods for three readers was highest for Largemouth Bass otoliths (75-89% among readers), followed by Walleye otoliths (63-70%) and Walleye dorsal spines (47-64%). Most discrepancies (72-96%) were ±1 year, and differences were more prevalent for age-5 and older fish. With few exceptions, mean ages estimated from digital images were similar to ages estimated via directly viewing the structures through the microscope, and among-reader precision did not vary between viewing methods for each structure. However, the number of disagreements we observed suggests that biologists should assess potential differences in age structure that could arise if images of calcified structures are used in the age estimation process.
NASA Astrophysics Data System (ADS)
Yu, Qifeng; Liu, Xiaolin; Sun, Xiangyi
1998-07-01
Generalized spin filters, including several directional filters such as the directional median filter and the directional binary filter, are proposed for removing noise from fringe patterns and extracting fringe skeletons with the help of fringe-orientation maps (FOMs). The generalized spin filters can efficiently filter noise from fringe patterns and binary fringe patterns without distorting fringe features. A quadrantal-angle filter is developed to denoise the FOM. With these new filters, the derivative-sign binary image (DSBI) method for extraction of fringe skeletons is improved considerably. The improved DSBI method can extract high-density skeletons as well as common-density skeletons.
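A schematic directional median filter (our own minimal reading of the idea, not the authors' code): each pixel is replaced by the median of samples taken along the local fringe orientation given by the FOM, so noise is suppressed without smearing across fringes.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def directional_median(img, fom, half_len=4):
    """Median along the local orientation fom (radians) at every pixel."""
    ny, nx = img.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    samples = []
    for t in range(-half_len, half_len + 1):
        ys = yy + t * np.sin(fom)
        xs = xx + t * np.cos(fom)
        samples.append(map_coordinates(img, [ys, xs], order=1, mode="nearest"))
    return np.median(np.stack(samples), axis=0)

# toy vertical fringes with additive noise; the FOM is constant (fringes run along y)
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
fringes = np.cos(0.3 * xx) + 0.3 * rng.standard_normal((128, 128))
fom = np.full_like(fringes, np.pi / 2)
denoised = directional_median(fringes, fom)
```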
Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo
2017-04-01
Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluating the performance of gas leak infrared imaging detection systems due to several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of the MDTD. We propose direct-calculation and equivalent-calculation methods for the MDGC based on the MDTD measurement system. We built an experimental MDGC measurement system, and the results indicate that the MDGC model can describe the detection performance of a thermal imaging system for typical gases; the direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and the minimum resolvable gas concentration (MRGC) models effectively describe the "detection" and "spatial detail resolution" performance of thermal imaging systems for gas leaks, respectively, and constitute the main performance indicators of gas leak detection systems.
ERIC Educational Resources Information Center
Carlson, Aaron M.; McPhail, Ellen D.; Rodriguez, Vilmarie; Schroeder, Georgene; Wolanskyj, Alexandra P.
2014-01-01
Instruction in hematopathology at Mayo Medical School has evolved from instructor-guided direct inspection under the light microscope (laboratory method), to photomicrographs of glass slides with classroom projection (projection method). These methods have not been compared directly to date. Forty-one second-year medical students participated in…
Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin
2014-01-01
Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency; however, the acquired data set is under-sampled and angularly limited, which makes high-quality image reconstruction challenging. In this work, an edge-guided total variation minimization reconstruction (EGTVM) algorithm is developed to deal with this problem. The proposed method combines total variation (TV) regularization with an iterative edge-detection strategy: the edge weights of intermediate reconstructions are incorporated into the TV objective function, and the optimization is efficiently solved by applying the alternating direction method of multipliers. A prudent and conservative edge-detection strategy proposed in this paper obtains the true edges while keeping the errors within an acceptable range. Comparisons on both simulation studies and real CT data reconstructions show that EGTVM provides comparable or even better quality than non-edge-guided reconstruction and the adaptive steepest descent-projection onto convex sets method. By utilizing weighted alternating-direction TV minimization and edge detection, EGTVM achieves fast and robust convergence and reconstructs high-quality images when applied to linear scan CT with under-sampled data.
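One common way to write such an edge-weighted TV objective (our notation; the exact EGTVM formulation may differ) is:

```latex
\hat{x} = \arg\min_{x}\ \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \sum_i w_i\,\|(\nabla x)_i\|_2,
\qquad w_i = \frac{1}{1 + |(\nabla \tilde{x})_i|/\varepsilon},
```

where A is the linear-scan CT projection operator, b the measured data, and the weights w_i, computed from edges detected in the intermediate reconstruction x~, relax the TV penalty across detected edges so that they are not smoothed away; the splitting introduced by ADMM handles the non-smooth TV term.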
Pesteie, Mehran; Abolmaesumi, Purang; Ashab, Hussam Al-Deen; Lessoway, Victoria A; Massey, Simon; Gunka, Vit; Rohling, Robert N
2015-06-01
Injection therapy is a commonly used solution for back pain management. This procedure typically involves percutaneous insertion of a needle between or around the vertebrae, to deliver anesthetics near nerve bundles. Most frequently, spinal injections are performed either blindly using palpation or under the guidance of fluoroscopy or computed tomography. Recently, due to the drawbacks of the ionizing radiation of such imaging modalities, there has been a growing interest in using ultrasound imaging as an alternative. However, the complex spinal anatomy with different wave-like structures, affected by speckle noise, makes the accurate identification of the appropriate injection plane difficult. The aim of this study was to propose an automated system that can identify the optimal plane for epidural steroid injections and facet joint injections. A multi-scale and multi-directional feature extraction system to provide automated identification of the appropriate plane is proposed. Local Hadamard coefficients are obtained using the sequency-ordered Hadamard transform at multiple scales. Directional features are extracted from local coefficients which correspond to different regions in the ultrasound images. An artificial neural network is trained based on the local directional Hadamard features for classification. The proposed method yields distinctive features for classification which successfully classified 1032 images out of 1090 for epidural steroid injection and 990 images out of 1052 for facet joint injection. In order to validate the proposed method, a leave-one-out cross-validation was performed. The average classification accuracy for leave-one-out validation was 94 % for epidural and 90 % for facet joint targets. Also, the feature extraction time for the proposed method was 20 ms for a native 2D ultrasound image. A real-time machine learning system based on the local directional Hadamard features extracted by the sequency-ordered Hadamard transform for detecting the laminae and facet joints in ultrasound images has been proposed. The system has the potential to assist the anesthesiologists in quickly finding the target plane for epidural steroid injections and facet joint injections.
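As a rough illustration of the feature-extraction front end described above (the patch size and normalization are assumptions of this sketch, not taken from the paper), a sequency-ordered Hadamard transform of a local patch can be built as follows:

```python
import numpy as np
from scipy.linalg import hadamard

def sequency_hadamard(n):
    """Hadamard matrix with rows reordered by sequency (number of sign changes)."""
    H = hadamard(n)                                   # natural (Hadamard) order, n a power of two
    sign_changes = np.sum(np.abs(np.diff(np.sign(H), axis=1)) > 0, axis=1)
    return H[np.argsort(sign_changes)]

def local_hadamard_coeffs(patch):
    """2-D separable sequency-ordered Hadamard coefficients of a square patch."""
    n = patch.shape[0]
    H = sequency_hadamard(n)
    return (H @ patch @ H.T) / n                      # normalization is a choice of this sketch

rng = np.random.default_rng(0)
coeffs = local_hadamard_coeffs(rng.random((16, 16)))  # toy 16x16 ultrasound patch
```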
Super-resolution for asymmetric resolution of FIB-SEM 3D imaging using AI with deep learning.
Hagita, Katsumi; Higuchi, Takeshi; Jinnai, Hiroshi
2018-04-12
Scanning electron microscopy equipped with a focused ion beam (FIB-SEM) is a promising three-dimensional (3D) imaging technique for nano- and meso-scale morphologies. In FIB-SEM, the specimen surface is stripped by an ion beam and imaged by an SEM installed orthogonally to the FIB. The lateral resolution is governed by the SEM, while the depth resolution, i.e., the FIB milling direction, is determined by the thickness of the stripped thin layer. In most cases, the lateral resolution is superior to the depth resolution; hence, asymmetric resolution is generated in the 3D image. Here, we propose a new approach based on an image-processing or deep-learning-based method for super-resolution of 3D images with such asymmetric resolution, so as to restore the depth resolution to achieve symmetric resolution. The deep-learning-based method learns from high-resolution sub-images obtained via SEM and recovers low-resolution sub-images parallel to the FIB milling direction. The 3D morphologies of polymeric nano-composites are used as test images, which are subjected to the deep-learning-based method as well as conventional methods. We find that the former yields superior restoration, particularly as the asymmetric resolution is increased. Our super-resolution approach for images having asymmetric resolution enables observation time reduction.
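A minimal PyTorch sketch of the idea (illustrative architecture and scale factor, not the authors' network): the model upsamples a slice only along the FIB milling axis and refines it with a few convolutions. In practice it would be trained on SEM sub-images synthetically down-sampled along that axis.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnisoSR(nn.Module):
    """SRCNN-style refinement after anisotropic upsampling along the milling (height) axis."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, x):                      # x: (N, 1, H_low, W)
        x = F.interpolate(x, scale_factor=(4, 1), mode="bilinear", align_corners=False)
        return self.net(x)

model = AnisoSR()
lowres = torch.rand(2, 1, 16, 64)              # toy slices with 4x coarser depth sampling
highres = model(lowres)                        # -> (2, 1, 64, 64)
```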
Chen, Y-J; Chen, S-K; Huang, H-W; Yao, C-C; Chang, H-F
2004-09-01
To compare cephalometric landmark identification on softcopy and hardcopy of direct digital cephalography acquired by a storage-phosphor (SP) imaging system. Ten digital cephalograms and their conventional counterparts, hardcopies on transparent blue film, were obtained with an SP imaging system and a dye sublimation printer. Twelve orthodontic residents identified 19 cephalometric landmarks on the monitor-displayed SP digital images with a computer-aided method and on their hardcopies with the conventional method. The x- and y-coordinates of each landmark, indicating the horizontal and vertical positions, were analysed to assess the reliability of landmark identification and to evaluate the concordance of the landmark locations on softcopy and hardcopy SP digital cephalometric radiographs. For each of the 19 landmarks, the location differences, as well as their horizontal and vertical components, were statistically significant between the SP digital cephalometric radiographs and their hardcopies. Smaller interobserver errors on SP digital images than on their hardcopies were noted for all landmarks, except point Go in the vertical direction. Scatter plots demonstrate the characteristic distribution of the interobserver error in both the horizontal and vertical directions. Generally, the dispersion of interobserver error on SP digital cephalometric radiographs is less than that on their hardcopies with the conventional method. SP digital cephalometric radiography can yield a better or comparable level of performance in landmark identification compared with its hardcopy, except for point Go in the vertical direction.
Tracey, Matthew P; Pham, Dianne; Koide, Kazunori
2015-07-21
Neither palladium nor platinum is an endogenous biological metal. Imaging palladium in biological samples, however, is becoming increasingly important because bioorthogonal organometallic chemistry involves palladium catalysis. In addition to being an imaging target, palladium has been used to fluorometrically image biomolecules. In these cases, palladium species are used as imaging-enabling reagents. This review article discusses these fluorometric methods. Platinum-based drugs are widely used as anticancer drugs, yet their mechanism of action remains largely unknown. We discuss fluorometric methods for imaging or quantifying platinum in cells or biofluids. These methods include the use of chemosensors to directly detect platinum, fluorescently tagging platinum-based drugs, and utilizing post-labeling to elucidate distribution and mode of action.
Ground Subsidence Along Shanghai Metro Line 6 by PS-InSAR Method
NASA Astrophysics Data System (ADS)
Wu, J.; Liao, M.; Li, N.
2018-04-01
With the rapid development of the urban economy, convenient, safe, and efficient urban rail transit has become the preferred way for people to travel. To ensure the safety and sustainable development of urban rail transit, PS-InSAR technology, with millimeter-level deformation measurement accuracy, has been widely applied to monitor the deformation of urban rail transit. In this paper, 32 COSMO-SkyMed descending images and 23 Envisat ASAR images covering Shanghai Metro Line 6, acquired from 2008 to 2010, are used to estimate the average deformation rate along the line-of-sight (LOS) direction by the PS-InSAR method. The experimental results show that there are two main subsidence areas along Shanghai Metro Line 6, located between Wuzhou Avenue Station and Wulian Road Station and between West Gaoke Road Station and Gaoqing Road Station. Between Wuzhou Avenue Station and Wulian Road Station, the maximum vertical displacement rate is -9.92 mm/year from the COSMO-SkyMed images and -8.53 mm/year from the Envisat ASAR images. From West Gaoke Road Station to Gaoqing Road Station, the maximum vertical displacement rate is -15.53 mm/year from the COSMO-SkyMed images and -17.9 mm/year from the Envisat ASAR images. The results show that the ground deformation rates obtained by the two SAR platforms, with different wavelengths, sensors and incidence angles, are consistent with each other and also agree with spirit leveling.
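For context, when horizontal motion is negligible the LOS rates measured by PS-InSAR are commonly projected to the vertical using the local incidence angle (a standard approximation; the abstract does not state how its vertical rates were derived):

```latex
v_{\mathrm{vert}} \approx \frac{v_{\mathrm{LOS}}}{\cos\theta_{\mathrm{inc}}}
```

This projection is also what allows rates from sensors with different incidence angles, such as COSMO-SkyMed and Envisat ASAR, to be compared on a common vertical axis.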
Detection of latent prints by Raman imaging
Lewis, Linda Anne [Andersonville, TN; Connatser, Raynella Magdalene [Knoxville, TN; Lewis, Sr., Samuel Arthur
2011-01-11
The present invention relates to a method for detecting a print on a surface, the method comprising: (a) contacting the print with a Raman surface-enhancing agent to produce a Raman-enhanced print; and (b) detecting the Raman-enhanced print using a Raman spectroscopic method. The invention is particularly directed to the imaging of latent fingerprints.
Optimization-based methods for road image registration
DOT National Transportation Integrated Search
2008-02-01
A number of transportation agencies are now relying on direct imaging for monitoring and cataloguing the state of their roadway systems. Images provide objective information to characterize the pavement as well as roadside hardware. The tasks of proc...
Decision net, directed graph, and neural net processing of imaging spectrometer data
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki; Barnard, Etienne
1989-01-01
A decision-net solution involving a novel hierarchical classifier and a set of multiple directed graphs, as well as a neural-net solution, are respectively presented for large-class problem and mixture problem treatments of imaging spectrometer data. The clustering method for hierarchical classifier design, when used with multiple directed graphs, yields an efficient decision net. New directed-graph rules for reducing local maxima as well as the number of perturbations required, and the new starting-node rules for extending the reachability and reducing the search time of the graphs, are noted to yield superior results, as indicated by an illustrative 500-class imaging spectrometer problem.
Salas, Desirée; Le Gall, Antoine; Fiche, Jean-Bernard; Valeri, Alessandro; Ke, Yonggang; Bron, Patrick; Bellot, Gaetan
2017-01-01
Superresolution light microscopy allows the imaging of labeled supramolecular assemblies at a resolution surpassing the classical diffraction limit. A serious limitation of the superresolution approach is sample heterogeneity and the stochastic character of the labeling procedure. To increase the reproducibility and the resolution of the superresolution results, we apply multivariate statistical analysis methods and 3D reconstruction approaches originally developed for cryogenic electron microscopy of single particles. These methods allow for the reference-free 3D reconstruction of nanomolecular structures from two-dimensional superresolution projection images. Since these 2D projection images all show the structure in high-resolution directions of the optical microscope, the resulting 3D reconstructions have the best possible isotropic resolution in all directions.
Data Images and Other Graphical Displays for Directional Data
NASA Technical Reports Server (NTRS)
Morphet, Bill; Symanzik, Juergen
2005-01-01
Vectors, axes, and periodic phenomena have direction. Directional variation can be expressed as points on a unit circle and is the subject of circular statistics, a relatively new application of statistics. An overview of existing methods for the display of directional data is given. The data image for linear variables is reviewed and then extended to directional variables by displaying direction using a color scale composed of a sequence of four or more color gradients, with continuity between sequences, ordered intuitively in a color wheel such that the color of the 0° angle is the same as the color of the 360° angle. Cross-over, which arose in automating the summarization of historical wind data, and the color discontinuity resulting from the use of a single color gradient in computational fluid dynamics visualization, are eliminated. The new method provides simultaneous resolution of detail on a small scale and overall structure on a large scale. Example circular data images are given of a global view of average wind direction during El Niño periods, computed rocket motor internal combustion flow, a global view of the direction of the horizontal component of Earth's main magnetic field on 9/15/2004, and Space Shuttle solid rocket motor nozzle vectoring.
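A small matplotlib sketch of the central idea, with a toy directional field (the paper composes its own four-gradient color wheel; here a built-in cyclic colormap stands in for it):

```python
import numpy as np
import matplotlib.pyplot as plt

# toy directional field: the angle of each pixel position, in degrees
yy, xx = np.mgrid[-1:1:256j, -1:1:256j]
direction = (np.degrees(np.arctan2(yy, xx)) + 360.0) % 360.0

# 'hsv' (or 'twilight') is cyclic, so 0 deg and 360 deg map to the same color
plt.imshow(direction, cmap="hsv", vmin=0.0, vmax=360.0)
plt.colorbar(label="direction (deg)")
plt.title("Data image of a directional variable with a cyclic color scale")
plt.show()
```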
NASA Astrophysics Data System (ADS)
Guan, Jinge; Ren, Wei; Cheng, Yaoyu
2018-04-01
We demonstrate an efficient polarization-difference imaging system for turbid conditions using the Stokes vector of light. The interaction of scattered light with the polarizer is analyzed with the Stokes-Mueller formalism. An interpolation method is proposed to computationally replace the mechanical rotation of the analyzer's polarization axis, and its performance is verified experimentally at different turbidity levels. We show that, compared with direct imaging, the Stokes-vector-based imaging method can effectively reduce the effect of light scattering and enhance image contrast.
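The virtual rotation of the analyzer follows from the first three Stokes parameters. A minimal sketch of that relation and of a polarization-difference image (our reading of the standard Stokes formalism, not the paper's code):

```python
import numpy as np

def analyzer_intensity(S0, S1, S2, theta):
    """Intensity behind an ideal linear polarizer at angle theta (radians),
    computed from the Stokes parameters instead of rotating the analyzer."""
    return 0.5 * (S0 + S1 * np.cos(2 * theta) + S2 * np.sin(2 * theta))

def polarization_difference(S0, S1, S2, theta=0.0):
    """Difference between two orthogonal analyzer orientations."""
    return (analyzer_intensity(S0, S1, S2, theta)
            - analyzer_intensity(S0, S1, S2, theta + np.pi / 2))

# toy Stokes-parameter images
rng = np.random.default_rng(0)
S0, S1, S2 = rng.random((3, 64, 64))
pd = polarization_difference(S0, S1, S2, theta=np.deg2rad(30))
```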
Image fusion based on Bandelet and sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi
2018-04-01
The Bandelet transform can capture geometric regular directions and geometric flow, and sparse representation can represent signals with as few atoms as possible on an over-complete dictionary; both can be used for image fusion. Therefore, a new fusion method based on Bandelets and sparse representation is proposed to fuse the Bandelet coefficients of multi-source images and obtain high-quality fusion results. Tests were performed on remote sensing images and simulated multi-focus images; experimental results show that the new method performs better than the tested methods in terms of objective evaluation indexes and subjective visual effects.
Directional filtering for block recovery using wavelet features
NASA Astrophysics Data System (ADS)
Hyun, Seung H.; Eom, Il K.; Kim, Yoo S.
2005-07-01
When images compressed with block-based compression techniques are transmitted over a noisy channel, unexpected block losses occur. Conventional methods that do not consider edge directions can cause blurring artifacts in the recovered blocks. In this paper, we present a post-processing block recovery scheme using Haar wavelet features. The adaptive selection of neighboring blocks is performed based on the energy of wavelet subbands (EWS) and the difference between DC values (DDC). The lost blocks are recovered by linear interpolation in the spatial domain using the selected blocks. Using only EWS performs well for horizontal and vertical edges, but not as well for diagonal edges; conversely, using only DDC performs well for diagonal edges, with the exception of line- or roof-type edge profiles. Therefore, we combine EWS and DDC for better results. The proposed directional recovery method is effective for strong edges because it adaptively exploits the neighboring blocks according to the edge and directional information in the image. The proposed method outperforms previous methods that use only fixed blocks.
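A sketch of the EWS cue described above, using a single-level Haar decomposition of a block (the pywt package and the decision rule are assumptions of this sketch):

```python
import numpy as np
import pywt

def subband_energies(block):
    """Energies of the detail subbands of a single-level 2-D Haar decomposition."""
    _, (cH, cV, cD) = pywt.dwt2(block.astype(float), "haar")
    return {"horizontal_detail": np.sum(cH ** 2),
            "vertical_detail": np.sum(cV ** 2),
            "diagonal_detail": np.sum(cD ** 2)}

rng = np.random.default_rng(0)
block = rng.random((16, 16))                     # toy neighboring block
energies = subband_energies(block)
dominant = max(energies, key=energies.get)       # cue for selecting neighboring blocks
```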
Pseudo-color coding method for high-dynamic single-polarization SAR images
NASA Astrophysics Data System (ADS)
Feng, Zicheng; Liu, Xiaolin; Pei, Bingzhi
2018-04-01
A raw synthetic aperture radar (SAR) image usually has a 16-bit or higher bit depth, which cannot be directly visualized on 8-bit displays. In this study, we propose a pseudo-color coding method for high-dynamic-range single-polarization SAR images. The method considers the characteristics of both SAR images and human perception. In the HSI (hue, saturation and intensity) color space, the method carries out high-dynamic-range tone mapping and pseudo-color processing simultaneously in order to avoid loss of detail and to improve object identifiability. It is a highly efficient global algorithm.
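An illustrative sketch only (the paper's exact HSI mapping is not reproduced here): a high-dynamic-range SAR amplitude image is log-compressed and percentile-clipped for tone mapping, and the resulting tone simultaneously drives hue and intensity before conversion to an 8-bit RGB display image.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def pseudo_color_sar(amplitude):
    logA = np.log1p(amplitude.astype(float))
    lo, hi = np.percentile(logA, (1, 99))                  # robust tone mapping
    tone = np.clip((logA - lo) / (hi - lo + 1e-12), 0, 1)
    hsv = np.stack([0.7 * (1.0 - tone),                    # hue: blue (dark) to red (bright)
                    np.full_like(tone, 0.8),               # fixed saturation (a choice of this sketch)
                    tone], axis=-1)                        # intensity / value
    return (hsv_to_rgb(hsv) * 255).astype(np.uint8)

rng = np.random.default_rng(0)
sar16 = rng.gamma(1.0, 2000.0, (128, 128)).astype(np.uint16)   # toy high-dynamic SAR amplitudes
rgb8 = pseudo_color_sar(sar16)
```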
Electromagnetic Imaging Methods for Nondestructive Evaluation Applications
Deng, Yiming; Liu, Xin
2011-01-01
Electromagnetic nondestructive tests are important and widely used within the field of nondestructive evaluation (NDE). The recent advances in sensing technology, hardware and software development dedicated to imaging and image processing, and material sciences have greatly expanded the application fields, made system designs more sophisticated, and made the potential of electromagnetic NDE imaging seemingly unlimited. This review provides a comprehensive summary of research works on electromagnetic imaging methods for NDE applications, followed by a summary and discussion of future directions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie; Chen, Ting; Tan, Sirui
Imaging fault zones and fractures is crucial for geothermal operators, providing important information for reservoir evaluation and management strategies. However, no existing techniques directly and clearly image fault zones, particularly steeply dipping faults and fracture zones. In this project, we developed novel acoustic- and elastic-waveform inversion methods for high-resolution velocity model building. In addition, we developed acoustic and elastic reverse-time migration methods for high-resolution subsurface imaging of complex subsurface structures and steeply dipping fault/fracture zones. We first evaluated and verified the improved capabilities of our newly developed seismic inversion and migration imaging methods using synthetic seismic data. Our numerical tests verified that the new methods directly image subsurface fracture/fault zones using surface seismic reflection data. We then applied our seismic inversion and migration imaging methods to a 3D surface seismic field dataset acquired at the Soda Lake geothermal field using Vibroseis sources. The migration images of the Soda Lake geothermal field obtained with our algorithms revealed several possible fault/fracture zones. AltaRock Energy, Inc. is working with Cyrq Energy, Inc. to refine the geologic interpretation of the Soda Lake geothermal field. Trenton Cladouhos, Senior Vice President R&D of AltaRock, was very interested in our imaging results from the 3D surface seismic data at the Soda Lake geothermal field. He planned to perform detailed interpretation of our images in collaboration with James Faulds and Holly McLachlan of the University of Nevada, Reno. Our high-resolution seismic inversion and migration imaging results can help determine optimal locations for drilling geothermal production wells and reduce the risk of geothermal exploration.
Lu, Alex Xijie; Moses, Alan M
2016-01-01
Despite the importance of characterizing genes that exhibit subcellular localization changes between conditions in proteome-wide imaging experiments, many recent studies still rely upon manual evaluation to assess the results of high-throughput imaging experiments. We describe and demonstrate an unsupervised k-nearest neighbours method for the detection of localization changes. Compared to previous classification-based supervised change detection methods, our method is much simpler and faster, and operates directly on the feature space to overcome limitations in needing to manually curate training sets that may not generalize well between screens. In addition, the output of our method is flexible in its utility, generating both a quantitatively ranked list of localization changes that permit user-defined cut-offs, and a vector for each gene describing feature-wise direction and magnitude of localization changes. We demonstrate that our method is effective at the detection of localization changes using the Δrpd3 perturbation in Saccharomyces cerevisiae, where we capture 71.4% of previously known changes within the top 10% of ranked genes, and find at least four new localization changes within the top 1% of ranked genes. The results of our analysis indicate that simple unsupervised methods may be able to identify localization changes in images without laborious manual image labelling steps.
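One plausible minimal variant of such an unsupervised k-nearest-neighbour change score (an assumption on our part, not necessarily the paper's exact scoring): a gene whose perturbed-condition feature vector lies unusually far from the control feature space, relative to typical control neighbourhoods, is ranked as a localization change, and the feature-wise difference vector gives the direction and magnitude of the change.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_change_scores(control, perturbed, k=10):
    """control, perturbed: (n_genes, n_features) arrays of image-derived features."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(control)
    d_ctrl, _ = nn.kneighbors(control)              # first neighbor is the point itself
    d_pert, _ = nn.kneighbors(perturbed)
    scores = d_pert[:, :k].mean(axis=1) / (d_ctrl[:, 1:].mean(axis=1) + 1e-12)
    directions = perturbed - control                # feature-wise direction of change
    return scores, directions

rng = np.random.default_rng(0)
ctrl = rng.normal(size=(500, 32))
pert = ctrl.copy()
pert[:25] += 3.0                                    # 25 genes with a simulated change
scores, directions = knn_change_scores(ctrl, pert)
top = np.argsort(scores)[::-1][:50]                 # ranked candidate localization changes
```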
Van Steenkiste, Gwendolyn; Jeurissen, Ben; Veraart, Jelle; den Dekker, Arnold J; Parizel, Paul M; Poot, Dirk H J; Sijbers, Jan
2016-01-01
Diffusion MRI is hampered by long acquisition times, low spatial resolution, and a low signal-to-noise ratio. Recently, methods have been proposed to improve the trade-off between spatial resolution, signal-to-noise ratio, and acquisition time of diffusion-weighted images via super-resolution reconstruction (SRR) techniques. However, during the reconstruction, these SRR methods neglect the q-space relation between the different diffusion-weighted images. An SRR method that includes a diffusion model and directly reconstructs high resolution diffusion parameters from a set of low resolution diffusion-weighted images was proposed. Our method allows an arbitrary combination of diffusion gradient directions and slice orientations for the low resolution diffusion-weighted images, optimally samples the q- and k-space, and performs motion correction with b-matrix rotation. Experiments with synthetic data and in vivo human brain data show an increase of spatial resolution of the diffusion parameters, while preserving a high signal-to-noise ratio and low scan time. Moreover, the proposed SRR method outperforms the previous methods in terms of the root-mean-square error. The proposed SRR method substantially increases the spatial resolution of MRI that can be obtained in a clinically feasible scan time. © 2015 Wiley Periodicals, Inc.
Direct magnetic field estimation based on echo planar raw data.
Testud, Frederik; Splitthoff, Daniel Nicolas; Speck, Oliver; Hennig, Jürgen; Zaitsev, Maxim
2010-07-01
Gradient recalled echo echo planar imaging is widely used in functional magnetic resonance imaging. The fast data acquisition is, however, very sensitive to field inhomogeneities which manifest themselves as artifacts in the images. Typically used correction methods have the common deficit that the data for the correction are acquired only once at the beginning of the experiment, assuming the field inhomogeneity distribution B(0) does not change over the course of the experiment. In this paper, methods to extract the magnetic field distribution from the acquired k-space data or from the reconstructed phase image of a gradient echo planar sequence are compared and extended. A common derivation for the presented approaches provides a solid theoretical basis, enables a fair comparison and demonstrates the equivalence of the k-space and the image phase based approaches. The image phase analysis is extended here to calculate the local gradient in the readout direction and improvements are introduced to the echo shift analysis, referred to here as "k-space filtering analysis." The described methods are compared to experimentally acquired B(0) maps in phantoms and in vivo. The k-space filtering analysis presented in this work demonstrated to be the most sensitive method to detect field inhomogeneities.
Hole filling with oriented sticks in ultrasound volume reconstruction
Vaughan, Thomas; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor
2015-01-01
Volumes reconstructed from tracked planar ultrasound images often contain regions where no information was recorded. Existing interpolation methods introduce image artifacts and tend to be slow in filling large missing regions. Our goal was to develop a computationally efficient method that fills missing regions while adequately preserving image features. We use directional sticks to interpolate between pairs of known opposing voxels in nearby images. We tested our method on 30 volumetric ultrasound scans acquired from human subjects, and compared its performance to that of other published hole-filling methods. Reconstruction accuracy, fidelity, and time were improved compared with other methods.
Imaging systems and methods for obtaining and using biometric information
McMakin, Douglas L [Richland, WA; Kennedy, Mike O [Richland, WA
2010-11-30
Disclosed herein are exemplary embodiments of imaging systems and methods of using such systems. In one exemplary embodiment, one or more direct images of the body of a clothed subject are received, and a motion signature is determined from the one or more images. In this embodiment, the one or more images show movement of the body of the subject over time, and the motion signature is associated with the movement of the subject's body. In certain implementations, the subject can be identified based at least in part on the motion signature. Imaging systems for performing any of the disclosed methods are also disclosed herein. Furthermore, the disclosed imaging, rendering, and analysis methods can be implemented, at least in part, as one or more computer-readable media comprising computer-executable instructions for causing a computer to perform the respective methods.
Image denoising by a direct variational minimization
NASA Astrophysics Data System (ADS)
Janev, Marko; Atanacković, Teodor; Pilipović, Stevan; Obradović, Radovan
2011-12-01
In this article we introduce a novel method for image denoising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal-surface regularizer that uses a fractional gradient. The minimization is performed on every predefined patch of the image independently. By doing so, we avoid the use of an artificial-time PDE model, with its inherent problems of finding the optimal stopping time as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities of a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator while keeping the degradation minimal, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. The proposed method thus becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with several PDE-based methods, where we obtain significantly better denoising results, especially in oscillatory regions.
An automated and universal method for measuring mean grain size from a digital image of sediment
Buscombe, Daniel D.; Rubin, David M.; Warrick, Jonathan A.
2010-01-01
Existing methods for estimating mean grain size of sediment in an image require either complicated sequences of image processing (filtering, edge detection, segmentation, etc.) or statistical procedures involving calibration. We present a new approach which uses Fourier methods to calculate grain size directly from the image without requiring calibration. Based on analysis of over 450 images, we found the accuracy to be within approximately 16% across the full range from silt to pebbles. Accuracy is comparable to, or better than, existing digital methods. The new method, in conjunction with recent advances in technology for taking appropriate images of sediment in a range of natural environments, promises to revolutionize the logistics and speed at which grain-size data may be obtained from the field.
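A sketch in the same calibration-free, Fourier-based spirit (not the authors' exact algorithm): the radially averaged autocorrelation of the image is computed through the FFT, and the lag at which it decays to one half is taken as a grain-size proxy in pixels.

```python
import numpy as np
from scipy.ndimage import zoom

def correlation_length(img):
    """Lag (pixels) at which the radially averaged autocorrelation drops below 0.5."""
    img = img.astype(float)
    img -= img.mean()
    power = np.abs(np.fft.fft2(img)) ** 2
    acf = np.fft.fftshift(np.fft.ifft2(power).real)
    acf /= acf.max()
    cy, cx = np.array(acf.shape) // 2
    yy, xx = np.indices(acf.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), acf.ravel()) / np.maximum(counts, 1)
    below = np.nonzero(radial < 0.5)[0]
    return float(below[0]) if below.size else float(len(radial))

rng = np.random.default_rng(0)
grains = zoom(rng.random((32, 32)), 8, order=0)      # toy "sediment" with ~8-pixel grains
print(correlation_length(grains))                    # roughly proportional to grain size
```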
A novel method for detecting light source for digital images forensic
NASA Astrophysics Data System (ADS)
Roy, A. K.; Mitra, S. K.; Agrawal, R.
2011-06-01
Image manipulation has been practiced for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with some limitations, and very few methods try to capitalize on the way the image was taken by the camera. We propose a new method based on light and its shade, since light and shade are the fundamental input resources that may carry all the information of the image. The proposed method measures the direction of the light source and uses this light-based technique to identify intentional partial manipulation in a digital image. The method is tested on known manipulated images and correctly identifies the light sources. The light source of an image is measured in terms of angle. The experimental results show the robustness of the methodology.
Image correlation based method for the analysis of collagen fibers patterns
NASA Astrophysics Data System (ADS)
Rosa, Ramon G. T.; Pratavieira, Sebastião.; Kurachi, Cristina
2015-06-01
Collagen fibers are among the most important structural proteins in skin, responsible for its strength and flexibility. It is known that their properties, such as fiber density, ordering and mean diameter, can be affected by several skin conditions, which makes these properties good parameters for the diagnosis and evaluation of skin aging, cancer, healing, and other conditions. There is, however, a need for methods capable of quantitatively analyzing the organization patterns of these fibers. To address this need, we developed a method based on the autocorrelation function of the images that allows the construction of vector-field plots of the fiber directions and does not require any kind of curve fitting or optimization. The analyzed images were obtained through second harmonic generation imaging microscopy. This paper presents a concise review of the autocorrelation function and some of its applications to image processing, details the developed method, and presents the results obtained through the analysis of histopathological slides of Landrace porcine skin. The method has high accuracy in determining the fiber directions and presents high performance. We plan to perform further studies tracking different skin conditions over time.
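The paper's analysis is autocorrelation-based; as a compact stand-in, the sketch below uses a structure-tensor orientation estimate, a related standard technique, to measure the dominant fiber direction of a single patch (all parameters and the test pattern are illustrative, not taken from the paper).

```python
import numpy as np

def fiber_direction(patch):
    """Dominant fiber orientation of a patch, in degrees, via the structure tensor."""
    gy, gx = np.gradient(patch.astype(float))
    Jxx, Jyy, Jxy = np.mean(gx * gx), np.mean(gy * gy), np.mean(gx * gy)
    theta_grad = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)   # dominant gradient orientation
    return (np.degrees(theta_grad) + 90.0) % 180.0      # fibers run perpendicular to it

# toy "fibers": a sinusoidal pattern whose ridges are oriented at about 120 degrees
yy, xx = np.mgrid[0:64, 0:64]
fibers = np.cos(0.8 * (xx * np.cos(np.radians(30)) + yy * np.sin(np.radians(30))))
print(fiber_direction(fibers))                          # approximately 120 degrees
```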
Convolutional neural network features based change detection in satellite images
NASA Astrophysics Data System (ADS)
Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong
2016-07-01
With the popular use of high-resolution remote sensing (HRRS) satellite images, a huge research effort has been placed on the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture high- and mid-level representations, recent developments in machine learning (deep learning) avoid this problem by learning hierarchical representations in an unsupervised manner directly from data, without human intervention. In this letter, we approach the change detection problem from a feature learning perspective and propose a novel HR satellite image change detection method based on deep convolutional neural network (CNN) features. The main idea is to produce a change detection map directly from two images using a pretrained CNN, which avoids the limited performance of hand-crafted features. First, CNN features are extracted from different convolutional layers. Then, after a normalization step, they are concatenated into a single higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results confirm the interest of the proposed method.
NASA Astrophysics Data System (ADS)
Thapa, Damber; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2015-12-01
In this paper, we propose a speckle noise reduction method for spectral-domain optical coherence tomography (SD-OCT) images called multi-frame weighted nuclear norm minimization (MWNNM). This method is a direct extension of weighted nuclear norm minimization (WNNM) to the multi-frame setting, since an adequately denoised image could not be achieved with single-frame denoising methods. The MWNNM method exploits multiple B-scans collected from a small area of an SD-OCT volumetric image, and then denoises and averages them together to obtain a high signal-to-noise ratio B-scan. The results show that the image quality metrics obtained by denoising and averaging only five nearby B-scans with the MWNNM method are considerably better than those of the average image obtained by registering and averaging 40 azimuthally repeated B-scans.
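For context, the single-frame WNNM step that MWNNM extends is usually written as follows (our notation, not taken from the paper):

```latex
\hat{X} = \arg\min_{X}\ \|Y - X\|_F^2 + \|X\|_{w,*},
\qquad \|X\|_{w,*} = \sum_i w_i\,\sigma_i(X),
```

where Y stacks a group of similar patches, the sigma_i(X) are the singular values of X, and the weights w_i are typically chosen inversely proportional to the singular values so that dominant (signal) components are shrunk less; in the multi-frame extension the patch groups are gathered across the neighboring B-scans before the low-rank estimation.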
NASA Astrophysics Data System (ADS)
Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen
2018-04-01
Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. The L1-norm data-fidelity term and the total variation (TV) regularizer have been combined into a popular regularization model. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts that degrade image quality. To enhance image quality, the detail-preserving total generalized variation (TGV) is introduced in place of TV to eliminate these undesirable artifacts. The resulting nonconvex optimization problem is effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters is proposed to further improve deblurring performance. The proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining image edge details. Comprehensive experiments demonstrate the superior performance of the proposed method over several state-of-the-art image deblurring methods.
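A common form of such an L1-TGV deblurring model, written here in our own notation (the paper's variant additionally selects the regularization parameters spatially and automatically), is:

```latex
\hat{u} = \arg\min_{u}\ \|Ku - f\|_1 + \mathrm{TGV}_{\alpha}^{2}(u),
\qquad \mathrm{TGV}_{\alpha}^{2}(u) = \min_{v}\ \alpha_1\|\nabla u - v\|_1 + \alpha_0\|\mathcal{E}(v)\|_1,
```

where K is the blur operator, f the observed image (the L1 data term matches the impulse-noise statistics), and E(v) the symmetrized gradient of the auxiliary field v. The second-order term lets the solution be piecewise affine rather than piecewise constant, which is what suppresses the staircase artifacts, and the splitting introduced by ADMM handles the non-smooth terms.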
Gong, Kuang; Cheng-Liao, Jinxiu; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2018-04-01
Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neuroscience. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information into image reconstruction. Previously, kernel learning has been successfully embedded into static and dynamic PET image reconstruction using either PET temporal or MRI information. Here, we combine both PET temporal and MRI information adaptively to improve the quality of direct Patlak reconstruction. We examined different approaches to combine the PET and MRI information in kernel learning to address the issue of potential mismatches between MRI and PET signals. Computer simulations and hybrid real-patient data acquired on a simultaneous PET/MR scanner were used to evaluate the proposed methods. Results show that the method that combines PET temporal information and MRI spatial information adaptively based on the structure similarity index has the best performance in terms of noise reduction and resolution improvement.
Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung
2018-01-01
In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined; in other words, a pre-classification of the type of input banknote is required. To address this problem, we propose a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of its denomination and input direction to the system, using reflection images of banknotes captured by a visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the United States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods. PMID:29415447
Study on Hybrid Image Search Technology Based on Texts and Contents
NASA Astrophysics Data System (ADS)
Wang, H. T.; Ma, F. L.; Yan, C.; Pan, H.
2018-05-01
Text-based and content-based image search are first studied separately. For text-based search, an image feature extraction method is put forward that integrates statistical and topic features of words, in view of the limitation of extracting keywords from statistical word features alone. For content-based search, a search-by-image method based on multi-feature fusion is put forward, in view of the imprecision of content-based image search that relies on a single feature. Because the text-based and content-based methods differ and are difficult to fuse directly, a layered search method is then proposed that relies primarily on text-based image search and uses content-based image search as a supplement. The feasibility and effectiveness of the hybrid search algorithm are verified experimentally.
Rectification of curved document images based on single view three-dimensional reconstruction.
Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang
2016-10-01
Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role for document digitalization systems using a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text line, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.
Simulation of digital mammography images
NASA Astrophysics Data System (ADS)
Workman, Adam
2005-04-01
A number of different technologies are available for digital mammography. However, it is not clear how differences in the physical performance of the different imaging technologies affect clinical performance. Randomised controlled trials provide a means of gaining information on clinical performance; however, they do not provide a direct comparison of the different digital imaging technologies. This work describes a method of simulating the performance of different digital mammography systems. The method involves modifying the imaging performance parameters of images from a small-field digital mammography (SFDM) system, a high-resolution digital imaging system used for spot imaging. Under normal operating conditions this system produces images with a higher signal-to-noise ratio (SNR) over a wide spatial frequency range than current full-field digital mammography (FFDM) systems. The SFDM images can be 'degraded' by computer processing to simulate the characteristics of a FFDM system. Initial work characterised the physical performance (MTF, NPS) of the SFDM detector and developed a model and method for simulating the signal transfer and noise properties of a FFDM system. It was found that the SNR properties of the simulated FFDM images were very similar to those measured from an actual FFDM system, verifying the methodology used. The application of this technique to clinical images from the small-field system will allow the clinical performance of different FFDM systems to be simulated and directly compared using the same clinical image datasets.
NASA Astrophysics Data System (ADS)
Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing
2018-03-01
In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within a pitch to get N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum modulus difference (SMD) blur metric is applied to these slice images to obtain the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
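A small sketch of how an SMD-style focus metric can score the reconstructed slice images, with the depth taken at the maximizing slice; function and variable names are illustrative.

```python
# Minimal sketch of a sum-modulus-difference (SMD) style focus metric applied
# to a stack of computationally reconstructed slice images; the slice that
# maximizes the metric gives the estimated depth.
import numpy as np

def smd(slice_img):
    """Sum of absolute intensity differences along x and y."""
    dy = np.abs(np.diff(slice_img, axis=0)).sum()
    dx = np.abs(np.diff(slice_img, axis=1)).sum()
    return dx + dy

def estimate_depth(slice_stack, depths):
    """slice_stack: list of 2-D arrays reconstructed at the given depths."""
    scores = [smd(s) for s in slice_stack]
    return depths[int(np.argmax(scores))]
```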
Multiframe super resolution reconstruction method based on light field angular images
NASA Astrophysics Data System (ADS)
Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao
2017-12-01
The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.
Improved automatic adjustment of density and contrast in FCR system using neural network
NASA Astrophysics Data System (ADS)
Takeo, Hideya; Nakajima, Nobuyoshi; Ishida, Masamitsu; Kato, Hisatoyo
1994-05-01
The FCR system automatically adjusts image density and contrast by analyzing the histogram of the image data in the radiation field. The advanced image recognition methods proposed in this paper, which use neural network technology, can improve this automatic adjustment performance. Two methods are presented, both based on a 3-layer neural network with back propagation. In one method the image data are input directly to the input layer; in the other, the histogram data are input. The former is effective for imaging menus such as the shoulder joint, in which the position of the region of interest within the histogram changes with differences in positioning, and the latter is effective for imaging menus such as chest-pediatrics, in which the histogram shape changes with differences in positioning. We experimentally confirm the validity of these methods with respect to automatic adjustment performance, as compared with conventional histogram analysis methods.
Ren, Yuanqiang; Qiu, Lei; Yuan, Shenfang; Bao, Qiao
2017-05-11
Structural health monitoring (SHM) of aircraft composite structure is helpful to increase reliability and reduce maintenance costs. Due to the great effectiveness in distinguishing particular guided wave modes and identifying the propagation direction, the spatial-wavenumber filter technique has emerged as an interesting SHM topic. In this paper, a new scanning spatial-wavenumber filter (SSWF) based imaging method for multiple damages is proposed to conduct on-line monitoring of aircraft composite structures. Firstly, an on-line multi-damage SSWF is established, including the fundamental principle of SSWF for multiple damages based on a linear piezoelectric (PZT) sensor array, and a corresponding wavenumber-time imaging mechanism by using the multi-damage scattering signal. Secondly, through combining the on-line multi-damage SSWF and a PZT 2D cross-shaped array, an image-mapping method is proposed to conduct wavenumber synthesis and convert the two wavenumber-time images obtained by the PZT 2D cross-shaped array to an angle-distance image, from which the multiple damages can be directly recognized and located. In the experimental validation, both simulated multi-damage and real multi-damage introduced by repeated impacts are performed on a composite plate structure. The maximum localization error is less than 2 cm, which shows good performance of the multi-damage imaging method. Compared with the existing spatial-wavenumber filter based damage evaluation methods, the proposed method requires no more than the multi-damage scattering signal and can be performed without depending on any wavenumber modeling or measuring. Besides, this method locates multiple damages by imaging instead of the geometric method, which helps to improve the signal-to-noise ratio. Thus, it can be easily applied to on-line multi-damage monitoring of aircraft composite structures.
Edge Detection Method Based on Neural Networks for COMS MI Images
NASA Astrophysics Data System (ADS)
Lee, Jin-Ho; Park, Eun-Bin; Woo, Sun-Hee
2016-12-01
Communication, Ocean And Meteorological Satellite (COMS) Meteorological Imager (MI) images are processed for radiometric and geometric correction from raw image data. When intermediate image data are matched and compared with reference landmark images in the geometrical correction process, various techniques for edge detection can be applied. It is essential to have a precise and correct edged image in this process, since its matching with the reference is directly related to the accuracy of the ground station output images. An edge detection method based on neural networks is applied for the ground processing of MI images for obtaining sharp edges in the correct positions. The simulation results are analyzed and characterized by comparing them with the results of conventional methods, such as Sobel and Canny filters.
NASA Astrophysics Data System (ADS)
Sugiura, M.; Seika, M.
1994-02-01
In this study, a new technique to measure the density of slip-bands automatically is developed: a TV image of the slip-bands observed through a microscope is processed directly by an image-processing system using a personal computer, and an accurate value of the slip-band density is measured quickly. When measuring local stresses in large machine parts with copper-plating foil, direct observation of the slip-bands through an optical microscope is difficult. In this study, to provide a technique close to direct microscopic observation of the slip-bands in the foil attached to a large-sized specimen, the replica method, using a plastic film of acetyl cellulose, is applied to replicate the slip-bands in the attached foil.
Indirect and direct methods for measuring a dynamic throat diameter in a solid rocket motor
NASA Astrophysics Data System (ADS)
Colbaugh, Lauren
In a solid rocket motor, nozzle throat erosion is dictated by propellant composition, throat material properties, and operating conditions. Throat erosion has a significant effect on motor performance, so it must be accurately characterized to produce a good motor design. In order to correlate throat erosion rate to other parameters, it is first necessary to know what the throat diameter is throughout a motor burn. Thus, an indirect method and a direct method for determining throat diameter in a solid rocket motor are investigated in this thesis. The indirect method looks at the use of pressure and thrust data to solve for throat diameter as a function of time. The indirect method's proof of concept was shown by the good agreement between the ballistics model and the test data from a static motor firing. The ballistics model was within 10% of all measured and calculated performance parameters (e.g. average pressure, specific impulse, maximum thrust, etc.) for tests with throat erosion and within 6% of all measured and calculated performance parameters for tests without throat erosion. The direct method involves the use of x-rays to directly observe a simulated nozzle throat erode in a dynamic environment; this is achieved with a dynamic calibration standard. An image processing algorithm is developed for extracting the diameter dimensions from the x-ray intensity digital images. Static and dynamic tests were conducted. The measured diameter was compared to the known diameter in the calibration standard. All dynamic test results were within +6% / -7% of the actual diameter. Part of the edge detection method consists of dividing the entire x-ray image by an average pixel value, calculated from a set of pixels in the x-ray image. It was found that the accuracy of the edge detection method depends upon the selection of the average pixel value area and subsequently the average pixel value. An average pixel value sensitivity analysis is presented. Both the indirect method and the direct method prove to be viable approaches to determining throat diameter during solid rocket motor operation.
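As a rough illustration of the indirect method's idea (not the thesis' actual ballistics model), the throat area can be backed out of measured thrust and chamber pressure through the thrust coefficient, here assumed known from nozzle geometry and gas properties.

```python
# Illustrative sketch only: recover throat diameter from measured thrust and
# chamber pressure via the thrust coefficient C_F (assumed known).
import numpy as np

def throat_diameter(thrust, chamber_pressure, c_f):
    """thrust [N], chamber_pressure [Pa], c_f dimensionless -> diameter [m]."""
    a_throat = thrust / (c_f * chamber_pressure)   # F = C_F * P_c * A_t
    return 2.0 * np.sqrt(a_throat / np.pi)

# Example: 5 kN thrust, 6 MPa chamber pressure, C_F ~ 1.5  ->  ~0.027 m
print(throat_diameter(5.0e3, 6.0e6, 1.5))
```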
Wavelet-based image compression using shuffling and bit plane correlation
NASA Astrophysics Data System (ADS)
Kim, Seungjong; Jeong, Jechang
2000-12-01
In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, and (2) choosing the arithmetic coding context according to the direction of maximum correlation. The experimental results are comparable, or superior for some images with low correlation, to existing coders.
Long-term Live-cell Imaging to Assess Cell Fate in Response to Paclitaxel.
Bolgioni, Amanda F; Vittoria, Marc A; Ganem, Neil J
2018-05-14
Live-cell imaging is a powerful technique that can be used to directly visualize biological phenomena in single cells over extended periods of time. Over the past decade, new and innovative technologies have greatly enhanced the practicality of live-cell imaging. Cells can now be kept in focus and continuously imaged over several days while maintained under 37 °C and 5% CO2 cell culture conditions. Moreover, multiple fields of view representing different experimental conditions can be acquired simultaneously, thus providing high-throughput experimental data. Live-cell imaging provides a significant advantage over fixed-cell imaging by allowing for the direct visualization and temporal quantitation of dynamic cellular events. Live-cell imaging can also identify variation in the behavior of single cells that would otherwise have been missed using population-based assays. Here, we describe live-cell imaging protocols to assess cell fate decisions following treatment with the anti-mitotic drug paclitaxel. We demonstrate methods to visualize whether mitotically arrested cells die directly from mitosis or slip back into interphase. We also describe how the fluorescent ubiquitination-based cell cycle indicator (FUCCI) system can be used to assess the fraction of interphase cells born from mitotic slippage that are capable of re-entering the cell cycle. Finally, we describe a live-cell imaging method to identify nuclear envelope rupture events.
Almukhtar, Anas; Khambay, Balvinder; Ayoub, Ashraf; Ju, Xiangyang; Al-Hiyali, Ali; Macdonald, James; Jabar, Norhayati; Goto, Tazuko
2015-01-01
The limitations of current methods for quantifying the surgical movements of facial bones inspired this study. The aim of this study was to assess the accuracy and reproducibility of direct landmarking of 3D DICOM images (Digital Imaging and Communications in Medicine) to quantify the changes in the jaw bones following surgery. The study was carried out on a plastic skull to simulate the surgical movements of the jaw bones. Cone beam CT scans were taken at 3mm, 6mm, and 9mm maxillary advancement, together with a 2mm, 4mm, 6mm and 8mm "down graft", which in total generated 12 different positions of the maxilla for the analysis. The movements of the maxilla were calculated using two methods: the standard approach, in which distances between surface landmarks on the jaw bones were measured, and the novel approach, in which measurements were taken directly from the internal structures of the corresponding 3D DICOM slices. A one-sample t-test showed that there was no statistically significant difference between the two methods of measurement for the y and z directions; however, the x direction showed a significant difference. The mean differences between the two absolute measurements were 0.34±0.20mm, 0.22±0.16mm, and 0.18±0.13mm in the y, z and x directions respectively. In conclusion, direct landmarking of 3D DICOM image slices is a reliable, reproducible and informative method for assessment of 3D skeletal changes. The method has a clear clinical application, which includes the analysis of jaw movements in orthognathic surgery for the correction of facial deformities.
An efficient classification method based on principal component and sparse representation.
Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang
2016-01-01
As an important application of optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method has better robustness against position and illumination changes of palmprint images, and achieves a higher palmprint recognition rate.
DAVIS: A direct algorithm for velocity-map imaging system
NASA Astrophysics Data System (ADS)
Harrison, G. R.; Vaughan, J. C.; Hidle, B.; Laurent, G. M.
2018-05-01
In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image with a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then directly retrieved from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.
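A brief sketch of the Legendre-expansion step, assuming the measured image has already been resampled into rings of constant radius; only the per-ring least-squares fit is shown, not the full fit of the 3D model's 2D projection.

```python
# Minimal sketch of the Legendre-expansion step: each radial ring of the
# measured 2-D image is fitted with Legendre polynomials P_n(cos(theta)).
import numpy as np
from numpy.polynomial import legendre

def legendre_coefficients(ring_intensity, theta, n_max=4):
    """Least-squares fit I(theta) = sum_n c_n P_n(cos(theta)) for one ring."""
    design = legendre.legvander(np.cos(theta), n_max)   # (n_angles, n_max + 1)
    coeffs, *_ = np.linalg.lstsq(design, ring_intensity, rcond=None)
    return coeffs

# Example: a ring sampled at 360 angles with a cos^2 anisotropy
theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
ring = 1.0 + np.cos(theta) ** 2
print(legendre_coefficients(ring, theta))              # c0 ~ 4/3, c2 ~ 2/3
```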
Interferometric Imaging Directly with Closure Phases and Closure Amplitudes
NASA Astrophysics Data System (ADS)
Chael, Andrew A.; Johnson, Michael D.; Bouman, Katherine L.; Blackburn, Lindy L.; Akiyama, Kazunori; Narayan, Ramesh
2018-04-01
Interferometric imaging now achieves angular resolutions as fine as ∼10 μas, probing scales that are inaccessible to single telescopes. Traditional synthesis imaging methods require calibrated visibilities; however, interferometric calibration is challenging, especially at high frequencies. Nevertheless, most studies present only a single image of their data after a process of “self-calibration,” an iterative procedure where the initial image and calibration assumptions can significantly influence the final image. We present a method for efficient interferometric imaging directly using only closure amplitudes and closure phases, which are immune to station-based calibration errors. Closure-only imaging provides results that are as noncommittal as possible and allows for reconstructing an image independently from separate amplitude and phase self-calibration. While closure-only imaging eliminates some image information (e.g., the total image flux density and the image centroid), this information can be recovered through a small number of additional constraints. We demonstrate that closure-only imaging can produce high-fidelity results, even for sparse arrays such as the Event Horizon Telescope, and that the resulting images are independent of the level of systematic amplitude error. We apply closure imaging to VLBA and ALMA data and show that it is capable of matching or exceeding the performance of traditional self-calibration and CLEAN for these data sets.
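A minimal sketch of the closure quantities themselves, computed from a Hermitian matrix of complex visibilities; the station indices are illustrative.

```python
# Closure quantities from a visibility matrix V, assumed Hermitian
# (V[j, i] == conj(V[i, j])); station-based gain errors cancel in both.
import numpy as np

def closure_phase(V, i, j, k):
    """arg(V_ij * V_jk * V_ki) over a station triangle."""
    return np.angle(V[i, j] * V[j, k] * V[k, i])

def closure_amplitude(V, i, j, k, l):
    """|V_ij||V_kl| / (|V_ik||V_jl|) over a station quadrangle."""
    return (np.abs(V[i, j]) * np.abs(V[k, l])) / (np.abs(V[i, k]) * np.abs(V[j, l]))
```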
Using Perturbation Theory to Compute the Morphological Similarity of Diffusion Tensors
Bansal, Ravi; Staib, Lawrence H.; Xu, Dongrong; Laine, Andrew F.; Royal, Jason; Peterson, Bradley S.
2008-01-01
Computing the morphological similarity of Diffusion Tensors (DTs) at neighboring voxels within a DT image, or at corresponding locations across different DT images, is a fundamental and ubiquitous operation in the post-processing of DT images. The morphological similarity of DTs typically has been computed using either the Principal Directions (PDs) of DTs (i.e., the direction along which water molecules diffuse preferentially) or their tensor elements. Although comparing PDs allows the similarity of one morphological feature of DTs to be visualized directly in eigenspace, this method takes into account only a single eigenvector, and it is therefore sensitive to the presence of noise in the images that can introduce error into the estimation of that vector. Although comparing tensor elements, rather than PDs, is comparatively more robust to the effects of noise, the individual elements of a given tensor do not directly reflect the diffusion properties of water molecules. We propose a measure for computing the morphological similarity of DTs that uses both their eigenvalues and eigenvectors, and that also accounts for the noise levels present in DT images. Our measure presupposes that DTs in a homogeneous region within or across DT images are random perturbations of one another in the presence of noise. The similarity values that are computed using our method are smooth (in the sense that small changes in eigenvalues and eigenvectors cause only small changes in similarity), and they are symmetric when differences in eigenvalues and eigenvectors are also symmetric. In addition, our method does not presuppose that the corresponding eigenvectors across two DTs have been identified accurately, an assumption that is problematic in the presence of noise. Because we compute the similarity between DTs using their eigenspace components, our similarity measure relates directly to both the magnitude and the direction of the diffusion of water molecules. The favorable performance characteristics of our measure offer the prospect of substantially improving additional post-processing operations that are commonly performed on DTI datasets, such as image segmentation, fiber tracking, noise filtering, and spatial normalization. PMID:18450533
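A simplified illustration, not the paper's perturbation-theoretic measure: one plausible way to combine eigenvalue closeness and eigenvector alignment into a single similarity score.

```python
# Hedged sketch: compare two diffusion tensors using both eigenvalues and
# eigenvectors, weighting eigenvector alignment by eigenvalue magnitude so
# that noisy minor axes contribute less.
import numpy as np

def tensor_similarity(D1, D2, sigma=1.0):
    """D1, D2: symmetric 3x3 diffusion tensors -> similarity score."""
    w1, V1 = np.linalg.eigh(D1)                    # ascending eigenvalues
    w2, V2 = np.linalg.eigh(D2)
    # eigenvalue term: Gaussian of the eigenvalue differences
    ev_term = np.exp(-np.sum((w1 - w2) ** 2) / (2 * sigma ** 2))
    # eigenvector term: |cos| of angles between corresponding eigenvectors
    weights = (w1 + w2) / (w1 + w2).sum()
    align = np.abs(np.sum(V1 * V2, axis=0))        # |v1_i . v2_i| per axis
    return ev_term * float(np.dot(weights, align))
```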
Accelerated High-Dimensional MR Imaging with Sparse Sampling Using Low-Rank Tensors
He, Jingfei; Liu, Qiegen; Christodoulou, Anthony G.; Ma, Chao; Lam, Fan
2017-01-01
High-dimensional MR imaging often requires long data acquisition time, thereby limiting its practical applications. This paper presents a low-rank tensor based method for accelerated high-dimensional MR imaging using sparse sampling. This method represents high-dimensional images as low-rank tensors (or partially separable functions) and uses this mathematical structure for sparse sampling of the data space and for image reconstruction from highly undersampled data. More specifically, the proposed method acquires two datasets with complementary sampling patterns, one for subspace estimation and the other for image reconstruction; image reconstruction from highly undersampled data is accomplished by fitting the measured data with a sparsity constraint on the core tensor and a group sparsity constraint on the spatial coefficients jointly using the alternating direction method of multipliers. The usefulness of the proposed method is demonstrated in MRI applications; it may also have applications beyond MRI. PMID:27093543
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction
Lu, Hongyang; Wei, Jingbo; Wang, Yuhao; Deng, Xiaohua
2016-01-01
Reconstructing images from their noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to largely avoid oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of the images. The proposed model is solved by the variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. PMID:27110235
Forward ultrasonic model validation using wavefield imaging methods
NASA Astrophysics Data System (ADS)
Blackshire, James L.
2018-04-01
The validation of forward ultrasonic wave propagation models in a complex titanium polycrystalline material system is accomplished using wavefield imaging methods. An innovative measurement approach is described that permits the visualization and quantitative evaluation of bulk elastic wave propagation and scattering behaviors in the titanium material for a typical focused immersion ultrasound measurement process. Results are provided for the determination and direct comparison of the ultrasonic beam's focal properties, mode-converted shear wave position and angle, and scattering and reflection from millimeter-sized microtexture regions (MTRs) within the titanium material. The approach and results are important with respect to understanding the root-cause backscatter signal responses generated in aerospace engine materials, where model-assisted methods are being used to understand the probabilistic nature of the backscatter signal content. Wavefield imaging methods are shown to be an effective means for corroborating and validating important forward model predictions in a direct manner using time- and spatially-resolved displacement field amplitude measurements.
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information, to preserve the edges and to enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift invariant wavelet transforms is proposed in this paper. PCA in spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift invariant properties brings out more directional and phase details of the image. The significance of maximum fusion rule applied in dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain of two different modalities (MRI and CT) are collected from whole brain atlas data distributed by Harvard University. Both MRI and CT images are fused using cascaded PCA and shift invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. Also, it reduces the redundant details, artifacts, distortions.
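A sketch of the spatial-domain PCA fusion stage alone, under the assumption of co-registered, same-size inputs; the shift-invariant wavelet stage and maximum-fusion rule of the full cascade are omitted.

```python
# Minimal sketch of PCA fusion: weight each source image by the components of
# the principal eigenvector of the 2x2 covariance of the two vectorized images.
import numpy as np

def pca_fuse(img_a, img_b):
    """img_a, img_b: co-registered grayscale arrays (e.g. MRI and CT slices)."""
    data = np.stack([img_a.ravel(), img_b.ravel()])        # shape (2, N)
    eigvals, eigvecs = np.linalg.eigh(np.cov(data))
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = principal / principal.sum()
    return w[0] * img_a + w[1] * img_b
```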
Video-based noncooperative iris image segmentation.
Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig
2011-02-01
In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
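A minimal sketch of the direct least-squares ellipse fit applied to a thresholded pupil candidate using OpenCV; the threshold value, the quality filter, the coarse-to-fine search, and the noise-removal window are not shown.

```python
# Illustrative sketch: fit an ellipse to the largest dark blob as a stand-in
# for the deformed pupil boundary. Threshold is a placeholder value.
import cv2

def fit_pupil_ellipse(gray, dark_thresh=60):
    """gray: 8-bit iris image -> ((cx, cy), (major, minor), angle)."""
    _, mask = cv2.threshold(gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    pupil = max(contours, key=cv2.contourArea)     # largest dark region
    return cv2.fitEllipse(pupil)                   # direct least-squares fit
```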
Hamilton, S J
2017-05-22
Electrical impedance tomography (EIT) is an emerging imaging modality that uses harmless electrical measurements taken on electrodes at a body's surface to recover information about the internal electrical conductivity and/or permittivity. The image reconstruction task of EIT is a highly nonlinear inverse problem that is sensitive to noise and modeling errors, making the image reconstruction task challenging. D-bar methods solve the nonlinear problem directly, bypassing the need for detailed and time-intensive forward models, to provide absolute (static) as well as time-difference EIT images. Coupling the D-bar methodology with the inclusion of high confidence a priori data results in a noise-robust regularized image reconstruction method. In this work, the a priori D-bar method for complex admittivities is demonstrated to be effective on experimental tank data for absolute imaging for the first time. Additionally, the method is adjusted for, and tested on, time-difference imaging scenarios. The ability of the method to be used for conductivity, permittivity, absolute as well as time-difference imaging provides the user with great flexibility without a high computational cost.
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has become popular over more than a decade in the Earth Sciences, because these methods make it possible to generate random fields reproducing the highly complex spatial features given in a conceptual model, the training image, whereas classical geostatistics techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The grid is filled sequentially by visiting its nodes in random order; the patterns, whose number of nodes is fixed, become narrower during the simulation process as the simulation grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics distinguishable at different scales in the training image, and thus lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) results in a lower-resolution image, and by iterating this process a pyramid of images depicting fewer details at each level is built, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest-resolution level, and subsequently each level up to the finest resolution, conditioned on the next coarser level. This scheme helps reproduce the spatial structures of the training image at every scale and thus generate more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well suited to MPS simulation techniques.
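A short sketch of the multi-resolution pyramid construction by Gaussian convolution and decimation; the direct-sampling simulation that proceeds from the coarsest level up to the finest is omitted, and the kernel width is an assumed value.

```python
# Minimal sketch of the training-image pyramid: repeated Gaussian smoothing
# followed by decimation by a factor of 2 in each direction.
import numpy as np
from scipy.ndimage import gaussian_filter

def training_image_pyramid(ti, n_levels=3, sigma=1.0):
    """Return [finest, ..., coarsest] versions of the training image."""
    pyramid = [ti.astype(float)]
    for _ in range(n_levels):
        smoothed = gaussian_filter(pyramid[-1], sigma)
        pyramid.append(smoothed[::2, ::2])         # decimate by 2 in x and y
    return pyramid
```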
A Hybrid Vehicle Detection Method Based on Viola-Jones and HOG + SVM from UAV Images
Xu, Yongzheng; Yu, Guizhen; Wang, Yunpeng; Wu, Xinkai; Ma, Yalong
2016-01-01
A new hybrid vehicle detection scheme which integrates the Viola-Jones (V-J) and linear SVM classifier with HOG feature (HOG + SVM) methods is proposed for vehicle detection from low-altitude unmanned aerial vehicle (UAV) images. As both V-J and HOG + SVM are sensitive to on-road vehicles’ in-plane rotation, the proposed scheme first adopts a roadway orientation adjustment method, which rotates each UAV image to align the roads with the horizontal direction so the original V-J or HOG + SVM method can be directly applied to achieve fast detection and high accuracy. To address the issue of descending detection speed for V-J and HOG + SVM, the proposed scheme further develops an adaptive switching strategy which sophistically integrates V-J and HOG + SVM methods based on their different descending trends of detection speed to improve detection efficiency. A comprehensive evaluation shows that the switching strategy, combined with the road orientation adjustment method, can significantly improve the efficiency and effectiveness of the vehicle detection from UAV images. The results also show that the proposed vehicle detection method is competitive compared with other existing vehicle detection methods. Furthermore, since the proposed vehicle detection method can be performed on videos captured from moving UAV platforms without the need of image registration or additional road database, it has great potentials of field applications. Future research will be focusing on expanding the current method for detecting other transportation modes such as buses, trucks, motors, bicycles, and pedestrians. PMID:27548179
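A hedged sketch of the roadway-orientation adjustment idea: estimate the dominant line direction with a Hough transform and rotate the frame so the road lies horizontally; the thresholds are placeholders and the rotation sign may need flipping depending on image conventions.

```python
# Illustrative sketch only, not the paper's implementation.
import cv2
import numpy as np

def align_road_horizontal(bgr_frame, canny_lo=50, canny_hi=150, votes=120):
    """Rotate the UAV frame so the dominant (road) direction is horizontal."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, votes)
    if lines is None:
        return bgr_frame                           # no dominant line found
    thetas = lines[:, 0, 1]                        # normal angles of the lines
    # inclination of the dominant line relative to the horizontal axis;
    # sign convention may need flipping depending on image orientation
    inclination = np.degrees(np.median(thetas)) - 90.0
    h, w = gray.shape
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), inclination, 1.0)
    return cv2.warpAffine(bgr_frame, rot, (w, h))
```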
NASA Astrophysics Data System (ADS)
Van doninck, Jasper; Tuomisto, Hanna
2017-06-01
Biodiversity mapping in extensive tropical forest areas poses a major challenge for the interpretation of Landsat images, because floristically clearly distinct forest types may show little difference in reflectance. In such cases, the effects of the bidirectional reflection distribution function (BRDF) can be sufficiently strong to cause erroneous image interpretation and classification. Since the opening of the Landsat archive in 2008, several BRDF normalization methods for Landsat have been developed. The simplest of these consist of an empirical view angle normalization, whereas more complex approaches apply the semi-empirical Ross-Li BRDF model and the MODIS MCD43-series of products to normalize directional Landsat reflectance to standard view and solar angles. Here we quantify the effect of surface anisotropy on Landsat TM/ETM+ images over old-growth Amazonian forests, and evaluate five angular normalization approaches. Even for the narrow swath of the Landsat sensors, we observed directional effects in all spectral bands. Those normalization methods that are based on removing the surface reflectance gradient as observed in each image were adequate to normalize TM/ETM+ imagery to nadir viewing, but were less suitable for multitemporal analysis when the solar vector varied strongly among images. Approaches based on the MODIS BRDF model parameters successfully reduced directional effects in the visible bands, but removed only half of the systematic errors in the infrared bands. The best results were obtained when the semi-empirical BRDF model was calibrated using pairs of Landsat observation. This method produces a single set of BRDF parameters, which can then be used to operationally normalize Landsat TM/ETM+ imagery over Amazonian forests to nadir viewing and a standard solar configuration.
Dim target detection method based on salient graph fusion
NASA Astrophysics Data System (ADS)
Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun
2018-02-01
Dim target detection is a key problem in the digital image processing field. With the development of multi-spectral imaging sensors, improving dim target detection performance by fusing information from different spectral images has become a growing trend. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, multi-directional Gabor filters and multi-scale contrast filters are combined to construct a salient graph from each digital image. Then, a maximum-salience fusion strategy is designed to fuse the salient graphs from different spectral images. A top-hat filter is used to detect dim targets from the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on cluttered background images.
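A sketch of the salient-graph construction, maximum-salience fusion, and top-hat detection using OpenCV; the filter parameters are illustrative, not the paper's settings.

```python
# Minimal sketch: multi-direction Gabor responses build a salient graph per
# spectral image, graphs are fused by pixel-wise maximum, and a morphological
# top-hat highlights dim point-like targets. Kernel sizes are placeholders.
import cv2
import numpy as np

def salient_graph(img, n_dirs=4, ksize=15, sigma=3.0, lambd=8.0):
    img = img.astype(np.float32)
    responses = []
    for k in range(n_dirs):
        theta = k * np.pi / n_dirs
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0)
        responses.append(np.abs(cv2.filter2D(img, cv2.CV_32F, kern)))
    return np.max(responses, axis=0)

def detect_dim_targets(spectral_images, tophat_size=9):
    fused = np.max([salient_graph(im) for im in spectral_images], axis=0)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (tophat_size, tophat_size))
    return cv2.morphologyEx(fused, cv2.MORPH_TOPHAT, se)
```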
Extracting paleo-climate signals from sediment laminae: A new, automated image processing method
NASA Astrophysics Data System (ADS)
Gan, S. Q.; Scholz, C. A.
2010-12-01
Lake sediment laminations commonly represent depositional seasonality in lacustrine environments. Their occurrence and quantitative attributes contain various signals of the depositional environment, limnological conditions and climate. However, the identification and measurement of laminae remains a mainly manual process that is not only tedious and labor intensive, but also subjective and error prone. We present a batch method to identify laminae and extract lamina properties automatically and accurately from sediment core images. Our algorithm is focused on image enhancement that improves the signal-to-noise ratio and maximizes and normalizes image contrast. The unique feature of these algorithms is that they are all direction-sensitive, i.e., the algorithms treat images in the horizontal and vertical directions differently and independently. The core process of lamina identification is to use a one-dimensional (1-D) lamina identification algorithm to produce a lamina map, and to use image blob analyses and lamina connectivity analyses to aggregate and smash two-dimensional (2-D) lamina data for the best representation of fine-scale stratigraphy in the sediment profile. The primary output datasets of the system are definitions of laminae and primary color values for each pixel and each lamina in the depth direction; other derived datasets can be retrieved at the user's discretion. Sediment core images from Lake Hitchcock, USA and Lake Bosumtwi, Ghana, were used for algorithm development and testing. As a demonstration of the utility of the software, we processed sediment core images from the top 50 meters of drill core (representing the past ~100 ky) from Lake Bosumtwi, Ghana.
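A minimal sketch of the 1-D lamina identification idea: peak detection along the depth direction of a horizontally averaged intensity profile; the thresholds are illustrative, and the enhancement and 2-D connectivity steps are omitted.

```python
# Illustrative sketch of the 1-D lamina identification step.
import numpy as np
from scipy.signal import find_peaks

def identify_laminae(core_image, min_separation_px=3, prominence=5.0):
    """core_image: 2-D grayscale array (rows = depth). Returns peak rows."""
    depth_profile = core_image.mean(axis=1)        # collapse horizontally
    peaks, _ = find_peaks(depth_profile,
                          distance=min_separation_px,
                          prominence=prominence)
    return peaks                                   # candidate lamina positions

# Lamina spacing can then be taken as the difference between successive peaks:
# spacing_px = np.diff(identify_laminae(img))
```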
Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot.
Shen, Yajing; Wan, Wenfeng; Zhang, Lijun; Yong, Li; Lu, Haojian; Ding, Weili
2015-12-15
Image sensing at a small scale is essentially important in many fields, including microsample observation, defect inspection, material characterization and so on. However, nowadays, multi-directional micro object imaging is still very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach for multi-directional image sensing in microscopes by developing a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically based on the proposed forward-backward alignment strategy. After that, multi-directional images of the sample can be obtained by rotating the robot within one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical microscopy and scanning electron microscopy, and panoramic images of the samples are processed as well. The proposed method paves a new way for the microscopy image sensing, and we believe it could have significant impact in many fields, especially for sample detection, manipulation and characterization at a small scale.
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between a remote sensing satellite sensor and the imaged objects is one of the most common causes of remote sensing image degradation. It seriously weakens image interpretation and information extraction. In practice, the point spread function (PSF) should be estimated first for image restoration, and identifying the motion blur direction and length accurately is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be employed to obtain these parameters using the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters are difficult to calculate and the error of the result is relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the bright central region in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. In order to reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters. The experimental results verify the effectiveness and robustness of our algorithm.
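A brief sketch of the blur-direction step: Radon transform of the log-magnitude spectrum and selection of the angle with the largest projection variance; the GrabCut-based segmentation and the blur-length estimation are not shown.

```python
# Illustrative sketch: the spectral stripes of a motion-blurred image line up
# along the blur direction, which shows up as the angle whose Radon
# projection has the largest variance.
import numpy as np
from skimage.transform import radon

def blur_direction(image, angle_step=1.0):
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
    spectrum = (spectrum - spectrum.mean()) / (spectrum.std() + 1e-8)
    angles = np.arange(0.0, 180.0, angle_step)
    sinogram = radon(spectrum, theta=angles, circle=False)
    variances = sinogram.var(axis=0)               # one variance per angle
    return angles[int(np.argmax(variances))]       # estimated direction (deg)
```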
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective method to improve image quality, and it has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the problem of nearest neighbor selection, and a super-resolution image reconstruction algorithm based on a multi-class dictionary is analyzed within the sparse-representation framework. This method avoids the redundancy of training a single overcomplete dictionary, makes the sub-dictionaries more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole image reconstruction. In addition, non-local self-similarity regularization is introduced to handle the ill-posed nature of the problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
A full-parallax 3D display with restricted viewing zone tracking viewer's eye
NASA Astrophysics Data System (ADS)
Beppu, Naoto; Yendo, Tomohiro
2015-03-01
Three-dimensional (3D) vision has become a widely known and familiar imaging technique. 3D displays have been put into practical use in various fields, such as entertainment and medicine, and the development of 3D display technology will play an important role in a wide range of fields. There are various methods of displaying 3D images; we focus on one that reproduces light rays. Because this approach displays a different viewpoint image depending on the viewpoint, it requires many viewpoint images to achieve full parallax. We propose to reduce wasted rays by limiting the rays emitted by the projector to the region around the viewer only, using a spinning mirror, and thereby to increase the effectiveness of the display device in achieving a full-parallax 3D display. The proposed method uses tracking of the viewer's eye, a high-speed projector, a rotating mirror that tracks the viewer (a spinning mirror), a concave mirror array whose elements have different vertical slopes and are arranged circumferentially (a concave mirror array), and a cylindrical mirror. In simulations of the proposed method, we confirmed the scanning range and the locus of movement of the rays in the horizontal direction. In addition, we confirmed the switching of viewpoints and the convergence performance of rays in the vertical direction. Therefore, we confirmed that it is possible to realize full parallax.
NASA Astrophysics Data System (ADS)
Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves
2017-10-01
Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as CLEAN. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e. spatially band-limited. Finally, we show through simulations the efficiency of our method for the reconstruction of both images of point sources and complex extended sources. MATLAB code is available on GitHub.
Method and apparatus for imaging a sample on a device
Trulson, Mark; Stern, David; Fiekowsky, Peter; Rava, Richard; Walton, Ian; Fodor, Stephen P. A.
1996-01-01
The present invention provides methods and systems for detecting a labeled marker on a sample located on a support. The imaging system comprises a body for immobilizing the support, an excitation radiation source and excitation optics to generate and direct the excitation radiation at the sample. In response, labeled material on the sample emits radiation which has a wavelength that is different from the excitation wavelength, which radiation is collected by collection optics and imaged onto a detector which generates an image of the sample.
Method of obtaining intensified image from developed photographic films and plates
NASA Technical Reports Server (NTRS)
Askins, B. S. (Inventor)
1978-01-01
A method is explained of obtaining intensified images from silver images on developed photographic films and plates. The steps involve converting silver of the developed film or plate to a radioactive compound by treatment with an aqueous alkaline solution of an organo-S35 compound; placing the treated film or plate in direct contact with a receiver film which is then exposed by radiation from the activated film; and developing and fixing the resulting intensified image on the receiver film.
NASA Astrophysics Data System (ADS)
Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro
2015-03-01
This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation for colons and rectums to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearances of colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearances of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.
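A sketch of the color-histogram retrieval component alone, with an assumed bin count and chi-square distance; the HLAC texture features and the mucosa-enhancement preprocessing are not reproduced here.

```python
# Illustrative sketch: joint color histograms as retrieval features, ranked
# by chi-square distance to the query. Bin count and metric are assumptions.
import numpy as np

def color_histogram(bgr_image, bins=8):
    """Joint B/G/R histogram, L1-normalized to a feature vector."""
    hist, _ = np.histogramdd(bgr_image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-12)

def retrieve_similar(query_hist, database_hists, top_k=5):
    """Rank database images by chi-square distance to the query histogram."""
    d = [np.sum((query_hist - h) ** 2 / (query_hist + h + 1e-12))
         for h in database_hists]
    return np.argsort(d)[:top_k]
```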
Method of Poisson's ratio imaging within a material part
NASA Technical Reports Server (NTRS)
Roth, Don J. (Inventor)
1996-01-01
The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data are then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the image.
Effective Clipart Image Vectorization through Direct Optimization of Bezigons.
Yang, Ming; Chao, Hongyang; Zhang, Chi; Guo, Jun; Yuan, Lu; Sun, Jian
2016-02-01
Bezigons, i.e., closed paths composed of Bézier curves, have been widely employed to describe shapes in image vectorization results. However, most existing vectorization techniques infer the bezigons by simply approximating an intermediate vector representation (such as polygons). Consequently, the resultant bezigons are sometimes imperfect due to accumulated errors, fitting ambiguities, and a lack of curve priors, especially for low-resolution images. In this paper, we describe a novel method for vectorizing clipart images. In contrast to previous methods, we directly optimize the bezigons rather than using other intermediate representations; therefore, the resultant bezigons are not only of higher fidelity compared with the original raster image but also more reasonable, as if they had been traced by a proficient expert. To enable such optimization, we have overcome several challenges and have devised a differentiable data energy as well as several curve-based prior terms. To improve the efficiency of the optimization, we also take advantage of the local control property of bezigons and adopt an overlapped piecewise optimization strategy. The experimental results show that our method outperforms both the current state-of-the-art method and commonly used commercial software in terms of bezigon quality.
Hoffman, Matthew P; Taylor, Erik N; Aninwene, George E; Sadayappan, Sakthivel; Gilbert, Richard J
2018-02-01
Contraction of muscular tissue requires the synchronized shortening of myofibers arrayed in complex geometrical patterns. Imaging such myofiber patterns with diffusion-weighted MRI reveals architectural ensembles that underlie force generation at the organ scale. Restricted proton diffusion is a stochastic process resulting from random translational motion that may be used to probe the directionality of myofibers in whole tissue. During diffusion-weighted MRI, magnetic field gradients are applied to determine the directional dependence of proton diffusion through the analysis of a diffusional probability distribution function (PDF). The directions of principal (maximal) diffusion within the PDF are associated with similarly aligned diffusion maxima in adjacent voxels to derive multivoxel tracts. Diffusion-weighted MRI with tractography thus constitutes a multiscale method for depicting patterns of cellular organization within biological tissues. We provide in this review details of the method by which generalized Q-space imaging is used to interrogate multidimensional diffusion space, and thereby to infer the organization of muscular tissue. Q-space imaging derives the lowest possible angular separation of diffusion maxima by optimizing the conditions by which magnetic field gradients are applied to a given tissue. To illustrate, we present the methods and applications of Q-space imaging to the multiscale myoarchitecture of the human and rodent tongues. These representations emphasize the intricate and continuous nature of muscle fiber organization and suggest a method to depict structural "blueprints" for skeletal and cardiac muscle tissue. © 2016 Wiley Periodicals, Inc.
An approach for automated analysis of particle holograms
NASA Technical Reports Server (NTRS)
Stanton, A. C.; Caulfield, H. J.; Stewart, G. W.
1984-01-01
A simple method for analyzing droplet holograms is proposed that is readily adaptable to automation using modern image digitizers and analyzers for determination of the number, location, and size distributions of spherical or nearly spherical droplets. The method determines these parameters by finding the spatial location of best focus of the droplet images. With this location known, the particle size may be determined by direct measurement of image area in the focal plane. Particle velocity and trajectory may be determined by comparison of image locations at different instants in time. The method is tested by analyzing digitized images from a reconstructed in-line hologram, and the results show that the method is more accurate than a time-consuming plane-by-plane search for sharpest focus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Sun, B; Li, H
Purpose: The current standard for calculation of photon and electron dose requires conversion of Hounsfield Units (HU) to Electron Density (ED) by applying a calibration curve specifically constructed for the corresponding CT tube voltage. This practice limits the use of the CT scanner to a single tube voltage and hinders the freedom to select the optimal tube voltage for better image quality. The objective of this study is to report a prototype CT reconstruction algorithm that provides direct ED images from the raw CT data independently of the tube voltage used during acquisition. Methods: A tissue substitute phantom was scanned for stoichiometric CT calibrations at tube voltages of 70 kV, 80 kV, 100 kV, 120 kV and 140 kV. HU images and direct ED images were acquired sequentially on a thoracic anthropomorphic phantom at the same tube voltages. Electron densities converted from the HU images were compared to EDs obtained from the direct ED images. A 7-field treatment plan was made on all HU and ED images. Gamma analysis was performed to quantify the dosimetric difference between the two schemes of acquiring ED. Results: The average deviation of EDs obtained from the direct ED images was −1.5% ± 2.1% from the EDs derived from the HU images with the corresponding CT calibration curves applied. Gamma analysis of dose calculated on the direct ED images and the HU images acquired at the same tube voltage indicated negligible differences, with a lowest passing rate of 99.9%. Conclusion: Direct ED images require no CT calibration while demonstrating dosimetry equivalent to that obtained from standard HU images. The ability to acquire direct ED images simplifies current practice and makes it safer by eliminating CT calibration from commissioning and HU conversion from treatment planning. Furthermore, it unlocks a wider range of CT tube voltages for better image quality while maintaining similar dosimetric accuracy.
NMR imaging of density distributions in tablets.
Djemai, A; Sinka, I C
2006-08-17
This paper describes the use of ¹H nuclear magnetic resonance (NMR) for 3D mapping of the relative density distribution in pharmaceutical tablets manufactured under controlled conditions. The tablets are impregnated with a compatible liquid. The technique involves imaging the presence of liquid which occupies the open pore space. The method does not require special calibration as the signal is directly proportional to the porosity for the imaging conditions used. The NMR imaging method is validated using uniform-density flat-faced tablets and also by direct comparison with X-ray computed tomography. The results illustrate (1) the effect of die wall friction on density distribution by compressing round, curved faced tablets using clean and pre-lubricated tooling, (2) the evolution of density distribution during compaction for both clean and pre-lubricated die wall conditions, by imaging tablets compressed to different compaction forces, and (3) the effect of tablet shape on density distribution by compressing two complex-shape tablets in identical dies to the same average density using punches with different geometries.
Direct wavefront sensing for high-resolution in vivo imaging in scattering tissue
Wang, Kai; Sun, Wenzhi; Richie, Christopher T.; Harvey, Brandon K.; Betzig, Eric; Ji, Na
2015-01-01
Adaptive optics by direct imaging of the wavefront distortions of a laser-induced guide star has long been used in astronomy, and more recently in microscopy to compensate for aberrations in transparent specimens. Here we extend this approach to tissues that strongly scatter visible light by exploiting the reduced scattering of near-infrared guide stars. The method enables in vivo two-photon morphological and functional imaging down to 700 μm inside the mouse brain. PMID:26073070
NASA Astrophysics Data System (ADS)
Kesiman, Made Windu Antara; Valy, Dona; Burie, Jean-Christophe; Paulus, Erick; Sunarya, I. Made Gede; Hadi, Setiawan; Sok, Kim Heng; Ogier, Jean-Marc
2017-01-01
Due to their specific characteristics, palm leaf manuscripts provide new challenges for text line segmentation tasks in document analysis. We investigated the performance of six text line segmentation methods by conducting comparative experimental studies for the collection of palm leaf manuscript images. The image corpus used in this study comes from the sample images of palm leaf manuscripts of three different Southeast Asian scripts: Balinese script from Bali and Sundanese script from West Java, both from Indonesia, and Khmer script from Cambodia. For the experiments, four text line segmentation methods that work on binary images are tested: the adaptive partial projection line segmentation approach, the A* path planning approach, the shredding method, and our proposed energy function for shredding method. Two other methods that can be directly applied on grayscale images are also investigated: the adaptive local connectivity map method and the seam carving-based method. The evaluation criteria and tool provided by ICDAR2013 Handwriting Segmentation Contest were used in this experiment.
Transvaginal ultrasound (image)
Transvaginal ultrasound is a method of imaging the genital tract in females. A hand held probe is inserted directly ... vaginal cavity to scan the pelvic structures, while ultrasound pictures are viewed on a monitor. The test ...
Qian, Zhi-Ming; Wang, Shuo Hong; Cheng, Xi En; Chen, Yan Qiu
2016-06-23
Fish tracking is an important step for video based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from video image sequence is a highly challenging problem. The current tracking methods based on motion information are not accurate and robust enough to track the waving body and handle occlusion. In order to better overcome these problems, we propose a multiple fish tracking method based on fish head detection. The shape and gray scale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. Both the position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on the cost function, global optimization method can be applied to associate the target between consecutive frames. Results show that our method can accurately detect the position and direction information of fish head, and has a good tracking performance for dozens of fish. The proposed method can successfully obtain the motion trajectories for dozens of fish so as to provide more precise data to accommodate systematic analysis of fish behavior.
Blind image fusion for hyperspectral imaging with the directional total variation
NASA Astrophysics Data System (ADS)
Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane
2018-04-01
Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsic low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution that was obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To account for possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem where both a fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggest that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.
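As a rough illustration of the regularizer named in the title, the sketch below evaluates a directional-TV-style penalty for a single spectral band: the band's gradient is decomposed relative to the edge directions of the high-resolution guide image, and only the component inconsistent with the guide is penalized. This is a simplified scalar stand-in under our own assumptions; the paper's exact functional, its vector-valued treatment and the blind kernel estimation are not reproduced here.

```python
import numpy as np

def directional_tv(band, guide, eta=0.0, eps=1e-12):
    """Directional-TV-style penalty for one hyperspectral band.

    band  : low-resolution channel to be regularized (2D array)
    guide : co-registered high-spatial-resolution image (2D array)
    eta   : weight on gradient components aligned with the guide's gradient;
            eta = 0 leaves edges that coincide with guide edges unpenalized
    """
    gy, gx = np.gradient(guide.astype(float))
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps
    ux, uy = gx / norm, gy / norm            # unit gradient direction of the guide
    by, bx = np.gradient(band.astype(float))
    par = bx * ux + by * uy                  # component along the guide's gradient
    perp = -bx * uy + by * ux                # component tangential to guide edges
    return np.sum(np.sqrt(perp ** 2 + eta * par ** 2))
```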
NASA Astrophysics Data System (ADS)
Boucharin, Alexis; Oguz, Ipek; Vachet, Clement; Shi, Yundi; Sanchez, Mar; Styner, Martin
2011-03-01
The use of regional connectivity measurements derived from diffusion imaging datasets has become of considerable interest in the neuroimaging community in order to better understand cortical and subcortical white matter connectivity. Current connectivity assessment methods are based on streamline fiber tractography, usually applied in a Monte-Carlo fashion. In this work we present a novel, graph-based method that performs a fully deterministic, efficient and stable connectivity computation. The method handles crossing fibers and deals well with multiple seed regions. The computation is based on a multi-directional graph propagation method applied to sampled orientation distribution function (ODF), which can be computed directly from the original diffusion imaging data. We show early results of our method on synthetic and real datasets. The results illustrate the potential of our method towards subject-specific connectivity measurements that are performed in an efficient, stable and reproducible manner. Such individual connectivity measurements would be well suited for application in population studies of neuropathology, such as Autism, Huntington's Disease, Multiple Sclerosis or leukodystrophies. The proposed method is generic and could easily be applied to non-diffusion data as long as local directional data can be derived.
The Direct Lighting Computation in Global Illumination Methods
NASA Astrophysics Data System (ADS)
Wang, Changyaw Allen
1994-01-01
Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which for the first time makes ray tracing feasible for highly complex environments.
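For readers unfamiliar with the estimator being accelerated, the following Python sketch shows a plain Monte Carlo direct-lighting estimate for a single parallelogram area light under a Lambertian BRDF. The geometry, sample count and the `visible` occlusion test are illustrative assumptions, not the sample-generation method developed in the thesis.

```python
import numpy as np

def direct_lighting(p, n, light_corner, edge_u, edge_v, emitted, albedo,
                    visible, n_samples=64, seed=0):
    """Estimate outgoing radiance at surface point p (unit normal n) due to a
    parallelogram light (corner + two edge vectors); edge_u x edge_v is assumed
    to point toward the lit side. `visible(p, q)` is a user-supplied shadow test."""
    rng = np.random.default_rng(seed)
    normal_scaled = np.cross(edge_u, edge_v)
    area = np.linalg.norm(normal_scaled)
    light_n = normal_scaled / area
    total = 0.0
    for _ in range(n_samples):
        u, v = rng.random(), rng.random()
        q = light_corner + u * edge_u + v * edge_v       # uniform sample on the light
        w = q - p
        dist2 = np.dot(w, w)
        w = w / np.sqrt(dist2)
        cos_p = max(np.dot(n, w), 0.0)                   # cosine at the receiver
        cos_q = max(np.dot(light_n, -w), 0.0)            # cosine at the light
        if cos_p > 0.0 and cos_q > 0.0 and visible(p, q):
            # Lambertian BRDF = albedo/pi; pdf of uniform area sampling = 1/area
            total += emitted * (albedo / np.pi) * cos_p * cos_q / dist2 * area
    return total / n_samples
```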
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
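The sequence-estimation step described above can be pictured with a generic Viterbi sketch: given a local cost for using interpolation function k at each missing pixel and a transition cost between functions, dynamic programming returns the minimum-cost sequence of functions. The cost matrices are placeholders for the paper's parameter-free probabilistic model; this is a minimal sketch, not the authors' algorithm.

```python
import numpy as np

def viterbi(unary, transition):
    """unary: (T, K) local cost of using interpolation function k at missing pixel t.
    transition: (K, K) cost of switching from function i to function j.
    Returns the minimum-cost sequence of function indices of length T."""
    T, K = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # step[i, j] = cost of being in state i at t-1 and moving to state j at t
        step = cost[:, None] + transition + unary[t][None, :]
        back[t] = np.argmin(step, axis=0)
        cost = np.min(step, axis=0)
    path = [int(np.argmin(cost))]
    for t in range(T - 1, 0, -1):        # backtrack the best predecessors
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```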
3D/2D image registration using weighted histogram of gradient directions
NASA Astrophysics Data System (ADS)
Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang
2015-03-01
Three dimensional (3D) to two dimensional (2D) image registration is crucial in many medical applications such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) which maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to a large search space and the complicated DRR generation process. Also, finding a similarity measure which converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numeric simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess. It can tolerate up to ±90° rotation offset from the global optimal solution, which minimizes the need for human interaction to initialize the algorithm.
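A minimal sketch of the kind of feature compared between a DRR and a fluoroscopic image is a gradient-magnitude-weighted histogram of gradient directions; the binning and normalization below are our assumptions, not the authors' exact descriptor.

```python
import numpy as np

def weighted_gradient_histogram(img, n_bins=36):
    """Gradient-magnitude-weighted histogram of gradient directions of a 2D image."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)          # direction in [0, 2*pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-12)                   # normalize for comparison
```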
NASA Astrophysics Data System (ADS)
Rodrigues, Fabiano S.; de Paula, Eurico R.; Zewdie, Gebreab K.
2017-03-01
We present results of Capon's method for estimation of in-beam images of ionospheric scattering structures observed by a small, low-power coherent backscatter interferometer. The radar interferometer operated in the equatorial site of São Luís, Brazil (2.59° S, 44.21° W, -2.35° dip latitude). We show numerical simulations that evaluate the performance of the Capon method for typical F region measurement conditions. Numerical simulations show that, despite the short baselines of the São Luís radar, the Capon technique is capable of distinguishing localized features with kilometric scale sizes (in the zonal direction) at F region heights. Following the simulations, we applied the Capon algorithm to actual measurements made by the São Luís interferometer during a typical equatorial spread F (ESF) event. As indicated by the simulations, the Capon method produced images that were better resolved than those produced by the Fourier method. The Capon images show narrow (a few kilometers wide) scattering channels associated with ESF plumes and scattering regions spaced by only a few tens of kilometers in the zonal direction. The images are also capable of resolving bifurcations and the C shape of scattering structures.
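For reference, the Capon (minimum-variance) angular power spectrum used for in-beam imaging can be sketched as follows for a one-dimensional baseline; the steering-vector model, diagonal loading and geometry here are illustrative assumptions rather than the São Luís processing chain.

```python
import numpy as np

def capon_spectrum(samples, positions, wavelength, angles, load=1e-3):
    """samples: (n_antennas, n_snapshots) complex voltages.
    positions: (n_antennas,) baseline coordinates in metres (zonal direction).
    angles: zenith angles (radians) at which to evaluate the spectrum.
    Returns the Capon power estimate P(theta) = 1 / (a^H R^-1 a)."""
    R = samples @ samples.conj().T / samples.shape[1]     # sample covariance
    R += load * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])  # diagonal loading
    Rinv = np.linalg.inv(R)
    power = []
    for th in angles:
        a = np.exp(2j * np.pi * positions * np.sin(th) / wavelength)  # steering vector
        power.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return np.array(power)
```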
Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images
Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu
2013-01-01
With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior to existing thresholding methods. PMID:23525856
Direct-Solve Image-Based Wavefront Sensing
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
2009-01-01
A method of wavefront sensing (more precisely characterized as a method of determining the deviation of a wavefront from a nominal figure) has been invented as an improved means of assessing the performance of an optical system as affected by such imperfections as misalignments, design errors, and fabrication errors. The method is implemented by software running on a single-processor computer that is connected, via a suitable interface, to the image sensor (typically, a charge-coupled device) in the system under test. The software collects a digitized single image from the image sensor. The image is displayed on a computer monitor. The software directly solves for the wavefront in a fraction of a second. A picture of the wavefront is displayed. The solution process involves, among other things, fast Fourier transforms. It has been reported that the wavefront is decomposed into modes of the optical system under test, but it has not been reported whether this decomposition is a postprocessing step or part of the solution process itself.
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.
Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2018-03-05
The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
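As context for the dynamic extension, the classical (static) photometric stereo solve that requires at least three images is a per-pixel linear least-squares fit under a Lambertian assumption; a minimal sketch follows, independent of the multi-tap sensor hardware described in the paper.

```python
import numpy as np

def photometric_stereo(images, lights):
    """images: (K, H, W) grayscale captures under K known lighting directions.
    lights: (K, 3) unit lighting direction vectors.
    Returns unit surface normals (H, W, 3) and albedo (H, W)."""
    K, H, W = images.shape
    I = images.reshape(K, -1).astype(float)          # stack intensities per pixel
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # G = albedo * normal, shape (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / (albedo + 1e-12)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)
```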
Computer synthesis of high resolution electron micrographs
NASA Technical Reports Server (NTRS)
Nathan, R.
1976-01-01
Specimen damage, spherical aberration, low contrast and noisy sensors combine to prevent direct atomic viewing in a conventional electron microscope. The paper describes two methods for obtaining ultra-high resolution in biological specimens under the electron microscope. The first method assumes the physical limits of the electron objective lens and uses a series of dark field images of biological crystals to obtain direct information on the phases of the Fourier diffraction maxima; this information is used in an appropriate computer to synthesize a large aperture lens for a 1-A resolution. The second method assumes there is sufficient amplitude scatter from images recorded in focus which can be utilized with a sensitive densitometer and computer contrast stretching to yield fine structure image details. Cancer virus characterization is discussed as an illustrative example. Numerous photographs supplement the text.
Blind compressed sensing image reconstruction based on alternating direction method
NASA Astrophysics Data System (ADS)
Liu, Qinan; Guo, Shuxu
2018-04-01
In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method avoids the difficulty of specifying a sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampled conditions.
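The alternating minimization idea can be sketched generically: with measurements Y = Phi·(D·Z), alternate a sparse-coding update of the coefficient matrix Z with a gradient update of the dictionary D. The step sizes, initialization and normalization below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def soft(x, t):
    """Soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def blind_cs(Y, Phi, n_atoms, lam=0.1, step=1e-2, n_iter=200, seed=0):
    """Y: (m, n) compressed measurements of n signals; Phi: (m, p) sensing matrix.
    Returns a learned dictionary D (p, n_atoms) and sparse codes Z (n_atoms, n)."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Phi.shape[1], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    Z = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(n_iter):
        R = Phi @ D @ Z - Y                           # residual in measurement space
        Z = soft(Z - step * (D.T @ Phi.T @ R), step * lam)   # sparse-code (ISTA) step
        R = Phi @ D @ Z - Y
        D = D - step * (Phi.T @ R @ Z.T)              # dictionary gradient step
        D /= np.linalg.norm(D, axis=0) + 1e-12        # keep unit-norm atoms
    return D, Z
```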
Crack image segmentation based on improved DBC method
NASA Astrophysics Data System (ADS)
Cao, Ting; Yang, Nan; Wang, Fengping; Gao, Ting; Wang, Weixing
2017-11-01
With the development of computer vision technology, crack detection based on digital image segmentation has attracted global attention from researchers and transportation authorities. Since cracks exhibit random shapes and complex textures, reliable crack detection remains a challenge. Therefore, a novel crack image segmentation method based on fractal DBC (differential box counting) is introduced in this paper. The proposed method estimates a fractal feature for every pixel from neighborhood information, taking into account contributions from all possible directions within the corresponding block. The block is shifted one pixel at a time so that it covers every pixel in the crack image. Unlike the classic DBC method, which only describes a fractal feature for an entire region, the proposed method achieves crack image segmentation from the fractal feature of every pixel. Experiments show that the proposed method achieves satisfactory crack detection results.
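For orientation, the classic differential box-counting (DBC) estimate that the method builds on can be sketched as follows; the pixel-wise variant in the paper would evaluate something like this over a sliding neighbourhood, and the box sizes and indexing convention here are chosen only for illustration.

```python
import numpy as np

def dbc_fractal_dimension(patch, box_sizes=(2, 4, 8, 16)):
    """Differential box-counting fractal dimension of a square grayscale patch."""
    patch = patch.astype(float)
    M = patch.shape[0]                      # assumes a square M x M patch, M >= max box size
    G = patch.max() + 1                     # gray-level range
    log_inv_s, log_n = [], []
    for s in box_sizes:
        h = G * s / M                       # box height in gray levels for this scale
        n_boxes = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                block = patch[i:i + s, j:j + s]
                l = int(np.ceil((block.max() + 1) / h))   # box index of the maximum
                k = int(np.ceil((block.min() + 1) / h))   # box index of the minimum
                n_boxes += l - k + 1        # boxes needed to cover the block's relief
        log_inv_s.append(np.log(1.0 / s))
        log_n.append(np.log(n_boxes))
    # fractal dimension = slope of log(N_r) versus log(1/r)
    return np.polyfit(log_inv_s, log_n, 1)[0]
```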
Quantifying and visualizing variations in sets of images using continuous linear optimal transport
NASA Astrophysics Data System (ADS)
Kolouri, Soheil; Rohde, Gustavo K.
2014-03-01
Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes of variations, in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, that otherwise cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to the normal cells.
Particle Streak Anemometry: A New Method for Proximal Flow Sensing from Aircraft
NASA Astrophysics Data System (ADS)
Nichols, T. W.
Accurate sensing of relative air flow direction from fixed-wing small unmanned aircraft (sUAS) is challenging with existing multi-hole pitot-static and vane systems. Sub-degree direction accuracy is generally not available on such systems, and disturbances to the local flow field, induced by the airframe, introduce an additional error source. An optical imaging approach to make a relative air velocity measurement with high directional accuracy is presented. Optical methods offer the capability to make a proximal measurement in undisturbed air outside of the local flow field without the need to place sensors on vulnerable probes extended ahead of the aircraft. Current imaging flow analysis techniques for laboratory use rely on relatively thin imaged volumes, sophisticated hardware, and intensity thresholding in low-background conditions. A new method is derived and assessed using a particle streak imaging technique that can be implemented with low-cost commercial cameras and illumination systems, and can function in imaged volumes of arbitrary depth with complex background signal. The new technique, referred to as particle streak anemometry (PSA) to differentiate it from particle streak velocimetry (which makes a field measurement rather than a single bulk flow measurement), utilizes a modified Canny edge detection algorithm with connected-component analysis and principal component analysis to detect streak ends in complex imaging conditions. A linear solution for the air velocity direction is then implemented with a random sample consensus (RANSAC) solution approach. A single-DOF non-linear, non-convex optimization problem is then solved for the air speed through an iterative approach. The technique was tested through simulation and wind tunnel tests, yielding angular accuracies under 0.2 degrees, superior to the performance of existing commercial systems. Air speed error standard deviations varied from 1.6 to 2.2 m/s depending on the implementation technique. While air speed sensing is secondary to accurate flow direction measurement, the air speed results were in line with commercial pitot-static systems at low speeds.
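A hedged sketch of the direction-estimation stage: each detected streak contributes an (unsigned) unit direction from its two endpoints, and a RANSAC-style consensus with an eigenvector refit returns the bulk flow direction. The thresholds and the refit choice are our assumptions, not the PSA implementation.

```python
import numpy as np

def ransac_direction(streak_endpoints, n_trials=200, angle_tol_deg=2.0, seed=0):
    """streak_endpoints: (N, 2, 2) array of (start, end) pixel coordinates per streak.
    Returns a unit 2D direction vector and the inlier count."""
    rng = np.random.default_rng(seed)
    dirs = streak_endpoints[:, 1] - streak_endpoints[:, 0]
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    cos_tol = np.cos(np.deg2rad(angle_tol_deg))
    best_dir, best_count = dirs[0], 0
    for _ in range(n_trials):
        cand = dirs[rng.integers(len(dirs))]
        # streaks have no intrinsic sign, so compare |cos| of the angle to the candidate
        inliers = np.abs(dirs @ cand) >= cos_tol
        if inliers.sum() > best_count:
            best_count = int(inliers.sum())
            # sign-invariant refit: dominant eigenvector of the inlier scatter matrix
            M = dirs[inliers].T @ dirs[inliers]
            best_dir = np.linalg.eigh(M)[1][:, -1]
    return best_dir, best_count
```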
Direct imaging detectors for electron microscopy
NASA Astrophysics Data System (ADS)
Faruqi, A. R.; McMullan, G.
2018-01-01
Electronic detectors used for imaging in electron microscopy are reviewed in this paper. Much of the detector technology is based on the developments in microelectronics, which have allowed the design of direct detectors with fine pixels, fast readout and which are sufficiently radiation hard for practical use. Detectors included in this review are hybrid pixel detectors, monolithic active pixel sensors based on CMOS technology and pnCCDs, which share one important feature: they are all direct imaging detectors, relying on directly converting energy in a semiconductor. Traditional methods of recording images in the electron microscope, such as film and CCDs, are mentioned briefly along with a more detailed description of direct electronic detectors. Many applications benefit from the use of direct electron detectors and a few examples are mentioned in the text. In recent years one of the most dramatic advances in structural biology has been in the deployment of the new backthinned CMOS direct detectors to attain near-atomic resolution molecular structures with electron cryo-microscopy (cryo-EM). The development of direct detectors, along with a number of other parallel advances, has seen a very significant amount of new information being recorded in the images, which was not previously possible, and this forms the main emphasis of the review.
NASA Technical Reports Server (NTRS)
Adams, J. B.; Smith, M. O.; Johnson, P. E.
1986-01-01
A Viking Lander 1 image was modeled as mixtures of reflectance spectra of palagonite dust, gray andesitelike rock, and a coarse rocklike soil. The rocks are covered to varying degrees by dust but otherwise appear unweathered. Rocklike soil occurs as lag deposits in deflation zones around stones and on top of a drift and as a layer in a trench dug by the lander. This soil probably is derived from the rocks by wind abrasion and/or spallation. Dust is the major component of the soil and covers most of the surface. The dust is unrelated spectrally to the rock but is equivalent to the global-scale dust observed telescopically. A new method was developed to model a multispectral image as mixtures of end-member spectra and to compare image spectra directly with laboratory reference spectra. The method for the first time uses shade and secondary illumination effects as spectral end-members; thus the effects of topography and illumination on all scales can be isolated or removed. The image was calibrated absolutely from the laboratory spectra, in close agreement with direct calibrations. The method has broad applications to interpreting multispectral images, including satellite images.
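The mixture-modelling step can be illustrated with a standard linear spectral unmixing sketch: each pixel spectrum is approximated by a non-negative, approximately sum-to-one combination of end-member spectra (with shade included as one end-member, as in the abstract). The non-negative least-squares solver and the soft sum-to-one weighting are our choices, not necessarily those used in the study.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel_spectrum, endmembers, sum_to_one_weight=100.0):
    """pixel_spectrum: (B,) reflectances; endmembers: (B, E) reference spectra
    (columns may include a 'shade' end-member).
    Returns approximate end-member fractions of length E."""
    B, E = endmembers.shape
    # append a heavily weighted row that softly enforces sum(fractions) ~= 1
    A = np.vstack([endmembers, sum_to_one_weight * np.ones((1, E))])
    b = np.concatenate([pixel_spectrum, [sum_to_one_weight]])
    fractions, _ = nnls(A, b)               # non-negativity enforced by the solver
    return fractions
```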
Abt, Nicholas B; Lehar, Mohamed; Guajardo, Carolina Trevino; Penninger, Richard T; Ward, Bryan K; Pearl, Monica S; Carey, John P
2016-04-01
Whether the round window membrane (RWM) is permeable to iodine-based contrast agents (IBCA) is unknown; therefore, our goal was to determine if IBCAs could diffuse through the RWM using CT volume acquisition imaging. Imaging of hydrops in the living human ear has attracted recent interest. Intratympanic (IT) injection has shown gadolinium's ability to diffuse through the RWM, enhancing the perilymphatic space. Four unfixed human cadaver temporal bones underwent intratympanic IBCA injection using three sequentially studied methods. The first method was direct IT injection. The second method used direct RWM visualization via tympanomeatal flap for IBCA-soaked absorbable gelatin pledget placement. In the third method, the middle ear was filled with contrast after flap elevation. Volume acquisition CT images were obtained immediately postexposure, and at 1-, 6-, and 24-hour intervals. Postprocessing was accomplished using color ramping and subtraction imaging. After the third method, positive RWM and perilymphatic enhancement were observed with endolymph sparing. Gray scale and color ramp multiplanar reconstructions displayed increased signal within the cochlea compared with precontrast imaging. The cochlea was measured for attenuation differences compared with pure water, revealing a preinjection average of -1,103 HU and a postinjection average of 338 HU. Subtraction imaging shows enhancement remaining within the cochlear space, Eustachian tube, middle ear epithelial lining, and mastoid. Iohexol iodine contrast is able to diffuse across the RWM. Volume acquisition CT imaging was able to detect perilymphatic enhancement at 0.5-mm slice thickness. The clinical application of IBCA IT injection seems promising but requires further safety studies.
Imaging properties and its improvements of scanning/imaging x-ray microscope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeuchi, Akihisa, E-mail: take@spring8.or.jp; Uesugi, Kentaro; Suzuki, Yoshio
A scanning/imaging X-ray microscope (SIXM) system has been developed at SPring-8. The SIXM consists of a scanning X-ray microscope with a one-dimensional (1D) X-ray focusing device and an imaging (full-field) X-ray microscope with a 1D X-ray objective. The motivation of the SIXM system is to realize quantitative and highly sensitive multimodal 3D X-ray tomography by taking advantage of both the scanning X-ray microscope with a multi-pixel detector and the imaging X-ray microscope. The data acquisition process for a 2D image is completely different in the horizontal and vertical directions: the 1D signal in one direction is obtained by linear scanning, while the signal in the other direction is obtained with the imaging optics. This has caused a serious problem in the imaging properties: the image quality in the vertical direction has been much worse than that in the horizontal direction. In this paper, two approaches to solve this problem are presented. One is introducing a Fourier transform method for phase retrieval from a single phase-derivative image, and the other is to develop and employ a 1D diffuser to produce an asymmetrical coherent illumination.
A telephoto camera system with shooting direction control by gaze detection
NASA Astrophysics Data System (ADS)
Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro
2015-05-01
For safe driving, it is important for the driver to check traffic conditions, such as traffic lights or traffic signs, as early as possible. If an on-vehicle camera captures images of important objects from a long distance and shows them to the driver, the driver can understand traffic conditions earlier. To image distant objects clearly, the camera's focal length must be long; however, with a long focal length, an on-vehicle camera does not have a sufficient field of view to check traffic conditions. Therefore, to obtain the necessary images at long distance, the camera must have a long focal length and a controllable shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera captured a telescopic image and displayed it to the driver. However, that study used a touch panel to indicate the shooting direction, which can disturb driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, avoiding interference with driving. The proposed system is composed of a gaze detector and an active telephoto camera whose shooting direction is controlled. We adopt a non-wearable detection method to avoid hindering driving. The gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. Experiments confirmed that the proposed system captures images in the direction of the subject's gaze.
Method and apparatus for atomic imaging
Saldin, Dilano K.; de Andres Rodriquez, Pedro L.
1993-01-01
A method and apparatus for three dimensional imaging of the atomic environment of disordered adsorbate atoms are disclosed. The method includes detecting and measuring the intensity of a diffuse low energy electron diffraction pattern formed by directing a beam of low energy electrons against the surface of a crystal. Data corresponding to reconstructed amplitudes of a wave form is generated by operating on the intensity data. The data corresponding to the reconstructed amplitudes is capable of being displayed as a three dimensional image of an adsorbate atom. The apparatus includes a source of a beam of low energy electrons and a detector for detecting the intensity distribution of a DLEED pattern formed at the detector when the beam of low energy electrons is directed onto the surface of a crystal. A device responsive to the intensity distribution generates a signal corresponding to the distribution which represents a reconstructed amplitude of a wave form and is capable of being converted into a three dimensional image of the atomic environment of an adsorbate atom on the crystal surface.
Greco, Giampaolo; Patel, Anand S.; Lewis, Sara C.; Shi, Wei; Rasul, Rehana; Torosyan, Mary; Erickson, Bradley J.; Hiremath, Atheeth; Moskowitz, Alan J.; Tellis, Wyatt M.; Siegel, Eliot L.; Arenson, Ronald L.; Mendelson, David S.
2015-01-01
Rationale and Objectives Inefficient transfer of personal health records among providers negatively impacts quality of health care and increases cost. This multicenter study evaluates the implementation of the first Internet-based image-sharing system that gives patients ownership and control of their imaging exams, including assessment of patient satisfaction. Materials and Methods Patients receiving any medical imaging exams in four academic centers were eligible to have images uploaded into an online, Internet-based personal health record. Satisfaction surveys were provided during recruitment with questions on ease of use, privacy and security, and timeliness of access to images. Responses were rated on a five-point scale and compared using logistic regression and McNemar's test. Results A total of 2562 patients enrolled from July 2012 to August 2013. The median number of imaging exams uploaded per patient was 5. Most commonly, exams were plain X-rays (34.7%), computed tomography (25.7%), and magnetic resonance imaging (16.1%). Of 502 (19.6%) patient surveys returned, 448 indicated the method of image sharing (Internet, compact discs [CDs], both, other). Nearly all patients (96.5%) responded favorably to having direct access to images, and 78% reported viewing their medical images independently. There was no difference between Internet and CD users in satisfaction with privacy and security and timeliness of access to medical images. A greater percentage of Internet users compared to CD users reported access without difficulty (88.3% vs. 77.5%, P < 0.0001). Conclusion A patient-directed, interoperable, Internet-based image-sharing system is feasible and surpasses the use of CDs with respect to accessibility of imaging exams while generating similar satisfaction with respect to privacy. PMID:26625706
Direct Visualization of Short Transverse Relaxation Time Component (ViSTa)
Oh, Se-Hong; Bilello, Michel; Schindler, Matthew; Markowitz, Clyde E.; Detre, John A.; Lee, Jongho
2013-01-01
White matter of the brain has been demonstrated to have multiple relaxation components. Among them, the short transverse relaxation time component (T2 < 40 ms; T2* < 25 ms at 3T) has been suggested to originate from myelin water whereas long transverse relaxation time components have been associated with axonal and/or interstitial water. In myelin water imaging, T2 or T2* signal decay is measured to estimate myelin water fraction based on T2 or T2* differences among the water components. This method has been demonstrated to be sensitive to demyelination in the brain but suffers from low SNR and image artifacts originating from ill-conditioned multi-exponential fitting. In this study, a novel approach that selectively acquires short transverse relaxation time signal is proposed. The method utilizes a double inversion RF pair to suppress a range of long T1 signal. This suppression leaves short T2* signal, which has been suggested to have short T1, as the primary source of the image. The experimental results confirm that after suppression of long T1 signals, the image is dominated by short T2* in the range of myelin water, allowing us to directly visualize the short transverse relaxation time component in the brain. Compared to conventional myelin water imaging, this new method of direct visualization of the short relaxation time component (ViSTa) provides high quality images. When applied to multiple sclerosis patients, chronic lesions show significantly reduced signal intensity in ViSTa images, suggesting sensitivity to demyelination. PMID:23796545
Detection of Melanoma Skin Cancer in Dermoscopy Images
NASA Astrophysics Data System (ADS)
Eltayef, Khalid; Li, Yongmin; Liu, Xiaohui
2017-02-01
Malignant melanoma is the most hazardous type of human skin cancer and its incidence has been rapidly increasing. Early detection of malignant melanoma in dermoscopy images is critical, since detection at an early stage greatly improves the chance of cure. Computer-aided diagnosis systems can help dermatologists detect such cancers early. In this paper, we present a novel method for the detection of melanoma skin cancer. To remove hair and other noise from the images, a pre-processing step is carried out by applying a bank of directional filters, and an image inpainting method is then used to fill in the removed regions. Fuzzy C-Means and Markov Random Field methods are used to delineate the border of the lesion area in the images. The method was evaluated on a dataset of 200 dermoscopic images, and superior results were produced compared to alternative methods.
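As a sketch of the clustering stage used for border delineation, a bare-bones Fuzzy C-Means routine is shown below; the feature construction, the Markov Random Field refinement and all hyper-parameters of the paper are omitted, and the settings here are illustrative only.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """X: (N, D) pixel features. Returns memberships U (N, c) and cluster centers (c, D)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((N, c))
    U /= U.sum(axis=1, keepdims=True)                 # rows sum to one
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances from every sample to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return U, centers
```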
Statistical image-domain multimaterial decomposition for dual-energy CT.
Xue, Yi; Ruan, Ruoshui; Hu, Xiuhua; Kuang, Yu; Wang, Jing; Long, Yong; Niu, Tianye
2017-03-01
Dual-energy CT (DECT) enhances tissue characterization because of its basis material decomposition capability. In addition to conventional two-material decomposition from DECT measurements, multimaterial decomposition (MMD) is required in many clinical applications. To solve the ill-posed problem of reconstructing multi-material images from dual-energy measurements, additional constraints are incorporated into the formulation, including volume and mass conservation and the assumptions that there are at most three materials in each pixel and various material types among pixels. The recently proposed flexible image-domain MMD method decomposes pixels sequentially into multiple basis materials using a direct inversion scheme, which leads to magnified noise in the material images. In this paper, we propose a statistical image-domain MMD method for DECT to suppress the noise. The proposed method applies penalized weighted least-squares (PWLS) reconstruction with a negative log-likelihood term and edge-preserving regularization for each material. The statistical weight is determined by a data-based method accounting for the noise variance of high- and low-energy CT images. We apply optimization transfer principles to design a series of pixel-wise separable quadratic surrogate (PWSQS) functions which monotonically decrease the cost function. The separability in each pixel enables the simultaneous update of all pixels. The proposed method is evaluated on a digital phantom, the Catphan©600 phantom and three patients (pelvis, head, and thigh). We also implement the direct inversion and low-pass filtration methods for comparison. Compared with the direct inversion method, the proposed method reduces the noise standard deviation (STD) in soft tissue by 95.35% in the digital phantom study, by 88.01% in the Catphan©600 phantom study, by 92.45% in the pelvis patient study, by 60.21% in the head patient study, and by 81.22% in the thigh patient study, respectively. The overall volume fraction accuracy is improved by around 6.85%. Compared with the low-pass filtration method, the root-mean-square percentage error (RMSE(%)) of electron densities in the Catphan©600 phantom is decreased by 20.89%. At 50% modulation transfer function (MTF) magnitude, the proposed method increases the spatial resolution by an overall factor of 1.64 on the digital phantom and 2.16 on the Catphan©600 phantom. The overall volume fraction accuracy is increased by 6.15%. We proposed a statistical image-domain MMD method using DECT measurements. The method successfully suppresses the magnified noise while faithfully retaining the quantification accuracy and anatomical structure in the decomposed material images. The proposed method is practical and promising for advanced clinical applications using DECT imaging. © 2017 American Association of Physicists in Medicine.
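The PWLS objective implied by the abstract can be written generically as follows; the operator A, the weights W and the exact edge-preserving penalty R are the authors' design choices and are only symbolized here as a hedged restatement.

```latex
% Generic PWLS objective sketched from the abstract: x_m are the material images,
% y the dual-energy data, W the statistical weights built from the CT noise
% variances, A the (unspecified) system model and R an edge-preserving penalty
% with regularization strengths beta_m.
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}}\;
  \tfrac{1}{2}\,\bigl(\mathbf{y}-\mathbf{A}\mathbf{x}\bigr)^{\mathsf{T}}
  \mathbf{W}\,\bigl(\mathbf{y}-\mathbf{A}\mathbf{x}\bigr)
  \;+\; \sum_{m=1}^{M}\beta_m\,R(\mathbf{x}_m)
```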
Groupwise registration of MR brain images with tumors.
Tang, Zhenyu; Wu, Yihong; Fan, Yong
2017-08-04
A novel groupwise image registration framework is developed for registering MR brain images with tumors. Our method iteratively estimates a normal-appearance counterpart for each tumor image to be registered and constructs a directed graph (digraph) of normal-appearance images to guide the groupwise image registration. Particularly, our method maps each tumor image to its normal-appearance counterpart by identifying and inpainting brain tumor regions with intensity information estimated using a low-rank plus sparse matrix decomposition based image representation technique. The estimated normal-appearance images are groupwisely registered to a group center image, guided by a digraph of images so that the total length of the 'image registration paths' is minimized, and then the original tumor images are warped to the group center image using the resulting deformation fields. We have evaluated our method based on both simulated and real MR brain tumor images. The registration results were evaluated with overlap measures of corresponding brain regions and the average entropy of image intensity information, and Wilcoxon signed rank tests were adopted to compare different methods with respect to their regional overlap measures. Compared with a groupwise image registration method applied to normal-appearance images estimated using the traditional low-rank plus sparse matrix decomposition based image inpainting, our method achieved higher image registration accuracy with statistical significance (p = 7.02 × 10⁻⁹).
Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz
2016-01-01
The conjunctiva is a densely vascularized mucous membrane covering the sclera of the eye, with the unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least squares regression and Fisher linear discriminant analysis. Conjunctival images from groups of non-diabetic subjects and diabetic subjects at different stages of DR were discriminated. The automated method's discrimination rates were higher than those achieved by human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692
Abt, Nicholas B.; Lehar, Mohamed; Guajardo, Carolina Trevino; Penninger, Richard T.; Ward, Bryan K.; Pearl, Monica S.; Carey, John P.
2016-01-01
Hypothesis Whether the RWM is permeable to iodine-based contrast agents (IBCA) is unknown; therefore, our goal was to determine if IBCAs could diffuse through the RWM using CT volume acquisition imaging. Introduction Imaging of hydrops in the living human ear has attracted recent interest. Intratympanic (IT) injection has shown gadolinium's ability to diffuse through the round window membrane (RWM), enhancing the perilymphatic space. Methods Four unfixed human cadaver temporal bones underwent intratympanic IBCA injection using three sequentially studied methods. The first method was direct IT injection. The second method used direct RWM visualization via tympanomeatal flap for IBCA-soaked absorbable gelatin pledget placement. In the third method, the middle ear was filled with contrast after flap elevation. Volume acquisition CT images were obtained immediately post-exposure, and at 1, 6, and 24 hour intervals. Post-processing was accomplished using color ramping and subtraction imaging. Results Following the third method, positive RWM and perilymphatic enhancement were seen with endolymph sparing. Gray scale and color ramp multiplanar reconstructions displayed increased signal within the cochlea compared to pre-contrast imaging. The cochlea was measured for attenuation differences compared to pure water, revealing a pre-injection average of −1,103 HU and a post-injection average of 338 HU. Subtraction imaging shows enhancement remaining within the cochlear space, Eustachian tube, middle ear epithelial lining, and mastoid. Conclusions Iohexol iodine contrast is able to diffuse across the RWM. Volume acquisition CT imaging was able to detect perilymphatic enhancement at 0.5mm slice thickness. The clinical application of IBCA IT injection appears promising but requires further safety studies. PMID:26859543
NASA Astrophysics Data System (ADS)
Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Zakharov, Valery P.; Khramov, Alexander G.
2016-10-01
In this paper, we report on our examination of the validity of OCT in identifying tissue changes using a skin cancer texture analysis built from Haralick texture features, fractal dimension, a Markov random field method, and complex directional features computed from different tissues. The described features are used to detect specific spatial characteristics that can differentiate healthy tissue from diverse skin cancers in cross-sectional OCT images (B- and/or C-scans). In this work, we used an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images. The Haralick texture features contrast, correlation, energy, and homogeneity were calculated in various directions. A box-counting method was performed to evaluate the fractal dimension of skin probes. A Markov random field was used to enhance the quality of the classification. Additionally, we used the complex directional field, calculated by a local gradient methodology, to further improve the assessment quality of the diagnostic method. Our results demonstrate that these texture features may provide helpful information to discriminate tumor from healthy tissue. The experimental data set contains 488 OCT images of normal skin and tumors, namely Basal Cell Carcinoma (BCC), Malignant Melanoma (MM) and Nevus. All images were acquired from our laboratory SD-OCT setup based on a broadband light source, delivering an output power of 20 mW at a central wavelength of 840 nm with a bandwidth of 25 nm. We obtained a sensitivity of about 97% and a specificity of about 73% for the task of discriminating between MM and Nevus.
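As an illustration of the directional Haralick features mentioned above, the sketch below computes grey-level co-occurrence statistics in four directions using recent scikit-image APIs; the quantization level, distance and angle settings are our assumptions, not the paper's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(patch, levels=32):
    """Directional GLCM statistics (contrast, correlation, energy, homogeneity)
    for a grayscale OCT patch quantized to `levels` gray levels."""
    patch = patch.astype(float)
    q = (patch / max(patch.max(), 1e-12) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).ravel()      # one value per direction
            for prop in ("contrast", "correlation", "energy", "homogeneity")}
```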
TU-G-BRA-02: Can We Extract Lung Function Directly From 4D-CT Without Deformable Image Registration?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kipritidis, J; Woodruff, H; Counter, W
Purpose: Dynamic CT ventilation imaging (CT-VI) visualizes air volume changes in the lung by evaluating breathing-induced lung motion using deformable image registration (DIR). Dynamic CT-VI could enable functionally adaptive lung cancer radiation therapy, but its sensitivity to DIR parameters poses challenges for validation. We hypothesize that a direct metric using CT parameters derived from Hounsfield units (HU) alone can provide similar ventilation images without DIR. We compare the accuracy of Direct and Dynamic CT-VIs versus positron emission tomography (PET) images of inhaled ⁶⁸Ga-labelled nanoparticles ('Galligas'). Methods: 25 patients with lung cancer underwent Galligas 4D-PET/CT scans prior to radiation therapy. For each patient we produced three CT-VIs. (i) Our novel method, Direct CT-VI, models blood-gas exchange as the product of air and tissue density at each lung voxel based on time-averaged 4D-CT HU values. Dynamic CT-VIs were produced by evaluating: (ii) regional HU changes, and (iii) regional volume changes between the exhale and inhale 4D-CT phase images using a validated B-spline DIR method. We assessed the accuracy of each CT-VI by computing the voxel-wise Spearman correlation with free-breathing Galligas PET, and also performed a visual analysis. Results: Surprisingly, Direct CT-VIs exhibited better global correlation with Galligas PET than either of the dynamic CT-VIs. The (mean ± SD) correlations were (0.55 ± 0.16), (0.41 ± 0.22) and (0.29 ± 0.27) for Direct, Dynamic HU-based and Dynamic volume-based CT-VIs, respectively. Visual comparison of Direct CT-VI to PET demonstrated similarity for emphysema defects and ventral-to-dorsal gradients, but an inability to identify decreased ventilation distal to tumor obstruction. Conclusion: Our data support the hypothesis that Direct CT-VIs are as accurate as Dynamic CT-VIs in terms of global correlation with Galligas PET. Visual analysis, however, demonstrated that different CT-VI algorithms may have varying accuracy depending on the underlying cause of ventilation abnormality. This research was supported by a National Health and Medical Research Council (NHMRC) Australia Fellowship, a Cancer Institute New South Wales Early Career Fellowship 13-ECF-1/15 and NHMRC scholarship APP1038399. No commercial funding was received for this work.
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2018-02-13
In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed, built on a multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose the source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight detail information in the fused image, the edge layer and the detail layer at each scale are combined with weights into a detail-enhanced layer. Because the directional filter is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A visual saliency map-based fusion rule is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity level measurement for directional coefficient fusion. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with its shift-invariance, directional selectivity, and detail-enhancement properties, is efficient in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: the detailed implementation of the proposed medical image fusion algorithm.
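The following is a minimal single-scale sketch of the joint decomposition and detail-enhanced combination described above. The paper's GMSF is replaced here by a median filter as a generic edge-preserving stand-in, the GLF by scipy's Gaussian filter, the shearing-filter stage is omitted, and the pixel-wise maximum is a crude stand-in for the saliency-based low-pass fusion rule; weights are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def joint_decompose(img, sigma=3.0, median_size=5):
    img = img.astype(np.float64)
    low = gaussian_filter(img, sigma)         # low-pass layer (GLF stand-in)
    smooth = median_filter(img, median_size)  # edge-preserving layer (GMSF stand-in)
    edge = smooth - low                       # edge layer
    detail = img - smooth                     # detail layer
    return low, edge, detail

def detail_enhanced_fuse(a, b, w_edge=1.0, w_detail=1.5):
    """Fuse two co-registered modalities; weighted sum forms a detail-enhanced layer."""
    la, ea, da = joint_decompose(a)
    lb, eb, db = joint_decompose(b)
    low_fused = np.maximum(la, lb)                                 # crude saliency rule
    detail_enhanced = w_edge * (ea + eb) / 2 + w_detail * (da + db) / 2
    return low_fused + detail_enhanced
```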
Direct and simultaneous estimation of cardiac four chamber volumes by multioutput sparse regression.
Zhen, Xiantong; Zhang, Heye; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo
2017-02-01
Cardiac four-chamber volume estimation plays a fundamental and crucial role in clinical quantitative analysis of whole-heart function. It is a challenging task due to the huge complexity of the four chambers, including great appearance variations, huge shape deformation, and interference between chambers. Direct estimation has recently emerged as an effective and convenient tool for cardiac ventricular volume estimation. However, existing direct estimation methods were specifically developed for a single ventricle, i.e., the left ventricle (LV), or for bi-ventricles; they cannot be used directly for four-chamber volume estimation due to the great combinatorial variability and highly complex anatomical interdependency of the four chambers. In this paper, we propose a new, general framework for direct and simultaneous four-chamber volume estimation. We address two key issues, i.e., cardiac image representation and simultaneous four-chamber volume estimation, which enables accurate and efficient four-chamber volume estimation. We generate compact and discriminative image representations by supervised descriptor learning (SDL), which removes irrelevant information and extracts discriminative features. We propose direct and simultaneous four-chamber volume estimation by multioutput sparse latent regression (MSLR), which enables jointly modeling nonlinear input-output relationships and capturing four-chamber interdependence. The proposed method is highly general and independent of imaging modalities, providing a regression framework that can be extensively used for clinical data prediction to achieve automated diagnosis. Experiments on both MR and CT images show that our method achieves high performance, with a correlation coefficient of up to 0.921 with ground truth obtained manually by human experts, which is clinically significant and enables more accurate, convenient and comprehensive assessment of cardiac function. Copyright © 2016 Elsevier B.V. All rights reserved.
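The sketch below is a generic illustration of direct, simultaneous multi-output volume regression on synthetic data. It is not the paper's MSLR model: scikit-learn's MultiTaskLasso is used as a simple sparse multi-output regressor over precomputed image descriptors, which stand in for the learned SDL features.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                 # 200 studies, 64-D image descriptors (synthetic)
W = rng.normal(size=(64, 4))                   # hidden mapping to 4 chamber volumes
Y = X @ W + 0.1 * rng.normal(size=(200, 4))    # LV, RV, LA, RA volumes (synthetic)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
model = MultiTaskLasso(alpha=0.01).fit(X_tr, Y_tr)   # joint sparsity across the 4 outputs
Y_hat = model.predict(X_te)
corr = [np.corrcoef(Y_te[:, k], Y_hat[:, k])[0, 1] for k in range(4)]
print("per-chamber correlation:", np.round(corr, 3))
```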
Quality measures in applications of image restoration.
Kriete, A; Naim, M; Schafer, L
2001-01-01
We describe a new method for the estimation of image quality in image restoration applications. We demonstrate this technique on a simulated data set of fluorescent beads, in comparison with restoration by three different deconvolution methods. Both the number of iterations and a regularisation factor are varied to enforce changes in the resulting image quality. First, the data sets are directly compared by an accuracy measure. These values serve to validate the image quality descriptor, which is developed on the basis of optical information theory. This most general measure takes into account the spectral energies and the noise, weighted in a logarithmic fashion. It is demonstrated that this method is particularly helpful as a user-oriented method to control the output of iterative image restorations and to eliminate the guesswork in choosing a suitable number of iterations.
Focus detection by shearing interference of vortex beams for non-imaging systems.
Li, Xiongfeng; Zhan, Shichao; Liang, Yiyong
2018-02-10
In focus detection for non-imaging systems, the common image-based methods are not available. Interference techniques are also seldom used, because only the degree of defocus, with hardly any information about its direction, can be derived from the fringe spacing. In this paper, we propose a vortex-beam-based shearing interference system to perform focus detection for a focused laser direct-writing system, where a vortex beam is already involved. Both simulated and experimental results show that fork-like features are added in the interference patterns due to the existence of an optical vortex, which makes it possible to distinguish the degree and direction of defocus simultaneously. The theoretical fringe spacing and resolution of this method are derived. A resolution of 0.79 μm can be achieved under the experimental combination of parameters, and it can be further improved with the help of image processing algorithms and closed-loop control in the future. Finally, the influence of incomplete collimation and of the wedge angle of the shear plate is discussed. This focus detection approach is particularly appropriate for non-imaging systems containing one or more focused vortex beams.
Focal plane based wavefront sensing with random DM probes
NASA Astrophysics Data System (ADS)
Pluzhnik, Eugene; Sirbu, Dan; Belikov, Ruslan; Bendek, Eduardo; Dudinov, Vladimir N.
2017-09-01
An internal coronagraph with an adaptive optical system for wavefront control is being considered for direct imaging of exoplanets with upcoming space missions and concepts, including WFIRST, HabEx, LUVOIR, EXCEDE and ACESat. The main technical challenge associated with direct imaging of exoplanets is to control both diffracted and scattered light from the star so that even a dim planetary companion can be imaged. For a deformable mirror (DM) to create a dark hole with 10^-10 contrast in the image plane, wavefront errors must be accurately measured on the science focal plane detector to ensure a common optical path. We present here a method that uses a set of random phase probes applied to the DM to obtain a high-accuracy wavefront estimate even for a dynamically changing optical system. The presented numerical simulations and experimental results show low noise sensitivity, high reliability, and robustness of the proposed approach. The method does not use any additional optics or complex calibration procedures and can be used during the calibration stage of any direct imaging mission. It can also be used in any optical experiment that uses a DM as an active optical element in the layout.
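Below is a simplified single-pixel illustration of estimating an unknown focal-plane field from intensity measurements taken with known probe fields, a hedged sketch of the probing idea rather than the paper's full algorithm (which works over all dark-hole pixels and must itself estimate the probe fields). With probe fields a_k, the camera measures I_k = |E + a_k|^2 = |E|^2 + |a_k|^2 + 2(Re a_k Re E + Im a_k Im E), which is linear in [|E|^2, Re E, Im E] and can be solved by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
E_true = 0.3 - 0.7j                                       # unknown residual field at one pixel
probes = rng.normal(size=8) + 1j * rng.normal(size=8)     # known random DM probe fields (assumed)

I = np.abs(E_true + probes) ** 2                          # measured intensities (noiseless here)
A = np.column_stack([np.ones_like(I), 2 * probes.real, 2 * probes.imag])
y = I - np.abs(probes) ** 2
coef, *_ = np.linalg.lstsq(A, y, rcond=None)              # solves for [|E|^2, Re E, Im E]
E_est = coef[1] + 1j * coef[2]
print(E_true, E_est)                                      # the two should agree closely
```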
Direct microscopic image and measurement of the atomization process of a port fuel injector
NASA Astrophysics Data System (ADS)
Esmail, Mohamed; Kawahara, Nobuyuki; Tomita, Eiji; Sumida, Mamoru
2010-07-01
The main objective of this study is to observe and investigate the phenomena of atomization, i.e. the fuel break-up process very close to the nozzle exit of a practical port fuel injector (PFI). In order to achieve this objective, direct microscopic images of the atomization process were obtained using an ultra-high-speed video camera that could record 102 frames at rates of up to 1 Mfps, coupled with a long-distance microscope and a Barlow lens. The experiments were carried out using a PFI in a closed chamber at atmospheric pressure. Time-series images of the spray behaviour were obtained with high temporal resolution using backlighting. The direct microscopic images of liquid column break-up were compared with experimental results from laser-induced exciplex fluorescence (LIEF), and the wavelength obtained from the experimental results was compared with that predicted by the Kelvin-Helmholtz break-up model. The droplet diameters from ligament break-up were compared with results predicted by Weber's analysis. Furthermore, the mean droplet diameters measured from the direct microscopic images were compared with results obtained from phase Doppler anemometry (PDA) experiments. Three conclusions were obtained from this study. The atomization processes and detailed characterizations of the break-up of a liquid column were identified; the direct microscopic image results were in good agreement with the LIEF results, and the experimental wavelengths were in good agreement with those from the Kelvin-Helmholtz break-up model. The break-up process of liquid ligaments into droplets was investigated, and Weber's analysis of the predicted droplet diameter from ligament break-up was found to be applicable only at larger wavelengths. Finally, the direct microscopic image method and the PDA method give qualitatively similar trends for droplet size distribution and quantitatively similar values of Sauter mean diameter.
Intelligent identification of remnant ridge edges in region west of Yongxing Island, South China Sea
NASA Astrophysics Data System (ADS)
Wang, Weiwei; Guo, Jing; Cai, Guanqiang; Wang, Dawei
2018-02-01
Edge detection enables identification of geomorphologic unit boundaries and thus assists with geomorphological mapping. In this paper, an intelligent edge identification method is proposed in which image processing techniques are applied to multi-beam bathymetry data. To accomplish this, a color image is generated from the bathymetry, and a weighted method is used to convert the color image to a gray image. As the quality of the image has a significant influence on edge detection, different filter methods are applied to the gray image for de-noising. The peak signal-to-noise ratio and mean square error are calculated to evaluate which filter method is most appropriate for depth image filtering, and the edge is subsequently detected using an image binarization method. Traditional image binarization methods cannot manage the complicated, uneven seafloor, and therefore a binarization method is proposed that is based on the difference between image pixel values; the appropriate threshold for image binarization is estimated according to the probability distribution of pixel value differences between two adjacent pixels in the horizontal and vertical directions, respectively. Finally, an eight-neighborhood frame is adopted to thin the binary image, connect intermittent edges, and implement contour extraction. Experimental results show that the method described here can recognize the main boundaries of geomorphologic units. In addition, the proposed automatic edge identification method avoids subjective judgment and reduces time and labor costs.
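A hedged sketch of two steps described above: scoring candidate de-noising filters with MSE/PSNR against a reference image, and choosing a binarization threshold from the empirical distribution of differences between adjacent pixels. The percentile used for the threshold is our assumption, not the paper's value.

```python
import numpy as np

def mse_psnr(reference, test, max_val=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    psnr = 10.0 * np.log10(max_val ** 2 / mse) if mse > 0 else np.inf
    return mse, psnr

def edge_binarize(gray, percentile=95):
    """Mark a pixel as edge if its horizontal or vertical difference exceeds a threshold
    taken from the empirical distribution of all adjacent-pixel differences."""
    g = gray.astype(float)
    dx = np.abs(np.diff(g, axis=1))
    dy = np.abs(np.diff(g, axis=0))
    thr = np.percentile(np.concatenate([dx.ravel(), dy.ravel()]), percentile)
    edges = np.zeros_like(g, dtype=bool)
    edges[:, :-1] |= dx > thr
    edges[:-1, :] |= dy > thr
    return edges
```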
Filter Design and Performance Evaluation for Fingerprint Image Segmentation
Thai, Duy Hoang; Huckemann, Stephan; Gottschlich, Carsten
2016-01-01
Fingerprint recognition plays an important role in many commercial applications and is used by millions of people every day, e.g. for unlocking mobile phones. Fingerprint image segmentation is typically the first processing step of most fingerprint algorithms and it divides an image into foreground, the region of interest, and background. Two types of error can occur during this step which both have a negative impact on the recognition performance: ‘true’ foreground can be labeled as background and features like minutiae can be lost, or conversely ‘true’ background can be misclassified as foreground and spurious features can be introduced. The contribution of this paper is threefold: firstly, we propose a novel factorized directional bandpass (FDB) segmentation method for texture extraction based on the directional Hilbert transform of a Butterworth bandpass (DHBB) filter interwoven with soft-thresholding. Secondly, we provide a manually marked ground truth segmentation for 10560 images as an evaluation benchmark. Thirdly, we conduct a systematic performance comparison between the FDB method and four of the most often cited fingerprint segmentation algorithms showing that the FDB segmentation method clearly outperforms these four widely used methods. The benchmark and the implementation of the FDB method are made publicly available. PMID:27171150
3D superwide-angle one-way propagator and its application in seismic modeling and imaging
NASA Astrophysics Data System (ADS)
Jia, Xiaofeng; Jiang, Yunong; Wu, Ru-Shan
2018-07-01
Traditional one-way wave-equation based propagators have been widely used in past decades. Compared to two-way propagators, one-way methods have higher efficiency and lower memory demands. These two features are especially important in solving large-scale 3D problems. However, regular one-way propagators cannot simulate waves that propagate at large angles approaching 90° because of their inherent wide-angle limitation. Traditional one-way propagators can only propagate along a predetermined direction (e.g., the z-direction), so simulation of turning waves is beyond the ability of one-way methods. We develop a 3D superwide-angle one-way propagator to overcome the angle limitation and to simulate turning waves with propagation angles exceeding 90° for modeling and imaging complex geological structures. Wavefields propagating along the vertical and horizontal directions are combined using a typical stacking scheme. A weight function related to the propagation angle is used for combining and updating the wavefields at each propagation step. In the implementation, we use graphics processing units (GPU) to accelerate the process. A typical workflow is designed to exploit the advantages of the GPU architecture. Numerical examples show that the method achieves higher accuracy in modeling and imaging steep structures than regular one-way propagators. In fact, the superwide-angle one-way propagator can be built on top of any one-way method to improve seismic modeling and imaging.
Engel, Aaron J; Bashford, Gregory R
2015-08-01
Ultrasound-based shear wave elastography (SWE) is a technique used for non-invasive characterization and imaging of soft tissue mechanical properties. Robust estimation of shear wave propagation speed is essential for imaging of soft tissue mechanical properties. In this study, we propose to estimate shear wave speed by inversion of the first-order wave equation following directional filtering. This approach relies on estimation of first-order derivatives, which allows accurate estimates using smaller smoothing filters than when estimating second-order derivatives. The performance was compared to three current methods used to estimate shear wave propagation speed: direct inversion of the wave equation (DIWE), time-to-peak (TTP) and cross-correlation (CC). The shear wave speed of three homogeneous phantoms of different elastic moduli (gelatin by weight of 5%, 7%, and 9%) was measured with each method. The proposed method was shown to produce shear speed estimates comparable to the conventional methods (standard deviations of measurements being 0.13 m/s, 0.05 m/s, and 0.12 m/s), but with simpler processing and usually less time (by a factor of 1, 13, and 20 relative to DIWE, CC, and TTP, respectively). The proposed method was able to produce a 2-D speed estimate from a single direction of wave propagation in about four seconds using an off-the-shelf PC, showing the feasibility of performing real-time or near real-time elasticity imaging with dedicated hardware.
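A hedged sketch of the speed estimator described above on synthetic data: after directional filtering, a wave travelling in +x satisfies du/dt + c*du/dx = 0, so c can be found from first-order derivatives by least squares over a local window. Grid spacings and the pulse model below are our choices.

```python
import numpy as np

dx, dt, c_true = 0.2e-3, 0.1e-3, 2.0          # 0.2 mm pitch, 0.1 ms frames, 2 m/s (assumed)
x = np.arange(128) * dx
t = np.arange(200) * dt
# particle-velocity field u(t, x): a Gaussian pulse propagating in +x
u = np.exp(-((x[None, :] - c_true * t[:, None] - 5e-3) ** 2) / (2 * (1e-3) ** 2))

du_dt = np.gradient(u, dt, axis=0)
du_dx = np.gradient(u, dx, axis=1)
# least-squares solution of du/dt + c*du/dx = 0 over the whole window
c_est = -np.sum(du_dt * du_dx) / np.sum(du_dx ** 2)
print("estimated shear speed (m/s):", c_est)   # ~2.0
```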
Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross-scattering from multiple simultaneously activated X-ray imaging chains compromises image quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and the cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
Wu, Wenchuan; Fang, Sheng; Guo, Hua
2014-06-01
To address motion artifacts and off-resonance artifacts in multi-shot diffusion magnetic resonance imaging (MRI), we propose a joint correction method that corrects both kinds of artifacts simultaneously without additional acquisition of navigator data or a field map. We acquired MRI data with a multi-shot variable-density spiral sequence and used an auto-focusing technique for image deblurring. A direct or iterative method was used to correct motion-induced phase errors during the deblurring process. In vivo MRI experiments demonstrated that the proposed method effectively suppresses motion artifacts and off-resonance artifacts and yields images with fine structures. In addition, applying the proposed method does not increase the scan time.
Edge detection based on computational ghost imaging with structured illuminations
NASA Astrophysics Data System (ADS)
Yuan, Sheng; Xiang, Dong; Liu, Xuemei; Zhou, Xin; Bing, Pibin
2018-03-01
Edge detection is one of the most important tools for recognizing the features of an object. In this paper, we propose an optical edge detection method based on computational ghost imaging (CGI) with structured illuminations generated by an interference system. The structured intensity patterns are designed so that the edge of an object can be imaged directly from the detected data in CGI. This edge detection method can extract the boundaries of both binary and grayscale objects in any direction at one time. We also numerically test the influence of distance deviations in the interference system on edge extraction, i.e., the tolerance of the optical edge detection system to distance deviation. The results may provide a guideline for building an experimental system.
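The minimal sketch below illustrates only the CGI correlation step: an image is recovered by correlating known illumination patterns with single-pixel ("bucket") measurements. Random patterns are used instead of the paper's structured interference patterns, so it recovers the object itself rather than its edge map directly.

```python
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((32, 32)); obj[8:24, 8:24] = 1.0              # simple binary object
n_patterns = 4000
patterns = rng.random((n_patterns, 32, 32))                  # known illumination patterns
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))  # single-pixel (bucket) signals

# ensemble correlation <B * P> - <B><P> reconstructs the object
recon = np.tensordot(bucket, patterns, axes=(0, 0)) / n_patterns \
        - bucket.mean() * patterns.mean(axis=0)
print("reconstruction shape:", recon.shape)                  # (32, 32); bright square visible
```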
A preliminary study for fully automated quantification of psoriasis severity using image mapping
NASA Astrophysics Data System (ADS)
Mukai, Kazuhiro; Iyatomi, Hitoshi
2014-03-01
Psoriasis is a common chronic skin disease that seriously detracts from patients' quality of life (QoL). Since there is no known permanent cure, keeping the disease appropriately controlled is necessary, and quantification of its severity is therefore important. In clinical practice, the psoriasis area and severity index (PASI) is commonly used for this purpose; however, it is often subjective and troublesome. A fully automatic computer-assisted area and severity index (CASI) was proposed to provide an objective quantification of skin disease. It investigates the size and density of erythema based on digital image analysis; however, it does not account for effects caused by different geometrical conditions under clinical follow-up (i.e. variability in the direction and distance between the camera and the patient). In this study, we propose an image alignment method for clinical images and investigate quantification of psoriasis severity under clinical follow-up combined with the idea of CASI. The proposed method finds geometrically corresponding points on the patient's body (ROI) between images with the scale-invariant feature transform (SIFT) and applies an affine transform to map the pixel values of one image onto the other. In this study, clinical images from 7 patients with psoriasis lesions on their trunk under clinical follow-up were used. In each series, our image alignment algorithm aligns the images to the geometry of the first image. The proposed method aligned images appropriately on visual assessment, and we confirmed that psoriasis areas were properly extracted using the CASI approach. Although we cannot compare PASI and CASI directly due to their different definitions of the ROI, we confirmed a large correlation between those scores using our image quantification method.
WE-AB-303-08: Direct Lung Tumor Tracking Using Short Imaging Arcs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shieh, C; Huang, C; Keall, P
2015-06-15
Purpose: Most current tumor tracking technologies rely on implanted markers, which suffer from potential toxicity of marker placement and mis-targeting due to marker migration. Several markerless tracking methods have been proposed: these are either indirect methods or have difficulties tracking lung tumors in most clinical cases due to overlapping anatomies in 2D projection images. We propose a direct lung tumor tracking algorithm robust to overlapping anatomies using short imaging arcs. Methods: The proposed algorithm tracks the tumor based on kV projections acquired within the latest six-degree imaging arc. To account for respiratory motion, an external motion surrogate is used to select projections of the same phase within the latest arc. For each arc, the pre-treatment 4D cone-beam CT (CBCT) with tumor contours is used to estimate and remove the contribution to the integral attenuation from surrounding anatomies. The position of the tumor model extracted from 4D CBCT of the same phase is then optimized to match the processed projections using the conjugate gradient method. The algorithm was retrospectively validated on two kV scans of a lung cancer patient with implanted fiducial markers. This patient was selected as the tumor is attached to the mediastinum, representing a challenging case for markerless tracking methods. The tracking results were converted to expected marker positions and compared with marker trajectories obtained via direct marker segmentation (ground truth). Results: The root-mean-squared errors of tracking were 0.8 mm and 0.9 mm in the superior-inferior direction for the two scans. Tracking error was found to be below 2 and 3 mm for 90% and 98% of the time, respectively. Conclusions: A direct lung tumor tracking algorithm robust to overlapping anatomies was proposed and validated on two scans of a lung cancer patient. Sub-millimeter tracking accuracy was observed, indicating the potential of this algorithm for real-time guidance applications.
The review on infrared image restoration techniques
NASA Astrophysics Data System (ADS)
Li, Sijian; Fan, Xiang; Zhu, Bin Cheng; Zheng, Dong
2016-11-01
The goal of infrared image restoration is to reconstruct an original scene from a degraded observation. The restoration process at infrared wavelengths, however, still offers numerous research possibilities. In order to give the reader a comprehensive view of infrared image restoration, the degradation factors are divided into two major categories: noise and blur. Many kinds of infrared image restoration methods are reviewed. The mathematical background and theoretical basis of infrared image restoration technology, as well as the limitations of existing methods, are discussed. Finally, future directions and prospects of infrared image restoration technology are put forward.
Dosimetry and image quality assessment in a direct radiography system
Oliveira, Bruno Beraldo; de Oliveira, Marcio Alves; Paixão, Lucas; Teixeira, Maria Helena Araújo; Nogueira, Maria do Socorro
2014-01-01
Objective To evaluate the mean glandular dose with a solid state detector and the image quality in a direct radiography system, utilizing phantoms. Materials and Methods Irradiations were performed with automatic exposure control and polymethyl methacrylate slabs with different thicknesses to calculate glandular dose values. The image quality was evaluated by means of the structures visualized on the images of the phantoms. Results Considering the uncertainty of the measurements, the mean glandular dose results are in agreement with the values provided by the equipment and with internationally adopted reference levels. Results obtained from images of the phantoms were in agreement with the reference values. Conclusion The present study contributes to verify the equipment conformity as regards dose values and image quality. PMID:25741119
Kimori, Yoshitaka; Baba, Norio; Morone, Nobuhiro
2010-07-08
A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our approach is the following: first, the original image is rotated over a set of directions and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created and the efficacy of our method was compared to that of conventional morphological filtering methods. The results showed the better performance of our method. Spots in real microscope images can be quantified, confirming that the method is applicable in practice. Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. These features allow its broad application in biological and biomedical image analysis.
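A hedged sketch of the rotated line-opening idea described above: open the image with a straight line-segment structuring element at several orientations (by rotating the image), take the union (pixel-wise maximum) of the openings, and subtract it from the original so that small, roundish spots remain. The angle set and segment length are our choices, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import grey_opening, rotate

def line_opening_union(img, length=15, angles=(0, 30, 60, 90, 120, 150)):
    union = np.zeros_like(img, dtype=float)
    se = np.ones((1, length))                    # straight line-segment structuring element
    for ang in angles:
        rot = rotate(img, ang, reshape=False, order=1, mode="nearest")
        opened = grey_opening(rot, footprint=se)
        back = rotate(opened, -ang, reshape=False, order=1, mode="nearest")
        union = np.maximum(union, back)          # unify the directional openings
    return union

def extract_spots(img):
    """Top-hat-like residue: original minus the union of directional line openings."""
    img = img.astype(float)
    return np.clip(img - line_opening_union(img), 0, None)
```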
Mid-callosal plane determination using preferred directions from diffusion tensor images
NASA Astrophysics Data System (ADS)
Costa, André L.; Rittner, Letícia; Lotufo, Roberto A.; Appenzeller, Simone
2015-03-01
The corpus callosum is the major brain structure responsible for inter-hemispheric communication between neurons. Many studies seek to relate corpus callosum attributes to patient characteristics, cerebral diseases and psychological disorders. Most of those studies rely on 2D analysis of the corpus callosum in the mid-sagittal plane. However, it is common to find conflicting results among studies, since many ignore methodological issues and define the mid-sagittal plane based on precarious or invalid criteria with respect to the corpus callosum. In this work we propose a novel method to determine the mid-callosal plane using the preferred diffusion directions inside the corpus callosum obtained from diffusion tensor images. This plane is analogous to the mid-sagittal plane, but intended to serve exclusively as the corpus callosum reference. Our method demonstrates the great potential the directional information of the corpus callosum fibers has to indicate the structure's own reference frame. Results from experiments with five image pairs from distinct subjects, obtained under the same conditions, demonstrate the method's effectiveness in finding the corpus callosum symmetry axis relative to the axial plane.
Go With the Flow, on Jupiter and Snow. Coherence from Model-Free Video Data Without Trajectories
NASA Astrophysics Data System (ADS)
AlMomani, Abd AlRahman R.; Bollt, Erik
2018-06-01
Viewing a data set such as the clouds of Jupiter, coherence is readily apparent to human observers, especially the Great Red Spot, but also other great storms and persistent structures. There are now many different definitions and perspectives mathematically describing coherent structures, but we take an image processing perspective here. We describe the inference of coherent sets in a fluidic system directly from image data, without attempting to first model the underlying flow fields, an approach related to a concept in image processing called motion tracking. In contrast to standard spectral methods for image processing, which are generally related to a symmetric affinity matrix and lead to standard spectral graph theory, we need a non-symmetric affinity, which arises naturally from the underlying arrow of time. We develop an anisotropic, directed diffusion operator corresponding to flow on a directed graph, built from a directed affinity matrix designed with coherence in mind, and the corresponding spectral graph theory from the graph Laplacian. Our methodology is not offered as more accurate than other traditional methods of finding coherent sets; rather, our approach works with alternative kinds of data sets, in the absence of a vector field. Our examples include partitioning the weather and cloud structures of Jupiter and a lake effect snow event local to Potsdam, NY, on Earth, as well as the benchmark double-gyre test system.
Keyhole imaging method for dynamic objects behind the occlusion area
NASA Astrophysics Data System (ADS)
Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong
2018-01-01
A keyhole imaging method based on a camera array is realized to obtain video images behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to capture the scene behind the keyhole from four directions. The multi-angle video images are saved as frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological operations to detect the image edges and fill the images. The stitching of the four images is accomplished on the basis of a two-image stitching algorithm. In the two-image stitching algorithm, the SIFT method is adopted to accomplish the initial matching of the images, and the RANSAC algorithm is then applied to eliminate wrong matching points and to obtain a homography matrix. A method of optimizing the transformation matrix is also proposed in this paper. Finally, a video image with a larger field of view behind the keyhole is synthesized from the image frame sequences in which every single frame is stitched. The results show that the video is clear and natural and the brightness transitions are smooth. There are no obvious artificial stitching marks in the video, and the method can be applied in different engineering environments.
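A hedged sketch of the two-image stitching step described above: SIFT keypoint matching followed by RANSAC homography estimation and warping. It assumes OpenCV >= 4.4 (SIFT available in the main package); the file names are placeholders, and no blending or transformation-matrix optimization is included.

```python
import cv2
import numpy as np

img1 = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
img2 = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# initial matching with Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw = matcher.knnMatch(des1, des2, k=2)
matches = [m for m, n in (p for p in raw if len(p) == 2) if m.distance < 0.75 * n.distance]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # RANSAC rejects wrong matches

# warp img2 into img1's frame and paste img1 on top (simple mosaic, no blending)
h, w = img1.shape
mosaic = cv2.warpPerspective(img2, H, (2 * w, h))
mosaic[:h, :w] = img1
cv2.imwrite("mosaic.png", mosaic)
```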
Directional MTF measurement using sphere phantoms for a digital breast tomosynthesis system
NASA Astrophysics Data System (ADS)
Lee, Changwoo; Baek, Jongduk
2015-03-01
Digital breast tomosynthesis (DBT) has been widely used as a diagnostic imaging modality for breast cancer because of its potential for structure noise reduction, better detectability, and less breast compression. Since the 3D modulation transfer function (MTF) is one of the quantitative metrics used to assess the spatial resolution of medical imaging systems, it is very important to measure the 3D MTF of a DBT system to evaluate its resolution performance. To do that, Samei et al. used sphere phantoms and applied Thornton's method to the DBT system. However, due to the limitations of Thornton's method, the low-frequency drop caused by the limited data acquisition angle and the reconstruction filters was not measured correctly. To overcome this limitation, we propose a Richardson-Lucy (RL) deconvolution based estimation method to measure the directional MTF. We reconstructed point and sphere objects using the FDK algorithm within a 40° data acquisition angle. The ideal 3D MTF is obtained by taking the Fourier transform of the reconstructed point object, and three directions (i.e., the fx-direction, fy-direction, and fxy-direction) of the ideal 3D MTF are used as a reference. To estimate the directional MTF, the plane integrals of the reconstructed and ideal sphere objects were calculated and used to estimate the directional PSF with the RL deconvolution technique. Finally, the directional MTF was calculated by taking the Fourier transform of the estimated PSF. Compared to the previous method, the proposed method showed good agreement with the ideal directional MTF, especially in the low-frequency regions.
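The following is a simplified 1-D illustration of the estimation scheme described above on synthetic profiles: given the plane integral of an ideal object and of its blurred, reconstructed counterpart, estimate the PSF with Richardson-Lucy deconvolution and take its Fourier magnitude as the MTF. Because convolution is commutative, deconvolving the measured profile by the ideal profile yields the PSF itself. It assumes scikit-image >= 0.19 (the num_iter keyword); the profiles and iteration count are arbitrary choices.

```python
import numpy as np
from skimage.restoration import richardson_lucy

# synthetic 1-D "plane integral" profiles
x = np.linspace(-1, 1, 257)
ideal = (np.abs(x) < 0.3).astype(float)                       # ideal plane-integral profile
true_psf = np.exp(-x**2 / (2 * 0.02**2)); true_psf /= true_psf.sum()
measured = np.convolve(ideal, true_psf, mode="same")          # blurred (reconstructed) profile

kernel = ideal / ideal.sum()                                  # known object used as the "PSF"
psf_est = richardson_lucy(measured, kernel, num_iter=200, clip=False)
psf_est /= psf_est.sum()

mtf = np.abs(np.fft.rfft(psf_est))
mtf /= mtf[0]                                                 # directional MTF, normalised at DC
```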
Imaging of all three coronary arteries by transthoracic echocardiography. an illustrated guide
Krzanowski, Marek; Bodzoń, Wojciech; Dimitrow, Paweł Petkow
2003-01-01
Background Improvements in ultrasound technology have enabled direct, transthoracic visualization of long portions of the coronary arteries: the left anterior descending (LAD), circumflex (Cx) and right coronary artery (RCA). Transthoracic measurements of coronary flow velocity have proved to be highly reproducible and correlated with invasive measurements. While clinical applications of transthoracic echocardiography (TTE) of the principal coronary arteries are still very limited, they will likely grow. Echocardiographers may therefore be interested in knowing the ultrasonic views and examination technique, and in being aware of where to look for the coronary arteries and how to optimize the images. Methods A step-by-step approach to direct, transthoracic visualization of the LAD, Cx and RCA is presented. The technique of examination is discussed, and correlations with basic coronary angiography views and heart anatomy are shown and extensively illustrated with photographs and movie-pictures. Hints concerning optimization of ultrasound images are presented and imaging artifacts are discussed. Conclusions Direct, transthoracic examination of the LAD, Cx and RCA in adults is possible and may become a useful adjunct to other methods of coronary artery examination, but studies are needed to establish its role. PMID:14622441
Hybrid statistics-simulations based method for atom-counting from ADF STEM images.
De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra
2017-06-01
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
The pre-image problem in kernel methods.
Kwok, James Tin-yau; Tsang, Ivor Wai-hung
2004-11-01
In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as on using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
A method of extracting speed-dependent vector correlations from 2 + 1 REMPI ion images.
Wei, Wei; Wallace, Colin J; Grubb, Michael P; North, Simon W
2017-07-07
We present analytical expressions for extracting Dixon's bipolar moments in the semi-classical limit from experimental anisotropy parameters of sliced or reconstructed non-sliced images. The current method focuses on images generated by 2 + 1 REMPI (resonance enhanced multi-photon ionization) and is a necessary extension of our previously published 1 + 1 REMPI equations. Two approaches for applying the new equations, direct inversion and forward convolution, are presented. As a demonstration of the new method, bipolar moments were extracted from images of carbonyl sulfide (OCS) photodissociation at 230 nm and NO2 photodissociation at 355 nm, and the results are consistent with previous publications.
TH-CD-207B-03: How to Quantify Temporal Resolution in X-Ray MDCT Imaging?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budde, A; GE Healthcare Technologies, Madison, WI; Li, Y
Purpose: In modern CT scanners, a quantitative metric to assess temporal response, namely, to quantify the temporal resolution (TR), remains elusive. Rough surrogate metrics, such as half of the gantry rotation time for single source CT, a quarter of the gantry rotation time for dual source CT, or measurements of motion artifact's size, shape, or intensity have previously been used. In this work, a rigorous framework which quantifies TR and a practical measurement method are developed. Methods: A motion phantom was simulated which consisted of a single rod that is in motion except during a static period at the temporal center of the scan, termed the TR window. If the image of the motion scan has negligible motion artifacts compared to an image from a totally static scan, then the system has a TR no worse than the TR window used. By repeating this comparison with varying TR windows, the TR of the system can be accurately determined. Motion artifacts were also visually assessed and the TR was measured across varying rod motion speeds, directions, and locations. Noiseless fan beam acquisitions were simulated and images were reconstructed with a short-scan image reconstruction algorithm. Results: The size, shape, and intensity of motion artifacts varied when the rod speed, direction, or location changed. TR measured using the proposed method, however, was consistent across rod speeds, directions, and locations. Conclusion: Since motion artifacts vary depending upon the motion speed, direction, and location, they are not suitable for measuring TR. In this work, a CT system with a specified TR is defined as having the ability to produce a static image with negligible motion artifacts, no matter what motion occurs outside of a static window of width TR. This framework allows for practical measurement of temporal resolution in clinical CT imaging systems. Funding support: GE Healthcare; Conflict of Interest: Employee, GE Healthcare.
Imaging quality analysis of multi-channel scanning radiometer
NASA Astrophysics Data System (ADS)
Fan, Hong; Xu, Wujun; Wang, Chengliang
2008-03-01
The multi-channel scanning radiometer on board the FY-2 geostationary meteorological satellite plays a key role in remote sensing because of its wide field of view and continuous acquisition of multi-spectral images. It is important to evaluate image quality after the performance parameters of the imaging system have been validated. Several methods of evaluating imaging quality are discussed. Of these methods, the most fundamental is the MTF. The MTF of a photoelectric scanning remote sensing instrument in the scanning direction is the product of the optics transfer function (OTF), the detector transfer function (DTF) and the electronics transfer function (ETF). For image motion compensation, the moving speed of the scanning mirror should be considered. The optical MTF measurement is performed in both the EAST/WEST and NORTH/SOUTH directions; the measured values are used for alignment purposes and to determine the general health of the instrument during integration and testing. Imaging systems cannot perfectly reproduce what they see and end up "blurring" the image. Many parts of the imaging system can cause blurring, among them the optical elements, the sampling of the detector itself, post-processing, or the earth's atmosphere for systems that image through it. Through theoretical calculation and actual measurement, it is shown that the DTF and ETF are the dominant factors in the system MTF and that the imaging quality satisfies the instrument design requirements.
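A hedged numeric illustration of the cascade relation stated above: in the scan direction, the system MTF is the product of the optics, detector and electronics terms. The individual model forms below (Gaussian optics, sinc detector aperture, first-order low-pass electronics) and all parameter values are generic textbook choices, not the instrument's actual curves.

```python
import numpy as np

f = np.linspace(0.0, 50.0, 501)            # spatial frequency, cycles/mrad (arbitrary units)
pitch = 0.014                              # detector angular subtense, mrad (assumed)
f3db = 30.0                                # electronics 3 dB cutoff mapped to spatial frequency

mtf_optics = np.exp(-(f / 40.0) ** 2)                     # Gaussian optics model
mtf_detector = np.abs(np.sinc(f * pitch))                 # detector aperture (sinc)
mtf_electronics = 1.0 / np.sqrt(1.0 + (f / f3db) ** 2)    # first-order low-pass
mtf_system = mtf_optics * mtf_detector * mtf_electronics  # cascade: product of the three terms

print("system MTF near 35 cycles/mrad:", mtf_system[np.argmin(np.abs(f - 35.0))])
```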
Scannerless loss modulated flash color range imaging
Sandusky, John V [Albuquerque, NM; Pitts, Todd Alan [Rio Rancho, NM
2008-09-02
Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.
A novel weighted-direction color interpolation
NASA Astrophysics Data System (ADS)
Tao, Jin-you; Yang, Jianfeng; Xue, Bin; Liang, Xiaofen; Qi, Yong-hong; Wang, Feng
2013-08-01
A digital camera captures images by covering the sensor surface with a color filter array (CFA), so only one color sample is obtained at each pixel location. Demosaicking is the process of estimating the missing color components of each pixel to obtain a full-resolution image. In this paper, a new algorithm based on edge-adaptive interpolation with direction-dependent weighting factors is proposed. Our method can effectively suppress undesirable artifacts. Experimental results on Kodak images show that the proposed algorithm obtains higher-quality images than other methods in both numerical and visual terms.
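A hedged sketch of directionally weighted green-channel interpolation at a single non-green pixel of a Bayer mosaic, which is the core idea behind edge-adaptive demosaicking. The inverse-gradient weights are a generic illustration, not the paper's exact weighting scheme or full pipeline.

```python
import numpy as np

def green_at(mosaic, r, c):
    """Estimate green at a red/blue site (r, c) of a float Bayer mosaic (2 <= r, c < H-2, W-2)."""
    # horizontal and vertical gradients from neighbouring green samples and same-colour pixels
    grad_h = abs(mosaic[r, c - 1] - mosaic[r, c + 1]) \
             + abs(2 * mosaic[r, c] - mosaic[r, c - 2] - mosaic[r, c + 2])
    grad_v = abs(mosaic[r - 1, c] - mosaic[r + 1, c]) \
             + abs(2 * mosaic[r, c] - mosaic[r - 2, c] - mosaic[r + 2, c])
    w_h = 1.0 / (1.0 + grad_h)               # favour the smoother (low-gradient) direction
    w_v = 1.0 / (1.0 + grad_v)
    g_h = (mosaic[r, c - 1] + mosaic[r, c + 1]) / 2.0
    g_v = (mosaic[r - 1, c] + mosaic[r + 1, c]) / 2.0
    return (w_h * g_h + w_v * g_v) / (w_h + w_v)
```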
ERIC Educational Resources Information Center
Thomas, Michael S. C.; Purser, Harry R. M.; Tomlinson, Simon; Mareschal, Denis
2012-01-01
This article presents an investigation of the relationship between lesioning and neuroimaging methods of assessing functional specialisation, using synthetic brain imaging (SBI) and lesioning of a connectionist network of past-tense formation. The model comprised two processing "routes": one was a direct route between layers of input and output…
Robust extraction of the aorta and pulmonary artery from 3D MDCT image data
NASA Astrophysics Data System (ADS)
Taeprasartsit, Pinyo; Higgins, William E.
2010-03-01
Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. For the occasional case in which more precise vascular extraction is desired or the automatic method fails, we also provide an alternative semi-automatic fail-safe method. The semi-automatic method extracts the vasculature by extending the medial axes in a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.
Plume propagation direction determination with SO2 cameras
NASA Astrophysics Data System (ADS)
Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Kuhn, Jonas; Platt, Ulrich
2017-03-01
SO2 cameras are becoming an established tool for measuring sulfur dioxide (SO2) fluxes in volcanic plumes with good precision and high temporal resolution. The primary result of SO2 camera measurements is a time series of two-dimensional SO2 column density distributions (i.e. SO2 column density images). However, it is frequently overlooked that, in order to determine the correct SO2 fluxes, not only the SO2 column density, but also the distance between the camera and the volcanic plume, has to be precisely known. This is because cameras only measure angular extents of objects, while flux measurements require knowledge of the spatial plume extent. The distance to the plume may vary within the image array (i.e. the field of view of the SO2 camera) since the plume propagation direction (i.e. the wind direction) might not be parallel to the image plane of the SO2 camera. If the wind direction and thus the camera-plume distance are not well known, this error propagates into the determined SO2 fluxes and can cause errors exceeding 50 %. This source of error is independent of the frequently quoted (approximate) compensation of apparently higher SO2 column densities and apparently lower plume propagation velocities at non-perpendicular plume observation angles. Here, we propose a new method to estimate the propagation direction of the volcanic plume directly from SO2 camera image time series by analysing apparent flux gradients along the image plane. From the plume propagation direction and the known locations of the SO2 source (i.e. the volcanic vent) and the camera, the camera-plume distance can be determined. Besides determining the plume propagation direction and thus the wind direction in the plume region directly from SO2 camera images, we additionally found that it is possible to detect changes of the propagation direction at a time resolution of the order of minutes. In addition to theoretical studies, we applied our method to SO2 flux measurements at Mt Etna and demonstrate that considerably more precise SO2 fluxes (up to a factor of 2 error reduction) are obtained. We conclude that studies of SO2 flux variability become more reliable by excluding the possible influence of propagation direction variations.
Subramanian, Sankaran; Koscielniak, Janusz W.; Devasahayam, Nallathamby; Pursley, Randall H.; Pohida, Thomas J.; Krishna, Murali C.
2007-01-01
Rapid field scanning, on the order of T/s, using high-frequency sinusoidal or triangular sweep fields superimposed on the main Zeeman field was used for direct detection of signals without low-frequency field modulation. Simultaneous application of space-encoding rotating field gradients has been employed to perform fast CW EPR imaging with direct detection that could, in principle, approach the speed of pulsed FT EPR imaging. The method takes advantage of the well-known rapid-scan strategy in CW NMR and EPR, which allows arbitrarily fast field sweeps, and the simultaneous application of spinning gradients, which allows fast spatial encoding. This leads to fast functional EPR imaging and, depending on the spin concentration, spectrometer sensitivity and detection bandwidth, can provide the improved temporal resolution that is important for interrogating the dynamics of spin perfusion, pharmacokinetics, spectral-spatial imaging, dynamic oximetry, etc. PMID:17350865
Wu, Zhe; Bilgic, Berkin; He, Hongjian; Tong, Qiqi; Sun, Yi; Du, Yiping; Setsompop, Kawin; Zhong, Jianhui
2018-09-01
This study introduces a highly accelerated whole-brain direct visualization of short transverse relaxation time component (ViSTa) imaging using a wave controlled aliasing in parallel imaging (CAIPI) technique, for acquisition within a clinically acceptable scan time, with preservation of high image quality and sufficient spatial resolution, and reduced residual point spread function artifacts. Double inversion RF pulses were applied to preserve the signal from short T1 components for directly extracting the myelin water signal in ViSTa imaging. A 2D simultaneous multislice and a 3D acquisition of ViSTa images incorporating wave-encoding were used for data acquisition. Improvements brought by a zero-padding method in wave-CAIPI reconstruction were also investigated. The zero-padding method in wave-CAIPI reconstruction reduced the root-mean-square errors between the wave-encoded and Cartesian gradient echoes for all wave gradient configurations in simulation, and reduced the side-to-main lobe intensity ratio from 34.5% to 16% in the thin-slab in vivo ViSTa images. In a 4× acceleration simultaneous-multislice scenario, wave-CAIPI ViSTa achieved negligible g-factors (g_mean/g_max = 1.03/1.10), while retaining minimal interslice artifacts. An 8× accelerated acquisition of 3D wave-CAIPI ViSTa imaging covering the whole brain with 1.1 × 1.1 × 3 mm^3 voxel size was achieved within 15 minutes, and incurred only a small g-factor penalty (g_mean/g_max = 1.05/1.16). Whole-brain ViSTa images were thus obtained within 15 minutes with negligible g-factor penalty by using wave-CAIPI acquisition and zero-padding reconstruction. The proposed zero-padding method was shown to be effective in reducing the residual point spread function for wave-encoded images, particularly for ViSTa. © 2018 International Society for Magnetic Resonance in Medicine.
Demonstration of Sparse Signal Reconstruction for Radar Imaging of Ice Sheets
NASA Astrophysics Data System (ADS)
Heister, Anton; Scheiber, Rolf
2017-04-01
Conventional processing of ice-sounder data produces 2-D images of the ice sheet and bed, where the two dimensions are along-track and depth, while the across-track direction is fixed to nadir. The 2-D images contain information about the topography and radar reflectivity of the ice sheet's surface, bed, and internal layers in the along-track direction. Having multiple antenna phase centers in the across-track direction enables the production of 3-D images of the ice sheet and bed. Compared to conventional 2-D images, these contain additional information about the surface and bed topography and the orientation of the internal layers over a swath in the across-track direction. We apply a 3-D SAR tomographic ice-sounding method based on sparse signal reconstruction [1] to data collected by the Center for Remote Sensing of Ice Sheets (CReSIS) in 2008 in Greenland [2] using their multichannel coherent radar depth sounder (MCoRDS). The MCoRDS data have 16 effective phase centers, which allows us to better understand the performance of the method. Lastly, we improve sparsity by including wavelet dictionaries in the reconstruction. The results show improved resolvability of scene features in the across-track direction compared to the MVDR beamformer. References: [1] A. Heister, R. Scheiber, "First Analysis of Sparse Signal Reconstruction for Radar Imaging of Ice Sheets". In: Proceedings of EUSAR, pp. 788-791, June 2016. [2] X. Wu, K. C. Jezek, E. Rodriguez, S. Gogineni, F. Rodriguez-Morales, and A. Freeman, "Ice sheet bed mapping with airborne SAR tomography". IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 10 Part 1, pp. 3791-3802, 2011.
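The sketch below is a generic illustration of the sparse reconstruction step, not the authors' exact formulation: it solves min_x 0.5*||Ax - b||^2 + lam*||x||_1 with ISTA, where A maps an across-track reflectivity profile x to multi-phase-centre measurements b. A synthetic random matrix stands in for the actual tomographic steering matrix, and no wavelet dictionary is included.

```python
import numpy as np

def ista(A, b, lam=0.05, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - (A.conj().T @ (A @ x - b)) / L      # gradient step
        mag = np.abs(g)
        # complex soft-thresholding
        x = np.where(mag > lam / L, (1 - (lam / L) / np.maximum(mag, 1e-12)) * g, 0)
    return x

rng = np.random.default_rng(0)
n_angles, n_channels, k = 128, 16, 3                # across-track bins, phase centres, sparse targets
A = (rng.normal(size=(n_channels, n_angles)) + 1j * rng.normal(size=(n_channels, n_angles))) / np.sqrt(2)
x_true = np.zeros(n_angles, dtype=complex)
x_true[rng.choice(n_angles, k, replace=False)] = 1.0
b = A @ x_true
x_hat = ista(A, b)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```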
Building Facade Modeling Under Line Feature Constraint Based on Close-Range Images
NASA Astrophysics Data System (ADS)
Liang, Y.; Sheng, Y. H.
2018-04-01
To solve the problems of modeling building facades using only point features from close-range images, a new method for modeling building facades under a line feature constraint is proposed in this paper. Firstly, camera parameters and a sparse spatial point cloud are recovered using SFM, and 3D dense point clouds are generated with MVS. Secondly, line features are detected based on the gradient direction; the detected line features are fitted considering their directions and lengths, and then matched under multiple types of constraints and extracted from the multi-image sequence. Finally, the facade mesh of a building is triangulated from the point cloud and line features. Experiments show that this method can effectively reconstruct the geometric facades of buildings by combining the point and line features of close-range image sequences, especially when restoring the contour information of building facades.
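A hedged sketch of the line-feature detection step: detect edges, then extract line segments. scikit-image's Canny detector and probabilistic Hough transform are used here as a generic stand-in for the paper's gradient-direction based detector; the file name and all parameters are illustrative only.

```python
import numpy as np
from skimage import io, color, feature, transform

img = color.rgb2gray(io.imread("facade.jpg"))            # placeholder file name
edges = feature.canny(img, sigma=2.0)
segments = transform.probabilistic_hough_line(
    edges, threshold=10, line_length=50, line_gap=3)     # list of ((x0, y0), (x1, y1))

# keep near-vertical and near-horizontal segments, typical of building facades
def angle_deg(seg):
    (x0, y0), (x1, y1) = seg
    return abs(np.degrees(np.arctan2(y1 - y0, x1 - x0))) % 180

kept = [s for s in segments
        if min(angle_deg(s), abs(angle_deg(s) - 90), abs(angle_deg(s) - 180)) < 10]
print(f"{len(kept)} of {len(segments)} segments retained")
```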
USDA-ARS?s Scientific Manuscript database
A high-throughput Raman chemical imaging method was developed for direct inspection of benzoyl peroxide (BPO) mixed in wheat flour. A 5 W 785 nm line laser (240 mm long and 1 mm wide) was used as a Raman excitation source in a push-broom Raman imaging system. Hyperspectral Raman images were collecte...
Kelley, Laura C.; Wang, Zheng; Hagedorn, Elliott J.; Wang, Lin; Shen, Wanqing; Lei, Shijun; Johnson, Sam A.; Sherwood, David R.
2018-01-01
Cell invasion through basement membrane (BM) barriers is crucial during development, leukocyte trafficking, and for the spread of cancer. Despite its importance in normal and diseased states, the mechanisms that direct invasion are poorly understood, in large part because of the inability to visualize dynamic cell-basement membrane interactions in vivo. This protocol describes multi-channel time-lapse confocal imaging of anchor cell invasion in live C. elegans. Methods presented include outline slide preparation and worm growth synchronization (15 min), mounting (20 min), image acquisition (20-180 min), image processing (20 min), and quantitative analysis (variable timing). Images acquired enable direct measurement of invasive dynamics including invadopodia formation, cell membrane protrusions, and BM removal. This protocol can be combined with genetic analysis, molecular activity probes, and optogenetic approaches to uncover molecular mechanisms underlying cell invasion. These methods can also be readily adapted for real-time analysis of cell migration, basement membrane turnover, and cell membrane dynamics by any worm laboratory. PMID:28880279
Measuring digit lengths with 3D digital stereophotogrammetry: A comparison across methods.
Gremba, Allison; Weinberg, Seth M
2018-05-09
We compared digital 3D stereophotogrammetry to more traditional measurement methods (direct anthropometry and 2D scanning) to capture digit lengths and ratios. The length of the second and fourth digits was measured by each method and the second-to-fourth ratio was calculated. For each digit measurement, intraobserver agreement was calculated for each of the three collection methods. Further, measurements from the three methods were compared directly to one another. Agreement statistics included the intraclass correlation coefficient (ICC) and technical error of measurement (TEM). Intraobserver agreement statistics for the digit length measurements were high for all three methods; ICC values exceeded 0.97 and TEM values were below 1 mm. For digit ratio, intraobserver agreement was also acceptable for all methods, with direct anthropometry exhibiting lower agreement (ICC = 0.87) compared to indirect methods. For the comparison across methods, the overall agreement was high for digit length measurements (ICC values ranging from 0.93 to 0.98; TEM values below 2 mm). For digit ratios, high agreement was observed between the two indirect methods (ICC = 0.93), whereas indirect methods showed lower agreement when compared to direct anthropometry (ICC < 0.75). Digit measurements and derived ratios from 3D stereophotogrammetry showed high intraobserver agreement (similar to more traditional methods) suggesting that landmarks could be placed reliably on 3D hand surface images. While digit length measurements were found to be comparable across all three methods, ratios derived from direct anthropometry tended to be higher than those calculated indirectly from 2D or 3D images. © 2018 Wiley Periodicals, Inc.
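For readers who want to reproduce the agreement statistics named above, the sketch below computes the technical error of measurement and a one-way, single-measures ICC; the abstract does not state which ICC form was used, so that choice is an assumption, and the example digit lengths are made up.

```python
# Agreement-statistics sketch: TEM and a one-way single-measures ICC.
# The ICC form is an assumption; the example digit lengths are hypothetical.
import numpy as np

def tem(x1, x2):
    """Technical error of measurement for two repeated measurements."""
    d = np.asarray(x1, float) - np.asarray(x2, float)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

def icc_oneway(data):
    """ICC(1,1) for an (n_subjects, k_measurements) array."""
    data = np.asarray(data, float)
    n, k = data.shape
    subj_means = data.mean(axis=1)
    msb = k * np.sum((subj_means - data.mean()) ** 2) / (n - 1)      # between-subject MS
    msw = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))  # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)

rep1 = np.array([68.1, 71.4, 65.9, 70.2, 66.8])   # hypothetical digit lengths (mm)
rep2 = np.array([68.4, 71.1, 66.3, 70.0, 67.1])
print("TEM (mm):", tem(rep1, rep2))
print("ICC:", icc_oneway(np.column_stack([rep1, rep2])))
```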
Along-track calibration of SWIR push-broom hyperspectral imaging system
NASA Astrophysics Data System (ADS)
Jemec, Jurij; Pernuš, Franjo; Likar, Boštjan; Bürmen, Miran
2016-05-01
Push-broom hyperspectral imaging systems are increasingly used for various medical, agricultural and military purposes. The acquired images contain spectral information in every pixel of the imaged scene, collecting additional information about the scene compared to classical RGB color imaging. Due to misalignment and imperfections in the optical components comprising the push-broom hyperspectral imaging system, variable spectral and spatial misalignments and blur are present in the acquired images. To capture these distortions, a spatially and spectrally variant response function must be identified at each spatial and spectral position. In this study, we propose a procedure to characterize the variant response function of Short-Wavelength Infrared (SWIR) push-broom hyperspectral imaging systems in the across-track and along-track directions and remove its effect from the acquired images. Custom laser-machined spatial calibration targets are used for the characterization. The spatial and spectral variability of the response function in the across-track and along-track directions is modeled by a parametrized basis function. Finally, the characterization results are used to restore the distorted hyperspectral images in the across-track and along-track directions by a Richardson-Lucy deconvolution-based algorithm. The proposed calibration method is thoroughly evaluated on images of targets with well-defined geometric properties. The results suggest that the proposed procedure is well suited for fast and accurate spatial calibration of push-broom hyperspectral imaging systems.
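A minimal 1-D Richardson-Lucy sketch of the restoration step is given below; it assumes the spatially variant response has already been reduced to a single local PSF for the position being restored, which is a simplification of the parametrized model used in the paper.

```python
# Minimal 1-D Richardson-Lucy deconvolution sketch; assumes a single local PSF,
# a simplification of the spatially variant response model in the paper.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        estimate *= fftconvolve(blurred / (reblurred + eps), psf_mirror, mode="same")
    return estimate

# Example: restore a rectangular line profile blurred by a Gaussian response.
x = np.linspace(-1, 1, 256)
truth = (np.abs(x) < 0.1).astype(float)
psf = np.exp(-0.5 * (np.linspace(-3, 3, 31) / 0.8) ** 2)
restored = richardson_lucy(fftconvolve(truth, psf / psf.sum(), mode="same"), psf)
```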
CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition
Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe
2013-01-01
Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component, while Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The recovered CT image sequence is then obtained by combining the recovered low-rank image with all of the recovered sparse images in the sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for large-noise CT images. PMID:24023764
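As a rough illustration of the sparse plus low-rank split, the sketch below runs a basic inexact augmented-Lagrangian RPCA on a matrix whose columns are vectorized CT slices; the parameter choices follow common defaults and are not the settings evaluated in the paper.

```python
# Basic RPCA (sparse + low-rank) sketch via an inexact augmented Lagrangian;
# D holds one vectorized CT slice per column. Parameters are common defaults,
# not the paper's settings.
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca(D, n_iter=200, tol=1e-7):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        # Low-rank update by singular value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt
        # Sparse update by element-wise shrinkage.
        S = soft(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y += mu * residual
        if np.linalg.norm(residual) <= tol * np.linalg.norm(D):
            break
    return L, S   # low-rank component, sparse component
```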
Li, Qinwei; Xiao, Xia; Wang, Liang; Song, Hang; Kono, Hayato; Liu, Peifang; Lu, Hong; Kikkawa, Takamaro
2015-10-01
A direct extraction method of the tumor response based on ensemble empirical mode decomposition (EEMD) is proposed for early breast cancer detection by ultra-wideband (UWB) microwave imaging. With this approach, the image reconstruction for tumor detection can be realized using only signals extracted from the as-detected waveforms. The calibration process executed in previous research to obtain reference waveforms, which represent signals detected from a tumor-free model, is not required. The correctness of the method is verified by successfully detecting a 4 mm tumor located inside the glandular region in one breast model and a tumor located at the interface between the gland and the fat in another, respectively. The reliability of the method is checked by distinguishing a tumor buried in glandular tissue whose dielectric constant is 35. The feasibility of the method is confirmed by showing the correct tumor information in both simulation results and experimental results for a realistic 3-D printed breast phantom.
Patch-based image reconstruction for PET using prior-image derived dictionaries
NASA Astrophysics Data System (ADS)
Tahaei, Marzieh S.; Reader, Andrew J.
2016-09-01
In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
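The coefficient estimation can be illustrated with a minimal MLEM sketch; here A is a toy system matrix, and modeling the MR-derived patch basis simply as an extra matrix folded into A is an assumption made for brevity.

```python
# Minimal MLEM sketch. A is a toy (n_bins x n_coeffs) system matrix; folding an
# MR-derived patch-basis matrix B into A (A_eff = A @ B) is an assumption here.
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    sensitivity = A.sum(axis=0) + eps               # A^T 1
    x = np.ones(A.shape[1])                         # non-negative initialization
    for _ in range(n_iter):
        forward = A @ x + eps                       # forward projection
        x *= (A.T @ (y / forward)) / sensitivity    # multiplicative EM update
    return x
```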
Yuan, Tiezhu; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang
2017-01-01
Radar imaging based on electromagnetic vortex can achieve azimuth resolution without relative motion. The present paper investigates this imaging technique with the use of a single receiving antenna through theoretical analysis and experimental results. Compared with the use of multiple receiving antennas, the echoes from a single receiver cannot be used directly for image reconstruction using Fourier method. The reason is revealed by using the point spread function. An additional phase is compensated for each mode before imaging process based on the array parameters and the elevation of the targets. A proof-of-concept imaging system based on a circular phased array is created, and imaging experiments of corner-reflector targets are performed in an anechoic chamber. The azimuthal image is reconstructed by the use of Fourier transform and spectral estimation methods. The azimuth resolution of the two methods is analyzed and compared through experimental data. The experimental results verify the principle of azimuth resolution and the proposed phase compensation method. PMID:28335487
Yamada, Haruyasu; Abe, Osamu; Shizukuishi, Takashi; Kikuta, Junko; Shinozaki, Takahiro; Dezawa, Ko; Nagano, Akira; Matsuda, Masayuki; Haradome, Hiroki; Imamura, Yoshiki
2014-01-01
Diffusion imaging is a unique noninvasive tool to detect brain white matter trajectory and integrity in vivo. However, this technique suffers from spatial distortion and signal pileup or dropout originating from local susceptibility gradients and eddy currents. Although there are several methods to mitigate these problems, most techniques can be applicable either to susceptibility or eddy-current induced distortion alone with a few exceptions. The present study compared the correction efficiency of FSL tools, “eddy_correct” and the combination of “eddy” and “topup” in terms of diffusion-derived fractional anisotropy (FA). The brain diffusion images were acquired from 10 healthy subjects using 30 and 60 directions encoding schemes based on the electrostatic repulsive forces. For the 30 directions encoding, 2 sets of diffusion images were acquired with the same parameters, except for the phase-encode blips which had opposing polarities along the anteroposterior direction. For the 60 directions encoding, non–diffusion-weighted and diffusion-weighted images were obtained with forward phase-encoding blips and non–diffusion-weighted images with the same parameter, except for the phase-encode blips, which had opposing polarities. FA images without and with distortion correction were compared in a voxel-wise manner with tract-based spatial statistics. We showed that images corrected with eddy and topup possessed higher FA values than images uncorrected and corrected with eddy_correct with trilinear (FSL default setting) or spline interpolation in most white matter skeletons, using both encoding schemes. Furthermore, the 60 directions encoding scheme was superior as measured by increased FA values to the 30 directions encoding scheme, despite comparable acquisition time. This study supports the combination of eddy and topup as a superior correction tool in diffusion imaging rather than the eddy_correct tool, especially with trilinear interpolation, using 60 directions encoding scheme. PMID:25405472
Neurient: An Algorithm for Automatic Tracing of Confluent Neuronal Images to Determine Alignment
Mitchel, J.A.; Martin, I.S.
2013-01-01
A goal of neural tissue engineering is the development and evaluation of materials that guide neuronal growth and alignment. However, the methods available to quantitatively evaluate the response of neurons to guidance materials are limited and/or expensive, and may require manual tracing to be performed by the researcher. We have developed an open source, automated Matlab-based algorithm, building on previously published methods, to trace and quantify alignment of fluorescent images of neurons in culture. The algorithm is divided into three phases, including computation of a lookup table which contains directional information for each image, location of a set of seed points which may lie along neurite centerlines, and tracing neurites starting with each seed point and indexing into the lookup table. This method was used to obtain quantitative alignment data for complex images of densely cultured neurons. Complete automation of tracing allows for unsupervised processing of large numbers of images. Following image processing with our algorithm, available metrics to quantify neurite alignment include angular histograms, percent of neurite segments in a given direction, and mean neurite angle. The alignment information obtained from traced images can be used to compare the response of neurons to a range of conditions. This tracing algorithm is freely available to the scientific community under the name Neurient, and its implementation in Matlab allows a wide range of researchers to use a standardized, open source method to quantitatively evaluate the alignment of dense neuronal cultures. PMID:23384629
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors, computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets; therefore, image compression methods will play a significant role in the image processing and analysis performed by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments in digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community in this medical application and to identify possible dual-use technologies within the NASA centers.
NASA Astrophysics Data System (ADS)
Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa
2018-05-01
In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field and an object wave is thereby produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. The speckle fields are changed by moving a ground glass plate in the in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process over images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves in the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
Shearlet Features for Registration of Remotely Sensed Multitemporal Images
NASA Technical Reports Server (NTRS)
Murphy, James M.; Le Moigne, Jacqueline
2015-01-01
We investigate the role of anisotropic feature extraction methods for automatic image registration of remotely sensed multitemporal images. Building on the classical use of wavelets in image registration, we develop an algorithm based on shearlets, a mathematical generalization of wavelets that offers increased directional sensitivity. Initial experimental results on LANDSAT images are presented, which indicate superior performance of the shearlet algorithm when compared to classical wavelet algorithms.
An Automatic Procedure for Combining Digital Images and Laser Scanner Data
NASA Astrophysics Data System (ADS)
Moussa, W.; Abdel-Wahab, M.; Fritsch, D.
2012-07-01
Besides improving both the geometry and the visual quality of the model, the integration of close-range photogrammetry and terrestrial laser scanning techniques aims at filling gaps in laser scanner point clouds to avoid modeling errors, reconstructing more details in higher resolution and recovering simple structures with less geometric detail. Thus, within this paper a flexible approach for the automatic combination of digital images and laser scanner data is presented. Our approach comprises two methods for data fusion. The first method starts with a marker-free registration of digital images based on a point-based environment model (PEM) of a scene, which stores the 3D laser scanner point clouds associated with intensity and RGB values. The PEM allows the extraction of accurate control information for the direct computation of absolute camera orientations with redundant information by means of accurate space resection methods. In order to use the computed relations between the digital images and the laser scanner data, an extended Helmert (seven-parameter) transformation is introduced and its parameters are estimated. Prior to that, in the second method, the local relative orientation parameters of the camera images are calculated by means of an optimized Structure and Motion (SaM) reconstruction method. Then, using the determined transformation parameters results in absolutely oriented images in relation to the laser scanner data. With the resulting absolute orientations we have employed robust dense image reconstruction algorithms to create oriented dense image point clouds, which are automatically combined with the laser scanner data to form a complete detailed representation of a scene. Examples of different data sets are shown and experimental results demonstrate the effectiveness of the presented procedures.
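The sketch below applies an extended (seven-parameter) Helmert transformation, i.e. a uniform scale, a rotation built from three angles, and a translation; the Euler-angle convention is an assumption, since the abstract does not specify the parameterization used.

```python
# Applying a seven-parameter Helmert transformation (scale s, rotation R from
# three angles, translation t). The Euler-angle convention is an assumption.
import numpy as np

def rotation_matrix(omega, phi, kappa):
    cx, sx = np.cos(omega), np.sin(omega)
    cy, sy = np.cos(phi), np.sin(phi)
    cz, sz = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def helmert_transform(points, s, angles, t):
    """Map an (N, 3) array of image-frame points into the laser-scanner frame."""
    R = rotation_matrix(*angles)
    return s * (points @ R.T) + np.asarray(t)
```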
Veldkamp, Wouter J H; Joemai, Raoul M S; van der Molen, Aart J; Geleijns, Jacob
2010-02-01
Metal prostheses cause artifacts in computed tomography (CT) images. The purpose of this work was to design an efficient and accurate metal segmentation in raw data to achieve artifact suppression and to improve CT image quality for patients with metal hip or shoulder prostheses. The artifact suppression technique incorporates two steps: metal object segmentation in raw data and replacement of the segmented region by new values using an interpolation scheme, followed by addition of the scaled metal signal intensity. Segmentation of metal is performed directly in sinograms, making it efficient and different from current methods that perform segmentation in reconstructed images in combination with Radon transformations. Metal signal segmentation is achieved by using a Markov random field model (MRF). Three interpolation methods are applied and investigated. To provide a proof of concept, CT data of five patients with metal implants were included in the study, as well as CT data of a PMMA phantom with Teflon, PVC, and titanium inserts. Accuracy was determined quantitatively by comparing mean Hounsfield (HU) values and standard deviation (SD) as a measure of distortion in phantom images with titanium (original and suppressed) and without titanium insert. Qualitative improvement was assessed by comparing uncorrected clinical images with artifact suppressed images. Artifacts in CT data of a phantom and five patients were automatically suppressed. The general visibility of structures clearly improved. In phantom images, the technique showed reduced SD close to the SD for the case where titanium was not inserted, indicating improved image quality. HU values in corrected images were different from expected values for all interpolation methods. Subtle differences between interpolation methods were found. The new artifact suppression design is efficient, for instance, in terms of preserving spatial resolution, as it is applied directly to original raw data. It successfully reduced artifacts in CT images of five patients and in phantom images. Sophisticated interpolation methods are needed to obtain reliable HU values close to the prosthesis.
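A minimal sketch of the simplest of the interpolation schemes is given below: for each projection view, the detector bins flagged as metal are replaced by linear interpolation from the neighbouring non-metal bins. The MRF segmentation itself and the re-insertion of the scaled metal signal are not shown.

```python
# Linear in-painting of the metal trace in the sinogram (the simplest of the
# interpolation schemes); segmentation and metal re-insertion are not shown.
import numpy as np

def inpaint_sinogram(sinogram, metal_mask):
    """sinogram, metal_mask: (n_views, n_bins) arrays; mask is True over metal."""
    filled = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_mask[v]
        if bad.any() and (~bad).any():
            filled[v, bad] = np.interp(bins[bad], bins[~bad], sinogram[v, ~bad])
    return filled
```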
The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor †
Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji
2018-01-01
The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode from a single pixel into the different taps of the exposures and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599
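For reference, the classical per-pixel least-squares step that the multi-tap acquisition feeds can be sketched as follows; it assumes at least three images with known, non-coplanar lighting directions and ignores shadowed or saturated pixels.

```python
# Classical photometric stereo: per-pixel least squares with known light
# directions; shadows and saturated pixels are ignored in this sketch.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (k, H, W) intensities; light_dirs: (k, 3) unit vectors, k >= 3."""
    k, H, W = images.shape
    I = images.reshape(k, -1)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W) albedo-scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / (albedo + 1e-12)).reshape(3, H, W)
    return normals, albedo.reshape(H, W)
```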
Method to directly radiolabel antibodies for diagnostic imaging and therapy
Thakur, Mathew L.
1994-01-01
The invention is a novel method and kit for directly radiolabeling proteins such as antibodies or antibody fragments for diagnostic and therapeutic purposes. The method comprises incubating a protein-containing solution with a solution of sodium ascorbate; adding a required quantity of reduced radionuclide to the incubated protein. A kit is also provided wherein the protein and/or reducing agents may be in lyophilized form.
Method to directly radiolabel antibodies for diagnostic imaging and therapy
Thakur, Mathew L.
1991-01-01
The invention is a novel method and kit for directly radiolabeling proteins such as antibodies or antibody fragments for diagnostic and therapeutic purposes. The method comprises incubating a protein-containing solution with a solution of sodium ascorbate; adding a required quantity of reduced radionuclide to the incubated protein. A kit is also provided wherein the protein and/or reducing agents may be in lyophilized form.
NASA Astrophysics Data System (ADS)
Cheng, Jun; Zhang, Jun; Tian, Jinwen
2015-12-01
Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is carried out on the input image, and the boundary is extracted on the low-resolution image obtained by the wavelet transform. Secondly, the LiveWire shortest path is calculated using a direction search over the control-point set, utilizing the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region-iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The proposed algorithm combines the advantages of the Haar wavelet transform and of the optimal path search based on the control-point direction search: the former provides fast image decomposition and reconstruction and is consistent with the texture features of the image, while the latter reduces the time complexity of the original algorithm. As a result, the algorithm improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All of the methods mentioned above play a large role in improving the execution efficiency and robustness of the algorithm.
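A minimal single-level 2-D Haar decomposition, on whose approximation band the coarse boundary search would run, can be sketched as follows; the normalization factor is a common convention rather than the one specified in the paper.

```python
# Single-level 2-D Haar decomposition; the coarse LiveWire search runs on the
# approximation band. The 1/4 normalization is a convention chosen here.
import numpy as np

def haar2d_level1(img):
    img = np.asarray(img, float)
    h, w = img.shape[0] & ~1, img.shape[1] & ~1    # crop to even size
    a = img[:h:2, :w:2]; b = img[:h:2, 1:w:2]
    c = img[1:h:2, :w:2]; d = img[1:h:2, 1:w:2]
    approx = (a + b + c + d) / 4.0                 # low-resolution approximation
    detail_h = (a - b + c - d) / 4.0               # detail sub-bands
    detail_v = (a + b - c - d) / 4.0
    detail_d = (a - b - c + d) / 4.0
    return approx, detail_h, detail_v, detail_d
```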
Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images
Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.
2014-01-01
Segmentation-free direct methods are quite efficient for automated nuclei extraction from high-dimensional images. A few such methods do exist, but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four-dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at the positions of the initial centroids to estimate all nuclei diameters. This procedure continues for subsequent images in the sequence. The above mechanism thus ensures proper enhancement through automated estimation of the major parameters, bringing robustness and safeguarding the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended to nuclei volume segmentation: the same optimization technique is applied to the final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in 4D space. Experimental evaluations with five image sequences (each having 271 sequential 3D images) corresponding to five different mouse embryos show promising performance of our methods in terms of nuclear detection, segmentation, and tracking. A detailed analysis with a sub-sequence of 101 3D images from an embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over previous methods, which used inappropriately large parameter values. Results also confirm that the proposed method and its variants achieve high detection accuracies (about 98% mean F-measure) irrespective of large variations in filter parameters and noise levels. PMID:25020042
A novel method to detect shadows on multispectral images
NASA Astrophysics Data System (ADS)
Dağlayan Sevim, Hazan; Yardımcı Çetin, Yasemin; Özışık Başkurt, Didem
2016-10-01
Shadowing occurs when the direct light coming from a light source is obstructed by tall man-made structures, mountains or clouds. Since shadow regions are illuminated only by scattered light, the true spectral properties of objects are not observed in such regions. Therefore, many object classification and change detection problems utilize shadow detection as a preprocessing step. Besides, shadows are useful for obtaining 3D information about objects, such as estimating the height of buildings. With the pervasiveness of remote sensing images, shadow detection is ever more important. This study aims to develop a shadow detection method for multispectral images based on a transformation of the C1C2C3 color space and the contribution of NIR bands. The proposed method is tested on WorldView-2 images covering Ankara, Turkey, acquired at different times. The new index is used on these 8-band multispectral images with two NIR bands, and the method is compared with methods in the literature.
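A rough sketch of a C1C2C3-plus-NIR shadow mask is shown below; the exact index and thresholds proposed in the study are not reproduced, so the C3 and NIR thresholds here are illustrative assumptions.

```python
# Candidate shadow mask from the C3 component of the C1C2C3 space plus a low-NIR
# test; thresholds are illustrative, not the index proposed in the study.
import numpy as np

def shadow_mask(r, g, b, nir, c3_thresh=1.0, nir_thresh=0.2):
    """Bands as float arrays scaled to [0, 1]; returns a boolean shadow mask."""
    eps = 1e-6
    c3 = np.arctan(b / (np.maximum(r, g) + eps))   # invariant C3 component
    return (c3 > c3_thresh) & (nir < nir_thresh)   # bluish chromaticity and low NIR
```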
Yamasaki, Jun; Kawai, Tomoyuki; Tanaka, Nobuo
2004-01-01
Spherical aberration (C(S))-corrected transmission electron microscopy (TEM) and annular dark-field scanning TEM (ADF-STEM) are applied to high-resolution observation of stacking faults in Si(1 - x)Ge(x) alloy films prepared on a Si(100) buffer layer by the chemical vapor deposition method. Both of the images clarify the individual nature of stacking faults from their directly interpretable image contrast and also by using image simulation in the case of the C(S)-corrected TEM. Positions of the atomic columns obtained in the ADF-STEM images almost agree with a projection of the theoretical model studied by Chou et al. (Phys. Rev. B 32(1985): 7979). Comparison between the C(S)-corrected TEM and ADF-STEM images shows that their resolution is at a similar level, but directly interpretable image contrast is obtained in ultrathin samples for C(S)-corrected TEM and in slightly thicker samples for ADF-STEM.
A framework for directional and higher-order reconstruction in photoacoustic tomography
NASA Astrophysics Data System (ADS)
Boink, Yoeri E.; Lagerwerf, Marinus J.; Steenbergen, Wiendelt; van Gils, Stephan A.; Manohar, Srirang; Brune, Christoph
2018-02-01
Photoacoustic tomography is a hybrid imaging technique that combines high optical tissue contrast with high ultrasound resolution. Direct reconstruction methods such as filtered back-projection, time reversal and least squares suffer from curved line artefacts and blurring, especially in the case of limited angles or strong noise. In recent years, there has been great interest in regularised iterative methods. These methods employ prior knowledge of the image to provide higher quality reconstructions. However, easy comparisons between regularisers and their properties are limited, since many tomography implementations heavily rely on the specific regulariser chosen. To overcome this bottleneck, we present a modular reconstruction framework for photoacoustic tomography, which enables easy comparisons between regularisers with different properties, e.g. nonlinear, higher-order or directional. We solve the underlying minimisation problem with an efficient first-order primal-dual algorithm. Convergence rates are optimised by choosing an operator-dependent preconditioning strategy. A variety of reconstruction methods are tested on challenging 2D synthetic and experimental data sets. They outperform direct reconstruction approaches for strong noise levels and limited angle measurements, offering immediate benefits in terms of acquisition time and quality. This work provides a basic platform for the investigation of future advanced regularisation methods in photoacoustic tomography.
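To make the primal-dual machinery concrete, the sketch below solves the simplest instance of such a framework, TV-regularised denoising, with a first-order primal-dual (Chambolle-Pock) iteration; the photoacoustic forward operator, the higher-order and directional regularisers, and the preconditioning strategy of the paper are not included, and the parameter values are illustrative.

```python
# First-order primal-dual (Chambolle-Pock) sketch for TV-regularised denoising:
# min_u ||grad u||_1 + (lam/2)*||u - f||^2. Parameters are illustrative.
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    d = np.zeros_like(px)
    d[:, :-1] += px[:, :-1]; d[:, 1:] -= px[:, :-1]
    d[:-1, :] += py[:-1, :]; d[1:, :] -= py[:-1, :]
    return d

def tv_denoise_pd(f, lam=8.0, n_iter=200):
    tau = sigma = 1.0 / np.sqrt(8.0)        # step sizes, tau*sigma*||grad||^2 <= 1
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)                # dual ascent step
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))   # project dual onto unit ball
        px /= norm; py /= norm
        u_old = u                           # primal proximal step
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2 * u - u_old               # over-relaxation
    return u
```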
NASA Astrophysics Data System (ADS)
Ai, Lingyu; Kim, Eun-Soo
2018-03-01
We propose a method for refocusing-range and image-quality enhanced optical reconstruction of three-dimensional (3-D) objects from integral images only by using a 3 × 3 periodic δ-function array (PDFA), which is called a principal PDFA (P-PDFA). By directly convolving the elemental image array (EIA) captured from 3-D objects with the P-PDFAs whose spatial periods correspond to each object's depth, a set of spatially filtered EIAs (SF-EIAs) is extracted, from which 3-D objects can be reconstructed and refocused at their real depths. Since the convolution operations are performed directly on each of the minimum 3 × 3 EIs of the picked-up EIA, the capturing and refocused-depth ranges of the 3-D objects can be greatly enhanced, and 3-D objects with much improved image quality can be reconstructed without any preprocessing operations. Through ray-optical analysis and optical experiments with actual 3-D objects, the feasibility of the proposed method has been confirmed.
Chen, Yang; Budde, Adam; Li, Ke; Li, Yinsheng; Hsieh, Jiang; Chen, Guang-Hong
2017-01-01
When the scan field of view (SFOV) of a CT system is not large enough to enclose the entire cross-section of the patient, or the patient needs to be positioned partially outside the SFOV for certain clinical applications, truncation artifacts often appear in the reconstructed CT images. Many truncation artifact correction methods perform extrapolations of the truncated projection data based on certain a priori assumptions. The purpose of this work was to develop a novel CT truncation artifact reduction method that directly operates on DICOM images. The blooming of pixel values associated with truncation was modeled using exponential decay functions, and based on this model, a discriminative dictionary was constructed to represent truncation artifacts and nonartifact image information in a mutually exclusive way. The discriminative dictionary consists of a truncation artifact subdictionary and a nonartifact subdictionary. The truncation artifact subdictionary contains 1000 atoms with different decay parameters, while the nonartifact subdictionary contains 1000 independent realizations of Gaussian white noise that are exclusive with the artifact features. By sparsely representing an artifact-contaminated CT image with this discriminative dictionary, the image was separated into a truncation artifact-dominated image and a complementary image with reduced truncation artifacts. The artifact-dominated image was then subtracted from the original image with an appropriate weighting coefficient to generate the final image with reduced artifacts. This proposed method was validated via physical phantom studies and retrospective human subject studies. Quantitative image evaluation metrics including the relative root-mean-square error (rRMSE) and the universal image quality index (UQI) were used to quantify the performance of the algorithm. For both phantom and human subject studies, truncation artifacts at the peripheral region of the SFOV were effectively reduced, revealing soft tissue and bony structure once buried in the truncation artifacts. For the phantom study, the proposed method reduced the relative RMSE from 15% (original images) to 11%, and improved the UQI from 0.34 to 0.80. A discriminative dictionary representation method was developed to mitigate CT truncation artifacts directly in the DICOM image domain. Both phantom and human subject studies demonstrated that the proposed method can effectively reduce truncation artifacts without access to projection data. © 2016 American Association of Physicists in Medicine.
Dose and image quality for a cone-beam C-arm CT system.
Fahrig, Rebecca; Dixon, Robert; Payne, Thomas; Morin, Richard L; Ganguly, Arundhuti; Strobel, Norbert
2006-12-01
We assess dose and image quality of a state-of-the-art angiographic C-arm system (Axiom Artis dTA, Siemens Medical Solutions, Forchheim, Germany) for three-dimensional neuro-imaging at various dose levels and tube voltages and an associated measurement method. Unlike conventional CT, the beam length covers the entire phantom, hence, the concept of computed tomography dose index (CTDI) is not the metric of choice, and one can revert to conventional dosimetry methods by directly measuring the dose at various points using a small ion chamber. This method allows us to define and compute a new dose metric that is appropriate for a direct comparison with the familiar CTDIw of conventional CT. A perception study involving the CATPHAN 600 indicates that one can expect to see at least the 9 mm inset with 0.5% nominal contrast at the recommended head-scan dose (60 mGy) when using tube voltages ranging from 70 kVp to 125 kVp. When analyzing the impact of tube voltage on image quality at a fixed dose, we found that lower tube voltages gave improved low contrast detectability for small-diameter objects. The relationships between kVp, image noise, dose, and contrast perception are discussed.
3D shape recovery of smooth surfaces: dropping the fixed-viewpoint assumption.
Moses, Yael; Shimshoni, Ilan
2009-07-01
We present a new method for recovering the 3D shape of a featureless smooth surface from three or more calibrated images illuminated by different light sources (three of them are independent). This method is unique in its ability to handle images taken from unconstrained perspective viewpoints and unconstrained illumination directions. The correspondence between such images is hard to compute and no other known method can handle this problem locally from a small number of images. Our method combines geometric and photometric information in order to recover dense correspondence between the images and accurately computes the 3D shape. Only a single pass starting at one point and local computation are used. This is in contrast to methods that use the occluding contours recovered from many images to initialize and constrain an optimization process. The output of our method can be used to initialize such processes. In the special case of fixed viewpoint, the proposed method becomes a new perspective photometric stereo algorithm. Nevertheless, the introduction of the multiview setup, self-occlusions, and regions close to the occluding boundaries are better handled, and the method is more robust to noise than photometric stereo. Experimental results are presented for simulated and real images.
A method of detection to the grinding wheel layer thickness based on computer vision
NASA Astrophysics Data System (ADS)
Ji, Yuchen; Fu, Luhua; Yang, Dujuan; Wang, Lei; Liu, Changjie; Wang, Zhong
2018-01-01
This paper proposes a method for detecting the grinding wheel layer thickness based on computer vision. A camera is used to capture images of the grinding wheel layer around the whole circle. Forward lighting and back lighting are used to enable a clear image to be acquired. Image processing is then executed on the captured images, consisting of image preprocessing, binarization and subpixel subdivision. The aim of binarization is to help locate a chord and the corresponding ring width. After subpixel subdivision, the thickness of the grinding layer can finally be calculated. Compared with methods usually used to detect grinding wheel wear, the method in this paper can directly and quickly obtain the thickness information. The eccentricity error and the error of the pixel equivalent are also discussed in this paper.
Total generalized variation-regularized variational model for single image dehazing
NASA Astrophysics Data System (ADS)
Shu, Qiao-Ling; Wu, Chuan-Sheng; Zhong, Qiu-Xiang; Liu, Ryan Wen
2018-04-01
Imaging quality is often significantly degraded under hazy weather condition. The purpose of this paper is to recover the latent sharp image from its hazy version. It is well known that the accurate estimation of depth information could assist in improving dehazing performance. In this paper, a detail-preserving variational model was proposed to simultaneously estimate haze-free image and depth map. In particular, the total variation (TV) and total generalized variation (TGV) regularizers were introduced to restrain haze-free image and depth map, respectively. The resulting nonsmooth optimization problem was efficiently solved using the alternating direction method of multipliers (ADMM). Comprehensive experiments have been conducted on realistic datasets to compare our proposed method with several state-of-the-art dehazing methods. Results have illustrated the superior performance of the proposed method in terms of visual quality evaluation.
Sedgewick, Gerald J.; Ericson, Marna
2015-01-01
Obtaining digital images of color brightfield microscopy is an important aspect of biomedical research and the clinical practice of diagnostic pathology. Although the field of digital pathology has had tremendous advances in whole-slide imaging systems, little effort has been directed toward standardizing color brightfield digital imaging to maintain image-to-image consistency and tonal linearity. Using a single camera and microscope to obtain digital images of three stains, we show that microscope and camera systems inherently produce image-to-image variation. Moreover, we demonstrate that post-processing with a widely used raster graphics editor software program does not completely correct for session-to-session inconsistency. We introduce a reliable method for creating consistent images with a hardware/software solution (ChromaCal™; Datacolor Inc., NJ) along with its features for creating color standardization, preserving linear tonal levels, providing automated white balancing and setting automated brightness to consistent levels. The resulting image consistency using this method will also streamline mean density and morphometry measurements, as images are easily segmented and single thresholds can be used. We suggest that this is a superior method for color brightfield imaging, which can be used for quantification and can be readily incorporated into workflows. PMID:25575568
Model-based color halftoning using direct binary search.
Agar, A Ufuk; Allebach, Jan P
2005-12-01
In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous tone original color image and the color halftone image. We exploit the differences in how the human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement based color printer dot interaction model to prevent the artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in resulting halftones. We present the color halftones which demonstrate the efficacy of our method.
Star sub-pixel centroid calculation based on multi-step minimum energy difference method
NASA Astrophysics Data System (ADS)
Wang, Duo; Han, YanLi; Sun, Tengfei
2013-09-01
The star centroid plays a vital role in celestial navigation. Star images acquired during daytime have a low SNR due to the strong sky background, and the star targets are nearly submerged in the background, which makes centroid localization difficult. Traditional methods, such as the moment method and the weighted centroid method, are simple but have large errors, especially at low SNR; the Gaussian fitting method has high positioning accuracy but is computationally complex. Based on an analysis of the energy distribution in the star image, a localization method for star target centroids based on a multi-step minimum energy difference is proposed. This method uses linear superposition to narrow the centroid area and, within that narrowed area, applies a certain number of interpolations to subdivide the pixels. It then exploits the symmetry of the stellar energy distribution to locate the centroid tentatively: assuming that the current pixel is the star centroid, the difference between the sums of the energy in symmetric directions (in this paper, the transverse and longitudinal directions) over an equal step length (which can be chosen according to the conditions; this paper uses a step length of 9) is calculated, and the centroid position in each direction is obtained where the minimum difference appears. Validation comparisons on simulated star images against several traditional methods show that the positioning accuracy of the method reaches 0.001 pixel and that it works well for computing centroids under low-SNR conditions. The method was also applied to a star map acquired at a fixed observation site during daytime in the near-infrared band; comparing the results of this method with the known positions of the stars shows that the multi-step minimum energy difference method achieves a better result.
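The baseline weighted centroid and a one-axis version of the symmetric energy-difference search can be sketched as follows; the window handling, interpolation-based sub-pixel refinement and two-axis combination of the proposed method are omitted, and only the step length of 9 is taken from the abstract.

```python
# Baseline intensity-weighted centroid and a 1-D symmetric energy-difference
# search (step length 9 as in the abstract); sub-pixel refinement is omitted.
import numpy as np

def weighted_centroid(window):
    w = np.asarray(window, float)
    ys, xs = np.indices(w.shape)
    total = w.sum()
    return (w * ys).sum() / total, (w * xs).sum() / total

def min_energy_diff_axis(window, axis=1, step=9):
    """Coordinate along `axis` whose left/right energy sums over `step` samples
    are most nearly equal, exploiting the symmetry of the stellar energy."""
    profile = np.asarray(window, float).sum(axis=1 - axis)
    best, best_diff = None, np.inf
    for c in range(step, profile.size - step):
        diff = abs(profile[c - step:c].sum() - profile[c + 1:c + 1 + step].sum())
        if diff < best_diff:
            best, best_diff = c, diff
    return best
```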
Tagged Neutron Source for API Inspection Systems with Greatly Enhanced Spatial Resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-06-04
Recently developed induced-fission and transmission imaging methods with time- and directionally-tagged neutrons offer new capabilities for the characterization of fissile material configurations and enhanced detection of special nuclear materials (SNM). An Advanced Associated Particle Imaging (API) generator with higher angular resolution and neutron yield than existing systems is needed to fully exploit these methods.
Near-Infrared Coloring via a Contrast-Preserving Mapping Model.
Chang-Hwan Son; Xiao-Ping Zhang
2017-11-01
Near-infrared gray images captured along with corresponding visible color images have recently proven useful for image restoration and classification. This paper introduces a new coloring method to add colors to near-infrared gray images based on a contrast-preserving mapping model. A naive coloring method directly adds the colors from the visible color image to the near-infrared gray image. However, this method results in an unrealistic image because of the discrepancies in the brightness and image structure between the captured near-infrared gray image and the visible color image. To solve the discrepancy problem, first, we present a new contrast-preserving mapping model to create a new near-infrared gray image with a similar appearance in the luminance plane to the visible color image, while preserving the contrast and details of the captured near-infrared gray image. Then, we develop a method to derive realistic colors that can be added to the newly created near-infrared gray image based on the proposed contrast-preserving mapping model. Experimental results show that the proposed new method not only preserves the local contrast and details of the captured near-infrared gray image, but also transfers the realistic colors from the visible color image to the newly created near-infrared gray image. It is also shown that the proposed near-infrared coloring can be used effectively for noise and haze removal, as well as local contrast enhancement.
Three-dimensional imaging from a unidirectional hologram: wide-viewing-zone projection type.
Okoshi, T; Oshima, K
1976-04-01
In ordinary holography reconstructing a virtual image, the hologram must be wider than either the visual field or the viewing zone. In this paper, an economical method of recording a wide-viewing-zone wide-visual-field 3-D holographic image is proposed. In this method, many mirrors are used to collect object waves onto a small hologram. In the reconstruction, a real image from the hologram is projected onto a horizontally direction-selective stereoscreen through the same mirrors. In the experiment, satisfactory 3-D images have been observed from a wide viewing zone. The optimum design and information reduction techniques are also discussed.
A practical material decomposition method for x-ray dual spectral computed tomography.
Hu, Jingjing; Zhao, Xing
2016-03-17
X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated into two groups: image-based and rawdata-based. The image-based method is approximative, and beam-hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but it requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet this requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurements. The method first derives the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs the rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated using simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that it can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
NASA Astrophysics Data System (ADS)
Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia
2018-05-01
The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J; Park, C; Kauweloa, K
2015-06-15
Purpose: As an alternative to full tomographic imaging techniques such as cone-beam computed tomography (CBCT), there is growing interest in adopting digital tomosynthesis (DTS) for diagnostic as well as therapeutic applications. The aim of this study is to propose a new DTS system using a novel orthogonal scanning technique, which can provide DTS images of superior quality compared to the conventional DTS scanning system. Methods: Unlike the conventional DTS scanning system, the proposed DTS is reconstructed from two sets of orthogonal patient scans: 1) X-ray projections acquired along a transverse trajectory, and 2) an additional set of X-ray projections acquired along the vertical direction at the mid-angle of the previous transverse scan. To reconstruct the DTS, we used a modified filtered backprojection technique to account for the different scanning directions of each projection set. We evaluated the performance of our method using numerical planning CT data of a liver cancer patient and a physical pelvis phantom experiment. The results were compared with conventional DTS techniques using single transverse and vertical scanning. Results: Both the numerical simulation and the physical experiment showed that the resolution as well as the contrast of anatomical structures was much clearer using our method. Specifically, compared with transversely scanned DTS, the edge and contrast of anatomical structures along the left-right (LR) direction were comparable; however, considerable enhancement could be observed along the superior-inferior (SI) direction using our method. The opposite was observed when compared with vertically scanned DTS. Conclusion: In this study, we propose a novel DTS system using an orthogonal scanning technique. The results indicate that the image quality of our novel DTS system is superior to that of the conventional DTS system, which makes it potentially useful in various on-line clinical applications.
NASA Astrophysics Data System (ADS)
Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab
2015-12-01
Visceral leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to a World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the CV level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. Linear contrast stretching is used for image enhancement, and morphological processing is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to speed up the algorithm. Manual segmentation is considered as the ground truth to evaluate the proposed method. The method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.
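Assuming "CV" refers to the Chan-Vese model, a rough pipeline sketch using scikit-image is given below; the percentile stretch, mu value and minimum object size are illustrative, and the modified global/local models and shape-based stopping factor of the paper are not implemented.

```python
# Sketch: contrast stretching, global Chan-Vese level-set segmentation, and a
# morphological clean-up; parameter values are illustrative only.
import numpy as np
from skimage import color, exposure, morphology
from skimage.segmentation import chan_vese

def segment_leishman_bodies(rgb_image):
    gray = color.rgb2gray(rgb_image)
    p2, p98 = np.percentile(gray, (2, 98))
    stretched = exposure.rescale_intensity(gray, in_range=(p2, p98))  # linear stretch
    mask = chan_vese(stretched, mu=0.25)          # global CV level-set (boolean mask)
    return morphology.remove_small_objects(mask, min_size=50)
```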
NASA Astrophysics Data System (ADS)
Morency, Christina; Luo, Yang; Tromp, Jeroen
2011-05-01
The key issues in CO2 sequestration involve accurate monitoring, from the injection stage to the prediction and verification of CO2 movement over time, for environmental considerations. '4-D seismics' is a natural non-intrusive monitoring technique which involves 3-D time-lapse seismic surveys. Successful monitoring of CO2 movement requires a proper description of the physical properties of a porous reservoir. We investigate the importance of poroelasticity by contrasting poroelastic simulations with elastic and acoustic simulations. Discrepancies highlight a poroelastic signature that cannot be captured using an elastic or acoustic theory and that may play a role in accurately imaging and quantifying injected CO2. We focus on time-lapse crosswell imaging and model updating based on Fréchet derivatives, or finite-frequency sensitivity kernels, which define the sensitivity of an observable to the model parameters. We compare results of time-lapse migration imaging using acoustic, elastic (with and without the use of Gassmann's formulae) and poroelastic models. Our approach highlights the influence of using different physical theories for interpreting seismic data, and, more importantly, for extracting the CO2 signature from seismic waveforms. We further investigate the differences between imaging with the direct compressional wave, as is commonly done, versus using both direct compressional (P) and shear (S) waves. We conclude that, unlike direct P-wave traveltimes, a combination of direct P- and S-wave traveltimes constrains most parameters. Adding P- and S-wave amplitude information does not drastically improve parameter sensitivity, but it does improve spatial resolution of the injected CO2 zone. The main advantage of using a poroelastic theory lies in direct sensitivity to fluid properties. Simulations are performed using a spectral-element method, and finite-frequency sensitivity kernels are calculated using an adjoint method.
NASA Astrophysics Data System (ADS)
Wang, Xuchu; Niu, Yanmin
2011-02-01
Automatic measurement of vessels from fundus images is a crucial step for assessing vessel anomalies in the ophthalmological community, where changes in retinal vessel diameters are believed to be indicative of the risk level of diabetic retinopathy. In this paper, a new retinal vessel diameter measurement method combining vessel orientation estimation and filter response is proposed. Its interesting characteristics include: (1) unlike methods that only fit the vessel profiles, the proposed method extracts more stable and accurate vessel diameters by casting this problem as a maximal response problem of a variation of the Gabor filter; (2) the proposed method can directly and efficiently estimate the vessel's orientation, which is usually captured by time-consuming multi-orientation fitting techniques in many existing methods. Experimental results show that the proposed method both retains computational simplicity and achieves stable and accurate estimation results.
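The "maximal filter response" idea can be illustrated by sweeping a standard Gabor filter over orientations and spatial frequencies and reading off the arguments of the maximum response at a vessel pixel; the frequency-to-width conversion below is a stand-in assumption, not the paper's modified Gabor kernel.

```python
# Sketch: estimate orientation and an approximate vessel width at the centre
# of a small patch by maximising the Gabor response over (theta, frequency).
import numpy as np
from skimage.filters import gabor

def vessel_orientation_and_width(patch,
                                 thetas=np.linspace(0, np.pi, 18, endpoint=False),
                                 freqs=(0.05, 0.1, 0.2, 0.3)):
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    best = (-np.inf, None, None)
    for theta in thetas:
        for freq in freqs:
            real, imag = gabor(patch, frequency=freq, theta=theta)
            score = np.hypot(real[cy, cx], imag[cy, cx])  # response magnitude at centre
            if score > best[0]:
                best = (score, theta, freq)
    _, theta_hat, freq_hat = best
    return theta_hat, 1.0 / freq_hat  # orientation (rad), rough width in pixels
```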
Overlay of multiframe SEM images including nonlinear field distortions
NASA Astrophysics Data System (ADS)
Babin, S.; Borisov, S.; Ivonin, I.; Nakazawa, S.; Yamazaki, Y.
2018-03-01
To reduce charging and shrinkage, CD-SEMs utilize low electron energies and multiframe imaging. This results in each subsequent frame being altered due to stage and beam instability, as well as charging. Regular averaging of the frames blurs the edges, which directly affects the extracted values of critical dimensions. A technique was developed to overlay multiframe images without loss of quality. The method takes into account drift, rotation, and magnification corrections, as well as nonlinear distortions due to wafer charging. A significant improvement in the signal-to-noise ratio and overall image quality was achieved without degradation of the feature's edge quality. The developed software is capable of working with regular and large-size images up to 32K pixels in each direction.
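A drift-only version of the overlay step can be sketched with phase correlation before averaging; the rotation, magnification and charging-induced nonlinear corrections described above are beyond this illustration.

```python
# Align each frame to the first frame by its estimated (dy, dx) drift, then
# average; subpixel drift is estimated by upsampled phase correlation.
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def overlay_frames(frames):
    ref = frames[0].astype(float)
    accum = ref.copy()
    for frame in frames[1:]:
        moving = frame.astype(float)
        drift, _, _ = phase_cross_correlation(ref, moving, upsample_factor=20)
        accum += nd_shift(moving, drift)  # undo the estimated drift
    return accum / len(frames)
```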
Video image stabilization and registration--plus
NASA Technical Reports Server (NTRS)
Hathaway, David H. (Inventor)
2009-01-01
A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.
ERIC Educational Resources Information Center
Cody, Jeremy A.; Craig, Paul A.; Loudermilk, Adam D.; Yacci, Paul M.; Frisco, Sarah L.; Milillo, Jennifer R.
2012-01-01
A novel stereochemistry lesson was prepared that incorporated both handheld molecular models and embedded virtual three-dimensional (3D) images. The images are fully interactive and eye-catching for the students; methods for preparing 3D molecular images in Adobe Acrobat are included. The lesson was designed and implemented to showcase the 3D…
From synchrotron radiation to lab source: advanced speckle-based X-ray imaging using abrasive paper
NASA Astrophysics Data System (ADS)
Wang, Hongchang; Kashyap, Yogesh; Sawhney, Kawal
2016-02-01
X-ray phase and dark-field imaging techniques provide complementary information that is inaccessible to conventional X-ray absorption or visible light imaging. However, such methods typically require sophisticated experimental apparatus or X-ray beams with specific properties. Recently, an X-ray speckle-based technique has shown great potential for X-ray phase and dark-field imaging using a simple experimental arrangement. However, it still suffers from either poor resolution or the time-consuming process of collecting a large number of images. To overcome these limitations, in this report we demonstrate that absorption, dark-field, phase contrast, and two orthogonal differential phase contrast images can be generated simultaneously by scanning a piece of abrasive paper in only one direction. We propose a novel theoretical approach to quantitatively extract the above five images by utilising the remarkable properties of speckles. Importantly, the technique has been extended from a synchrotron light source to a lab-based microfocus X-ray source and flat panel detector. Removing the need to raster the optics in two directions significantly reduces the acquisition time and absorbed dose, which can be of vital importance for many biological samples. This new imaging method could potentially provide a breakthrough for numerous practical imaging applications in biomedical research and materials science.
Power, J F
2009-06-01
Light profile microscopy (LPM) is a direct method for the spectral depth imaging of thin film cross-sections on the micrometer scale. LPM uses a perpendicular viewing configuration that directly images a source beam propagated through a thin film. Images are formed in dark field contrast, which is highly sensitive to subtle interfacial structures that are invisible to reference methods. The independent focusing of illumination and imaging systems allows multiple registered optical sources to be hosted on a single platform. These features make LPM a powerful multi-contrast (MC) imaging technique, demonstrated in this work with six modes of imaging in a single instrument, based on (1) broad-band elastic scatter; (2) laser excited wideband luminescence; (3) coherent elastic scatter; (4) Raman scatter (three channels with RGB illumination); (5) wavelength resolved luminescence; and (6) spectral broadband scatter, resolved in immediate succession. MC-LPM integrates Raman images with a wider optical and morphological picture of the sample than prior art microprobes. Currently, MC-LPM resolves images at an effective spectral resolution better than 9 cm⁻¹, at a spatial resolution approaching 1 μm, with optics that operate in air at half the maximum numerical aperture of the prior art microprobes.
Volumetric calibration of a plenoptic camera.
Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S
2018-02-01
The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
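The polynomial mapping at the heart of both calibration routes can be illustrated as a plain least-squares fit from reconstructed (distorted) dot positions to their known positions; a second-order 3D polynomial basis is assumed here for brevity and is not the authors' exact parameterization.

```python
# Fit and apply a 3D polynomial mapping from distorted to true coordinates.
import numpy as np

def poly3d_basis(xyz):
    x, y, z = xyz.T
    return np.column_stack([np.ones_like(x), x, y, z,
                            x * y, x * z, y * z, x**2, y**2, z**2])

def fit_volumetric_mapping(reconstructed_pts, true_pts):
    A = poly3d_basis(reconstructed_pts)                     # (N, 10) design matrix
    coeffs, *_ = np.linalg.lstsq(A, true_pts, rcond=None)   # (10, 3) coefficients
    return coeffs

def apply_mapping(coeffs, pts):
    return poly3d_basis(pts) @ coeffs                       # corrected (N, 3) positions
```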
Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo
2018-05-03
Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and thus have been widely utilized in clinical diagnosis. However, long acquisition time limits the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, images are compromised at high acceleration factors if they are reconstructed individually. We aim to improve the images with jointly sparse reconstruction and a graph-based redundant wavelet transform (GBRWT). First, a sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1 norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. In addition, the proposed method outperforms single image reconstruction with a joint sparsity constraint of multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under the graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high-quality image reconstruction method, it is possible to achieve high acceleration factors by exploiting the complementary information provided by multi-contrast MRI.
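The joint-sparsity term is an ℓ2,1 norm over the stacked multi-contrast coefficients (rows = transform coefficients, columns = contrasts), and the work-horse of an alternating direction solver for it is the row-wise group soft-thresholding proximal operator sketched below; this is a generic sketch, not the authors' GBRWT-specific solver.

```python
# Proximal operator of tau * ||X||_{2,1}: shrink each row of W by its l2 norm.
import numpy as np

def prox_l21(W, tau):
    """argmin_X 0.5 * ||X - W||_F^2 + tau * ||X||_{2,1} (row-wise shrinkage)."""
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(row_norms, 1e-12), 0.0)
    return scale * W
```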
Application of morphological bit planes in retinal blood vessel extraction.
Fraz, M M; Basit, A; Barman, S A
2013-04-01
The appearance of the retinal blood vessels is an important diagnostic indicator of various clinical disorders of the eye and the body. Retinal blood vessels have been shown to provide evidence, in terms of change in diameter, branching angles, or tortuosity, of ophthalmic disease. This paper reports the development of an automated method for segmentation of blood vessels in retinal images. A unique combination of methods for retinal blood vessel skeleton detection and multidirectional morphological bit plane slicing is presented to extract the blood vessels from color retinal images. The skeleton of the main vessels is extracted by the application of directional differential operators followed by evaluation of the combination of derivative signs and average derivative values. Mathematical morphology has emerged as a proficient technique for quantifying the retinal vasculature in ocular fundus images. A multidirectional top-hat operator with rotating structuring elements is used to emphasize the vessels in a particular direction, and information is extracted using bit plane slicing. An iterative region growing method is applied to integrate the main skeleton and the images resulting from bit plane slicing of vessel direction-dependent morphological filters. The approach is tested on two publicly available databases, DRIVE and STARE. The average accuracy achieved by the proposed method is 0.9423 for both databases, with significant values of sensitivity and specificity; the algorithm outperforms the second human observer in terms of precision of the segmented vessel tree.
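The multidirectional top-hat step can be approximated by rotating a linear structuring element over a set of directions and taking the per-pixel maximum white top-hat response; the element length and angular step below are assumed values, and the bit-plane slicing and region growing stages are omitted.

```python
# Rotate a line structuring element and keep the maximum white top-hat
# response per pixel to emphasize vessels of any orientation.
import numpy as np
from scipy.ndimage import rotate, white_tophat

def multidirectional_tophat(green_channel, length=15, angles=range(0, 180, 15)):
    base = np.zeros((length, length), dtype=bool)
    base[length // 2, :] = True                       # horizontal line element
    response = np.zeros_like(green_channel, dtype=float)
    for angle in angles:
        se = rotate(base.astype(float), angle, reshape=False, order=0) > 0.5
        response = np.maximum(response,
                              white_tophat(green_channel.astype(float), footprint=se))
    return response
```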
Region-based multifocus image fusion for the precise acquisition of Pap smear images.
Tello-Mijares, Santiago; Bescós, Jesús
2018-05-01
A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolau source (Pap smear) images is presented. These images, captured each in a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them unpractical for the direct application of image analysis techniques. The proposed method obtains a focused image with a high preservation of original pixels information while achieving a negligible visibility of the fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and best-focused regions are merged in a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimum modification of the value of focused pixels in the original images achieve a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and more stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
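The first stage, identifying the best-focused image of the sequence, is often done with a simple focus measure such as the variance of the Laplacian; the sketch below assumes that measure, which may differ from the authors' choice, and omits the mean-shift segmentation and region-wise fusion.

```python
# Pick the best-focused frame in a microscope z-stack by a focus score.
import cv2

def best_focused_index(stack):
    """stack: list of grayscale images captured at different lens positions."""
    scores = [cv2.Laplacian(img, cv2.CV_64F).var() for img in stack]
    return max(range(len(stack)), key=scores.__getitem__)
```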
NASA Astrophysics Data System (ADS)
Cheng, Xiaoyin; Bayer, Christine; Maftei, Constantin-Alin; Astner, Sabrina T.; Vaupel, Peter; Ziegler, Sibylle I.; Shi, Kuangyu
2014-01-01
Compared to indirect methods, direct parametric image reconstruction (PIR) has the advantage of high quality and low statistical errors. However, it is not yet clear if this improvement in quality is beneficial for physiological quantification. This study aimed to evaluate direct PIR for the quantification of tumor hypoxia using the hypoxic fraction (HF) assessed from immunohistological data as a physiological reference. Sixteen mice with xenografted human squamous cell carcinomas were scanned with dynamic [18F]FMISO PET. Afterward, tumors were sliced and stained with H&E and the hypoxia marker pimonidazole. The hypoxic signal was segmented using k-means clustering and HF was specified as the ratio of the hypoxic area over the viable tumor area. The parametric Patlak slope images were obtained by indirect voxel-wise modeling on reconstructed images using filtered back projection and ordered-subset expectation maximization (OSEM) and by direct PIR (e.g., parametric-OSEM, POSEM). The mean and maximum Patlak slopes of the tumor area were investigated and compared with HF. POSEM resulted in generally higher correlations between slope and HF among the investigated methods. A strategy for the delineation of the hypoxic tumor volume based on thresholding parametric images at half maximum of the slope is recommended based on the results of this study.
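For reference, the indirect route's Patlak slope for a single time-activity curve reduces to a late-time linear fit; the sketch below assumes the frame mid-times t, plasma input Cp and tissue curve Ct are given and that t_star marks the start of the linear phase.

```python
# Patlak plot: Ct/Cp versus integral(Cp)/Cp; the slope of the late-time fit is
# the influx parameter, the intercept an apparent distribution volume.
import numpy as np

def patlak_slope(t, Cp, Ct, t_star):
    cum_Cp = np.concatenate([[0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))])
    x = cum_Cp / Cp                     # "normalized time"
    y = Ct / Cp
    late = t >= t_star                  # linear portion of the plot
    slope, intercept = np.polyfit(x[late], y[late], 1)
    return slope, intercept
```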
NASA Astrophysics Data System (ADS)
Sunarya, I. Made Gede; Yuniarno, Eko Mulyanto; Purnomo, Mauridhi Hery; Sardjono, Tri Arief; Sunu, Ismoyo; Purnama, I. Ketut Eddy
2017-06-01
The carotid artery (CA) is one of the vital organs in the human body. CA features that can be used are position, size and volume. The position feature can be used to determine the preliminary initialization of the tracking. Examination of the CA features can use ultrasound. Ultrasound imaging is operator-dependent, so the images obtained by two or more different operators may differ. This can affect the process of determining the CA. To reduce the level of subjectivity among operators, the position of the CA can be determined automatically. In this study, the proposed method segments the CA in B-mode ultrasound images based on morphology, geometry and gradient direction. This study consists of three steps: data collection, preprocessing and artery segmentation. The data used in this study were taken directly by the researchers and taken from the Brno University signal processing lab database. Each data set contains 100 carotid artery B-mode ultrasound images. The artery is modeled using an ellipse with center c, major axis a and minor axis b. The proposed method achieved high accuracy on each data set: 97% (data set 1), 73% (data set 2), 87% (data set 3). These segmentation results will then be used in the process of tracking the CA.
Studying depression using imaging and machine learning methods.
Patel, Meenal J; Khalaf, Alexander; Aizenstein, Howard J
2016-01-01
Depression is a complex clinical entity that can pose challenges for clinicians regarding both accurate diagnosis and effective timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can identify depressed patients vs. non-depressed patients and predict treatment outcomes. This article (1) presents a background on depression, imaging, and machine learning methodologies; (2) reviews methodologies of past studies that have used imaging and machine learning to study depression; and (3) suggests directions for future depression-related studies.
Li, Jiansen; Song, Ying; Zhu, Zhen; Zhao, Jun
2017-05-01
The dual-dictionary learning (Dual-DL) method utilizes both a low-resolution dictionary and a high-resolution dictionary, which are co-trained for sparse coding and image updating, respectively. It can effectively exploit a priori knowledge regarding the typical structures, specific features, and local details of the training set images. The prior knowledge helps to improve the reconstruction quality greatly. This method has been successfully applied in magnetic resonance (MR) image reconstruction. However, it relies heavily on the training sets, and the dictionaries are fixed and nonadaptive. In this research, we improve Dual-DL by using self-adaptive dictionaries. The low- and high-resolution dictionaries are updated correspondingly along with the image updating stage to ensure their self-adaptivity. The updated dictionaries incorporate both the prior information of the training sets and the test image directly. Both dictionaries feature improved adaptability. Experimental results demonstrate that the proposed method can efficiently and significantly improve the quality and robustness of MR image reconstruction.
Han, Seokmin; Kang, Dong-Goo
2014-01-01
An easily implementable tissue cancellation method for dual energy mammography is proposed to reduce anatomical noise and enhance lesion visibility. For dual energy calibration, the images of an imaging object are directly mapped onto the images of a customized calibration phantom. Each pixel pair of the low and high energy images of the imaging object was compared to pixel pairs of the low and high energy images of the calibration phantom. The correspondence was measured by the absolute difference between the pixel values of the imaged object and those of the calibration phantom. Then the closest pixel pair of the calibration phantom images is marked and selected. After the calibration using direct mapping, the regions with lesions yielded different thicknesses from the background tissues. Taking advantage of this thickness difference, the visibility of cancerous lesions was enhanced with increased contrast-to-noise ratio, depending on the size of the lesion and the breast thickness. However, some tissues near the edge of the imaged object still remained after tissue cancellation. These remaining residuals seem to occur due to the heel effect, scattering, nonparallel X-ray beam geometry and the Poisson distribution of photons. To improve its performance further, scattering and the heel effect should be compensated.
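The direct-mapping search, matching each (low, high) pixel pair of the object to the closest calibration pair, can be sketched with a k-d tree using an L1 metric in place of a brute-force absolute-difference scan; the phantom lookup table `phantom_labels` is an assumed input (for example, the known phantom thickness for each calibration pair).

```python
# Map every (low, high) pixel pair to the nearest calibration pair (L1 metric).
import numpy as np
from scipy.spatial import cKDTree

def direct_mapping(low_img, high_img, phantom_pairs, phantom_labels):
    tree = cKDTree(phantom_pairs)                            # (M, 2) calibration pairs
    queries = np.column_stack([low_img.ravel(), high_img.ravel()])
    _, idx = tree.query(queries, p=1)                        # closest pair per pixel
    return phantom_labels[idx].reshape(low_img.shape)
```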
Dark-field hyperspectral X-ray imaging
Egan, Christopher K.; Jacques, Simon D. M.; Connolley, Thomas; Wilson, Matthew D.; Veale, Matthew C.; Seller, Paul; Cernik, Robert J.
2014-01-01
In recent times, there has been a drive to develop non-destructive X-ray imaging techniques that provide chemical or physical insight. To date, these methods have generally been limited; either requiring raster scanning of pencil beams, using narrow bandwidth radiation and/or limited to small samples. We have developed a novel full-field radiographic imaging technique that enables the entire physio-chemical state of an object to be imaged in a single snapshot. The method is sensitive to emitted and scattered radiation, using a spectral imaging detector and polychromatic hard X-radiation, making it particularly useful for studying large dense samples for materials science and engineering applications. The method and its extension to three-dimensional imaging is validated with a series of test objects and demonstrated to directly image the crystallographic preferred orientation and formed precipitates across an aluminium alloy friction stir weld section. PMID:24808753
Thin Cloud Detection Method by Linear Combination Model of Cloud Image
NASA Astrophysics Data System (ADS)
Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.
2018-04-01
Existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify images as cloud or other objects. However, when the cloud is thin and small, these methods are inaccurate. In this paper, a linear combination model of cloud images is proposed; by using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. Firstly, the automatic cloud detection program in this paper uses the linear combination model to separate the cloud information and surface information in transparent cloud images, then uses different image features to recognize the cloud parts. In consideration of computational efficiency, an AdaBoost classifier was introduced to combine the different features into a cloud classifier. The AdaBoost classifier can select the most effective features from many ordinary features, so the calculation time is largely reduced. Finally, we selected a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier to compare with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
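The feature-combination step maps naturally onto a stock AdaBoost classifier; the sketch below assumes per-pixel (or per-patch) feature vectors and cloud/non-cloud labels prepared upstream by the linear combination model.

```python
# Train an AdaBoost cloud/non-cloud classifier on precomputed image features.
from sklearn.ensemble import AdaBoostClassifier

def train_cloud_classifier(features, labels, n_estimators=100):
    """features: (N, D) feature vectors; labels: (N,) in {0: clear, 1: cloud}."""
    clf = AdaBoostClassifier(n_estimators=n_estimators)
    clf.fit(features, labels)
    return clf
```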
Sparse representations via learned dictionaries for x-ray angiogram image denoising
NASA Astrophysics Data System (ADS)
Shang, Jingfan; Huang, Zhenghua; Li, Qian; Zhang, Tianxu
2018-03-01
X-ray angiogram image denoising is always an active research topic in the field of computer vision. In particular, the denoising performance of many existing methods has been greatly improved by the wide use of nonlocal similar patches. However, nonlocal self-similar (NSS) patch-based methods can still be improved and extended. In this paper, we propose an image denoising model based on the sparsity of the NSS patches to obtain high denoising performance and high-quality images. In order to represent the sparse NSS patches at every location of the image well and to solve the image denoising model more efficiently, we obtain dictionaries as a global image prior using the K-SVD algorithm over the processing image; then the alternating direction method of multipliers (ADMM) is used to solve the image denoising model. The results of extensive synthetic experiments demonstrate that, owing to the dictionaries learned by the K-SVD algorithm, the proposed sparsely augmented Lagrangian image denoising (SALID) model obtains state-of-the-art denoising performance and higher-quality images. Moreover, we also give some denoising results for clinical X-ray angiogram images.
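A patch-dictionary denoiser in the same spirit (learned dictionary plus sparse coding) can be pieced together from scikit-learn components; this stands in for, and is not equivalent to, the paper's K-SVD/ADMM pipeline, and patch size, atom count and sparsity level are assumed values.

```python
# Learn a patch dictionary, sparse-code all patches, and rebuild the image.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def dictionary_denoise(noisy, patch_size=(8, 8), n_atoms=128):
    # Train the dictionary on a random subset of mean-removed patches.
    train = extract_patches_2d(noisy, patch_size, max_patches=5000, random_state=0)
    X = train.reshape(len(train), -1)
    X = X - X.mean(axis=1, keepdims=True)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=5,
                                       random_state=0).fit(X)

    # Sparse-code every patch and reconstruct by averaging the overlaps.
    patches = extract_patches_2d(noisy, patch_size)
    Y = patches.reshape(len(patches), -1)
    means = Y.mean(axis=1, keepdims=True)
    codes = dico.transform(Y - means)
    recon = codes @ dico.components_ + means
    return reconstruct_from_patches_2d(recon.reshape(patches.shape), noisy.shape)
```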
Mubeen; K.R., Vijayalakshmi; Bhuyan, Sanat Kumar; Panigrahi, Rajat G; Priyadarshini, Smita R; Misra, Satyaranjan; Singh, Chandravir
2014-01-01
Objectives: The identification and radiographic interpretation of periapical bone lesions is important for accurate diagnosis and treatment. The present study was undertaken to study the feasibility and diagnostic accuracy of colour coded digital radiographs in terms of presence and size of lesion and to compare the diagnostic accuracy of colour coded digital images with direct digital images and conventional radiographs for assessing periapical lesions. Materials and Methods: Sixty human dry cadaver hemimandibles were obtained and periapical lesions were created in first and second premolar teeth at the junction of cancellous and cortical bone using a micromotor handpiece and carbide burs of sizes 2, 4 and 6. After each successive use of round burs, a conventional, RVG and colour coded image was taken for each specimen. All the images were evaluated by three observers. The diagnostic accuracy for each bur and image mode was calculated statistically. Results: Our results showed good interobserver (kappa > 0.61) agreement for the different radiographic techniques and for the different bur sizes. Conventional radiography outperformed digital radiography in diagnosing periapical lesions made with the size 2 bur; both were equally diagnostic for lesions made with larger bur sizes. The colour coding method was the least accurate among all the techniques. Conclusion: Conventional radiography traditionally forms the backbone in the diagnosis, treatment planning and follow-up of periapical lesions. Direct digital imaging is an efficient technique in the diagnostic sense. Colour coding of digital radiography was feasible but less accurate; however, this imaging technique, like any other, needs to be studied continuously with the emphasis on safety of patients and diagnostic quality of images. PMID:25584318
An Aggregated Method for Determining Railway Defects and Obstacle Parameters
NASA Astrophysics Data System (ADS)
Loktev, Daniil; Loktev, Alexey; Stepanov, Roman; Pevzner, Viktor; Alenov, Kanat
2018-03-01
A method combining image blur analysis and stereo vision algorithms to determine the distance to objects (including external defects of railway tracks) and the speed of moving obstacle objects is proposed. To estimate the deviation of the distance as a function of blur, a statistical approach and logarithmic, exponential and linear standard functions are used. The statistical approach includes the least squares method and the method of least modules. The accuracy of determining the distance to the object, its speed and its direction of movement is obtained. The paper develops a method of determining distances to objects by analyzing a series of images and assessing depth from defocus, aggregated with stereoscopic vision. This method is based on the physical dependence of the determined distance to the object in the obtained image on the focal length or aperture of the lens. In the calculation of the blur spot diameter it is assumed that blur occurs at a point equally in all directions. According to the proposed approach, it is possible to determine the distance to the studied object and its blur by analyzing a series of images obtained using a video detector with different settings. The article proposes and scientifically substantiates new and improved methods for detecting the parameters of static and moving objects of control, and also compares the results of the various methods and the results of experiments. It is shown that the aggregated method gives the best approximation to the real distances.
Research on self-calibration biaxial autocollimator based on ZYNQ
NASA Astrophysics Data System (ADS)
Guo, Pan; Liu, Bingguo; Liu, Guodong; Zhong, Yao; Lu, Binghui
2018-01-01
Existing autocollimators are mainly based on computers or electronic devices that can be connected to the internet; their precision, measurement range and resolution are limited, and external displays are needed to show images in real time. Moreover, there is no autocollimator with real-time calibration on the market. In this paper, we propose a biaxial autocollimator based on the ZYNQ embedded platform to solve the above problems. Firstly, the traditional optical system is improved and a light path is added for real-time calibration. Then, in order to improve measurement speed, an embedded platform based on ZYNQ that combines the Linux operating system with the autocollimator is designed. In this part, image acquisition, image processing, image display and a Qt-based man-machine interface are implemented. Finally, the system realizes two-dimensional small-angle measurement. Experimental results showed that the proposed method can improve the angle measurement accuracy. The standard deviation at close distance (1.5 m) is 0.15" in the horizontal direction of the image and 0.24" in the vertical direction, and the repeatability of measurement at long distance (10 m) is improved by 0.12 in the horizontal direction of the image and 0.3 in the vertical direction.
Enhanced Imaging of Building Interior for Portable MIMO Through-the-wall Radar
NASA Astrophysics Data System (ADS)
Song, Yongping; Zhu, Jiahua; Hu, Jun; Jin, Tian; Zhou, Zhimin
2018-01-01
Portable multi-input multi-output (MIMO) radar systems are able to image the building interior through aperture synthesis. However, significant grating lobes appear in the direct imaging results, which may deteriorate the imaging quality of other targets and hinder the extraction of detailed information from the imaged scene. In this paper, a two-stage coherence factor (CF) weighting method is proposed to enhance the imaging quality. After obtaining the sub-images of each spatial sampling position using the conventional CF approach, a window function is employed to calculate the proposed “enhanced CF”, adaptive to the spatial variation effect behind the wall, for the combination of these sub-images. A real-data experiment illustrates the better performance of the proposed method in grating lobe suppression and imaging quality enhancement compared to the traditional radar imaging approach.
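The conventional coherence factor for one pixel is the ratio of the coherent to the incoherent sum over the N spatial sampling positions; the windowed "enhanced CF" proposed above refines this quantity and is not reproduced here.

```python
# Conventional coherence factor of a single image pixel.
import numpy as np

def coherence_factor(channel_values):
    """channel_values: complex back-projected contributions of one pixel, shape (N,)."""
    N = len(channel_values)
    coherent = np.abs(np.sum(channel_values)) ** 2
    incoherent = N * np.sum(np.abs(channel_values) ** 2)
    return coherent / incoherent if incoherent > 0 else 0.0
```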
Realistic tissue visualization using photoacoustic image
NASA Astrophysics Data System (ADS)
Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong
2018-02-01
Visualization methods are very important in biomedical imaging. As a technology for understanding life, biomedical imaging has the unique advantage of providing the most intuitive information in the image, and this advantage can be greatly improved by choosing a suitable visualization method. This is more complicated for volumetric data. Volume data have the advantage of containing 3D spatial information; unfortunately, the data themselves cannot directly convey that value, because images are always displayed in 2D space, so visualization is the key that creates the real value of volume data. However, image processing of 3D data requires complicated algorithms for visualization and a high computational burden. Therefore, specialized algorithms and computing optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information of tissue that is close to the real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function, which depends on the depth of the tissue from the skin. We used a direct ray casting method and processed color while computing the shader parameters. In the rendering results, we succeeded in obtaining realistic texture from the photoacoustic data: rays reflected at the surface were visualized in white, and the color reflected from deep tissue was visualized in red like skin tissue. We also implemented the CUDA algorithm in an OpenGL environment for real-time interactive imaging.
Rapid Decimation for Direct Volume Rendering
NASA Technical Reports Server (NTRS)
Gibbs, Jonathan; VanGelder, Allen; Verma, Vivek; Wilhelms, Jane
1997-01-01
An approach for eliminating unnecessary portions of a volume when producing a direct volume rendering is described. This reduction in volume size sacrifices some image quality in the interest of rendering speed. Since volume visualization is often used as an exploratory visualization technique, it is important to reduce rendering times, so the user can effectively explore the volume. The methods presented can speed up rendering by factors of 2 to 3 with minor image degradation. A family of decimation algorithms to reduce the number of primitives in the volume without altering the volume's grid in any way is introduced. This allows the decimation to be computed rapidly, making it easier to change decimation levels on the fly. Further, because very little extra space is required, this method is suitable for the very large volumes that are becoming common. The method is also grid-independent, so it is suitable for multiple overlapping curvilinear and unstructured, as well as regular, grids. The decimation process can proceed automatically, or can be guided by the user so that important regions of the volume are decimated less than unimportant regions. A formal error measure is described based on a three-dimensional analog of the Radon transform. Decimation methods are evaluated based on this metric and on direct comparison with reference images.
Pixel-based meshfree modelling of skeletal muscles.
Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu
2016-01-01
This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for construction of simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transition. A multiphase multichannel level set based segmentation framework is adopted for individual muscle segmentation using Magnetic Resonance Images (MRI) and DTI. The application of the proposed methods for modeling the human lower leg is demonstrated.
Attitude-error compensation for airborne down-looking synthetic-aperture imaging lidar
NASA Astrophysics Data System (ADS)
Li, Guang-yuan; Sun, Jian-feng; Zhou, Yu; Lu, Zhi-yong; Zhang, Guo; Cai, Guang-yu; Liu, Li-ren
2017-11-01
Target-coordinate transformation in the lidar spot of the down-looking synthetic-aperture imaging lidar (SAIL) was performed, and the attitude errors were deduced in the process of imaging, according to the principle of the airborne down-looking SAIL. The influence of the attitude errors on the imaging quality was analyzed theoretically. A compensation method for the attitude errors was proposed and theoretically verified. An airborne down-looking SAIL experiment was performed and yielded the same results. A point-by-point error-compensation method for solving the azimuthal-direction space-dependent attitude errors was also proposed.
A survey of infrared and visual image fusion methods
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian
2017-09-01
Infrared (IR) and visual (VI) image fusion is designed to fuse multiple source images into a comprehensive image to boost imaging quality and reduce redundant information, and it is widely used in various imaging equipment to improve the visual ability of humans and robots. The accurate, reliable and complementary descriptions of the scene in fused images make these techniques widely used in various fields. In recent years, a large number of fusion methods for IR and VI images have been proposed due to ever-growing demands and the progress of image representation methods; however, no integrated survey of this field has been published in the last several years. Therefore, we present a survey of the algorithmic developments in IR and VI image fusion. In this paper, we first characterize the applications of IR and VI image fusion to give an overview of the research status. Then we present a synthesized survey of the state of the art. Thirdly, the frequently used image fusion quality measures are introduced. Fourthly, we perform experiments on typical methods and make corresponding analyses. Finally, we summarize the corresponding tendencies and challenges in IR and VI image fusion. This survey concludes that, although various IR and VI image fusion methods have been proposed, there still exist further improvements and potential research directions in different applications of IR and VI image fusion.
[Comparison of noise characteristics of direct and indirect conversion flat panel detectors].
Murai, Masami; Kishimoto, Kenji; Tanaka, Katsuhisa; Oota, Kenji; Ienaga, Akinori
2010-11-20
Flat-panel detector (FPD) digital radiography systems have direct and indirect conversion systems, and the 2 conversion systems provide different imaging performances. We measured some imaging performances [input-output characteristic, presampled modulation transfer function (presampled MTF), noise power spectrum (NPS)] of direct and indirect FPD systems. Moreover, some image samples of the NPSs were visually evaluated by the pair comparison method. As a result, the presampled MTF of the direct FPD system was substantially higher than that of the indirect FPD system. The NPS of the direct FPD system had a high value for all spatial frequencies. In contrast, the NPS of the indirect FPD system had a lower value as the frequency became higher. The results of visual evaluations showed the same tendency as that found for NPSs. We elucidated the cause of the difference in NPSs in a simulation study, and we determined that the cause of the difference in the noise components of the direct and indirect FPD systems was closely related to the presampled MTF.
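For context, a common way to estimate the 2D NPS compared above is to average the squared Fourier magnitudes of mean-subtracted flat-field ROIs and scale by the pixel area over the ROI size; the pixel pitch and ROI extraction are assumed inputs in this generic sketch, which does not reproduce the authors' exact measurement protocol.

```python
# Generic 2D noise power spectrum estimate from a set of flat-field ROIs:
# NPS(u, v) = (dx * dy / (Nx * Ny)) * <|FFT(ROI - mean)|^2>.
import numpy as np

def nps_2d(rois, pixel_pitch_mm):
    nx, ny = rois[0].shape
    spectra = [np.abs(np.fft.fft2(roi - roi.mean())) ** 2 for roi in rois]
    return (pixel_pitch_mm ** 2 / (nx * ny)) * np.mean(spectra, axis=0)
```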
1997-04-30
Currently there are no systems available which allow for economical and accurate subsurface imaging of remediation sites. In some cases, high...system to address this need. This project has been very successful in showing a promising new direction for high resolution subsurface imaging. Our
Imaging of high-velocity very small subjects
NASA Astrophysics Data System (ADS)
Haddleton, Graham P.
1993-01-01
The imaging of high-velocity (> 2000 m/s), 7 mm cuboids impacting various targets is discussed. The reasons why conventional high-speed cine techniques, even framing at 40,000 pps, are inadequate to record the required detail are outlined. Four different methods of image capture are illustrated, giving a direct comparison between state-of-the-art technologies.
Groupwise Image Registration Guided by a Dynamic Digraph of Images.
Tang, Zhenyu; Fan, Yong
2016-04-01
For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images and build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated based on both simulated and real brain images, and experiment results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods.
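The path-following idea, registering each image to the group center by composing registrations along shortest paths of the digraph, can be sketched with a standard Dijkstra routine over a generic dissimilarity matrix; the sparse-coding-based groupwise similarity and the dynamic digraph update of the paper are not reproduced.

```python
# For each image, recover the shortest-path chain of intermediate images
# linking it to the group center in a directed dissimilarity graph.
import numpy as np
from scipy.sparse.csgraph import dijkstra

def registration_paths(dissimilarity, center_idx):
    """dissimilarity: (N, N) nonnegative edge weights; returns image -> path to center."""
    _, predecessors = dijkstra(dissimilarity, directed=True,
                               indices=center_idx, return_predecessors=True)
    paths = {}
    for i in range(dissimilarity.shape[0]):
        node, path = i, [i]
        while node != center_idx and predecessors[node] >= 0:
            node = predecessors[node]          # walk back toward the center
            path.append(node)
        paths[i] = path                        # image i -> ... -> group center
    return paths
```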
Carboranylporphyrins and uses thereof
Wu, Haitao; Miura, Michiko
2006-02-07
The present invention is directed to low toxicity boronated compounds and methods for their use in the treatment, visualization, and diagnosis of tumors. More specifically, the present invention is directed to low toxicity carborane-containing 5, 10, 15, 20-tetraphenylporphyrin compounds and methods for their use particularly in boron neutron capture therapy (BNCT) and photodynamic therapy (PDT) for the treatment of tumors of the brain, head and neck, and surrounding tissue. The invention is also directed to using these carborane-containing tetraphenyl porphyrin compounds to methods of tumor imaging and/or diagnosis such as MRI, SPECT, or PET.
Carboranylporphyrins and uses thereof
Wu, Haitao; Miura, Michiko
2006-01-24
The present invention is directed to low toxicity boronated compounds and methods for their use in the treatment, visualization, and diagnosis of tumors. More specifically, the present invention is directed to low toxicity carborane-containing 5, 10, 15, 20-tetraphenylporphyrin compounds and methods for their use particularly in boron neutron capture therapy (BNCT) and photodynamic therapy (PDT) for the treatment of tumors of the brain, head, neck, and surrounding tissue. The invention is also directed to using these carborane-containing tetraphenyl porphyrin compounds to methods of tumor imaging and/or diagnosis such as MRI, SPECT, or PET.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonneville, Alain H.; Kouzes, Richard T.
Imaging subsurface geological formations, oil and gas reservoirs, mineral deposits, cavities or magma chambers under active volcanoes has been for many years a major quest of geophysicists and geologists. Since these objects cannot be observed directly, different indirect geophysical methods have been developed. They are all based on variations of certain physical properties of the subsurface that can be detected from the ground surface or from boreholes. Electrical resistivity, seismic wave velocities and density are certainly the most used properties. If we look at density, indirect estimates of density distributions are currently performed by seismic reflection methods - since the velocity of seismic waves also depends on density - but they are expensive and discontinuous in time. Direct estimates of density are performed using gravimetric data, looking at variations of the gravity field induced by the density variations at depth, but this is not sufficiently accurate. A new imaging technique using cosmic-ray muon detectors has emerged during the last decade, and muon tomography - or muography - promises to provide, for the first time, a complete and precise image of the density distribution in the subsurface. Further, this novel approach has the potential to become a direct, real-time, and low-cost method for monitoring fluid displacement in subsurface reservoirs.
NASA Astrophysics Data System (ADS)
Xing, Fangxu; Ye, Chuyang; Woo, Jonghye; Stone, Maureen; Prince, Jerry
2015-03-01
The human tongue is composed of multiple internal muscles that work collaboratively during the production of speech. Assessment of muscle mechanics can help understand the creation of tongue motion, interpret clinical observations, and predict surgical outcomes. Although various methods have been proposed for computing the tongue's motion, associating motion with muscle activity in an interdigitated fiber framework has not been studied. In this work, we aim to develop a method that reveals different tongue muscles' activities in different time phases during speech. We use four-dimensional tagged magnetic resonance (MR) images and static high-resolution MR images to obtain tongue motion and muscle anatomy, respectively. Then we compute strain tensors and local tissue compression along the muscle fiber directions in order to reveal their shortening pattern. This process relies on support from multiple image analysis methods, including super-resolution volume reconstruction from MR image slices, segmentation of internal muscles, tracking the incompressible motion of tissue points using tagged images, propagation of muscle fiber directions over time, and calculation of strain in the line of action. We evaluated the method on a control subject and two postglossectomy patients in a controlled speech task. The normal subject's tongue muscle activity shows high correspondence with the production of speech at different time instants, while both patients' muscle activities show different patterns from the control due to their resected tongues. This method shows potential for relating overall tongue motion to particular muscle activity, which may provide novel information for future clinical and scientific studies.
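The "local tissue compression along the muscle fiber direction" reduces to projecting the strain tensor onto the fiber direction; a small numpy example, with the Green-Lagrange tensor E and unit fiber vector f assumed given, is shown below.

```python
# Fiber-direction strain: f^T E f (negative values indicate shortening).
import numpy as np

def fiber_strain(E, f):
    """E: (3, 3) strain tensor at a tissue point; f: fiber direction vector."""
    f = np.asarray(f, dtype=float)
    f = f / np.linalg.norm(f)
    return float(f @ E @ f)
```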
Research and Analysis of Image Processing Technologies Based on DotNet Framework
NASA Astrophysics Data System (ADS)
Ya-Lin, Song; Chen-Xi, Bai
Microsoft .NET is one of the most popular program development tools. This paper gives a detailed analysis of the advantages and disadvantages of several image processing technologies in .NET, with the same algorithm used in the programming experiments. The results show that the two most efficient methods are unsafe pointers and Direct3D, with Direct3D suited to 3D simulation development, while the other technologies are useful in some fields but are inefficient and not suited to real-time processing. The experimental results in this paper will help projects involving image processing and simulation based on .NET, and they have strong practical value.
Visualization of Electrical Field of Electrode Using Voltage-Controlled Fluorescence Release
Jia, Wenyan; Wu, Jiamin; Gao, Di; Wang, Hao; Sun, Mingui
2016-01-01
In this study we propose an approach to directly visualize electrical current distribution at the electrode-electrolyte interface of a biopotential electrode. High-speed fluorescent microscopic images are acquired when an electric potential is applied across the interface to trigger the release of fluorescent material from the surface of the electrode. These images are analyzed computationally to obtain the distribution of the electric field from the fluorescent intensity of each pixel. Our approach allows direct observation of microscopic electrical current distribution around the electrode. Experiments are conducted to validate the feasibility of the fluorescent imaging method. PMID:27253615
3D surface voxel tracing corrector for accurate bone segmentation.
Guo, Haoyan; Song, Sicong; Wang, Jinke; Guo, Maozu; Cheng, Yuanzhi; Wang, Yadong; Tamura, Shinichi
2018-06-18
For extremely close bones, the boundaries are weak and diffuse due to strong interaction between adjacent surfaces. These factors prevent accurate segmentation of bone structure. To alleviate these difficulties, we propose an automatic method for accurate bone segmentation. The method is based on a consideration of the 3D surface normal direction, which is used to detect the bone boundary in 3D CT images. Our segmentation method is divided into three main stages. Firstly, we use a surface tracing corrector combined with the Gaussian standard deviation σ to improve the estimation of the normal direction. Secondly, we determine an optimal value of σ for each surface point during this normal direction correction. Thirdly, we construct a 1D signal and refine the rough boundary along the corrected normal direction. The value of σ is used in the first directional derivative of the Gaussian to refine the location of the edge point along the accurate normal direction. Because the normal direction is corrected and the value of σ is optimized, our method is robust to noisy images and the narrow joint space caused by joint degeneration. We applied our method to 15 wrists and 50 hip joints for evaluation. In the wrist segmentation, a Dice overlap coefficient (DOC) of [Formula: see text]% was obtained by our method. In the hip segmentation, fivefold cross-validations were performed for two state-of-the-art methods: 40 hip joints were used for training in the two state-of-the-art methods, and 10 hip joints were used for testing and comparison. DOCs of [Formula: see text], [Formula: see text]%, and [Formula: see text]% were achieved by our method for the pelvis, the left femoral head and the right femoral head, respectively. Our method was shown to improve segmentation accuracy for several specific challenging cases. The results demonstrate that our approach achieved superior accuracy over the two state-of-the-art methods.
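The final refinement step, locating the edge along the corrected normal with the first directional derivative of a Gaussian, can be sketched on a 1D profile with a standard derivative-of-Gaussian filter; the per-point optimisation of σ described above is not shown.

```python
# Refine the boundary position on a 1D intensity profile sampled along the
# corrected surface normal, using the first derivative of a Gaussian.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def refine_edge_along_normal(profile, sigma):
    response = gaussian_filter1d(profile.astype(float), sigma=sigma, order=1)
    return int(np.argmax(np.abs(response)))   # index of the strongest edge response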
Multi-modal Registration for Correlative Microscopy using Image Analogies
Cao, Tian; Zach, Christopher; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc
2014-01-01
Correlative microscopy is a methodology combining the functionality of light microscopy with the high resolution of electron microscopy and other microscopy technologies for the same biological specimen. In this paper, we propose an image registration method for correlative microscopy, which is challenging due to the distinct appearance of biological structures when imaged with different modalities. Our method is based on image analogies and allows images of a given modality to be transformed into the appearance space of another modality. Hence, the registration between two different types of microscopy images can be reduced to a mono-modality image registration. We use a sparse representation model to obtain image analogies. The method makes use of corresponding image training patches of two different imaging modalities to learn a dictionary capturing appearance relations. We test our approach on backscattered electron (BSE) scanning electron microscopy (SEM)/confocal and transmission electron microscopy (TEM)/confocal images. We perform rigid, affine, and deformable registration via B-splines and show improvements over direct registration using both mutual information and sum of squared differences similarity measures to account for differences in image appearance. PMID:24387943
Quantitative assessment of image motion blur in diffraction images of moving biological cells
NASA Astrophysics Data System (ADS)
Wang, He; Jin, Changrong; Feng, Yuanming; Qi, Dandan; Sa, Yu; Hu, Xin-Hua
2016-02-01
Motion blur (MB) presents a significant challenge for obtaining high-contrast image data from biological cells with the polarization diffraction imaging flow cytometry (p-DIFC) method. A new p-DIFC experimental system has been developed to evaluate MB and its effect on image analysis using a time-delay-integration (TDI) CCD camera. Diffraction images of MCF-7 and K562 cells have been acquired with different speed-mismatch ratios and compared to characterize MB quantitatively. Frequency analysis of the diffraction images shows that the degree of MB can be quantified by bandwidth variations of the diffraction images along the motion direction. The analytical results were confirmed by the p-DIFC image data acquired at different speed-mismatch ratios and used to validate a method for numerically simulating MB on blur-free diffraction images, which provides a useful tool to examine the blurring effect on diffraction images acquired from the same cell. These results provide insights into the dependence of diffraction images on MB and allow significant improvement in rapid biological cell assay with the p-DIFC method.
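One way to quantify "bandwidth along the motion direction" is to collapse the image power spectrum onto that axis and measure the frequency range holding a fixed fraction of the energy; the axis convention and 95% threshold below are assumptions, not the paper's exact definition.

```python
# Half-bandwidth (in frequency bins) of the power spectrum along the motion axis.
import numpy as np

def bandwidth_along_motion(image, motion_axis=1, energy_fraction=0.95):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    profile = spectrum.sum(axis=1 - motion_axis)      # collapse the other axis
    profile = profile / profile.sum()
    center = len(profile) // 2
    for w in range(1, center + 1):                    # grow a symmetric window around DC
        if profile[center - w:center + w + 1].sum() >= energy_fraction:
            return w
    return center
```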
Hard X-ray imaging spectroscopy of FOXSI microflares
NASA Astrophysics Data System (ADS)
Glesener, Lindsay; Krucker, Sam; Christe, Steven; Buitrago-Casas, Juan Camilo; Ishikawa, Shin-nosuke; Foster, Natalie
2015-04-01
The ability to investigate particle acceleration and hot thermal plasma in solar flares relies on hard X-ray imaging spectroscopy using bremsstrahlung emission from high-energy electrons. Direct focusing of hard X-rays (HXRs) offers the ability to perform cleaner imaging spectroscopy of this emission than has previously been possible. Using direct focusing, spectra for different sources within the same field of view can be obtained easily, since each detector segment (pixel or strip) measures the energy of each photon interacting within that segment. The Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket payload has successfully completed two flights, observing microflares each time. Flare images demonstrate an imaging dynamic range far superior to that of the indirect methods of previous instruments such as the RHESSI spacecraft. In this work, we present imaging spectroscopy of microflares observed by FOXSI in its two flights. Imaging spectroscopy performed on raw FOXSI images reveals the temperature structure of flaring loops, while more advanced techniques such as deconvolution of the point spread function produce even more detailed images.
NASA Astrophysics Data System (ADS)
Li, Xiaosong; Li, Huafeng; Yu, Zhengtao; Kong, Yingchun
2015-07-01
An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the properties of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and also investigate the connection between the low-frequency image and the defocused image. The NSCT algorithm decomposes detail image information residing at different scales and in different directions into the bandpass subband coefficients. In order to correctly select the prefused bandpass directional coefficients, we introduce multiscale curvature, which not only inherits the advantages of windows with different sizes, but also correctly recognizes the focused pixels in the source images, and we then develop a new fusion scheme for the bandpass subband coefficients. The fused image can be obtained by inverse NSCT of the fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both visual quality and quantitative evaluation.
Diffusion imaging quality control via entropy of principal direction distribution.
Farzinfar, Mahshid; Oguz, Ipek; Smith, Rachel G; Verde, Audrey R; Dietrich, Cheryl; Gupta, Aditya; Escolar, Maria L; Piven, Joseph; Pujol, Sonia; Vachet, Clement; Gouttard, Sylvain; Gerig, Guido; Dager, Stephen; McKinstry, Robert C; Paterson, Sarah; Evans, Alan C; Styner, Martin A
2013-11-15
Diffusion MR imaging has received increasing attention in the neuroimaging community, as it yields new insights into the microstructural organization of white matter that are not available with conventional MRI techniques. While the technology has enormous potential, diffusion MRI suffers from a unique and complex set of image quality problems, limiting the sensitivity of studies and reducing the accuracy of findings. Furthermore, the acquisition time for diffusion MRI is longer than conventional MRI due to the need for multiple acquisitions to obtain directionally encoded Diffusion Weighted Images (DWI). This leads to increased motion artifacts, reduced signal-to-noise ratio (SNR), and increased proneness to a wide variety of artifacts, including eddy-current and motion artifacts, "venetian blind" artifacts, as well as slice-wise and gradient-wise inconsistencies. Such artifacts mandate stringent Quality Control (QC) schemes in the processing of diffusion MRI data. Most existing QC procedures are conducted in the DWI domain and/or on a voxel level, but our own experiments show that these methods often do not fully detect and eliminate certain types of artifacts, often only visible when investigating groups of DWI's or a derived diffusion model, such as the most-employed diffusion tensor imaging (DTI). Here, we propose a novel regional QC measure in the DTI domain that employs the entropy of the regional distribution of the principal directions (PD). The PD entropy quantifies the scattering and spread of the principal diffusion directions and is invariant to the patient's position in the scanner. High entropy value indicates that the PDs are distributed relatively uniformly, while low entropy value indicates the presence of clusters in the PD distribution. The novel QC measure is intended to complement the existing set of QC procedures by detecting and correcting residual artifacts. Such residual artifacts cause directional bias in the measured PD and here called dominant direction artifacts. Experiments show that our automatic method can reliably detect and potentially correct such artifacts, especially the ones caused by the vibrations of the scanner table during the scan. The results further indicate the usefulness of this method for general quality assessment in DTI studies. Copyright © 2013 Elsevier Inc. All rights reserved.
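A simplified version of the proposed measure bins the principal directions of a region into a spherical histogram (after folding antipodal directions) and reports the Shannon entropy of the bin occupancy; the binning scheme here is an illustrative assumption rather than the authors' exact implementation.

```python
# Entropy of the regional principal-direction (PD) distribution.
import numpy as np

def pd_entropy(directions, n_az=16, n_el=8):
    """directions: (N, 3) principal diffusion directions within a region."""
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    d[d[:, 2] < 0] *= -1                                  # fold antipodal directions
    az = np.arctan2(d[:, 1], d[:, 0])
    el = np.arccos(np.clip(d[:, 2], -1.0, 1.0))
    hist, _, _ = np.histogram2d(az, el, bins=[n_az, n_el],
                                range=[[-np.pi, np.pi], [0, np.pi / 2]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())                  # high = uniform, low = clustered
```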
NASA Astrophysics Data System (ADS)
Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang
2018-04-01
Synthetic aperture radar (SAR) images are independent of atmospheric conditions, making SAR an ideal image source for change detection. Existing methods directly analyze all regions in the speckle-noise-contaminated difference image, and their performance is easily affected by small noisy regions. In this paper, we propose a novel saliency-guided change detection framework based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, the saliency detection method based on pattern and intensity distinctiveness analysis is utilized to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze the pixels in the changed-region candidates. Thus, the final change map is obtained by classifying these pixels into a changed or an unchanged class. The experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
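A compact sketch of the second half of this pipeline — log-ratio difference image, then PCA plus k-means on per-pixel patch features — is given below. The saliency step is replaced here by clustering all pixels, so this illustrates only the classification stage; patch size and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def change_map(img1, img2, patch=5, n_components=3, eps=1e-6):
    """Toy two-class change detection on a log-ratio difference image.

    img1, img2 : co-registered SAR intensity images (2D arrays, same shape).
    Patch vectors around every pixel are projected with PCA and clustered
    with k-means into changed / unchanged classes.
    """
    di = np.abs(np.log((img1 + eps) / (img2 + eps)))   # log-ratio difference image
    h, w = di.shape
    r = patch // 2
    padded = np.pad(di, r, mode='reflect')
    # Collect a patch feature vector per pixel.
    feats = np.stack([padded[i:i + h, j:j + w].ravel()
                      for i in range(patch) for j in range(patch)], axis=1)
    feats = PCA(n_components=n_components).fit_transform(feats)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    labels = labels.reshape(h, w)
    # Call the cluster with the larger mean log-ratio the "changed" class.
    if di[labels == 0].mean() > di[labels == 1].mean():
        labels = 1 - labels
    return labels  # 1 = changed, 0 = unchanged
```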
Remote Sensing of Soils for Environmental Assessment and Management.
NASA Technical Reports Server (NTRS)
DeGloria, Stephen D.; Irons, James R.; West, Larry T.
2014-01-01
The next generation of imaging systems integrated with complex analytical methods will revolutionize the way we inventory and manage soil resources across a wide range of scientific disciplines and application domains. This special issue highlights those systems and methods for the direct benefit of environmental professionals and students who employ imaging and geospatial information for improved understanding, management, and monitoring of soil resources.
Deconvolving the wedge: maximum-likelihood power spectra via spherical-wave visibility modelling
NASA Astrophysics Data System (ADS)
Ghosh, A.; Mertens, F. G.; Koopmans, L. V. E.
2018-03-01
Direct detection of the Epoch of Reionization (EoR) via the red-shifted 21-cm line will have unprecedented implications for the study of structure formation in the infant Universe. To fulfil this promise, current and future 21-cm experiments need to detect this weak EoR signal in the presence of foregrounds that are several orders of magnitude larger. This requires extreme noise control and improved wide-field high dynamic-range imaging techniques. We propose a new imaging method based on a maximum likelihood framework which solves for the interferometric equation directly on the sphere, or equivalently in the uvw-domain. The method uses the one-to-one relation between spherical waves and spherical harmonics (SpH). It consistently handles signals from the entire sky, and does not require a w-term correction. The SpH coefficients represent the sky-brightness distribution and the visibilities in the uvw-domain, and provide a direct estimate of the spatial power spectrum. Using these spectrally smooth SpH coefficients, bright foregrounds can be removed from the signal, including their side-lobe noise, which is one of the limiting factors in high dynamic-range wide-field imaging. Chromatic effects causing the so-called `wedge' are effectively eliminated (i.e. deconvolved) in the cylindrical (k⊥, k∥) power spectrum, compared to a power spectrum computed directly from the images of the foreground visibilities where the wedge is clearly present. We illustrate our method using simulated Low-Frequency Array observations, finding an excellent reconstruction of the input EoR signal with minimal bias.
A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei
2016-03-01
Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging tool for monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained with this technique are too noisy to resolve this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round identification method, tree-structured nonlinear filters, Kalman filters, and a cell tracking method. After these procedures, most of the noise was eliminated and the host images were recovered, with their moving directions and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
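Of the processing steps listed, the Kalman filtering stage can be illustrated with a minimal constant-velocity filter applied to a detected centroid track. This is a generic sketch, not the authors' tree-structured filter pipeline, and the noise parameters are assumptions.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, process_var=1e-2, meas_var=1.0):
    """Constant-velocity Kalman filter for a 2D centroid track.

    measurements : (T, 2) array of noisy (x, y) detections, one per frame.
    Returns the filtered (x, y) positions; the velocity components in the state
    could be used to annotate movement direction and speed in the video.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)      # state transition (x, y, vx, vy)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)      # only position is observed
    Q = process_var * np.eye(4)                    # process noise covariance
    R = meas_var * np.eye(2)                       # measurement noise covariance
    x = np.array([measurements[0, 0], measurements[0, 1], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in measurements:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.asarray(out)
```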
NASA Astrophysics Data System (ADS)
Chen, Hu; Zhang, Yi; Zhou, Jiliu; Wang, Ge
2017-09-01
Given the potential risk of X-ray radiation to the patient, low-dose CT has attracted considerable interest in the medical imaging field. Currently, the mainstream low-dose CT methods include vendor-specific sinogram-domain filtration and iterative reconstruction algorithms, but they need access to raw data whose formats are not transparent to most users. Due to the difficulty of modeling the statistical characteristics in the image domain, the existing methods for directly processing reconstructed images cannot eliminate image noise very well while keeping structural details. Inspired by the idea of deep learning, here we combine the autoencoder, deconvolution network, and shortcut connections into a residual encoder-decoder convolutional neural network (RED-CNN) for low-dose CT imaging. After patch-based training, the proposed RED-CNN achieves competitive performance relative to state-of-the-art methods. In particular, our method has been favorably evaluated in terms of noise suppression and structural preservation.
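A sketch of a residual encoder-decoder CNN in the spirit described above is given below (PyTorch). Layer counts, kernel size, and channel width are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class REDCNNSketch(nn.Module):
    """Residual encoder-decoder CNN sketch for low-dose CT patch denoising."""

    def __init__(self, channels=32):
        super().__init__()
        self.enc = nn.ModuleList(
            [nn.Conv2d(1, channels, 5)] +
            [nn.Conv2d(channels, channels, 5) for _ in range(4)])
        self.dec = nn.ModuleList(
            [nn.ConvTranspose2d(channels, channels, 5) for _ in range(4)] +
            [nn.ConvTranspose2d(channels, 1, 5)])
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        skips = [x]                      # shortcut from the input image itself
        for i, layer in enumerate(self.enc):
            x = self.relu(layer(x))
            if i in (1, 3):              # keep shortcuts after the 2nd and 4th conv
                skips.append(x)
        for i, layer in enumerate(self.dec):
            x = layer(x)
            if i in (0, 2, 4):           # add the mirrored shortcut back in
                x = x + skips.pop()
            x = self.relu(x)
        return x

# A 64x64 low-dose patch in, a denoised patch of the same size out.
patch = torch.randn(1, 1, 64, 64)
print(REDCNNSketch()(patch).shape)
```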
Peng, Shao-Hu; Kim, Deok-Hwan; Lee, Seok-Lyong; Lim, Myung-Kwan
2010-01-01
Texture analysis is one of the most important feature analysis methods in computer-aided diagnosis (CAD) systems for disease diagnosis. In this paper, we propose a Uniformity Estimation Method (UEM) for local brightness and structure to detect pathological changes in chest CT images. Based on the characteristics of chest CT images, we extract texture features by proposing an extension of the rotation-invariant LBP (ELBP(riu4)) together with the gradient orientation difference, so as to represent a uniform pattern of brightness and structure in the image. The utilization of ELBP(riu4) and the gradient orientation difference allows us to extract rotation-invariant texture features in multiple directions. Beyond this, we propose to employ the integral image technique to speed up the texture feature computation of the spatial gray level dependent method (SGLDM). Copyright © 2010 Elsevier Ltd. All rights reserved.
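The exact ELBP(riu4) extension is not spelled out here, so the sketch below falls back on the standard rotation-invariant uniform LBP from scikit-image, plus an integral image (summed-area table) of the kind used to accelerate window statistics such as those needed by the SGLDM. It is an illustration of the general ingredients, not the paper's method.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_riu_histogram(image, P=8, R=1):
    """Rotation-invariant uniform LBP histogram of a grayscale slice."""
    codes = local_binary_pattern(image, P, R, method='uniform')   # codes 0 .. P+1
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

def integral_image(image):
    """Summed-area table: any window sum becomes O(1)."""
    return np.cumsum(np.cumsum(image.astype(np.float64), axis=0), axis=1)

def window_sum(ii, r0, c0, r1, c1):
    """Sum of image[r0:r1, c0:c1] from the integral image ii (r1, c1 exclusive)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
print(lbp_riu_histogram(img).shape)                                 # (P + 2,)
print(window_sum(integral_image(img), 10, 10, 42, 42), img[10:42, 10:42].sum())
```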
NASA Astrophysics Data System (ADS)
Okuwaki, R.; Kasahara, A.; Yagi, Y.
2017-12-01
The backprojection (BP) method has been one of the most powerful tools for tracking the seismic-wave sources of large and mega earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of the projected images and to mitigate the spurious (dummy) imaging of the depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point is related to how much of the observed waveform was radiated from that point. Since the amplitude of the GF associated with the slip rate increases with depth, as the rigidity increases with depth, the intensity of the BP/HBP image is inherently depth dependent. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and to discuss the rupture properties along the fault drawn from the high- and low-frequency waveforms with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods, which image the distribution of the potency-rate density by introducing alternative normalizing factors in the conventional formulations. For the BP method, the observed waveform is normalized with the maximum amplitude of the P-phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function with the squared sum of the GF. The normalized waveforms or cross-correlation functions are then stacked over all stations to enhance the signal-to-noise ratio. We will present performance tests of the new formulations using synthetic waveforms and real data from the Mw 8.3 2015 Illapel, Chile earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used for exploring the rupture properties of earthquakes.
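The proposed normalization for the BP case — dividing each observed trace by the peak P-phase amplitude of its Green's function before the delay-and-stack — can be sketched as follows. Travel times and Green's functions are assumed to be precomputed, and the wrap-around of np.roll stands in for proper zero-padding; this is an illustration, not the authors' code.

```python
import numpy as np

def normalized_backprojection(waveforms, greens, delays, dt):
    """Delay-and-stack backprojection of GF-normalized waveforms for one source point.

    waveforms : (n_sta, n_t) observed traces.
    greens    : (n_sta, n_t) theoretical Green's functions for this source point.
    delays    : (n_sta,) theoretical travel times (s) from the source point to each station.
    dt        : sampling interval (s).
    Returns the stacked trace, whose intensity relates to potency rate rather
    than to a depth-dependent moment-rate amplitude.
    """
    n_sta, n_t = waveforms.shape
    stack = np.zeros(n_t)
    for u, g, tau in zip(waveforms, greens, delays):
        norm = np.max(np.abs(g))                     # peak P-phase amplitude of the GF
        # np.roll wraps around; with real data one would zero-pad instead.
        shifted = np.roll(u / norm, -int(round(tau / dt)))
        stack += shifted
    return stack / n_sta
```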
Ultrasonically modulated x-ray phase contrast and vibration potential imaging methods
NASA Astrophysics Data System (ADS)
Hamilton, Theron J.; Cao, Guohua; Wang, Shougang; Bailat, Claude J.; Nguyen, Cuong K.; Li, Shengqiong; Gehring, Stephan; Wands, Jack; Gusev, Vitalyi; Rose-Petruck, Christoph; Diebold, Gerald J.
2006-02-01
We show that the radiation pressure exerted by a beam of ultrasound can be used for contrast enhancement in high resolution x-ray imaging of tissue. Interfacial features of objects are highlighted as a result of both the displacement introduced by the ultrasound and the inherent sensitivity of x-ray phase contrast imaging to density variations. The potential of the method is demonstrated by imaging various tumor phantoms and tumors from mice. The directionality of the acoustic radiation force and its localization in space permit the imaging of ultrasound-selected tissue volumes. In a related effort we report progress on the development of an imaging technique using an electrokinetic effect known as the ultrasonic vibration potential. The ultrasonic vibration potential refers to the voltage generated when ultrasound traverses a colloidal or ionic fluid. The theory of imaging based on the vibration potential is reviewed, and an expression is given that describes the signal from an arbitrary object. The experimental apparatus consists of a pair of parallel plates connected to the irradiated body, a low noise preamplifier, a radio frequency lock-in amplifier, translation stages for the ultrasonic transducer that generates the ultrasound, and a computer for data storage and image formation. Experiments are reported where bursts of ultrasound are directed onto colloidal silica objects placed within inert bodies.
Diffusion tensor imaging of the sural nerve in normal controls
Kim, Boklye; Srinivasan, Ashok; Sabb, Brian; Feldman, Eva L; Pop-Busui, Rodica
2016-01-01
Objective: To develop a diffusion tensor imaging (DTI) protocol for assessing the sural nerve in healthy subjects. Methods: Sural nerves in 25 controls were imaged using DTI at 3 T with 6, 15, and 32 gradient directions. Fractional anisotropy (FA) and apparent diffusion coefficient (ADC) were computed from nerve regions of interest co-registered with T2-weighted images. Results: Coronal images with 0.5 (RL) × 2.0 (FH) × 0.5 (AP) mm³ resolution successfully localized the sural nerve. FA maps showed less variability with 32 directions (0.559±0.071) compared to 15 (0.590±0.080) and 6 (0.659±0.109). Conclusions: Our DTI protocol was effective in imaging sural nerves in controls to establish normative FA/ADC, with potential to be used non-invasively in diseased nerves of patients. PMID:24908367
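The FA and ADC values reported above follow directly from the diffusion tensor eigenvalues; a minimal sketch of that computation, assuming the eigenvalues have already been obtained from tensor fitting, is:

```python
import numpy as np

def fa_and_adc(eigenvalues):
    """Fractional anisotropy and mean diffusivity (ADC) from diffusion tensor eigenvalues.

    eigenvalues : (..., 3) array of per-voxel tensor eigenvalues.
    FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||,  ADC = mean(lambda).
    """
    lam = np.asarray(eigenvalues, dtype=float)
    adc = lam.mean(axis=-1)
    num = ((lam - adc[..., None]) ** 2).sum(axis=-1)
    den = np.maximum((lam ** 2).sum(axis=-1), 1e-20)
    fa = np.sqrt(1.5 * num / den)
    return fa, adc

# Example: an isotropic voxel has FA near 0, an elongated one approaches 1.
print(fa_and_adc(np.array([[1.0, 1.0, 1.0], [1.7, 0.2, 0.2]])))
```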
Quantitative single-molecule imaging by confocal laser scanning microscopy.
Vukojevic, Vladana; Heidkamp, Marcus; Ming, Yu; Johansson, Björn; Terenius, Lars; Rigler, Rudolf
2008-11-25
A new approach to quantitative single-molecule imaging by confocal laser scanning microscopy (CLSM) is presented. It relies on fluorescence intensity distribution to analyze the molecular occurrence statistics captured by digital imaging and enables direct determination of the number of fluorescent molecules and their diffusion rates without resorting to temporal or spatial autocorrelation analyses. Digital images of fluorescent molecules were recorded by using fast scanning and avalanche photodiode detectors. In this way the signal-to-background ratio was significantly improved, enabling direct quantitative imaging by CLSM. The potential of the proposed approach is demonstrated by using standard solutions of fluorescent dyes, fluorescently labeled DNA molecules, quantum dots, and the Enhanced Green Fluorescent Protein in solution and in live cells. The method was verified by using fluorescence correlation spectroscopy. The relevance for biological applications, in particular, for live cell imaging, is discussed.
Contrast-guided image interpolation.
Wei, Zhe; Ma, Kai-Kuang
2013-11-01
In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.
Imaging of DNA and Protein by SFM and Combined SFM-TIRF Microscopy.
Grosbart, Małgorzata; Ristić, Dejan; Sánchez, Humberto; Wyman, Claire
2018-01-01
Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nm resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.
Sample preparation for SFM imaging of DNA, proteins, and DNA-protein complexes.
Ristic, Dejan; Sanchez, Humberto; Wyman, Claire
2011-01-01
Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate, and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nanometer resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA-bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA, and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.
NASA Astrophysics Data System (ADS)
Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.
2017-03-01
Osteoporosis is associated with increased fracture risk. Recent advancements in in vivo imaging allow segmentation of trabecular bone (TB) microstructures, which are a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of the trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network, and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. Also, the method generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied on the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold model over a cubical volume of interest (VOI), and its correlation with the YM, computed using micro-CT-based conventional finite-element analysis over the same VOI, was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB, and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.
NASA Astrophysics Data System (ADS)
Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli
2016-10-01
Chi-squared transform (CST), as a statistical method, can describe the difference degree between vectors. The CST-based methods operate directly on information stored in the difference image and are simple and effective methods for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to much noise in the result of change detection. An improved unsupervised change detection method is proposed based on spatial constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean and variance matrix of the difference image of bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the key parameter confidence level in the SCCST method, a pseudotraining dataset is constructed to estimate the optimal value. Then, the result of SCCST, as an initial solution of change detection, is further improved by the MRF model. The experiments on simulated and real multitemporal and multispectral images indicate that the proposed method performs well in comprehensive indices compared with other methods.
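For reference, the plain (non-spatially-constrained) chi-squared transform on which SCCST builds can be sketched as below; the iterative trimming, spatial constraint, and MRF refinement of the proposed method are omitted, and the global mean/covariance estimate is a simplification.

```python
import numpy as np
from scipy.stats import chi2

def cst_change_probability(img_t1, img_t2):
    """Chi-squared transform on the per-pixel spectral difference of bitemporal images.

    img_t1, img_t2 : (H, W, B) co-registered multispectral images.
    Z = (d - mu)^T Sigma^{-1} (d - mu) follows a chi-squared distribution with B
    degrees of freedom under the no-change hypothesis; its CDF gives a change probability.
    """
    d = (img_t2.astype(float) - img_t1.astype(float)).reshape(-1, img_t1.shape[-1])
    mu = d.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(d, rowvar=False))
    centered = d - mu
    z = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)   # per-pixel quadratic form
    prob = chi2.cdf(z, df=d.shape[1])
    return prob.reshape(img_t1.shape[:2])
```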
Estimation of chromatic errors from broadband images for high contrast imaging: sensitivity analysis
NASA Astrophysics Data System (ADS)
Sirbu, Dan; Belikov, Ruslan
2016-01-01
Many concepts have been proposed to enable direct imaging of planets around nearby stars, which would enable spectroscopic observations of their atmospheres and the potential discovery of biomarkers. The main technical challenge associated with direct imaging of exoplanets is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. Usage of an internal coronagraph with an adaptive optical system for wavefront correction is one of the most mature methods and is being developed as an instrument addition to the WFIRST-AFTA space mission. In addition, such instruments as GPI and SPHERE are already being used on the ground and are yielding spectra of giant planets. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, mid-spatial frequency wavefront errors must be estimated. To date, most broadband lab demonstrations use narrowband filters to obtain an estimate of the chromaticity of the wavefront error, and this can result in usage of a large percentage of the total integration time. Previously, we have proposed a method to estimate the chromaticity of wavefront errors using only broadband images; we have demonstrated that under idealized conditions wavefront errors can be estimated from images composed of discrete wavelengths. This is achieved by using DM probes with sufficient spatially-localized chromatic diversity. Here we report on the results of a study of the performance of this method with respect to realistic broadband images including noise. Additionally, we study optimal probe patterns that enable reduction of the number of probes used and compare the integration time with narrowband and IFS estimation methods.
Song, Pengfei; Manduca, Armando; Zhao, Heng; Urban, Matthew W.; Greenleaf, James F.; Chen, Shigao
2014-01-01
A fast shear compounding method was developed in this study using only one shear wave push-detect cycle, such that the shear wave imaging frame rate is preserved and motion artifacts are minimized. The proposed method is composed of the following steps: 1. applying a comb-push to produce multiple differently angled shear waves at different spatial locations simultaneously; 2. decomposing the complex shear wave field into individual shear wave fields with differently oriented shear waves using a multi-directional filter; 3. using a robust two-dimensional (2D) shear wave speed calculation to reconstruct 2D shear elasticity maps from each filter direction; 4. compounding these 2D maps from different directions into a final map. An inclusion phantom study showed that the fast shear compounding method could achieve comparable performance to conventional shear compounding without sacrificing the imaging frame rate. A multi-inclusion phantom experiment showed that the fast shear compounding method could provide a full field-of-view (FOV), 2D, and compounded shear elasticity map with three types of inclusions clearly resolved and stiffness measurements showing excellent agreement to the nominal values. PMID:24613636
NASA Astrophysics Data System (ADS)
Miao, Jianwei; Ishikawa, Tetsuya; Shen, Qun; Earnest, Thomas
2008-05-01
In 1999, researchers extended X-ray crystallography to allow the imaging of noncrystalline specimens by measuring the X-ray diffraction pattern of a noncrystalline specimen and then directly phasing it using the oversampling method with iterative algorithms. Since then, the field has evolved moving in three important directions. The first is the 3D structural determination of noncrystalline materials, which includes the localization of the defects and strain field inside nanocrystals, and quantitative 3D imaging of disordered materials such as nanoparticles and biomaterials. The second is the 3D imaging of frozen-hydrated whole cells at a resolution of 10 nm or better. A main thrust is to localize specific multiprotein complexes inside cells. The third is the potential of imaging single large protein complexes using extremely intense and ultrashort X-ray pulses. In this article, we review the principles of this methodology, summarize recent developments in each of the three directions, and illustrate a few examples.
Study on the influence factors of camouflage target polarization detection
NASA Astrophysics Data System (ADS)
Huang, Yanhua; Chen, Lei; Li, Xia; Wu, Wenyuan
2016-10-01
Expressions for the degree of linear polarization (DOLP) at any polarizer direction (PD) were deduced based on the Stokes vector and Mueller matrix, and outdoor experiments were carried out to verify them. This paper mainly explores the DOLP-image contrast (DOLPC) between the target image and the background image, and studies the PD and the RGB waveband, which are considered two important factors influencing camouflage target polarization detection. It was found that the DOLPC between target and background was clearly higher than the contrast of the intensity image. With the reference direction set so that the polarizer was perpendicular to the plane of incidence, the DOLP image with a 60-degree angle between the PD and the reference direction had a relatively high DOLPC, followed by the 45-degree angle and then the 35-degree angle. The outdoor polarization detection experiment with controlled wavebands showed that the DOLPC results differed significantly among the 650 nm, 550 nm, and 450 nm wavebands, with the 650 nm band giving the best polarization detection performance.
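The Stokes-based DOLP computation can be sketched with the common three-angle polarizer scheme (0°, 45°, 90°); the paper derives expressions for arbitrary polarizer directions, so this is a simplified special case, and the contrast measure below is an assumed (Michelson-style) definition.

```python
import numpy as np

def dolp_image(i0, i45, i90):
    """Degree of linear polarization from intensities behind a polarizer at 0, 45, 90 deg.

    Standard Stokes relations: S0 = I0 + I90, S1 = I0 - I90, S2 = 2*I45 - I0 - I90,
    DOLP = sqrt(S1^2 + S2^2) / S0.
    """
    i0, i45, i90 = (np.asarray(a, dtype=float) for a in (i0, i45, i90))
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = 2.0 * i45 - i0 - i90
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)

def dolp_contrast(dolp, target_mask, background_mask):
    """DOLP-image contrast between target and background regions."""
    t, b = dolp[target_mask].mean(), dolp[background_mask].mean()
    return abs(t - b) / (t + b + 1e-12)
```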
Defect detection of castings in radiography images using a robust statistical feature.
Zhao, Xinyue; He, Zaixing; Zhang, Shuyou
2014-01-01
One of the most commonly used optical methods for defect detection is radiographic inspection. Compared with methods that extract defects directly from the radiography image, model-based methods deal with the case of an object with complex structure well. However, detection of small low-contrast defects in nonuniformly illuminated images is still a major challenge for them. In this paper, we present a new method based on the grayscale arranging pairs (GAP) feature to detect casting defects in radiography images automatically. First, a model is built using pixel pairs with a stable intensity relationship based on the GAP feature from previously acquired images. Second, defects can be extracted by comparing the difference of intensity-difference signs between the input image and the model statistically. The robustness of the proposed method to noise and illumination variations has been verified on casting radioscopic images with defects. The experimental results showed that the average computation time of the proposed method in the testing stage is 28 ms per image on a computer with a Pentium Core 2 Duo 3.00 GHz processor. For the comparison, we also evaluated the performance of the proposed method as well as that of the mixture-of-Gaussian-based and crossing line profile methods. The proposed method achieved 2.7% and 2.0% false negative rates in the noise and illumination variation experiments, respectively.
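A minimal sketch of the GAP idea — learning pixel pairs with a stable intensity ordering from defect-free training images and counting sign violations at test time — is given below. The random pair sampling and the per-pixel violation count are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def learn_stable_pairs(train_images, n_pairs=20000, stability=0.98, seed=0):
    """Pick random pixel pairs whose intensity-difference sign is stable across training images."""
    rng = np.random.default_rng(seed)
    imgs = np.stack([im.ravel().astype(float) for im in train_images])   # (n_img, n_pix)
    n_pix = imgs.shape[1]
    a = rng.integers(0, n_pix, n_pairs)
    b = rng.integers(0, n_pix, n_pairs)
    signs = np.sign(imgs[:, a] - imgs[:, b])                             # (n_img, n_pairs)
    majority = np.sign(signs.sum(axis=0))
    agree = (signs == majority).mean(axis=0)
    keep = (agree >= stability) & (majority != 0)
    return a[keep], b[keep], majority[keep]

def defect_score(image, a, b, majority):
    """Per-pixel count of violated pair relations; high counts suggest defects."""
    flat = image.ravel().astype(float)
    violated = np.sign(flat[a] - flat[b]) != majority
    score = np.zeros_like(flat)
    np.add.at(score, a[violated], 1)
    np.add.at(score, b[violated], 1)
    return score.reshape(image.shape)
```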
NASA Astrophysics Data System (ADS)
Kim, Soo Jeong; Lee, Dong Hyuk; Song, Inchang; Kim, Nam Gook; Park, Jae-Hyeung; Kim, JongHyo; Han, Man Chung; Min, Byong Goo
1998-07-01
The phase-contrast (PC) method of magnetic resonance imaging (MRI) has been used for quantitative measurements of flow velocity and volume flow rate. It is a noninvasive technique which provides an accurate two-dimensional velocity image. Moreover, phase-contrast cine magnetic resonance imaging combines the flow-dependent contrast of PC-MRI with the ability of cardiac cine imaging to produce images throughout the cardiac cycle. However, the accuracy of the data acquired from a single through-plane velocity encoding can be reduced by the effect of flow direction, because in many practical cases flow directions are not uniform throughout the whole region of interest. In this study, we present a dynamic three-dimensional velocity vector mapping method using PC-MRI which can visualize complex flow patterns through 3D volume-rendered images displayed dynamically. The direction of velocity mapping can be selected along any of three orthogonal axes. By vector summation, the three maps can be combined to form a velocity vector map that determines the velocity regardless of the flow direction. At the same time, the cine method is used to observe the dynamic change of flow. We performed a phantom study to evaluate the accuracy of the suggested PC-MRI in continuous and pulsatile flow measurement. The pulsatile flow waveform was generated by the ventricular assist device (VAD) HEMO-PULSA (Biomedlab, Seoul, Korea). We varied the flow velocity, pulsatile flow waveform, and pulsing rate. The PC-MRI-derived velocities were compared with Doppler-derived results. The velocities of the two measurements showed a significant linear correlation. Dynamic three-dimensional velocity vector mapping was carried out for two cases. First, we applied it to the flow analysis around an artificial heart valve in a flat phantom. We could observe the flow pattern around the valve through the 3-dimensional cine images. Next, we applied it to the complex flow inside the polymer sac that is used as the ventricle in a totally implantable artificial heart (TAH). As a result, we could observe the flow pattern around the valves of the sac, even though such complex flow cannot be detected correctly with the conventional phase-contrast method. In addition, we could calculate the cardiac output from the TAH sac by quantitative measurement of the volume of flow across the outlet valve.
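Combining the three orthogonal velocity encodings by vector summation, and integrating a through-plane velocity map over an outlet region of interest to obtain volume flow, can be sketched as follows. This is a generic illustration, not the authors' implementation; units are assumed.

```python
import numpy as np

def velocity_vector_map(vx, vy, vz):
    """Combine three orthogonal phase-contrast velocity maps into speed and unit direction.

    vx, vy, vz : velocity component images of the same shape (e.g. cm/s).
    """
    v = np.stack([vx, vy, vz], axis=-1).astype(float)
    speed = np.linalg.norm(v, axis=-1)
    direction = v / np.maximum(speed[..., None], 1e-12)   # unit flow direction per voxel
    return speed, direction

def volume_flow_rate(v_through_plane, pixel_area_cm2, roi_mask):
    """Volume flow rate (ml/s) across an ROI from a through-plane velocity map (cm/s)."""
    return float(v_through_plane[roi_mask].sum() * pixel_area_cm2)
```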
Ye, Chuyang; Murano, Emi; Stone, Maureen; Prince, Jerry L
2015-10-01
The tongue is a critical organ for a variety of functions, including swallowing, respiration, and speech. It contains intrinsic and extrinsic muscles that play an important role in changing its shape and position. Diffusion tensor imaging (DTI) has been used to reconstruct tongue muscle fiber tracts. However, previous studies have been unable to reconstruct the crossing fibers that occur where the tongue muscles interdigitate, which is a large percentage of the tongue volume. To resolve crossing fibers, multi-tensor models on DTI and more advanced imaging modalities, such as high angular resolution diffusion imaging (HARDI) and diffusion spectrum imaging (DSI), have been proposed. However, because of the involuntary nature of swallowing, there is insufficient time to acquire a sufficient number of diffusion gradient directions to resolve crossing fibers while the in vivo tongue is in a fixed position. In this work, we address the challenge of distinguishing interdigitated tongue muscles from limited diffusion magnetic resonance imaging by using a multi-tensor model with a fixed tensor basis and incorporating prior directional knowledge. The prior directional knowledge provides information on likely fiber directions at each voxel, and is computed with anatomical knowledge of tongue muscles. The fiber directions are estimated within a maximum a posteriori (MAP) framework, and the resulting objective function is solved using a noise-aware weighted ℓ1-norm minimization algorithm. Experiments were performed on a digital crossing phantom and in vivo tongue diffusion data including three control subjects and four patients with glossectomies. On the digital phantom, effects of parameters, noise, and prior direction accuracy were studied, and parameter settings for real data were determined. The results on the in vivo data demonstrate that the proposed method is able to resolve interdigitated tongue muscles with limited gradient directions. The distributions of the computed fiber directions in both the controls and the patients were also compared, suggesting a potential clinical use for this imaging and image analysis methodology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Communication: Time- and space-sliced velocity map electron imaging
NASA Astrophysics Data System (ADS)
Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Fan, Lin; Winney, Alexander H.; Li, Wen
2014-12-01
We develop a new method to achieve slice electron imaging using a conventional velocity map imaging apparatus with two additional components: a fast frame complementary metal-oxide semiconductor camera and a high-speed digitizer. The setup was previously shown to be capable of 3D detection and coincidence measurements of ions. Here, we show that when this method is applied to electron imaging, a time slice of 32 ps and a spatial slice of less than 1 mm thick can be achieved. Each slice directly extracts 3D velocity distributions of electrons and provides electron velocity distributions that are impossible or difficult to obtain with a standard 2D imaging electron detector.
Modeling human faces with multi-image photogrammetry
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2002-03-01
Modeling and measurement of the human face have been increasing in importance for various purposes. Laser scanning, coded-light range digitizers, image-based approaches, and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing, and virtual reality to produce three-dimensional computer models of the human face. The requirements differ depending on the application; ours are primarily high measurement accuracy and automation of the process. The method presented in this paper is based on multi-image photogrammetry. The equipment, the method, and the results achieved with this technique are depicted here. The process is composed of five steps: acquisition of multi-images, calibration of the system, establishment of corresponding points in the images, computation of their 3-D coordinates, and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected from two directions onto the face. The multi-image matching process, based on a geometrically constrained least squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied on the matching results to remove errors. After filtering the data, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step of data processing is the generation of a surface model from the point cloud and the application of smoothing filters. Moreover, a color texture image can be draped over the model to achieve a photorealistic visualization. The advantage of the presented method over laser scanning and coded-light range digitizers is the acquisition of the source data in a fraction of a second, allowing the measurement of human faces with higher accuracy and the possibility to measure dynamic events like the speech of a person.
NASA Astrophysics Data System (ADS)
Yeboah-Forson, Albert; Comas, Xavier; Whitman, Dean
2014-07-01
The limestone composing the Biscayne Aquifer in southeast Florida is characterized by cavities and solution features that are difficult to detect and quantify accurately because of their heterogeneous spatial distribution. Such heterogeneities have been shown by previous studies to exert a strong influence on the direction of groundwater flow. In this study we use an integrated array of geophysical methods to detect the lateral extent and distribution of solution features as indicative of anisotropy in the Biscayne Aquifer. Geophysical methods included azimuthal resistivity measurements, electrical resistivity imaging (ERI) and ground penetrating radar (GPR) and were constrained with direct borehole information from nearby wells. The geophysical measurements suggest the presence of a zone of low electrical resistivity (from ERI) and low electromagnetic wave velocity (from GPR) below the water table at depths of 4-9 m that corresponds to the depth of solution conduits seen in digital borehole images. Azimuthal electrical measurements at the site yielded coefficients of electrical anisotropy as high as 1.36, suggesting the presence of an area of high porosity (most likely comprising different types of porosity) oriented in the E-W direction. This study shows how integrated geophysical methods can help detect the presence of areas of enhanced porosity which may influence the direction of groundwater flow in a complex anisotropic and heterogeneous karst system like the Biscayne Aquifer.
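The coefficient of electrical anisotropy quoted above can be estimated from an azimuthal resistivity survey roughly as follows; taking the square root of the max/min apparent-resistivity ratio is one common convention and is an assumption here, and the interpretation of the extreme azimuths (strike vs. normal to strike) is left to the analyst.

```python
import numpy as np

def apparent_anisotropy(azimuths_deg, apparent_resistivity_ohm_m):
    """Coefficient of apparent electrical anisotropy from an azimuthal resistivity survey.

    Returns lambda = sqrt(rho_max / rho_min) together with the azimuths at which
    the maximum and minimum apparent resistivities were measured.
    """
    az = np.asarray(azimuths_deg, dtype=float)
    rho = np.asarray(apparent_resistivity_ohm_m, dtype=float)
    lam = np.sqrt(rho.max() / rho.min())
    return lam, az[np.argmax(rho)], az[np.argmin(rho)]

# Example: apparent resistivities (ohm-m) measured every 30 degrees of azimuth.
print(apparent_anisotropy(np.arange(0, 180, 30), [95, 110, 130, 140, 120, 100]))
```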
NASA Astrophysics Data System (ADS)
Qian, Tingting; Wang, Lianlian; Lu, Guanghua
2017-07-01
Radar correlated imaging (RCI) introduces optical correlated imaging technology into traditional microwave imaging, and has attracted widespread attention recently. Conventional RCI methods neglect the structural information of complex extended targets, which degrades the quality of the recovered result; thus a novel combination of negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. The sparsity is measured by a sequential order-one negative exponential function, and the 2D total variation technique is then introduced to design a novel optimization problem for extended target imaging. The well-established alternating direction method of multipliers is applied to solve the new problem. Experimental results show that the proposed algorithm can efficiently realize high-resolution imaging of extended targets.
Method for detecting a mass density image of an object
Wernick, Miles N [Chicago, IL]; Yang, Yongyi [Westmont, IL]
2008-12-23
A method for detecting a mass density image of an object. An x-ray beam is transmitted through the object and a transmitted beam is emitted from the object. The transmitted beam is directed at an angle of incidence upon a crystal analyzer. A diffracted beam is emitted from the crystal analyzer onto a detector and digitized. A first image of the object is detected from the diffracted beam emitted from the crystal analyzer when positioned at a first angular position. A second image of the object is detected from the diffracted beam emitted from the crystal analyzer when positioned at a second angular position. A refraction image is obtained and a regularized mathematical inversion algorithm is applied to the refraction image to obtain a mass density image.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michel, D. T.; Davis, A. K.; Armstrong, W.; ...
2015-07-08
Self-emission x-ray shadowgraphy provides a method to measure the ablation-front trajectory and low-mode nonuniformity of a target imploded by directly illuminating a fusion capsule with laser beams. The technique uses time-resolved images of soft x-rays (> 1 keV) emitted from the coronal plasma of the target imaged onto an x-ray framing camera to determine the position of the ablation front. Methods used to accurately measure the ablation-front radius (δR = ±1.15 μm), image-to-image timing (δ(Δt) = ±2.5 ps) and absolute timing (δt = ±10 ps) are presented. Angular averaging of the images provides an average radius measurement of δ(R_av) = ±0.15 μm and an error in velocity of δV/V = ±3%. This technique was applied on the Omega Laser Facility and the National Ignition Facility.
Chen, Lidong; Basu, Anup; Zhang, Maojun; Wang, Wei; Liu, Yu
2014-03-20
A complementary catadioptric imaging technique was proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. To extend this research, our paper focuses on how to generate a high-resolution panoramic image from the captured omnidirectional image. To avoid the interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, in view of the complementarity of sampling resolution in the tangential and radial directions between the inner and the outer images, the horizontal gradients in the expected panoramic image are estimated based on the scattered neighboring pixels mapped from the outer image, while the vertical gradients are estimated using the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels in the next interpolation step of kernel regression are also selected based on the comparison between the horizontal and vertical gradients. In simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of both visual quality and objective evaluation.
Ghost detection and removal based on super-pixel grouping in exposure fusion
NASA Astrophysics Data System (ADS)
Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun
2014-09-01
A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts will be introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosts in the image sequences. We introduce the zero-mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and the super-pixels which have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown on conventional display devices directly, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high-quality images which have fewer ghost artifacts and provide a better visual quality than previous approaches.
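The ZNCC similarity at the heart of the ghost detection step can be sketched as below; rectangular blocks stand in for super-pixels here, and the correlation threshold is an assumed parameter.

```python
import numpy as np

def zncc(a, b, eps=1e-12):
    """Zero-mean normalized cross correlation between two equally sized regions."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def ghost_weight_map(image, reference, block=32, threshold=0.6):
    """Binary weight map: 0 where a block correlates poorly with the reference exposure.

    Blocks replace super-pixels in this sketch; in the paper the grouping follows
    image structure instead of a regular grid. ZNCC is insensitive to the overall
    brightness change between exposures, which is why it is a useful similarity here.
    """
    h, w = reference.shape[:2]
    weights = np.ones((h, w))
    for r in range(0, h, block):
        for c in range(0, w, block):
            win = (slice(r, min(r + block, h)), slice(c, min(c + block, w)))
            if zncc(image[win], reference[win]) < threshold:
                weights[win] = 0.0    # likely a moving object: exclude from fusion
    return weights
```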
New presentation method for magnetic resonance angiography images based on skeletonization
NASA Astrophysics Data System (ADS)
Nystroem, Ingela; Smedby, Orjan
2000-04-01
Magnetic resonance angiography (MRA) images are usually presented as maximum intensity projections (MIP), and the choice of viewing direction is then critical for the detection of stenoses. We propose a presentation method that uses skeletonization and distance transformations, which visualizes variations in vessel width independent of viewing direction. In the skeletonization, the object is reduced to a surface skeleton and further to a curve skeleton. The skeletal voxels are labeled with their distance to the original background. For the curve skeleton, the distance values correspond to the minimum radius of the object at that point, i.e., half the minimum diameter of the blood vessel at that level. The following image processing steps are performed: resampling to cubic voxels, segmentation of the blood vessels, skeletonization, and reverse distance transformation on the curve skeleton. The reconstructed vessels may be visualized with any projection method. Preliminary results are shown. They indicate that locations of possible stenoses may be identified by presenting the vessels as a structure with the minimum radius at each point.
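A minimal sketch of the skeletonization and distance-labeling steps using SciPy and scikit-image is shown below; a simple intensity threshold stands in for a real vessel segmentation, and the reverse distance transform / rendering step is omitted. On older scikit-image releases the 3D thinning is exposed as skeletonize_3d.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def vessel_curve_skeleton_radii(mra_volume, threshold):
    """Curve skeleton of a segmented vessel tree, labeled with the local radius.

    mra_volume : 3D array of (resampled, isotropic) MRA intensities.
    threshold  : intensity threshold used as a stand-in for a real vessel segmentation.
    Returns (skeleton_mask, radius_map), where radius_map holds, on skeleton
    voxels, the distance to the background, i.e. half the local vessel diameter.
    """
    vessels = mra_volume > threshold
    skeleton = skeletonize(vessels)                  # 3D thinning to a curve skeleton
    dist_to_background = distance_transform_edt(vessels)
    radius_map = np.where(skeleton, dist_to_background, 0.0)
    return skeleton, radius_map
```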
Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C; Shen, Dinggang
2016-07-01
Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-T brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to state of the art.
Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
Fully Convolutional Architecture for Low-Dose CT Image Noise Reduction
NASA Astrophysics Data System (ADS)
Badretale, S.; Shaker, F.; Babyn, P.; Alirezaie, J.
2017-10-01
One of the critical topics in medical low-dose computed tomography (CT) imaging is how best to maintain image quality. As the quality of images decreases with lowering of the X-ray radiation dose, improving image quality is extremely important and challenging. We have proposed a novel approach to denoise low-dose CT images. Our algorithm directly learns an end-to-end mapping from low-dose CT images to denoised, normal-dose-like CT images. Our method is based on a deep convolutional neural network with rectified linear units. By learning various low-level to high-level features from a low-dose image, the proposed algorithm is capable of creating a high-quality denoised image. We demonstrate the superiority of our technique by comparing the results with two other state-of-the-art methods in terms of the peak signal-to-noise ratio, root mean square error, and a structural similarity index.
Systems and methods for thermal imaging technique for measuring mixing of fluids
Booten, Charles; Tomerlin, Jeff; Winkler, Jon
2016-06-14
Systems and methods for thermal imaging for measuring mixing of fluids are provided. In one embodiment, a method for measuring mixing of gaseous fluids using thermal imaging comprises: positioning a thermal test medium parallel to the direction of gaseous fluid flow from an outlet vent of a momentum source, wherein when the source is operating, the fluid flows across a surface of the medium; obtaining an ambient temperature value from a baseline thermal image of the surface; obtaining at least one operational thermal image of the surface when the fluid is flowing from the outlet vent across the surface, wherein the fluid has a temperature different than the ambient temperature; and calculating at least one temperature-difference fraction associated with at least a first position on the surface based on a difference between temperature measurements obtained from the at least one operational thermal image and the ambient temperature value.
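One plausible form of the temperature-difference fraction is sketched below; the normalization by the supply-air temperature is an assumption, as the text only states that the fraction is based on the difference between the operational and ambient temperatures.

```python
import numpy as np

def temperature_difference_fraction(operational_frame, ambient_temp, supply_temp):
    """Per-pixel temperature-difference fraction on the test surface.

    Assumed form: f = (T - T_ambient) / (T_supply - T_ambient), so f approaches 1
    where unmixed supply air washes the surface and 0 where it is fully mixed with
    ambient air. The supply-temperature normalization is an assumption, not taken
    from the patent text.
    """
    t = np.asarray(operational_frame, dtype=float)
    return (t - ambient_temp) / (supply_temp - ambient_temp)
```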
Wear Detection of Drill Bit by Image-based Technique
NASA Astrophysics Data System (ADS)
Sukeri, Maziyah; Zulhilmi Paiz Ismadi, Mohd; Rahim Othman, Abdul; Kamaruddin, Shahrul
2018-03-01
Image processing for computer vision plays an essential role in the manufacturing industries for tool condition monitoring. This study proposes a dependable direct measurement method to measure tool wear using image-based analysis. Segmentation and thresholding techniques were used to filter and convert the colour image to binary data. Then, an edge detection method was applied to characterize the edge of the drill bit. Using a cross-correlation method, the edges of the original and worn drill bits were correlated with each other. The cross-correlation graphs were able to detect the worn edge despite only small differences between the graphs. Future development will focus on quantifying the worn profile as well as enhancing the sensitivity of the technique.
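A minimal sketch of the described pipeline (thresholding, edge detection, cross-correlation of the resulting edge maps) could look as follows, assuming OpenCV; the Otsu/Canny parameter choices and the flattened normalized correlation are illustrative stand-ins rather than the paper's exact procedure.

```python
import cv2
import numpy as np

def edge_map(image_path):
    """Binarize a drill-bit image (Otsu threshold) and return its Canny edge map."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.Canny(binary, 50, 150)

def edge_correlation(edges_ref, edges_worn):
    """Normalized cross-correlation between two edge maps of equal size."""
    a = edges_ref.astype(np.float64).ravel() - edges_ref.mean()
    b = edges_worn.astype(np.float64).ravel() - edges_worn.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0
```

A worn bit would show up as a correlation value that drops below the value obtained for a pristine reference pair.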
Siegert, F; Weijer, C J; Nomura, A; Miike, H
1994-01-01
We describe the application of a novel image processing method, which allows quantitative analysis of cell and tissue movement in a series of digitized video images. The result is a vector velocity field showing average direction and velocity of movement for every pixel in the frame. We apply this method to the analysis of cell movement during different stages of the Dictyostelium developmental cycle. We analysed time-lapse video recordings of cell movement in single cells, mounds and slugs. The program can correctly assess the speed and direction of movement of either unlabelled or labelled cells in a time series of video images depending on the illumination conditions. Our analysis of cell movement during multicellular development shows that the entire morphogenesis of Dictyostelium is characterized by rotational cell movement. The analysis of cell and tissue movement by the velocity field method should be applicable to the analysis of morphogenetic processes in other systems such as gastrulation and neurulation in vertebrate embryos.
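The paper's velocity-field algorithm is not reproduced here, but the idea of assigning an average direction and speed to each image region can be conveyed with a simple block-matching estimate between consecutive frames; the block size, search range and sum-of-squared-differences criterion below are assumptions, not the published method.

```python
import numpy as np

def velocity_field(frame_a, frame_b, block=16, search=4, dt=1.0):
    """Coarse velocity field between two frames by block matching.

    For each block in frame_a, the displacement within +/- `search` pixels
    that best matches frame_b (minimum sum of squared differences) is taken
    as the per-block motion, giving a velocity vector per region.
    """
    h, w = frame_a.shape
    vy = np.zeros((h // block, w // block))
    vx = np.zeros_like(vy)
    for bi in range(h // block):
        for bj in range(w // block):
            y0, x0 = bi * block, bj * block
            ref = frame_a[y0:y0 + block, x0:x0 + block].astype(np.float64)
            best, best_dy, best_dx = np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ys, xs = y0 + dy, x0 + dx
                    if ys < 0 or xs < 0 or ys + block > h or xs + block > w:
                        continue
                    cand = frame_b[ys:ys + block, xs:xs + block].astype(np.float64)
                    ssd = np.sum((ref - cand) ** 2)
                    if ssd < best:
                        best, best_dy, best_dx = ssd, dy, dx
            vy[bi, bj], vx[bi, bj] = best_dy / dt, best_dx / dt
    return vy, vx
```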
NASA Astrophysics Data System (ADS)
Torre, Gabriele; Schwartz, Richard; Piana, Michele; Massone, Anna Maria; Benvenuto, Federico
2016-05-01
The fine spatial resolution of the SDO AIA CCDs is often destroyed by the charge in saturated pixels overflowing into a swath of neighboring cells during fast-rising solar flares. Automated exposure control can only mitigate this issue to a degree, and it has other deleterious effects. Our method addresses the desaturation problem for AIA images as an image reconstruction problem in which the information content of the diffraction fringes, generated by the interaction between the incoming radiation and the hardware of the spacecraft, is exploited to recover the true image intensities within the primary saturated core of the image. This methodology takes advantage of well-defined techniques such as cross-correlation and the Expectation Maximization method to invert the direct relation between the diffraction fringe intensities and the true flux intensities. In this talk, a complete overview of the structure of the method will be provided, along with some reliability tests obtained by applying it to synthetic and real data.
Advanced Imaging Methods for Long-Baseline Optical Interferometry
NASA Astrophysics Data System (ADS)
Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.
2008-11-01
We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion, while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired by radio astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called "soft support constraint" that favors the object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.
Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David
2013-01-01
Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image quality sharper by preserving the edges or boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
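The "linear-time shrinkage operation" referred to above is the element-wise soft-thresholding operator, i.e., the proximal map of the l1 norm; a minimal version (with an assumed function name) is:

```python
import numpy as np

def shrink(x, threshold):
    """Element-wise soft-thresholding: sign(x) * max(|x| - threshold, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)
```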
Pollen Image Recognition Based on DGDB-LBP Descriptor
NASA Astrophysics Data System (ADS)
Han, L. P.; Xie, Y. H.
2018-01-01
In this paper, we propose DGDB-LBP, a local binary pattern descriptor based on pixel blocks in the dominant gradient direction. Differing from traditional LBP and its variants, DGDB-LBP encodes by comparing the main gradient magnitude of each block rather than the single pixel value or the average of pixel blocks; in doing so, it reduces the influence of noise on pollen images and eliminates redundant and non-informative features. In order to fully describe the texture features of pollen images and analyze them at multiple scales, we propose a new sampling strategy, which uses three types of operators to extract the radial, angular and multiple texture features under different scales. Considering that pollen images have some degree of rotation under the microscope, we propose an adaptive encoding direction, which is determined by the texture distribution of the local region. Experimental results on the Pollenmonitor dataset show that the average correct recognition rate of our method is superior to other pollen recognition methods of recent years.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maehana, W; Yokohama National University, Yokohama, Kanagawa; Nagao, T
Purpose: For image guided radiation therapy (IGRT), the shadows caused by the construction of the treatment couch top adversely affect the visual evaluation. Therefore, we developed a new imaging filter in order to remove the shadows. The performance of the new filter was evaluated using clinical images. Methods: The new filter was composed of a band-pass filter (BPF) weighted by the k factor and a low-pass filter (LPF). In the frequency region, the stop bandwidths were 8.3×10³ mm⁻¹ in the u direction and 11.1×10³ mm⁻¹ in the v direction for the BPF, and the pass bandwidths were 8.3×10³ mm⁻¹ in the u direction and 11.1×10³ mm⁻¹ in the v direction for the LPF. After applying the filter, the shadows from the carbon fiber grid table top (CFGTT, Varian) on the kV image were removed. To check the filter effect, we compared clinical images of the thorax and thoracoabdominal region with and without the filter. A subjective evaluation test was performed using a three-point scale (agree, neither agree nor disagree, disagree) with 15 persons in the department of radiation oncology. Results: We succeeded in removing all shadows of the CFGTT using the new filter. The filter was shown to be very useful by the results of the subjective evaluation, with agreement in 23/30 of the ratings of the filtered clinical images. Conclusion: We conclude that the proposed method is a useful tool for IGRT and that the new filter leads to improved accuracy of radiation therapy.
NASA Astrophysics Data System (ADS)
Schlueter, S.; Sheppard, A.; Wildenschild, D.
2013-12-01
Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies exist to date that validate the uniqueness of the Pc-Sw-Anw relationship under static conditions, and with current technological progress direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, like merging different scans of the same sample obtained at different beam energies into a single image or the generation of isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods and (iii) methods of structural analysis we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably. In a synthetic test image, some local segmentation methods like Bayesian Markov random field, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires a lot of postprocessing in order to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals, which is highly efficient and less error prone.
Golden-ratio rotated stack-of-stars acquisition for improved volumetric MRI.
Zhou, Ziwu; Han, Fei; Yan, Lirong; Wang, Danny J J; Hu, Peng
2017-12-01
To develop and evaluate an improved stack-of-stars radial sampling strategy for reducing streaking artifacts. The conventional stack-of-stars sampling strategy collects the same radial angle for every partition (slice) encoding. In an undersampled acquisition, such an aligned acquisition generates coherent aliasing patterns and introduces strong streaking artifacts. We show that by rotating the radial spokes in a golden-angle manner along the partition-encoding direction, the aliasing pattern is modified, resulting in improved image quality for gridding and more advanced reconstruction methods. Computer simulations were performed and phantom as well as in vivo images for three different applications were acquired. Simulation, phantom, and in vivo experiments confirmed that the proposed method was able to generate images with less streaking artifact and sharper structures based on undersampled acquisitions in comparison with the conventional aligned approach at the same acceleration factors. By combining parallel imaging and compressed sensing in the reconstruction, streaking artifacts were mostly removed with improved delineation of fine structures using the proposed strategy. We present a simple method to reduce streaking artifacts and improve image quality in 3D stack-of-stars acquisitions by re-arranging the radial spoke angles in the 3D partition direction, which can be used for rapid volumetric imaging. Magn Reson Med 78:2290-2298, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
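One plausible way to generate the rotated spoke angles is sketched below, assuming a golden-angle increment of about 111.25° from one partition to the next; the exact increment and angular convention used in the paper may differ.

```python
import numpy as np

GOLDEN_ANGLE = np.pi * (np.sqrt(5.0) - 1.0) / 2.0   # ~111.25 degrees, in radians

def stack_of_stars_angles(n_spokes, n_partitions, rotate=True):
    """Spoke angles (radians) per partition of a stack-of-stars acquisition.

    rotate=False reproduces the conventional aligned scheme in which every
    partition shares the same uniformly spaced angles; rotate=True offsets
    each partition by a golden-angle increment so that aliasing becomes
    incoherent along the partition-encoding direction.
    """
    base = np.arange(n_spokes) * np.pi / n_spokes               # uniform spokes over 180 degrees
    offsets = np.arange(n_partitions) * GOLDEN_ANGLE if rotate else np.zeros(n_partitions)
    return (base[None, :] + offsets[:, None]) % np.pi           # shape (n_partitions, n_spokes)
```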
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolski, M., E-mail: marcin.wolski@curtin.edu.au; Podsiadlo, P.; Stachowiak, G. W.
Purpose: To develop directional fractal signature methods for the analysis of trabecular bone (TB) texture in hand radiographs. Problems associated with the small size of hand bones and the orientation of fingers were addressed. Methods: An augmented variance orientation transform (AVOT) method and a quadrant rotating grid (QRG) method were developed. The methods calculate fractal signatures (FSs) in different directions. Unlike other methods, they have the search region adjusted according to the size of the bone region of interest (ROI) to be analyzed and they produce FSs defined with respect to any chosen reference direction, i.e., they work for arbitrary orientation of fingers. Five parameters at scales ranging from 2 to 14 pixels (depending on image size and method) were derived from rose plots of Hurst coefficients, i.e., FS in the dominating roughness (FS_Sta), vertical (FS_V) and horizontal (FS_H) directions, aspect ratio (StrS), and direction signature (StdS), respectively. The accuracy in measuring surface roughness and isotropy/anisotropy was evaluated using 3600 isotropic and 800 anisotropic fractal surface images of sizes between 20 × 20 and 64 × 64 pixels. The isotropic surfaces had FDs ranging from 2.1 to 2.9 in steps of 0.1, and the anisotropic surfaces had two dominating directions of 30° and 120°. The methods were used to find differences in hand TB textures between 20 matched pairs of subjects with (cases: approximate Kellgren-Lawrence (KL) grade ≥2) and without (controls: approximate KL grade <2) radiographic hand osteoarthritis (OA). The OA Initiative public database was used and 20 × 20 pixel bone ROIs were selected on 5th distal and middle phalanges. The performance of the AVOT and QRG methods was compared against a variance orientation transform (VOT) method developed earlier [M. Wolski, P. Podsiadlo, and G. W. Stachowiak, “Directional fractal signature analysis of trabecular bone: evaluation of different methods to detect early osteoarthritis in knee radiographs,” Proc. Inst. Mech. Eng., Part H 223, 211–236 (2009)]. Results: The AVOT method correctly quantified the isotropic and anisotropic surfaces for all image sizes and scales. Values of FS_Sta were significantly different (P < 0.05) between the isotropic surfaces. Using the VOT and QRG methods, no differences were found at large scales for the isotropic surfaces that are smaller than 64 × 64 and 48 × 48 pixels, respectively, and at some scales for the anisotropic surfaces with size 48 × 48 pixels. Compared to controls, using the AVOT and QRG methods the authors found that OA TB textures were less rough (P < 0.05) in the dominating and horizontal directions (i.e., lower FS_Sta and FS_H), rougher in the vertical direction (i.e., higher FS_V) and less anisotropic (i.e., higher StrS) than controls. No differences were found using the VOT method. Conclusions: The AVOT method is well suited for the analysis of bone texture in hand radiographs and it could be potentially useful for early detection and prediction of hand OA.
Longitudinally polarized shear wave optical coherence elastography (Conference Presentation)
NASA Astrophysics Data System (ADS)
Miao, Yusi; Zhu, Jiang; Qi, Li; Qu, Yueqiao; He, Youmin; Gao, Yiwei; Chen, Zhongping
2017-02-01
Shear wave measurement enables quantitative assessment of tissue viscoelasticity. In previous studies, a transverse shear wave was measured using optical coherence elastography (OCE), which gives poor resolution along the force direction because the shear wave propagates perpendicular to the applied force. In this study, for the first time to our knowledge, we introduce an OCE method to detect a longitudinally polarized shear wave that propagates along the force direction. The direction of vibration induced by a piezo transducer (PZT) is parallel to the direction of wave propagation, which is perpendicular to the OCT beam. A Doppler variance method is used to visualize the transverse displacement. Both homogeneous phantoms and a side-by-side two-layer phantom were measured. The elastic moduli from mechanical tests closely matched the values measured by the OCE system. Furthermore, we developed 3D computational models using finite element analysis to confirm the shear wave propagation in the longitudinal direction. The simulation shows that a longitudinally polarized shear wave is present as a plane wave in the near field of a planar source due to diffraction effects. This imaging technique provides a novel method for the assessment of elastic properties along the force direction, which can be especially useful for imaging layered tissue.
Method and apparatus for imaging a sample on a device
Trulson, Mark; Stern, David; Fiekowsky, Peter; Rava, Richard; Walton, Ian; Fodor, Stephen P. A.
2001-01-01
A method and apparatus for imaging a sample are provided. An electromagnetic radiation source generates excitation radiation which is sized by excitation optics to a line. The line is directed at a sample resting on a support and excites a plurality of regions on the sample. Collection optics collect response radiation reflected from the sample and image the reflected radiation. A detector senses the reflected radiation and is positioned to permit discrimination between radiation reflected from a certain focal plane in the sample and certain other planes within the sample.
Cell-free measurements of brightness of fluorescently labeled antibodies
Zhou, Haiying; Tourkakis, George; Shi, Dennis; Kim, David M.; Zhang, Hairong; Du, Tommy; Eades, William C.; Berezin, Mikhail Y.
2017-01-01
Validation of imaging contrast agents, such as fluorescently labeled imaging antibodies, has been recognized as a critical challenge in clinical and preclinical studies. As the number of applications for imaging antibodies grows, these materials are increasingly being subjected to careful scrutiny. Antibody fluorescent brightness is one of the key parameters that is of critical importance. Direct measurements of the brightness with common spectroscopy methods are challenging, because the fluorescent properties of the imaging antibodies are highly sensitive to the methods of conjugation, degree of labeling, and contamination with free dyes. Traditional methods rely on cell-based assays that lack reproducibility and accuracy. In this manuscript, we present a novel and general approach for measuring the brightness using antibody-avid polystyrene beads and flow cytometry. As compared to a cell-based method, the described technique is rapid, quantitative, and highly reproducible. The proposed method requires less than ten micrograms of sample and is applicable for optimizing synthetic conjugation procedures, testing commercial imaging antibodies, and performing high-throughput validation of conjugation procedures. PMID:28150730
Quantitative DIC microscopy using an off-axis self-interference approach.
Fu, Dan; Oh, Seungeun; Choi, Wonshik; Yamauchi, Toyohiko; Dorn, August; Yaqoob, Zahid; Dasari, Ramachandra R; Feld, Michael S
2010-07-15
Traditional Nomarski differential interference contrast (DIC) microscopy is a very powerful method for imaging nonstained biological samples. However, one of its major limitations is the nonquantitative nature of the imaging. To overcome this problem, we developed a quantitative DIC microscopy method based on off-axis sample self-interference. A digital holography algorithm is applied to obtain quantitative phase gradients in orthogonal directions, which leads to a quantitative phase image through a spiral integration of the phase gradients. This method is practically simple to implement on any standard microscope without stringent requirements on polarization optics. Optical sectioning can be obtained through an enlarged illumination NA.
Imaging of surface spin textures on bulk crystals by scanning electron microscopy
NASA Astrophysics Data System (ADS)
Akamine, Hiroshi; Okumura, So; Farjami, Sahar; Murakami, Yasukazu; Nishida, Minoru
2016-11-01
Direct observation of magnetic microstructures is vital for advancing spintronics and other technologies. Here we report a method for imaging surface domain structures on bulk samples by scanning electron microscopy (SEM). Complex magnetic domains, referred to as the maze state in CoPt/FePt alloys, were observed at a spatial resolution of less than 100 nm by using an in-lens annular detector. The method allows for imaging almost all the domain walls in the mazy structure, whereas the visualisation of the domain walls with the classical SEM method was limited. Our method provides a simple way to analyse surface domain structures in the bulk state that can be used in combination with SEM functions such as orientation or composition analysis. Thus, the method extends applications of SEM-based magnetic imaging, and is promising for resolving various problems at the forefront of fields including physics, magnetics, materials science, engineering, and chemistry.
Registration of 3D ultrasound computer tomography and MRI for evaluation of tissue correspondences
NASA Astrophysics Data System (ADS)
Hopp, T.; Dapp, R.; Zapf, M.; Kretzek, E.; Gemmeke, H.; Ruiter, N. V.
2015-03-01
3D Ultrasound Computer Tomography (USCT) is a new imaging method for breast cancer diagnosis. In the current state of development it is essential to correlate USCT with a known imaging modality like MRI to evaluate how different tissue types are depicted. Due to different imaging conditions, e.g. with the breast subject to buoyancy in USCT, a direct correlation is demanding. We present a 3D image registration method to reduce positioning differences and allow direct side-by-side comparison of USCT and MRI volumes. It is based on a two-step approach including a buoyancy simulation with a biomechanical model and free form deformations using cubic B-Splines for a surface refinement. Simulation parameters are optimized patient-specifically in a simulated annealing scheme. The method was evaluated with in-vivo datasets resulting in an average registration error below 5mm. Correlating tissue structures can thereby be located in the same or nearby slices in both modalities and three-dimensional non-linear deformations due to the buoyancy are reduced. Image fusion of MRI volumes and USCT sound speed volumes was performed for intuitive display. By applying the registration to data of our first in-vivo study with the KIT 3D USCT, we could correlate several tissue structures in MRI and USCT images and learn how connective tissue, carcinomas and breast implants observed in the MRI are depicted in the USCT imaging modes.
Image processing enhancement of high-resolution TEM micrographs of nanometer-size metal particles
NASA Technical Reports Server (NTRS)
Artal, P.; Avalos-Borja, M.; Soria, F.; Poppa, H.; Heinemann, K.
1989-01-01
The high-resolution TEM detectability of lattice fringes from metal particles supported on substrates is impeded by the substrate itself. Singular value decomposition (SVD) and Fourier filtering (FFT) methods were applied to standard high resolution micrographs to enhance lattice resolution from particles as well as from crystalline substrates. SVD produced good results for one direction of fringes, and it can be implemented as a real-time process. Fourier methods are independent of azimuthal directions and allow separation of particle lattice planes from those pertaining to the substrate, which makes it feasible to detect possible substrate distortions produced by the supported particle. This method, on the other hand, is more elaborate and requires more computer time than SVD, and it is therefore less likely to be used in real-time image processing applications.
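As an illustration of the Fourier-filtering idea (not the exact procedure of the paper), one can keep only a chosen lattice-fringe spot and its conjugate in the FFT of a micrograph and transform back; the spot position, mask radius and function name are assumptions.

```python
import numpy as np

def fourier_fringe_filter(image, center_frac, radius_frac=0.02):
    """Isolate one set of lattice fringes by masking its Fourier component.

    `center_frac` is the (row, col) offset of the fringe spot from the centre
    of the shifted Fourier plane, given as a fraction of the image size; a
    small circular region around that spot and its conjugate is kept and
    everything else is suppressed before transforming back.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    cy, cx = rows / 2.0, cols / 2.0
    sy, sx = cy + center_frac[0] * rows, cx + center_frac[1] * cols
    r = radius_frac * min(rows, cols)
    mask = ((yy - sy) ** 2 + (xx - sx) ** 2 <= r ** 2)
    # Keep the conjugate spot as well so that the filtered image stays real-valued.
    mask |= ((yy - (2 * cy - sy)) ** 2 + (xx - (2 * cx - sx)) ** 2 <= r ** 2)
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
```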
A Method of Poisson's Ratio Imaging Within a Material Part
NASA Technical Reports Server (NTRS)
Roth, Don J. (Inventor)
1994-01-01
The present invention is directed to a method of displaying the Poisson's ratio image of a material part. In the present invention, longitudinal data is produced using a longitudinal wave transducer and shear wave data is produced using a shear wave transducer. The respective data is then used to calculate the Poisson's ratio for the entire material part. The Poisson's ratio approximations are then used to display the data.
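Assuming the longitudinal and shear data are wave velocities, the pixel-wise Poisson's ratio follows from the standard isotropic-elasticity relation below; this is a generic sketch, not the computation as specified in the patent.

```python
import numpy as np

def poissons_ratio(v_longitudinal, v_shear):
    """Poisson's ratio map from co-registered longitudinal and shear wave speeds.

    Uses the isotropic relation nu = (VL^2 - 2*VS^2) / (2*(VL^2 - VS^2)),
    applied element-wise so that velocity images yield a Poisson's ratio image.
    """
    vl2 = np.asarray(v_longitudinal, dtype=np.float64) ** 2
    vs2 = np.asarray(v_shear, dtype=np.float64) ** 2
    return (vl2 - 2.0 * vs2) / (2.0 * (vl2 - vs2))
```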
Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.
Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué
2018-02-15
We present a novel method for characterizing in near real-time the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of the principle of operation, this method has the potential to circumvent biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost-effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we have conducted benchmark experiments whereby aerodynamic particle size distributions are obtained from several commercially-available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors, suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
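A measured settling velocity maps to an aerodynamic diameter through the Stokes relation for a unit-density sphere; the sketch below is a simplified stand-in for the authors' analysis (fixed air viscosity, no Cunningham slip correction, and an equal-weight rather than mass-weighted MMAD/GSD estimate).

```python
import numpy as np

MU_AIR = 1.81e-5      # dynamic viscosity of air at ~20 C, Pa*s
RHO_0 = 1000.0        # unit density, kg/m^3
G = 9.81              # gravitational acceleration, m/s^2

def aerodynamic_diameter(settling_velocity):
    """Aerodynamic diameter (m) from a measured settling velocity (m/s).

    Inverts the Stokes settling relation v = rho0 * d^2 * g / (18 * mu) for a
    unit-density sphere; the slip correction is neglected, which is only
    reasonable for particles well above ~1 micrometre.
    """
    v = np.asarray(settling_velocity, dtype=np.float64)
    return np.sqrt(18.0 * MU_AIR * v / (RHO_0 * G))

def mmad_and_gsd(diameters):
    """Median aerodynamic diameter and geometric standard deviation.

    Equal weighting per particle is assumed for simplicity; a true MMAD
    would weight each particle by its mass (proportional to d^3).
    """
    log_d = np.log(np.asarray(diameters, dtype=np.float64))
    return float(np.exp(np.median(log_d))), float(np.exp(np.std(log_d)))
```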
Liu, Liangbing; Tao, Chao; Liu, XiaoJun; Deng, Mingxi; Wang, Senhua; Liu, Jun
2015-10-19
Photoacoustic tomography is a promising and rapidly developing biomedical imaging methodology. It faces an increasingly urgent problem of reconstructing images from weak and noisy photoacoustic signals, owing to the high benefit of extending the imaging depth and decreasing the dose of laser exposure. Based on the time-domain characteristics of photoacoustic signals, a pulse decomposition algorithm is proposed to reconstruct a photoacoustic image from signals with low signal-to-noise ratio. In this method, a photoacoustic signal is decomposed as the weighted summation of a set of pulses in the time domain. Images are reconstructed from the weight factors, which are directly related to the optical absorption coefficient. Both simulation and experiment were conducted to test the performance of the method. Numerical simulations show that when the signal-to-noise ratio is -4 dB, the proposed method decreases the reconstruction error to about 17%, in comparison with the conventional back-projection method. Moreover, it can produce acceptable images even when the signal-to-noise ratio is decreased to -10 dB. Experiments show that, when the laser fluence level is low, the proposed method achieves a relatively clean image of a hair phantom with some well preserved pattern details. The proposed method demonstrates the imaging potential of photoacoustic tomography in expanding applications.
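The decomposition of a trace into a weighted sum of pulses can be illustrated with an ordinary least-squares fit against a pulse dictionary; the Gaussian-derivative pulse shape, the candidate arrival times and the plain least-squares solver below are assumptions rather than the authors' pulse model.

```python
import numpy as np

def pulse_decomposition(signal, t, pulse_centers, width):
    """Fit a 1-D photoacoustic trace as a weighted sum of template pulses.

    One template per candidate arrival time is placed in a dictionary and the
    weights are obtained by least squares; in the reconstruction it is the
    weights, not the raw samples, that would be back-projected into the image.
    """
    t = np.asarray(t, dtype=np.float64)
    basis = np.column_stack([
        -(t - tc) / width**2 * np.exp(-0.5 * ((t - tc) / width) ** 2)  # bipolar, N-shape-like pulse
        for tc in pulse_centers
    ])
    weights, *_ = np.linalg.lstsq(basis, np.asarray(signal, dtype=np.float64), rcond=None)
    return weights
```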
NASA Technical Reports Server (NTRS)
Duff, Michael J. B. (Editor); Siegel, Howard J. (Editor); Corbett, Francis J. (Editor)
1986-01-01
The conference presents papers on the architectures, algorithms, and applications of image processing. Particular attention is given to a very large scale integration system for image reconstruction from projections, a prebuffer algorithm for instant display of volume data, and an adaptive image sequence filtering scheme based on motion detection. Papers are also presented on a simple, direct practical method of sensing local motion and analyzing local optical flow, image matching techniques, and an automated biological dosimetry system.
Imaging Stem Cells Implanted in Infarcted Myocardium
Zhou, Rong; Acton, Paul D.; Ferrari, Victor A.
2008-01-01
Stem cell–based cellular cardiomyoplasty represents a promising therapy for myocardial infarction. Noninvasive imaging techniques would allow the evaluation of survival, migration, and differentiation status of implanted stem cells in the same subject over time. This review describes methods for cell visualization using several corresponding noninvasive imaging modalities, including magnetic resonance imaging, positron emission tomography, single-photon emission computed tomography, and bioluminescent imaging. Reporter-based cell visualization is compared with direct cell labeling for short- and long-term cell tracking. PMID:17112999
Tsuchiya, Masahiko; Mizutani, Koh; Funai, Yusuke; Nakamoto, Tatsuo
2016-02-01
Ultrasound-guided procedures may be easier to perform when the operator's eye axis, needle puncture site, and ultrasound image display form a straight line in the puncture direction. However, such methods have not been well tested in clinical settings because that arrangement is often impossible due to limited space in the operating room. We developed a wireless remote display system for ultrasound devices using a tablet computer (iPad Mini), which allows easy display of images at nearly any location chosen by the operator. We hypothesized that the in-line layout of ultrasound images provided by this system would allow for secure and quick catheterization of the radial artery. We enrolled first-year medical interns (n = 20) who had no prior experience with ultrasound-guided radial artery catheterization to perform the procedure using a short-axis out-of-plane approach with two different methods. With the conventional method, only the ultrasound machine, placed at the side of the patient's head across the targeted forearm, was utilized. With the tablet method, the ultrasound images were displayed on an iPad Mini positioned on the arm in alignment with the operator's eye axis and needle puncture direction. The success rate and time required for catheterization were compared between the two methods. The success rate was significantly higher (100 vs. 70%, P = 0.02) and the catheterization time significantly shorter (28.5 ± 7.5 vs. 68.2 ± 14.3 s, P < 0.001) with the tablet method as compared to the conventional method. An ergonomic straight arrangement of the image display is crucial for successful and quick completion of ultrasound-guided arterial catheterization. The present remote display system is a practical method for providing such an arrangement.
Wavefront control methods for high-contrast integral field spectroscopy
NASA Astrophysics Data System (ADS)
Groff, Tyler D.; Mejia Prada, Camilo; Cady, Eric; Rizzo, Maxime J.; Mandell, Avi; Gong, Qian; McElwain, Michael; Zimmerman, Neil; Saxena, Prabal; Guyon, Olivier
2017-09-01
Direct Imaging of exoplanets using a coronagraph has become a major field of research both on the ground and in space. Key to the science of direct imaging is the spectroscopic capabilities of the instrument, our ability to fit spectra, and understanding the composition of the observed planets. Direct imaging instruments generally use an integral field spectrograph (IFS), which encodes the spectrum into a two-dimensional image on the detector. This results in more efficient detection and characterization of targets, and the spectral information is critical to achieving detection limits below the speckle floor of the imager. The most mature application of these techniques is at more modest contrast ratios on ground-based telescopes, achieving approximately 5-6 orders of magnitude suppression. In space, where we are attempting to detect Earth-analogs, the contrast requirements are more severe and the IFS must be incorporated into the wavefront control loop to reach 1e-10 detection limits required for Earth-like planet detection. We present the objectives and application of IFS imagery for both a speckle control loop and post-processing of images. Results, tested methodologies, and the future work using the Coronagraphic High Angular Resolution Imaging Spectrograph (CHARIS) and the Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies (PISCES) at the JPL High Contrast Imaging Testbed are presented.
SIMULTANEOUS MULTISLICE MAGNETIC RESONANCE FINGERPRINTING WITH LOW-RANK AND SUBSPACE MODELING
Zhao, Bo; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A.; Wald, Lawrence L.; Setsompop, Kawin
2018-01-01
Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3x speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice. PMID:29060594
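For orientation, the variable-splitting/ADMM machinery named above takes the following textbook form on a simple l1-regularized least-squares problem; this is not the SMS-MRF reconstruction itself, and the matrix A, the weights and the iteration count are all illustrative.

```python
import numpy as np

def soft_threshold(v, k):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=100):
    """ADMM with variable splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Alternates an x-update (linear solve), a z-update (shrinkage) and a dual
    update, which is the same pattern used by more elaborate splittings.
    """
    n = A.shape[1]
    AtA = A.T @ A
    Atb = A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse every iteration
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return x
```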
A new axial smoothing method based on elastic mapping
NASA Astrophysics Data System (ADS)
Yang, J.; Huang, S. C.; Lin, K. P.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C. K.; Phelps, M. E.
1996-12-01
New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions but at the expense of reduced per plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes, and then the mapped images are smoothed axially to reduce the image noise level. Compared to those obtained by the conventional axial-directional smoothing method, the images by the new method have improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters with cutoff frequency = 0.5, 0.7, 1.0 × Nyquist frequency and a Ramp filter were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR) and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and indicated larger noise reduction or better image feature preservation (i.e., smaller EGGR) than by the conventional method.
Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling.
Bo Zhao; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin
2017-07-01
Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3× speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice.
Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.
Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce
2018-06-15
A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least-norm (LN) using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods. Copyright © 2018 Elsevier Inc. All rights reserved.
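The least-norm/Tikhonov idea at the core of the method can be sketched as a single-step dipole inversion in k-space; here B0 is assumed along the first array axis, the regularization weight is fixed rather than selected by the L-curve, and none of the paper's additional modelling is included.

```python
import numpy as np

def tikhonov_dipole_inversion(field, lam=0.05):
    """Minimal Tikhonov-regularized dipole inversion of a (scaled) total field map.

    The unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 is applied in k-space and
    the regularized inverse chi(k) = D * field(k) / (D^2 + lam) is returned
    in image space.  B0 is assumed along the first array axis.
    """
    nz, ny, nx = field.shape
    kz, ky, kx = np.meshgrid(np.fft.fftfreq(nz), np.fft.fftfreq(ny),
                             np.fft.fftfreq(nx), indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - np.where(k2 > 0, kz ** 2 / k2, 0.0)
    field_k = np.fft.fftn(field)
    chi_k = D * field_k / (D ** 2 + lam)
    return np.real(np.fft.ifftn(chi_k))
```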
An improved artifact removal in exposure fusion with local linear constraints
NASA Astrophysics Data System (ADS)
Zhang, Hai; Yu, Mali
2018-04-01
In exposure fusion, it is challenging to remove artifacts caused by camera motion and moving objects in the scene. An improved artifact removal method is proposed in this paper, which performs local linear adjustment in the artifact removal process. After determining a reference image, we first perform high-dynamic-range (HDR) deghosting to generate an intermediate image stack from the input image stack. Then, a linear Intensity Mapping Function (IMF) in each window is extracted based on the intensities of the intermediate and reference images and the intensity mean and variance of the reference image. Finally, with the extracted local linear constraints, we reconstruct a target image stack, which can be directly used for fusing a single HDR-like image. Experiments have been implemented, and the results demonstrate that the proposed method is robust and effective in removing artifacts, especially in the saturated regions of the reference image.
Identification of depth information with stereoscopic mammography using different display methods
NASA Astrophysics Data System (ADS)
Morikawa, Takamitsu; Kodera, Yoshie
2013-03-01
Stereoscopy in radiography was widely used in the late 1980s because it could capture complex structures in the human body, proving beneficial for diagnosis and screening. When radiologists observed the images stereoscopically, they usually needed to train their eyes in order to perceive the stereoscopic effect. However, with the development of three-dimensional (3D) monitors and their use in the medical field, such special eye training is no longer required. The question then arises as to whether there is any difference in recognizing depth information between conventional methods and a 3D monitor. We constructed a phantom and evaluated the difference in the capacity to identify depth information between the two methods. The phantom consists of acrylic steps and 3 mm diameter acrylic pillars on the top and bottom of each step. Seven observers viewed these images stereoscopically using the two display methods and were asked to judge the direction of the pillar that was on top. We compared the judged directions with the direction of the real pillar arranged on top, and calculated the percentage of correct answers (PCA). The results showed that the PCA obtained using the 3D monitor method was higher by about 5% than that obtained using the naked-eye method. This indicated that people could view images stereoscopically more precisely using the 3D monitor method than with conventional methods, like crossed- or parallel-eye viewing. We were thus able to estimate the difference in the capacity to identify depth information between the two display methods.
Invasive species change detection using artificial neural networks and CASI hyperspectral imagery
USDA-ARS?s Scientific Manuscript database
For monitoring and controlling the extent and intensity of an invasive species, a direct multi-date image classification method was applied in invasive species (saltcedar) change detection in the study area of Lovelock, Nevada. With multi-date Compact Airborne Spectrographic Imager (CASI) hyperspec...
Volumetric calibration of a plenoptic camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert
Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.
Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah
2015-01-01
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this remains a challenging task due to low-quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve the image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature between all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Huiqiang; Wu, Xizeng, E-mail: xwu@uabmc.edu, E-mail: tqxiao@sinap.ac.cn; Xiao, Tiqiao, E-mail: xwu@uabmc.edu, E-mail: tqxiao@sinap.ac.cn
Purpose: Propagation-based phase-contrast CT (PPCT) utilizes highly sensitive phase-contrast technology applied to x-ray microtomography. Performing phase retrieval on the acquired angular projections can enhance image contrast and enable quantitative imaging. In this work, the authors demonstrate the validity and advantages of a novel technique for high-resolution PPCT by using the generalized phase-attenuation duality (PAD) method of phase retrieval. Methods: A high-resolution angular projection data set of a fish head specimen was acquired with a monochromatic 60-keV x-ray beam. In one approach, the projection data were directly used for tomographic reconstruction. In two other approaches, the projection data were preprocessed by phase retrieval based on either the linearized PAD method or the generalized PAD method. The reconstructed images from all three approaches were then compared in terms of tissue contrast-to-noise ratio and spatial resolution. Results: The authors’ experimental results demonstrated the validity of the PPCT technique based on the generalized PAD-based method. In addition, the results show that the authors’ technique is superior to the direct PPCT technique as well as the linearized PAD-based PPCT technique in terms of their relative capabilities for tissue discrimination and characterization. Conclusions: This novel PPCT technique demonstrates great potential for biomedical imaging, especially for applications that require high spatial resolution and limited radiation exposure.
Finger-Vein Image Enhancement Using a Fuzzy-Based Fusion Method with Gabor and Retinex Filtering
Shin, Kwang Yong; Park, Young Ho; Nguyen, Dat Tien; Park, Kang Ryoung
2014-01-01
Because of the advantages of finger-vein recognition systems such as live detection and usage as bio-cryptography systems, they can be used to authenticate individual people. However, images of finger-vein patterns are typically unclear because of light scattering by the skin, optical blurring, and motion blurring, which can degrade the performance of finger-vein recognition systems. In response to these issues, a new enhancement method for finger-vein images is proposed. Our method is novel compared with previous approaches in four respects. First, the local and global features of the vein lines of an input image are amplified using Gabor filters in four directions and Retinex filtering, respectively. Second, the means and standard deviations in the local windows of the images produced after Gabor and Retinex filtering are used as inputs for the fuzzy rule and fuzzy membership function, respectively. Third, the optimal weights required to combine the two Gabor and Retinex filtered images are determined using a defuzzification method. Fourth, the use of a fuzzy-based method means that image enhancement does not require additional training data to determine the optimal weights. Experimental results using two finger-vein databases showed that the proposed method enhanced the accuracy of finger-vein recognition compared with previous methods. PMID:24549251
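A four-direction Gabor filter bank of the kind described can be sketched with OpenCV as follows; the kernel size, sigma, wavelength, and the max-over-orientations combination are illustrative choices, and the Retinex filtering and fuzzy fusion stages are omitted.

```python
import cv2
import numpy as np

def gabor_vein_response(gray, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Maximum response of a four-direction Gabor filter bank.

    Filters oriented at 0, 45, 90 and 135 degrees emphasise line-like vein
    structures; taking the per-pixel maximum keeps whichever orientation
    responds most strongly.  Parameter values are illustrative only.
    """
    gray = gray.astype(np.float32)
    responses = []
    for theta in np.arange(0.0, np.pi, np.pi / 4.0):
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0.0,
                                    ktype=cv2.CV_32F)
        kernel -= kernel.mean()                       # zero-mean so flat regions give no response
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return np.max(np.stack(responses, axis=0), axis=0)
```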
NASA Astrophysics Data System (ADS)
Carles, Guillem; Muyo, Gonzalo; van Hemert, Jano; Harvey, Andrew R.
2017-11-01
We demonstrate a multimode detection system in a scanning laser ophthalmoscope (SLO) that enables simultaneous operation in confocal, indirect, and direct modes to permit an agile trade between image contrast and optical sensitivity across the retinal field of view to optimize the overall imaging performance, enabling increased contrast in very wide-field operation. We demonstrate the method on a wide-field SLO employing a hybrid pinhole at its image plane, to yield a twofold increase in vasculature contrast in the central retina compared to its conventional direct mode while retaining high-quality imaging across a wide field of the retina, of up to 200 deg and 20 μm on-axis resolution.
Characterizing Cool Giant Planets in Reflected Light
NASA Technical Reports Server (NTRS)
Marley, Mark
2016-01-01
While the James Webb Space Telescope will detect and characterize extrasolar planets by transit and direct imaging, a new generation of telescopes will be required to detect and characterize extrasolar planets by reflected light imaging. NASA's WFIRST space telescope, now in development, will image dozens of cool giant planets at optical wavelengths and will obtain spectra for several of the best and brightest targets. This mission will pave the way for the detection and characterization of terrestrial planets by the planned LUVOIR or HabEx space telescopes. In my presentation I will discuss the challenges that arise in the interpretation of direct imaging data and present the results of our group's effort to develop methods for maximizing the science yield from these planned missions.
NASA Astrophysics Data System (ADS)
Rice, Ashley; Oprisan, Ana; Oprisan, Sorinel; Rice-Oprisan College of Charleston Team
Nanoparticles of iron oxide have a high surface area and can be controlled by an external magnetic field. Since they have a fast response to the applied magnetic field, these systems have been used for numerous in vivo applications, such as MRI contrast enhancement, tissue repair, immunoassay, detoxification of biological fluids, hyperthermia, drug delivery, and cell separation. We performed three direct imaging experiments in order to investigate the concentration-driven fluctuations using magnetic nanoparticles in the absence and in the presence of magnetic field. Our direct imaging experimental setup involved a glass cell filled with magnetic nanocolloidal suspension and water with the concentration gradient oriented against the gravitational field and a superluminescent diode (SLD) as the light source. Nonequilibrium concentration-driven fluctuations were recorded using a direct imaging technique. We used a dynamic structure factor algorithm for image processing in order to compute the structure factor and to find the power law exponents. We saw evidence of large concentration fluctuations and permanent magnetism. Further research will use the correlation time to approximate the diffusion coefficient for the free diffusion experiment. Funded by College of Charleston Department of Undergraduate Research and Creative Activities SURF grant.
Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image
Wen, Wei; Khatibi, Siamak
2017-01-01
Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, in which the fill factor is assumed known. However, the fill factor is kept as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images of two datasets. The results show that our method estimates the fill factor correctly with significant stability and accuracy from one single arbitrary image, according to the low standard deviation of the estimated fill factors from each of the images and for each camera. PMID:28335459
Direct imaging of small scatterers using reduced time dependent data
NASA Astrophysics Data System (ADS)
Cakoni, Fioralba; Rezac, Jacob D.
2017-06-01
We introduce qualitative methods for locating small objects using time dependent acoustic near field waves. These methods have reduced data collection requirements compared to typical qualitative imaging techniques. In particular, we only collect scattered field data in a small region surrounding the location from which an incident field was transmitted. The new methods are partially theoretically justified and numerical simulations demonstrate their efficacy. We show that these reduced data techniques give comparable results to methods which require full multistatic data and that these time dependent methods require less scattered field data than their time harmonic analogs.
Breast EIT using a new projected image reconstruction method with multi-frequency measurements.
Lee, Eunjung; Ts, Munkh-Erdene; Seo, Jin Keun; Woo, Eung Je
2012-05-01
We propose a new method to produce admittivity images of the breast for the diagnosis of breast cancer using electrical impedance tomography (EIT). Considering the anatomical structure of the breast, we designed an electrode configuration where current-injection and voltage-sensing electrodes are separated in such a way that internal current pathways are approximately along the tangential direction of an array of voltage-sensing electrodes. Unlike conventional EIT imaging methods where the number of injected currents is maximized to increase the total amount of measured data, current is injected only twice between two pairs of current-injection electrodes attached along the circumferential side of the breast. For each current injection, the induced voltages are measured from the front surface of the breast using as many voltage-sensing electrodes as possible. Although this electrode configuration allows us to measure induced voltages only on the front surface of the breast, they are more sensitive to an anomaly inside the breast since such an injected current tends to produce a more uniform internal current density distribution. Furthermore, the sensitivity of a measured boundary voltage between two equipotential lines on the front surface of the breast is improved since those equipotential lines are perpendicular to the primary direction of internal current streamlines. One should note that this novel data collection method is different from those of other frontal plane techniques such as the x-ray projection and T-scan imaging methods because we do not get any data on the plane that is perpendicular to the current flow. To reconstruct admittivity images using two measured voltage data sets, a new projected image reconstruction algorithm is developed. Numerical simulations demonstrate the frequency-difference EIT imaging of the breast. The results show that the new method is promising to accurately detect and localize small anomalies inside the breast.
MO-A-BRD-05: Evaluation of Composed Lung Ventilation with 4DCT and Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, K; Bayouth, J; Reinhardt, J
Purpose: Regional pulmonary function can be derived using four-dimensional computed tomography (4DCT) combined with deformable image registration. However, only the peak inhale and exhale phases have been used thus far, while lung ventilation during intermediate phases is not considered. In our previous work, we investigated the spatiotemporal heterogeneity of lung ventilation and its dependence on respiration effort. In this study, composed ventilation is introduced using all inspiration phases and compared to direct ventilation. Both methods are evaluated against Xe-CT derived ventilation. Methods: Using an in-house tissue volume preserving deformable image registration, Jacobian ventilation maps were computed from one inhale phase to the next and then composed from all inspiration steps, unlike the direct ventilation method, which computes from end expiration to end inspiration. The two methods were compared in both patients prior to RT and mechanically ventilated sheep subjects. In addition, they were assessed for correlation with Xe-CT derived ventilation in the sheep subjects. Annotated lung landmarks were used to evaluate the accuracy of the original and composed deformation fields. Results: After registration, the landmark distance for the composed deformation field was always higher than that for the direct deformation field (0IN to 100IN average in human: 1.03 vs 1.53, p=0.001, and in sheep: 0.80 vs 0.94, p=0.009), and both increased with longer phase interval. Direct and composed ventilation maps were similar in both sheep (gamma pass rate 87.6) and human subjects (gamma pass rate 71.9), and showed a consistent pattern from ventral to dorsal when compared to Xe-CT derived ventilation. The correlation coefficient between Xe-CT and composed ventilation was slightly better than for the direct method, but not significantly so (average 0.89 vs 0.85, p=0.135). Conclusion: Stricter breathing control in the sheep subjects may explain the higher similarity between direct and composed ventilation. When compared to Xe-CT ventilation, no significant difference was found for the composed method. NIH Grant: R01 CA166703.
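As a concrete illustration of the composition step described above, the sketch below chains per-phase displacement fields and takes the Jacobian determinant of the resulting transform. It is a minimal 2D sketch assuming displacement fields sampled on a regular grid; the function names and the use of SciPy interpolation are illustrative, and the tissue-volume-preserving registration itself is not reproduced.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_displacements(u1, v1, u2, v2):
    """Compose two 2D displacement fields: phase A->B followed by B->C.

    u*, v* are the row/column displacement components on a regular grid.
    The composed displacement is d_total(x) = d2(x + d1(x)) + d1(x).
    """
    rows, cols = np.indices(u1.shape).astype(float)
    r_warp, c_warp = rows + u1, cols + v1              # points reached by the first field
    u2_w = map_coordinates(u2, [r_warp, c_warp], order=1, mode='nearest')
    v2_w = map_coordinates(v2, [r_warp, c_warp], order=1, mode='nearest')
    return u1 + u2_w, v1 + v2_w

def jacobian_ventilation(u, v):
    """Jacobian determinant of x -> x + d(x); J - 1 approximates local volume change."""
    du_dr, du_dc = np.gradient(u)
    dv_dr, dv_dc = np.gradient(v)
    return (1.0 + du_dr) * (1.0 + dv_dc) - du_dc * dv_dr
```

For composed ventilation, the composition would be repeated across all consecutive inspiration phases before the Jacobian is evaluated, whereas the direct method applies the Jacobian to the single end-expiration-to-end-inspiration field.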
Single image super resolution algorithm based on edge interpolation in NSCT domain
NASA Astrophysics Data System (ADS)
Zhang, Mengqun; Zhang, Wei; He, Xinyu
2017-11-01
In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by the NSCT, and the directional sub-band coefficients of the transform domain are obtained. According to the scale factor, the high-frequency sub-band coefficients are amplified to the desired resolution by an interpolation method based on the edge direction. For high-frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to calculate a threshold value; coefficients below the threshold are classified as noise or signal according to the correlation among sub-bands of the same scale, and the noise is removed. An anisotropic diffusion filter is used to effectively enhance weak targets in low-contrast regions of the target and background. Finally, the low-frequency sub-band is amplified to the desired resolution by bilinear interpolation and combined with the high-frequency sub-band coefficients after de-noising and small-target enhancement, and the inverse NSCT is applied to obtain the image at the desired resolution. In order to verify the effectiveness of the proposed algorithm, the proposed algorithm and several common image reconstruction methods were used to test a synthetic image, a motion-blurred image and a hyperspectral image. The experimental results show that, compared with traditional single-frame super-resolution algorithms, the proposed algorithm obtains smooth edges and good texture features; the reconstructed image structure is well preserved and the noise is suppressed to some extent.
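The Bayesian shrinkage step mentioned above is not specified in detail in the abstract; the sketch below is a generic BayesShrink-style threshold with soft shrinkage, under the usual assumption that the noise level is estimated from a fine-scale diagonal detail sub-band with the robust median estimator. Function and parameter names are illustrative.

```python
import numpy as np

def bayes_shrink_threshold(band, noise_band):
    """BayesShrink-style threshold for one high-frequency sub-band.

    noise_band: a fine-scale diagonal detail sub-band; the noise standard
    deviation is estimated as sigma_n = median(|c|) / 0.6745.
    """
    sigma_n = np.median(np.abs(noise_band)) / 0.6745
    sigma_y2 = np.mean(band ** 2)                           # observed coefficient variance
    sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 1e-12))  # estimated signal std
    return sigma_n ** 2 / sigma_x

def soft_threshold(band, t):
    """Shrink coefficients toward zero; magnitudes below t are treated as noise."""
    return np.sign(band) * np.maximum(np.abs(band) - t, 0.0)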
Carboranylporphyrins and uses thereof
Miura, Michiko; Renner, Mark W
2012-10-16
The present invention is directed to low toxicity boronated compounds and methods for their use in the treatment, visualization, and diagnosis of tumors. More specifically, the present invention is directed to low toxicity carborane-containing porphyrin compounds with halide, amine, or nitro groups and methods for their use particularly in boron neutron capture therapy (BNCT), X-ray radiation therapy (XRT), and photodynamic therapy (PDT) for the treatment of tumors of the brain, head and neck, and surrounding tissue. The invention is also directed to using these carborane-containing porphyrin compounds in methods of tumor imaging and/or diagnosis such as MRI, SPECT, or PET.
Carbonylporphyrins and uses thereof
Miura, Michiko; Renner, Mark W
2014-03-18
The present invention is directed to low toxicity boronated compounds and methods for their use in the treatment, visualization, and diagnosis of tumors. More specifically, the present invention is directed to low toxicity carborane-containing porphyrin compounds with halide, amine, or nitro groups and methods for their use particularly in boron neutron capture therapy (BNCT), X-ray radiation therapy (XRT), and photodynamic therapy (PDT) for the treatment of tumors of the brain, head and neck, and surrounding tissue. The invention is also directed to using these carborane-containing porphyrin compounds in methods of tumor imaging and/or diagnosis such as MRI, SPECT, or PET.
Symmetric and asymmetric halogen-containing metallocarboranylporphyrins and uses thereof
Miura, Michiko; Wu, Haitao
2013-05-21
The present invention is directed to low toxicity boronated compounds and methods for their use in the treatment, visualization, and diagnosis of tumors. More specifically, the present invention is directed to low toxicity halogenated, carborane-containing 5,10,15,20-tetraphenylporphyrin compounds and methods for their use particularly in boron neutron capture therapy (BNCT) and photodynamic therapy (PDT) for the treatment of tumors of the brain, head and neck, and surrounding tissue. The invention is also directed to using these halogenated, carborane-containing tetraphenylporphyrin compounds in methods of tumor imaging and/or diagnosis such as MRI, SPECT, or PET.
Gregl, A
1991-06-01
The indication for direct lymphography has shown a downward tendency over the past forty years, mainly because of new alternative modern imaging methods. Nevertheless, in agreement with the current literature, our own investigations of 8000 patients from 1964 to 1989 show that lymphography cannot be abandoned entirely. In principle, lymphography is still carried out in cases of testicular tumors, malignant lymphomas, fever of unknown origin, lymphatic vessel injury and, optionally, peripheral lymphedema.
Wavefront measurement using computational adaptive optics.
South, Fredrick A; Liu, Yuan-Zhi; Bower, Andrew J; Xu, Yang; Carney, P Scott; Boppart, Stephen A
2018-03-01
In many optical imaging applications, it is necessary to correct for aberrations to obtain high-quality images. Optical coherence tomography (OCT) provides access to the amplitude and phase of the backscattered optical field for three-dimensional (3D) imaging of samples. Computational adaptive optics (CAO) modifies the phase of the OCT data in the spatial frequency domain to correct optical aberrations without using a deformable mirror, as is commonly done in hardware-based adaptive optics (AO). This provides improvement of image quality throughout the 3D volume, enabling imaging across greater depth ranges and in highly aberrated samples. However, the CAO aberration correction has a complicated relation to the imaging pupil and is not a direct measurement of the pupil aberrations. Here we present new methods for recovering the wavefront aberrations directly from the OCT data without the use of hardware adaptive optics. This enables both computational measurement and correction of optical aberrations.
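As an illustration of the frequency-domain phase modification that CAO performs, the sketch below applies a pupil-phase correction to a single complex en face OCT plane. It is a minimal sketch with hypothetical names; the aberration estimate itself (the wavefront recovery addressed by this work) is assumed to be given, for example as a sum of Zernike polynomials sampled on the spatial-frequency grid.

```python
import numpy as np

def apply_cao_correction(enface_field, pupil_phase):
    """Apply a computational aberration correction to one complex en face OCT plane.

    enface_field: 2D complex array (amplitude and phase of the backscattered field).
    pupil_phase:  2D real array of estimated aberration phase (radians) on the
                  same spatial-frequency grid, centered (DC in the middle).
    """
    spectrum = np.fft.fftshift(np.fft.fft2(enface_field))
    corrected = spectrum * np.exp(-1j * pupil_phase)   # conjugate of the estimated aberration
    return np.fft.ifft2(np.fft.ifftshift(corrected))
```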
Morphology supporting function: attenuation correction for SPECT/CT, PET/CT, and PET/MR imaging
Lee, Tzu C.; Alessio, Adam M.; Miyaoka, Robert M.; Kinahan, Paul E.
2017-01-01
Both SPECT, and in particular PET, are unique in medical imaging for their high sensitivity and direct link to a physical quantity, i.e. radiotracer concentration. This gives PET and SPECT imaging unique capabilities for accurately monitoring disease activity for the purposes of clinical management or therapy development. However, to achieve a direct quantitative connection between the underlying radiotracer concentration and the reconstructed image values several confounding physical effects have to be estimated, notably photon attenuation and scatter. With the advent of dual-modality SPECT/CT, PET/CT, and PET/MR scanners, the complementary CT or MR image data can enable these corrections, although there are unique challenges for each combination. This review covers the basic physics underlying photon attenuation and scatter and summarizes technical considerations for multimodal imaging with regard to PET and SPECT quantification and methods to address the challenges for each multimodal combination. PMID:26576737
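As a brief reminder of the photon-attenuation physics referred to above: for PET, the attenuation experienced by a coincidence event is independent of where along the line of response (LOR) the annihilation occurred, so each LOR can be corrected by a single factor computed from the 511 keV attenuation map derived from the CT or MR data. A minimal statement of that correction, with μ₅₁₁ denoting the linear attenuation coefficient at 511 keV, is:

```latex
% Survival probability and attenuation correction factor for one PET line of response (LOR)
P_{\mathrm{surv}} = \exp\!\Big(-\!\int_{\mathrm{LOR}} \mu_{511}(x)\,\mathrm{d}x\Big),
\qquad
\mathrm{ACF} = P_{\mathrm{surv}}^{-1} = \exp\!\Big(\int_{\mathrm{LOR}} \mu_{511}(x)\,\mathrm{d}x\Big)
```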
Larkin, Kieran G; Fletcher, Peter A
2014-03-01
X-ray Talbot moiré interferometers can now simultaneously generate two differential phase images of a specimen. The conventional approach to integrating differential phase is unstable and often leads to images with loss of visible detail. We propose a new reconstruction method based on the inverse Riesz transform. The Riesz approach is stable and the final image retains visibility of high resolution detail without directional bias. The outline Riesz theory is developed and an experimentally acquired X-ray differential phase data set is presented for qualitative visual appraisal. The inverse Riesz phase image is compared with two alternatives: the integrated (quantitative) phase and the modulus of the gradient of the phase. The inverse Riesz transform has the computational advantages of a unitary linear operator, and is implemented directly as a complex multiplication in the Fourier domain also known as the spiral phase transform.
Larkin, Kieran G.; Fletcher, Peter A.
2014-01-01
X-ray Talbot moiré interferometers can now simultaneously generate two differential phase images of a specimen. The conventional approach to integrating differential phase is unstable and often leads to images with loss of visible detail. We propose a new reconstruction method based on the inverse Riesz transform. The Riesz approach is stable and the final image retains visibility of high resolution detail without directional bias. The outline Riesz theory is developed and an experimentally acquired X-ray differential phase data set is presented for qualitative visual appraisal. The inverse Riesz phase image is compared with two alternatives: the integrated (quantitative) phase and the modulus of the gradient of the phase. The inverse Riesz transform has the computational advantages of a unitary linear operator, and is implemented directly as a complex multiplication in the Fourier domain also known as the spiral phase transform. PMID:24688823
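The single complex multiplication mentioned above can be sketched as follows. This is a minimal sketch of the Fourier-domain spiral phase (Riesz) multiplication only; sign and normalization conventions, and any pre- or post-filtering used in the published method, may differ, and the function names are illustrative.

```python
import numpy as np

def spiral_phase_factor(shape):
    """Unit-magnitude spiral phase (Riesz) multiplier on the FFT frequency grid."""
    v = np.fft.fftfreq(shape[0])[:, None]
    u = np.fft.fftfreq(shape[1])[None, :]
    q = u + 1j * v
    q[0, 0] = 1.0                      # avoid division by zero at DC
    return q / np.abs(q)

def inverse_riesz(gx, gy):
    """Combine two orthogonal differential phase images into a single phase-like image.

    The pair is treated as a complex Riesz pair and multiplied by the conjugate
    spiral phase factor, i.e. one complex multiplication per Fourier coefficient.
    """
    riesz_pair = np.fft.fft2(gx + 1j * gy)
    out = riesz_pair * np.conj(spiral_phase_factor(gx.shape))
    return np.real(np.fft.ifft2(out))
```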
Magnetic resonance imaging of the subthalamic nucleus for deep brain stimulation.
Chandran, Arjun S; Bynevelt, Michael; Lind, Christopher R P
2016-01-01
The subthalamic nucleus (STN) is one of the most important stereotactic targets in neurosurgery, and its accurate imaging is crucial. With improving MRI sequences there is impetus for direct targeting of the STN. High-quality, distortion-free images are paramount. Image reconstruction techniques appear to show the greatest promise in balancing the issue of geometrical distortion and STN edge detection. Existing spin echo- and susceptibility-based MRI sequences are compared with new image reconstruction methods. Quantitative susceptibility mapping is the most promising technique for stereotactic imaging of the STN.
Sonoelastography in the musculoskeletal system: Current role and future directions.
Winn, Naomi; Lalam, Radhesh; Cassar-Pullicino, Victor
2016-11-28
Ultrasound is an essential modality within musculoskeletal imaging, with the recent addition of elastography. The elastic properties of tissues are different from the acoustic impedance used to create B mode imaging and the flow properties used within Doppler imaging, hence elastography provides a different form of tissue assessment. The current role of ultrasound elastography in the musculoskeletal system will be reviewed, in particular with reference to muscles, tendons, ligaments, joints and soft tissue tumours. The different ultrasound elastography methods currently available will be described, in particular strain elastography and shear wave elastography. Future directions of ultrasound elastography in the musculoskeletal system will also be discussed.
A hyperspectral image optimizing method based on sub-pixel MTF analysis
NASA Astrophysics Data System (ADS)
Wang, Yun; Li, Kai; Wang, Jinqiang; Zhu, Yajie
2015-04-01
Hyperspectral imaging is used to collect tens or hundreds of images continuously divided across the electromagnetic spectrum so that details under different wavelengths can be represented. A popular hyperspectral imaging method uses a tunable optical band-pass filter placed in front of the focal plane to acquire images at different wavelengths. In order to alleviate the influence of chromatic aberration in some segments of a hyperspectral series, a hyperspectral optimizing method that uses the sub-pixel MTF to evaluate image blurring is proposed in this paper. The method acquires the edge feature in the target window and uses the line spread function (LSF) to calculate a reliable position for the edge feature; the evaluation grid in each line is then interpolated from the real pixel values based on its relative position to the optimal edge, and the sub-pixel MTF is used to analyze the image in the frequency domain, which increases the MTF calculation dimension. The sub-pixel MTF evaluation is reliable, since no image rotation or pixel value estimation is needed and no artificial information is introduced. Theoretical analysis shows that the proposed method is reliable and efficient when evaluating common real-scene images with edges of small tilt angle. It also provides a direction for subsequent hyperspectral image blurring evaluation and real-time focal plane adjustment in related imaging systems.
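For context, the edge-based MTF evaluation described above follows the general ESF-to-LSF-to-MTF chain. The sketch below is a generic slanted-edge-style estimate under the assumption that sub-pixel edge positions per row are already available; the paper's exact interpolation of the evaluation grid is not reproduced, and the oversampling factor and windowing are illustrative choices.

```python
import numpy as np

def edge_mtf(roi, edge_positions, oversample=4):
    """Slanted-edge style MTF estimate from a region containing a near-vertical edge.

    roi:            2D array containing the edge.
    edge_positions: sub-pixel edge location (column) for each row.
    Builds an oversampled edge spread function (ESF) by binning pixels by their
    signed distance to the edge, differentiates it to the line spread function
    (LSF), and returns the normalized modulus of its Fourier transform (MTF).
    """
    n_rows, n_cols = roi.shape
    cols = np.arange(n_cols)
    dist = (cols[None, :] - edge_positions[:, None]).ravel()
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.maximum(np.bincount(bins), 1)
    esf = np.bincount(bins, weights=roi.ravel()) / counts
    lsf = np.gradient(esf)
    lsf *= np.hanning(lsf.size)            # window to reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / max(mtf[0], 1e-12)
```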
TH-EF-207A-05: Feasibility of Applying SMEIR Method On Small Animal 4D Cone Beam CT Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Y; Zhang, Y; Shao, Y
Purpose: Small animal cone beam CT imaging has been widely used in preclinical research. Due to the higher respiratory and heart rates of small animals, motion blurring is inevitable and needs to be corrected in the reconstruction. The simultaneous motion estimation and image reconstruction (SMEIR) method, which uses projection images of all phases, has proved effective in motion model estimation and able to reconstruct motion-compensated images. We demonstrate the application of SMEIR to small animal 4D cone beam CT imaging by computer simulations on a digital rat model. Methods: The small animal CBCT imaging system was simulated with a source-to-detector distance of 300 mm and a source-to-object distance of 200 mm. A sequence of rat phantoms was generated with 0.4 mm³ voxel size. The respiratory cycle was taken as 1.0 second and the motion was simulated with a diaphragm motion of 2.4 mm and an anterior-posterior expansion of 1.6 mm. The projection images were calculated using a ray-tracing method, and 4D-CBCT images were reconstructed using the SMEIR and FDK methods. The SMEIR method iterates over two alternating steps: 1) motion-compensated iterative image reconstruction using projections from all respiration phases, and 2) motion model estimation directly from projections through a 2D-3D deformable registration of the image obtained in the first step to the projection images of the other phases. Results: The images reconstructed using the SMEIR method reproduced the features in the original phantom. Projections from the same phase were also reconstructed using the FDK method. Compared with the FDK results, the images from the SMEIR method substantially improve the image quality with minimal artifacts. Conclusion: We demonstrate that it is viable to apply the SMEIR method to reconstruct small animal 4D-CBCT images.
High-Resolution Remote Sensing Image Building Extraction Based on Markov Model
NASA Astrophysics Data System (ADS)
Zhao, W.; Yan, L.; Chang, Y.; Gong, L.
2018-04-01
With the increase of resolution, remote sensing images have the characteristics of increased information load, increased noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper presents a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to individual buildings is realized. Experiments show that this method can restrain the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it achieves better precision, accuracy and completeness in building extraction.
Non-overlapped P- and S-wave Poynting vectors and their solution by the grid method
NASA Astrophysics Data System (ADS)
Lu, Yongming; Liu, Qiancheng
2018-06-01
The Poynting vector represents the local directional energy flux density of seismic waves in geophysics. It is widely used in elastic reverse time migration to analyze source illumination, suppress low-wavenumber noise, correct for image polarity and extract angle-domain common-image gathers. However, the P- and S-waves are mixed together during wavefield propagation so that the P and S energy fluxes are not clean everywhere, especially at the overlapped points. In this paper, we use a modified elastic-wave equation in which the P and S vector wavefields are naturally separated. Then, we develop an efficient method to evaluate the separable P and S Poynting vectors, respectively, based on the view that the group velocity and phase velocity have the same direction in isotropic elastic media. We furthermore formulate our method using an unstructured mesh-based modeling method named the grid method. Finally, we verify our method using two numerical examples.
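For reference, the elastic-wave energy flux referred to above is built from the stress tensor σ and the particle velocity v; once the P and S wavefields are separated, each contribution can be evaluated from its own stress and velocity. The paper's specific grid-method evaluation and the group/phase-velocity argument are not reproduced here.

```latex
% Elastic-wave Poynting vector (energy flux density), index notation and the
% separated P- and S-wave contributions:
P_j = -\,\sigma_{ij}\, v_i ,
\qquad
\mathbf{P}^{\mathrm{P}} = -\,\boldsymbol{\sigma}^{\mathrm{P}}\,\mathbf{v}^{\mathrm{P}},
\quad
\mathbf{P}^{\mathrm{S}} = -\,\boldsymbol{\sigma}^{\mathrm{S}}\,\mathbf{v}^{\mathrm{S}}
```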
Methods for magnetic resonance analysis using magic angle technique
Hu, Jian Zhi [Richland, WA; Wind, Robert A [Kennewick, WA; Minard, Kevin R [Kennewick, WA; Majors, Paul D [Kennewick, WA
2011-11-22
Methods of performing a magnetic resonance analysis of a biological object are disclosed that include placing the object in a main magnetic field (that has a static field direction) and in a radio frequency field; rotating the object at a frequency of less than about 100 Hz around an axis positioned at an angle of about 54°44' relative to the main magnetic static field direction; pulsing the radio frequency to provide a sequence that includes a phase-corrected magic angle turning pulse segment; and collecting data generated by the pulsed radio frequency. In particular embodiments the method includes pulsing the radio frequency to provide at least two of a spatially selective read pulse, a spatially selective phase pulse, and a spatially selective storage pulse. Further disclosed methods provide pulse sequences that provide extended imaging capabilities, such as chemical shift imaging or multiple-voxel data acquisition.
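For reference, the angle quoted above is the standard magic angle, the root of the second-order Legendre polynomial in cos θ:

```latex
% Magic angle: zero of P_2(\cos\theta) = (3\cos^{2}\theta - 1)/2
\theta_m = \arccos\!\left(\frac{1}{\sqrt{3}}\right) \approx 54.7356^{\circ} \approx 54^{\circ}44'
```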
High-speed video capillaroscopy method for imaging and evaluation of moving red blood cells
NASA Astrophysics Data System (ADS)
Gurov, Igor; Volkov, Mikhail; Margaryants, Nikita; Pimenov, Aleksei; Potemkin, Andrey
2018-05-01
A video capillaroscopy system with a high image recording rate, able to resolve red blood cells moving with velocities up to 5 mm/s in a capillary, is considered. The proposed procedures for processing the recorded video sequence allow evaluating the spatial capillary area, capillary diameter and central line with high accuracy and reliability, independently of the properties of the individual capillary. A two-dimensional inter-frame procedure is applied to find the lateral shift of neighboring images in the blood flow area containing moving red blood cells and to measure the blood flow velocity directly along the capillary central line. The developed method opens new opportunities for biomedical diagnostics, particularly through long-term continuous monitoring of red blood cell velocity in a capillary. A spatio-temporal representation of capillary blood flow is considered. Experimental results of direct measurement of blood flow velocity in a single capillary as well as in a capillary net are presented and discussed.
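As an illustration of the inter-frame shift measurement described above, the sketch below estimates the integer-pixel displacement between consecutive frames by phase correlation and converts it to a flow speed. It is a minimal sketch: the paper's exact correlation procedure and any sub-pixel refinement are not specified, and pixel_size_um and frame_rate_hz are illustrative parameters.

```python
import numpy as np

def frame_shift(prev, curr):
    """Integer-pixel shift between two frames estimated by phase correlation."""
    cross = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    dims = np.array(corr.shape, dtype=float)
    shift = np.array(peak, dtype=float)
    shift[shift > dims / 2] -= dims[shift > dims / 2]   # wrap to signed displacements
    return shift                                        # (rows, cols)

def flow_speed_mm_per_s(shift_px, pixel_size_um, frame_rate_hz):
    """Convert an inter-frame pixel shift to a blood-flow speed in mm/s."""
    return np.hypot(*shift_px) * pixel_size_um * frame_rate_hz / 1000.0
```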
NASA Astrophysics Data System (ADS)
Hamers, M. F.; Pennock, G. M.; Drury, M. R.
2017-04-01
The study of deformation features has been of great importance to determine deformation mechanisms in quartz. Relevant microstructures in both growth and deformation processes include dislocations, subgrains, subgrain boundaries, Brazil and Dauphiné twins and planar deformation features (PDFs). Dislocations and twin boundaries are most commonly imaged using a transmission electron microscope (TEM), because these cannot directly be observed using light microscopy, in contrast to PDFs. Here, we show that red-filtered cathodoluminescence imaging in a scanning electron microscope (SEM) is a useful method to visualise subgrain boundaries, Brazil and Dauphiné twin boundaries. Because standard petrographic thin sections can be studied in the SEM, the observed structures can be directly and easily correlated to light microscopy studies. In contrast to TEM preparation methods, SEM techniques are non-destructive to the area of interest on a petrographic thin section.
Yoon, Yeomin; Noh, Suwoo; Jeong, Jiseong; Park, Kyihwan
2018-05-01
The topology image is constructed from the 2D matrix (XY directions) of heights Z captured from the force-feedback loop controller. For small height variations, nonlinear effects such as hysteresis or creep of the PZT-driven Z nano scanner can be neglected and its calibration is quite straightforward. For large height variations, the linear approximation of the PZT-driven Z nano scanner fails and nonlinear behaviors must be considered, because they would cause inaccuracies in the measurement image. In order to avoid such inaccuracies, an additional strain gauge sensor is used to directly measure the displacement of the PZT-driven Z nano scanner. However, this approach also has the disadvantage of relatively low precision. In order to obtain high-precision data with good linearity, we propose a method that overcomes the low precision of the strain gauge while maintaining its good linearity. The topology image obtained from the strain gauge sensor is expected to show significant noise at high frequencies, whereas the topology image obtained from the controller output shows low noise at high frequencies. If the low- and high-frequency signals can be separated from both topology images, an image can be constructed that combines high accuracy and low noise. In order to separate the low frequencies from the high frequencies, a 2D Haar wavelet transform is used. Our proposed method uses the 2D wavelet transform to obtain good linearity from the strain gauge sensor and good precision from the controller output. The advantages of the proposed method are experimentally validated using topology images. Copyright © 2018 Elsevier B.V. All rights reserved.
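The low/high-frequency combination described above can be sketched with a standard 2D wavelet decomposition. This is a minimal sketch assuming the two images are co-registered and of equal size; the decomposition level and the PyWavelets API choice are illustrative, not taken from the paper.

```python
import numpy as np
import pywt

def fuse_topology_images(strain_gauge_img, controller_img, wavelet="haar", level=2):
    """Fuse two AFM topology images in the 2D Haar wavelet domain.

    Keeps the approximation (low-frequency) coefficients from the strain-gauge
    image, which is linear over large heights, and the detail (high-frequency)
    coefficients from the controller output, which is less noisy at high
    spatial frequencies.
    """
    c_gauge = pywt.wavedec2(strain_gauge_img, wavelet, level=level)
    c_ctrl = pywt.wavedec2(controller_img, wavelet, level=level)
    fused = [c_gauge[0]] + list(c_ctrl[1:])
    return pywt.waverec2(fused, wavelet)
```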
MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera
NASA Astrophysics Data System (ADS)
Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.
2017-10-01
An inexpensive upconverting MMW/THz imaging method is suggested here. The method is based on a glow discharge detector (GDD) and a silicon photodiode or a simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when its electrical current is measured. The GDD is very inexpensive and is advantageous due to its wide dynamic range, broad spectral range, room-temperature operation, immunity to high-power radiation, and more. An upconversion method is demonstrated here that is based on measuring the visible light emitted from the GDD rather than its electrical current. The experimental setup simulates a system composed of a GDD array, an MMW source, and a basic CCD/CMOS camera. The visible light emitted from the GDD array is directed to the CCD/CMOS camera and the change in the GDD light is measured using image processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, scanning-based three-dimensional imaging systems prohibit real-time operation; this is easily and economically solved using a GDD array, which enables acquiring distance and magnitude information from all GDD pixels in the array simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation or time-of-flight (TOF) measurement.
Descriptive and Computer Aided Drawing Perspective on an Unfolded Polyhedral Projection Surface
NASA Astrophysics Data System (ADS)
Dzwierzynska, Jolanta
2017-10-01
The aim of this study is to develop a method of direct and practical mapping of perspective onto an unfolded prism polyhedral projection surface. The considered perspective representation is a rectilinear central projection onto a surface composed of several flat elements. In the paper, two descriptive methods of drawing perspective are presented: direct and indirect. The graphical mapping of the effects of the representation is realized directly on the unfolded flat projection surface, owing to the projective and graphical connection between points displayed on the polyhedral background and their counterparts received on the unfolded flat surface. To significantly improve the construction of lines, analytical algorithms are formulated that draw the perspective image of a line segment passing through two points given by their coordinates in a spatial coordinate system with axes x, y, z. Compared with other perspective construction methods used in computer vision and computer-aided design, which rely on information about points, our algorithms utilize data about lines, which occur very often in architectural forms. The possibility of drawing lines in the considered perspective enables drawing an edge perspective image of an architectural object. The use of changeable base elements of the perspective, such as the horizon height and the station point location, enables drawing perspective images from different viewing positions. The analytical algorithms for drawing perspective images are formulated in Mathcad software; however, they can be implemented in the majority of computer graphics packages, which can make drawing perspective more efficient and easier. The representation presented in the paper and the way of mapping it directly onto the flat unfolded projection surface can find application in the presentation of architectural space in advertisement and art.
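As an illustration of the underlying operation, the sketch below performs a rectilinear central projection of a 3D line segment onto a single flat picture plane (one face of the prism), given the station point. The names are hypothetical, and the unfolding of the full polyhedral surface into the plane, which the paper's algorithms handle, is not reproduced here.

```python
import numpy as np

def project_point(p, station, plane_point, plane_normal):
    """Central projection of 3D point p from the station point onto a flat picture plane.

    The plane is given by a point on it and its (unit) normal; the result is the
    intersection of the viewing ray station->p with that plane.
    """
    d = p - station
    t = np.dot(plane_point - station, plane_normal) / np.dot(d, plane_normal)
    return station + t * d

def project_segment(a, b, station, plane_point, plane_normal):
    """Perspective image of segment a-b on one flat face: its two projected endpoints."""
    return (project_point(a, station, plane_point, plane_normal),
            project_point(b, station, plane_point, plane_normal))
```

On a single flat face, a straight segment maps to a straight segment, so projecting the endpoints suffices; across face boundaries of the polyhedral surface, the image breaks into one such segment per face.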
Simulation of a method to directly image exoplanets around multiple stars systems
NASA Astrophysics Data System (ADS)
Thomas, Sandrine J.; Bendek, Eduardo; Belikov, Ruslan
2014-08-01
Direct imaging of extra-solar planets has now become a reality, especially with the deployment and commissioning of the first generation of specialized ground-based instruments such as GPI, SPHERE, P1640 and SCExAO. These systems will allow detection of planets 10⁷ times fainter than their host star. For space-based missions, such as EXCEDE, EXO-C, EXO-S, and WFIRST/AFTA, different teams have demonstrated laboratory contrasts reaching 10⁻¹⁰ within a few diffraction limits from the star, using a combination of a coronagraph to suppress light coming from the host star and a wavefront control system. These demonstrations use a deformable mirror (DM) to remove residual starlight (speckles) created by the imperfections of the telescope. However, all these current and future systems focus on detecting faint planets around a single host star or unresolved binaries/multiples, while several targets or planet candidates are located around nearby binary stars such as our neighbor star Alpha Centauri. Until now, it has been thought that removing the light of a companion star is impossible with current technology, excluding binary star systems from the target lists of direct imaging missions. Direct imaging around binary/multiple systems at a level of contrast allowing Earth-like planet detection is challenging because the region of interest, where a dark zone is essential, is contaminated by the light coming from the host star's companion. We propose a method to simultaneously correct aberrations and diffraction of light coming from the target star as well as its companion star in order to reveal planets orbiting the target star. This method works even if the companion star is outside the control region of the DM (beyond its half-Nyquist frequency), by taking advantage of aliasing effects.
Fourier spatial frequency analysis for image classification: training the training set
NASA Astrophysics Data System (ADS)
Johnson, Timothy H.; Lhamo, Yigah; Shi, Lingyan; Alfano, Robert R.; Russell, Stewart
2016-04-01
The Directional Fourier Spatial Frequencies (DFSF) of a 2D image can identify similarity in spatial patterns within groups of related images. A Support Vector Machine (SVM) can then be used to classify images if the inter-image variance of the FSF in the training set is bounded. However, if variation in FSF increases with training set size, accuracy may decrease as the size of the training set increases. This calls for a method to identify a set of training images from among the originals that can form a vector basis for the entire class. Applying the Cauchy product method we extract the DFSF spectrum from radiographs of osteoporotic bone, and use it as a matched filter set to eliminate noise and image specific frequencies, and demonstrate that selection of a subset of superclassifiers from within a set of training images improves SVM accuracy. Central to this challenge is that the size of the search space can become computationally prohibitive for all but the smallest training sets. We are investigating methods to reduce the search space to identify an optimal subset of basis training images.
High compression image and image sequence coding
NASA Technical Reports Server (NTRS)
Kunt, Murat
1989-01-01
The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.
Compressibility-aware media retargeting with structure preserving.
Wang, Shu-Fan; Lai, Shang-Hong
2011-03-01
A number of algorithms have been proposed for intelligent image/video retargeting that retain the image content as much as possible. However, they usually suffer from artifacts in the results, such as ridges or structure twist. In this paper, we present a structure-preserving media retargeting technique that preserves the content and image structure as well as possible. Different from previous pixel- or grid-based methods, we estimate the image content saliency from the structure of the content. A block structure energy is introduced with a top-down strategy to constrain the image structure inside each block to deform uniformly in either the x or y direction. However, the flexibility for retargeting differs considerably across images. To cope with this problem, we propose a compressibility assessment scheme for media retargeting that combines the entropies of the image gradient magnitude and orientation distributions. Thus, the resized media is produced so as to preserve the image content and structure as well as possible. Our experiments demonstrate that the proposed method provides resized images/videos with better preservation of content and structure than previous methods.
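The compressibility assessment described above can be sketched as the entropy of the gradient-magnitude and gradient-orientation histograms of a grayscale frame. This is a minimal sketch: the bin counts and the simple summation of the two entropies are assumptions, not the paper's exact weighting.

```python
import numpy as np

def compressibility_score(gray, mag_bins=64, ori_bins=36):
    """Entropy-based compressibility assessment of a grayscale image.

    Combines the entropies of the gradient-magnitude and gradient-orientation
    histograms; lower entropy suggests the image tolerates stronger retargeting.
    """
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx)

    def entropy(values, bins, rng):
        hist, _ = np.histogram(values, bins=bins, range=rng)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return entropy(mag, mag_bins, (0.0, mag.max() + 1e-9)) + \
           entropy(ori, ori_bins, (-np.pi, np.pi))
```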
NASA Astrophysics Data System (ADS)
Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.
2017-03-01
In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample.
Dauguet, Julien; Bock, Davi; Reid, R Clay; Warfield, Simon K
2007-01-01
3D reconstruction from serial 2D microscopy images depends on non-linear alignment of serial sections. For some structures, such as the neuronal circuitry of the brain, very large images at very high resolution are necessary to permit reconstruction. These very large images prevent the direct use of classical registration methods. We propose in this work a method to deal with the non-linear alignment of arbitrarily large 2D images using the finite support properties of cubic B-splines. After initial affine alignment, each large image is split into a grid of smaller overlapping sub-images, which are individually registered using cubic B-splines transformations. Inside the overlapping regions between neighboring sub-images, the coefficients of the knots controlling the B-splines deformations are blended, to create a virtual large grid of knots for the whole image. The sub-images are resampled individually, using the new coefficients, and assembled together into a final large aligned image. We evaluated the method on a series of large transmission electron microscopy images and our results indicate significant improvements compared to both manual and affine alignment.
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
NASA Astrophysics Data System (ADS)
Heo, D.; Jeon, S.; Kim, J.-S.; Kim, R. K.; Cha, B. K.; Moon, B. J.; Yoon, J.
2013-02-01
We developed a novel direct X-ray detector using photoinduced discharge (PID) readout for digital radiography. The pixel resolution is 512 × 512 with a 200 μm pixel, and the overall active dimensions of the X-ray imaging panel are 10.24 cm × 10.24 cm. The detector consists of an X-ray absorption layer of amorphous selenium, a charge accumulation layer of metal, and a PID readout layer of amorphous silicon. In particular, the charge accumulation layer is pixelated because the image charges generated by X-rays must be stored pixel by pixel. Here the image charges, or holes, are recombined with electrons generated by the PID method. We used a 405 nm laser diode and a cylindrical lens to make a line beam source with a width of 50 μm for PID readout, which generates charges for each pixel line during the scan. We obtained spatial frequencies of about 1.0 lp/mm for the X-direction (lateral direction) and 0.9 lp/mm for the Y-direction (scanning direction) at 50% modulation transfer function.
A simplified focusing and astigmatism correction method for a scanning electron microscope
NASA Astrophysics Data System (ADS)
Lu, Yihua; Zhang, Xianmin; Li, Hai
2018-01-01
Defocus and astigmatism can lead to blurred images and poor resolution. This paper presents a simplified method for focusing and astigmatism correction of a scanning electron microscope (SEM). The method consists of two steps. In the first step, the fast Fourier transform (FFT) of the SEM image is performed and the FFT is subsequently processed with a threshold to achieve a suitable result. In the second step, the threshold FFT is used for ellipse fitting to determine the presence of defocus and astigmatism. The proposed method clearly provides the relationships between the defocus, the astigmatism and the direction of stretching of the FFT, and it can determine the astigmatism in a single image. Experimental studies are conducted to demonstrate the validity of the proposed method.
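As an illustration of the two-step idea above, the sketch below thresholds the log-magnitude FFT of an image and characterizes the stretching of the retained coefficients with a moments-based ellipse estimate (orientation and axis ratio). A near-circular spectrum suggests pure defocus, while an elongated one suggests astigmatism along the stretching direction. The moments-based fit and the keep_fraction parameter are simplified stand-ins for the paper's ellipse-fitting step.

```python
import numpy as np

def fft_ellipse(image, keep_fraction=0.02):
    """Estimate the orientation and axis ratio of the thresholded FFT magnitude."""
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))
    thresh = np.quantile(spec, 1.0 - keep_fraction)
    ys, xs = np.nonzero(spec >= thresh)
    ys = ys - ys.mean()
    xs = xs - xs.mean()
    cov = np.cov(np.vstack([xs, ys]))
    evals, evecs = np.linalg.eigh(cov)                        # ascending eigenvalues
    axis_ratio = np.sqrt(evals[1] / max(evals[0], 1e-12))     # ~1 means circular
    angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))  # major-axis direction
    return axis_ratio, angle
```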
Alignment method for solar collector arrays
Driver, Jr., Richard B
2012-10-23
The present invention is directed to an improved method for establishing camera fixture location for aligning mirrors on a solar collector array (SCA) comprising multiple mirror modules. The method aligns the mirrors on a module by comparing the location of the receiver image in photographs with the predicted theoretical receiver image location. To accurately align an entire SCA, a common reference is used for all of the individual module images within the SCA. The improved method can use relative pixel location information in digital photographs along with alignment fixture inclinometer data to calculate relative locations of the fixture between modules. The absolute locations are determined by minimizing alignment asymmetry for the SCA. The method inherently aligns all of the mirrors in an SCA to the receiver, even with receiver position and module-to-module alignment errors.
Duan, J; Shen, S; Popple, R; Wu, X; Cardan, R; Brezovich, I
2012-06-01
To assess the trigger delay in respiration-triggered real-time imaging and its impact on image-guided radiotherapy (IGRT) with the Varian TrueBeam system. A sinusoidal motion phantom with 2 cm motion amplitude was used. The trigger delay was determined directly with video imaging, and indirectly from the distance between the expected and actual triggering phantom positions. For the direct method, a fluorescent screen was placed on the phantom to visualize the x-ray. The motion of the screen was recorded at 60 frames/second. The number of frames between the time when the phantom reached the expected triggering position and the time when the screen was illuminated by the x-ray was used to determine the trigger delay. In the indirect method, triggered kV x-ray images were acquired in real time during 'treatment' with triggers set at the 25% and 75% respiratory phases, where the phantom moves at maximum speed. 39-40 triggered images were acquired continuously in each series. The distance between the expected and actual triggering points, d, was measured on the images to determine the delay time t from d=Asin(wt), where w=2π/T, T=period and A=amplitude. Motion periods of 2 s and 4 s were used in the measurements. The trigger delay time determined with direct video imaging was 125 ms (7.5 video frames). The average distance between the expected and actual triggering positions determined by the indirect method was 3.93±0.74 mm for T=4 s and 7.02±1.25 mm for T=2 s, yielding mean trigger delay times of 126±24 ms and 120±22 ms, respectively. Although the mean over-travel distance is significant at the 25% and 75% phases, clinically, the target over-travel resulting from the trigger delay at the end of expiration (50% phase) is negligibly small (<0.5 mm). The trigger delay in respiration-triggered imaging is in the range of 120-126 ms. This delay has a negligible clinical effect on gated IGRT. © 2012 American Association of Physicists in Medicine.
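For reference, inverting the stated phantom-motion relation gives the delay directly from the measured over-travel; assuming A denotes the 2 cm amplitude (20 mm), the T = 4 s case reproduces the reported value:

```latex
% Inverting d = A\sin(\omega t) with \omega = 2\pi/T:
t = \frac{T}{2\pi}\,\arcsin\!\left(\frac{d}{A}\right),
\qquad
t = \frac{4\,\mathrm{s}}{2\pi}\,\arcsin\!\left(\frac{3.93\,\mathrm{mm}}{20\,\mathrm{mm}}\right)
  \approx 0.126\,\mathrm{s}
```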
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C; Han, M; Baek, J
Purpose: To investigate the detectability of a small target in different slice directions of a volumetric cone beam CT image and its impact on dose reduction. Methods: Analytic projection data of a sphere object (1 mm diameter, 0.2/cm attenuation coefficient) were generated and reconstructed by the FDK algorithm. In this work, we compared the detectability of the small target for four different backprojection methods: Hanning-weighted ramp filter with linear interpolation (RECON 1), Hanning-weighted ramp filter with Fourier interpolation (RECON 2), ramp filter with linear interpolation (RECON 3), and ramp filter with Fourier interpolation (RECON 4). For noise simulation, 200 photons per measurement were used, and the noise-only data were reconstructed using the FDK algorithm. For each reconstructed volume, axial and coronal slices were extracted and the detection-SNR was calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. Results: The detection-SNR of coronal images varies for the different backprojection methods, while axial images have similar detection-SNR. Detection-SNR² ratios of coronal to axial images in RECON 1 and RECON 2 are 1.33 and 1.15, implying that the coronal image has better detectability than the axial image. In other words, using coronal slices for small target detection can reduce the patient dose by about 33% and 15% compared to using axial slices in RECON 1 and RECON 2. Conclusion: In this work, we investigated the slice-direction-dependent detectability of a volumetric cone beam CT image. RECON 1 and RECON 2 produced the highest detection-SNR, with better detectability in coronal slices. These results indicate that it is more beneficial to use coronal slices to improve detectability of a small target in a volumetric cone beam CT image. This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (NIPA-2014-H0201-14-1002) supervised by the NIPA (National IT Industry Promotion Agency). The authors declare no conflict of interest in relation to the work in this abstract.
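For reference, the detection-SNR of a channelized Hotelling observer mentioned above is conventionally computed from the channelized data. With T the channel matrix (here, D-DOG channels), v = Tᵀg the channel outputs of image data g, v̄₁ and v̄₀ the mean outputs with and without the target, and S_v the average channel-output covariance, a standard form is:

```latex
\mathrm{SNR}^{2}_{\mathrm{CHO}}
  = (\bar{\mathbf v}_1 - \bar{\mathbf v}_0)^{\mathsf T}\, S_v^{-1}\,
    (\bar{\mathbf v}_1 - \bar{\mathbf v}_0)
```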
Automatic insertion of simulated microcalcification clusters in a software breast phantom
NASA Astrophysics Data System (ADS)
Shankla, Varsha; Pokrajac, David D.; Weinstein, Susan P.; DeLeo, Michael; Tuite, Catherine; Roth, Robyn; Conant, Emily F.; Maidment, Andrew D.; Bakic, Predrag R.
2014-03-01
An automated method has been developed to insert realistic clusters of simulated microcalcifications (MCs) into computer models of breast anatomy. This algorithm has been developed as part of a virtual clinical trial (VCT) software pipeline, which includes the simulation of breast anatomy, mechanical compression, image acquisition, image processing, display and interpretation. An automated insertion method has value in VCTs involving large numbers of images. The insertion method was designed to support various insertion placement strategies, governed by probability distribution functions (pdf). The pdf can be predicated on histological or biological models of tumor growth, or estimated from the locations of actual calcification clusters. To validate the automated insertion method, a 2-AFC observer study was designed to compare two placement strategies, undirected and directed. The undirected strategy could place a MC cluster anywhere within the phantom volume. The directed strategy placed MC clusters within fibroglandular tissue on the assumption that calcifications originate from epithelial breast tissue. Three radiologists were asked to select between two simulated phantom images, one from each placement strategy. Furthermore, questions were posed to probe the rationale behind the observer's selection. The radiologists found the resulting cluster placement to be realistic in 92% of cases, validating the automated insertion method. There was a significant preference for the cluster to be positioned on a background of adipose or mixed adipose/fibroglandular tissues. Based upon these results, this automated lesion placement method will be included in our VCT simulation pipeline.
MO-FG-209-05: Towards a Feature-Based Anthropomorphic Model Observer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avanaki, A.
2016-06-15
This symposium will review recent advances in the simulation methods for evaluation of novel breast imaging systems – the subject of AAPM Task Group TG234. Our focus will be on the various approaches to development and validation of software anthropomorphic phantoms and their use in the statistical assessment of novel imaging systems using such phantoms along with computational models for the x-ray image formation process. Due to the dynamic development and complex design of modern medical imaging systems, the simulation of anatomical structures, image acquisition modalities, and the image perception and analysis offers substantial benefits of reduced cost, duration, and radiation exposure, as well as the known ground-truth and wide variability in simulated anatomies. For these reasons, Virtual Clinical Trials (VCTs) have been increasingly accepted as a viable tool for preclinical assessment of x-ray and other breast imaging methods. Activities of TG234 have encompassed the optimization of protocols for simulation studies, including phantom specifications, the simulated data representation, models of the imaging process, and statistical assessment of simulated images. The symposium will discuss the state-of-the-science of VCTs for novel breast imaging systems, emphasizing recent developments and future directions. Presentations will discuss virtual phantoms for intermodality breast imaging performance comparisons, extension of the breast anatomy simulation to the cellular level, optimized integration of the simulated imaging chain, and the novel directions in the observer models design. Learning Objectives: Review novel results in developing and applying virtual phantoms for inter-modality breast imaging performance comparisons; Discuss the efforts to extend the computer simulation of breast anatomy and pathology to the cellular level; Summarize the state of the science in optimized integration of modules in the simulated imaging chain; Compare novel directions in the design of observer models for task based validation of imaging systems. PB: Research funding support from the NIH, NSF, and Komen for the Cure; NIH funded collaboration with Barco, Inc. and Hologic, Inc.; Consultant to Delaware State Univ. and NCCPM, UK. AA: Employed at Barco Healthcare; P. Bakic, NIH: (NIGMS P20 #GM103446, NCI R01 #CA154444); M. Das, NIH Research grants.
Detection and correction for EPID and gantry sag during arc delivery using cine EPID imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowshanfarzad, Pejman; Sabet, Mahsheed; O'Connor, Daryl J.
2012-02-15
Purpose: Electronic portal imaging devices (EPIDs) have been studied and used for pretreatment and in-vivo dosimetry applications for many years. The application of EPIDs for dosimetry in arc treatments requires accurate characterization of the mechanical sag of the EPID and gantry during rotation. Several studies have investigated the effects of gravity on the sag of these systems, but each has limitations. In this study, an easy experiment setup and accurate algorithm have been introduced to characterize and correct for the effect of EPID and gantry sag during arc delivery. Methods: Three metallic ball bearings were used as markers in the beam: two of them fixed to the gantry head and the third positioned at the isocenter. EPID images were acquired during a 360° gantry rotation in cine imaging mode. The markers were tracked in EPID images and a robust in-house developed MATLAB code was used to analyse the images and find the EPID sag in three directions as well as the EPID + gantry sag by comparison to the reference gantry zero image. The algorithm results were then tested against independent methods. The method was applied to compare the effect in clockwise and counterclockwise gantry rotations and different source-to-detector distances (SDDs). The results were monitored for one linear accelerator over a course of 15 months, and six other linear accelerators from two treatment centers were also investigated using this method. The generalized shift patterns were derived from the data and used in an image registration algorithm to correct for the effect of the mechanical sag in the system. The Gamma evaluation (3%, 3 mm) technique was used to investigate the improvement in alignment of cine EPID images of a fixed field, by comparing both individual images and the sum of images in a series with the reference gantry zero image. Results: The mechanical sag during gantry rotation was dependent on the gantry angle and was larger in the in-plane direction, although the patterns were not identical for various linear accelerators. The reproducibility of measurements was within 0.2 mm over a period of 15 months. The direction of gantry rotation and SDD did not affect the results by more than 0.3 mm. Results of independent tests agreed with the algorithm within the accuracy of the measurement tools. When comparing summed images, the percentage of points with Gamma index <1 increased from 85.4% to 94.1% after correcting for the EPID sag, and to 99.3% after correction for gantry + EPID sag. Conclusions: The measurement method and algorithms introduced in this study use cine images and are highly accurate, simple, fast, and reproducible. The method tests all gantry angles and provides a suitable automatic analysis and correction tool to improve EPID dosimetry and perform comprehensive linac QA for arc treatments.
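For reference, the Gamma evaluation quoted above (3%, 3 mm) scores each evaluated point against the reference image with the usual combined dose-difference/distance-to-agreement criterion; a point passes when γ ≤ 1:

```latex
% Gamma index with distance-to-agreement criterion \Delta d_M = 3\,\mathrm{mm}
% and dose-difference criterion \Delta D_M = 3\%:
\gamma(\mathbf r_e) = \min_{\mathbf r_r}
\sqrt{\frac{\lVert \mathbf r_r - \mathbf r_e \rVert^{2}}{\Delta d_M^{2}}
    + \frac{\bigl(D_r(\mathbf r_r) - D_e(\mathbf r_e)\bigr)^{2}}{\Delta D_M^{2}}}
```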
NASA Astrophysics Data System (ADS)
Schilling, Kurt G.; Nath, Vishwesh; Blaber, Justin; Harrigan, Robert L.; Ding, Zhaohua; Anderson, Adam W.; Landman, Bennett A.
2017-02-01
High-angular-resolution diffusion-weighted imaging (HARDI) MRI acquisitions have become common for use with higher-order models of diffusion. Despite successes in resolving complex fiber configurations and probing microstructural properties of brain tissue, there is no common consensus on the optimal b-value and number of diffusion directions to use for these HARDI methods. While this question has been addressed by analysis of the diffusion-weighted signal directly, it is unclear how this translates to the information and metrics derived from the HARDI models themselves. Using a high angular resolution data set acquired at a range of b-values, and repeated 11 times on a single subject, we study how the b-value and number of diffusion directions impact the reproducibility and precision of metrics derived from Q-ball imaging, a popular HARDI technique. We find that Q-ball metrics associated with tissue microstructure and white matter fiber orientation are sensitive to both the number of diffusion directions and the spherical harmonic representation of the Q-ball, and are often biased when undersampled. These results can advise researchers on appropriate acquisition and processing schemes, particularly when it comes to optimizing the number of diffusion directions needed for metrics derived from Q-ball imaging.
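A simple way to quantify the reproducibility examined here is the voxelwise coefficient of variation of a Q-ball-derived metric (e.g. GFA) across the repeated scans. The Python sketch below is a generic illustration, not the authors' processing pipeline; the input array layout is an assumption.

    import numpy as np

    def voxelwise_cov(metric_maps):
        # metric_maps: (n_repeats, X, Y, Z) array of the same Q-ball-derived metric
        # (e.g. GFA) computed from each repeated acquisition.
        maps = np.asarray(metric_maps, dtype=float)
        mean = maps.mean(axis=0)
        std = maps.std(axis=0, ddof=1)
        with np.errstate(divide="ignore", invalid="ignore"):
            cov = np.where(mean > 0, std / mean, np.nan)
        return cov   # lower coefficient of variation = better reproducibility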
Edge-directed inference for microaneurysms detection in digital fundus images
NASA Astrophysics Data System (ADS)
Huang, Ke; Yan, Michelle; Aviyente, Selin
2007-03-01
Microaneurysm (MA) detection is a critical step in diabetic retinopathy screening, since MAs are the earliest visible warning of potential future problems. A variety of methods have been proposed for MA detection in mass screening. The core technology of most existing methods is a directional mathematical morphological operation called the "Top-Hat" filter, which requires multiple filtering operations at each pixel. Background structure, uneven illumination and noise often cause confusion between MAs and some non-MA structures and limit the applicability of the filter. In this paper, a novel detection framework based on edge-directed inference is proposed for MA detection. The candidate MA regions are first delineated from the edge map of a fundus image. Features measuring shape, brightness and contrast are extracted for each candidate MA region to better separate true MAs from false detections. Algorithmic analysis and empirical evaluation reveal that the proposed edge-directed inference outperforms the "Top-Hat" based algorithm in both detection accuracy and computational speed.
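For reference, the baseline the paper compares against — a directional morphological top-hat that filters with linear structuring elements at multiple orientations — might look roughly like the SciPy sketch below; the element length and number of orientations are illustrative choices, not values taken from the paper.

    import numpy as np
    from scipy import ndimage

    def line_footprint(length, angle_deg):
        # Flat linear structuring element of the given length and orientation.
        half = length // 2
        fp = np.zeros((length, length), dtype=bool)
        t = np.linspace(-half, half, 2 * length)
        rows = np.clip(np.round(half + t * np.sin(np.deg2rad(angle_deg))).astype(int), 0, length - 1)
        cols = np.clip(np.round(half + t * np.cos(np.deg2rad(angle_deg))).astype(int), 0, length - 1)
        fp[rows, cols] = True
        return fp

    def directional_tophat(image, length=11, n_angles=12):
        # Minimum white top-hat response over oriented line elements: elongated
        # vessels are preserved by the opening at (at least) one orientation and
        # thus suppressed, while compact MA-like spots respond at every orientation.
        angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
        responses = [ndimage.white_tophat(image, footprint=line_footprint(length, a))
                     for a in angles]
        return np.min(responses, axis=0)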
SU-E-J-45: The Correlation Between CBCT Flat Panel Misalignment and 3D Image Guidance Accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenton, O; Valdes, G; Yin, L
Purpose: To simulate the impact of CBCT flat panel misalignment on image quality and on the calculated correction vectors in 3D image-guided proton therapy, and to determine whether these calibration errors can be caught in our QA process. Methods: The X-ray source and detector geometrical calibration (flexmap) file of the CBCT system in the AdaPTinsight software (IBA proton therapy) was edited to induce known changes in the rotational and translational calibrations of the imaging panel. Translations of up to ±10 mm in the x, y and z directions (see supplemental) and rotational errors of up to ±3° were induced. The calibration files were then used to reconstruct the CBCT image of a pancreatic patient and a CatPhan phantom. Correction vectors were calculated for the patient using the software's auto-match system and compared to baseline values. The CatPhan CBCT images were used for quantitative evaluation of image quality for each type of induced error. Results: Translations of 1 to 3 mm in the x and y calibration resulted in corresponding correction vector errors of equal magnitude. Similar behaviour was seen for 10 mm shifts in the y-direction; however, in the x-direction, the image quality was too degraded for a match. These translational errors can be identified through differences in isocenter from orthogonal kV images taken during routine QA. Errors in the z-direction had no effect on the correction vector or image quality. Rotations of the imaging panel calibration resulted in corresponding correction vector rotations of the patient images. These rotations also resulted in degraded image quality, which can be identified through quantitative image quality metrics. Conclusion: Misalignment of CBCT geometry can lead to incorrect translational and rotational patient correction vectors. These errors can be identified through QA of the imaging isocenter as compared to orthogonal images, combined with monitoring of CBCT image quality.
Diffraction imaging for in situ characterization of double-crystal X-ray monochromators
Stoupin, Stanislav; Liu, Zunping; Heald, Steve M.; ...
2015-10-30
In this paper, imaging of the Bragg-reflected X-ray beam is proposed and validated as an in situ method for characterization of the performance of double-crystal monochromators under the heat load of intense synchrotron radiation. A sequence of images is collected at different angular positions on the reflectivity curve of the second crystal and analyzed. The method provides rapid evaluation of the wavefront of the exit beam, which relates to local misorientation of the crystal planes along the beam footprint on the thermally distorted first crystal. The measured misorientation can be directly compared with the results of finite element analysis. Finally, the imaging method offers an additional insight into the local intrinsic crystal quality over the footprint of the incident X-ray beam.
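The per-pixel analysis described — relating the rocking curve recorded at each point of the beam footprint to a local misorientation — can be illustrated by taking the angular centroid of the intensity at every pixel of the image sequence. A minimal Python sketch, assuming a stack of images and their angular positions (not the authors' code):

    import numpy as np

    def misorientation_map(image_stack, angles_urad):
        # image_stack: (n_angles, H, W) intensities recorded while stepping the
        # second crystal through its reflectivity (rocking) curve;
        # angles_urad: the corresponding angular positions.
        stack = np.asarray(image_stack, dtype=float)
        angles = np.asarray(angles_urad, dtype=float)[:, None, None]
        total = stack.sum(axis=0)
        centroid = (stack * angles).sum(axis=0) / np.where(total > 0, total, np.nan)
        # Deviation of the per-pixel rocking-curve centroid from its mean is a
        # proxy for local misorientation across the beam footprint.
        return centroid - np.nanmean(centroid)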
Random forest regression for magnetic resonance image synthesis.
Jog, Amod; Carass, Aaron; Roy, Snehashis; Pham, Dzung L; Prince, Jerry L
2017-01-01
By choosing different pulse sequences and their parameters, magnetic resonance imaging (MRI) can generate a large variety of tissue contrasts. This very flexibility, however, can yield inconsistencies with MRI acquisitions across datasets or scanning sessions that can in turn cause inconsistent automated image analysis. Although image synthesis of MR images has been shown to be helpful in addressing this problem, an inability to synthesize both T2-weighted brain images that include the skull and FLuid Attenuated Inversion Recovery (FLAIR) images has been reported. The method described herein, called REPLICA, addresses these limitations. REPLICA is a supervised random forest image synthesis approach that learns a nonlinear regression to predict intensities of alternate tissue contrasts given specific input tissue contrasts. Experimental results include direct image comparisons between synthetic and real images, results from image analysis tasks on both synthetic and real images, and comparison against other state-of-the-art image synthesis methods. REPLICA is computationally fast, and is shown to be comparable to other methods on tasks they are able to perform. Additionally, REPLICA has the capability to synthesize both T2-weighted images of the full head and FLAIR images, and to perform intensity standardization between different imaging datasets. Copyright © 2016 Elsevier B.V. All rights reserved.
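A toy, 2D version of the kind of supervised random forest regression REPLICA is built on — predicting target-contrast intensities from small patches of the source contrast — could be written with scikit-learn as below. The patch size, tree count and 2D setting are illustrative assumptions, not REPLICA's actual feature set.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def train_synthesis_forest(source_img, target_img, patch=3, n_trees=50):
        # Learn a regression from source-contrast patches to target-contrast
        # intensities (toy 2D version; REPLICA itself uses richer features).
        r = patch // 2
        X, y = [], []
        for i in range(r, source_img.shape[0] - r):
            for j in range(r, source_img.shape[1] - r):
                X.append(source_img[i - r:i + r + 1, j - r:j + r + 1].ravel())
                y.append(target_img[i, j])
        forest = RandomForestRegressor(n_estimators=n_trees, n_jobs=-1)
        forest.fit(np.asarray(X), np.asarray(y))
        return forest

Synthesis of a new contrast then amounts to calling forest.predict on the patch features extracted from a new source image.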
Positron emission imaging device and method of using the same
Bingham, Philip R.; Mullens, James Allen
2013-01-15
An imaging system and method of imaging are disclosed. The imaging system can include an external radiation source producing pairs of substantially simultaneous radiation emissions, a picturization emission and a verification emission, at an emission angle. The imaging system can also include a plurality of picturization sensors and at least one verification sensor for detecting the picturization and verification emissions, respectively. The imaging system also includes an object stage arranged such that a picturization emission can pass through an object supported on said object stage before being detected by one of said plurality of picturization sensors. A coincidence system and a reconstruction system can also be included. The coincidence system can receive information from the picturization and verification sensors and determine whether a detected picturization emission is direct radiation or scattered radiation. The reconstruction system can produce a multi-dimensional representation of an object imaged with the imaging system.
Imaging study on acupuncture points
NASA Astrophysics Data System (ADS)
Yan, X. H.; Zhang, X. Y.; Liu, C. L.; Dang, R. S.; Ando, M.; Sugiyama, H.; Chen, H. S.; Ding, G. H.
2009-09-01
The topographic structures of acupuncture points were investigated by using the synchrotron-radiation-based Dark Field Image (DFI) method. The following four acupuncture points were studied: Sanyinjiao, Neiguan, Zusanli and Tianshu. We found that an accumulation of micro-vessels exists at acupuncture point regions. Images taken in the surrounding tissue outside the acupuncture points do not show this kind of structure. This is the first time that the specific structure of acupuncture points has been revealed directly by X-ray imaging.
Radar backscatter from the sea: Controlled experiments
NASA Astrophysics Data System (ADS)
Moore, R. K.
1992-04-01
The subwindowing method of modelling synthetic-aperture-radar (SAR) imaging of ocean waves was extended to allow wave propagation in arbitrary directions. Simulated images show that the SAR image response to swells that are imaged by velocity bunching is reduced by random smearing due to wind-generated waves. The magnitude of this response is not accurately predicted by introducing a finite coherence time in the radar backscatter. The smearing does not affect the imaging of waves by surface radar cross-section modulation, and is independent of the wind direction. Adjusting the focus of the SAR processor introduces an offset in the image response of the surface scatterers. When adjusted by one-half the azimuthal phase velocity of the wave, this compensates for the incoherent advance of the wave being imaged, leading to a higher image contrast. The azimuthal cut-off and range rotation of the spectral peak are predicted when the imaging of wind-generated wave trains is simulated. The simulated images suggest that velocity bunching and azimuthal smearing are strongly interdependent, and cannot be included in a model separately.
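The velocity-bunching mechanism itself — each surface scatterer appearing displaced in azimuth by (R/V) times its radial velocity — is easy to sketch. The following Python toy ignores the smearing and coherence-time effects that the simulations investigate, and the bin count, extent and input arrays are hypothetical.

    import numpy as np

    def velocity_bunching_image(azimuth_pos, radial_vel, sigma0, R_over_V,
                                n_bins=512, extent=1000.0):
        # Each scatterer at along-track position x with radial velocity u_r is
        # mapped to x + (R/V) * u_r; its cross section sigma0 accumulates there.
        shifted = np.asarray(azimuth_pos) + R_over_V * np.asarray(radial_vel)
        bins = np.linspace(0.0, extent, n_bins + 1)
        image, _ = np.histogram(shifted, bins=bins, weights=np.asarray(sigma0))
        return image   # 1D azimuth profile; smearing/coherence effects omitted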
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokorny, M.; Rebicek, J.; Klemes, J.
2015-10-15
This paper presents a rapid non-destructive method that provides information on the anisotropic internal structure of nanofibrous layers. A laser beam with a wavelength of 632.8 nm is directed at and passes through a nanofibrous layer prepared by electrostatic spinning. Information about the structural arrangement of nanofibers in the layer is directly visible in the form of a diffraction image formed on a projection screen, or is obtained from measured intensities of the laser beam passing through the sample, which are determined by the dependency of the angle of the main direction of polarization of the laser beam on the axis of alignment of nanofibers in the sample. Both optical methods were verified on Polyvinyl alcohol (PVA) nanofibrous layers (fiber diameter of 470 nm) with random, single-axis-aligned and crossed structures. The obtained results match the results of commonly used methods which apply the analysis of electron microscope images. The presented simple method not only allows samples to be analysed much more rapidly and without damaging them, but also makes possible the analysis of much larger areas, up to several square millimetres, at the same time.
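The polarization-based variant can be illustrated by fitting the transmitted intensity versus polarizer angle to a cos²-type dependence and reading off the phase as the fiber alignment axis. This Python sketch is written under that assumed model and is not the authors' analysis:

    import numpy as np
    from scipy.optimize import curve_fit

    def malus_model(theta_deg, amplitude, theta0_deg, offset):
        # Assumed cos^2 dependence of transmitted intensity on polarizer angle.
        return amplitude * np.cos(np.deg2rad(theta_deg - theta0_deg)) ** 2 + offset

    def fiber_alignment_axis(theta_deg, intensity):
        theta_deg = np.asarray(theta_deg, dtype=float)
        intensity = np.asarray(intensity, dtype=float)
        p0 = [np.ptp(intensity), theta_deg[np.argmax(intensity)], intensity.min()]
        (amplitude, theta0, offset), _ = curve_fit(malus_model, theta_deg, intensity, p0=p0)
        return theta0 % 180.0   # estimated fiber alignment axis in degrees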
Using 3 Tesla magnetic resonance imaging in the pre-operative evaluation of tongue carcinoma.
Moreno, K F; Cornelius, R S; Lucas, F V; Meinzen-Derr, J; Patil, Y J
2017-09-01
This study aimed to evaluate the role of 3 Tesla magnetic resonance imaging in predicting tongue tumour thickness via direct and reconstructed measures, and their correlations with corresponding histological measures, nodal metastasis and extracapsular spread. A prospective study was conducted of 25 patients with histologically proven squamous cell carcinoma of the tongue and pre-operative 3 Tesla magnetic resonance imaging from 2009 to 2012. Correlations between 3 Tesla magnetic resonance imaging and histological measures of tongue tumour thickness were assessed using the Pearson correlation coefficient: r values were 0.84 (p < 0.0001) and 0.81 (p < 0.0001) for direct and reconstructed measurements, respectively. For magnetic resonance imaging, direct measures of tumour thickness (mean ± standard deviation, 18.2 ± 7.3 mm) did not significantly differ from the reconstructed measures (mean ± standard deviation, 17.9 ± 7.2 mm; r = 0.879). Moreover, 3 Tesla magnetic resonance imaging had 83 per cent sensitivity, 82 per cent specificity, 82 per cent accuracy and a 90 per cent negative predictive value for detecting cervical lymph node metastasis. In this cohort, 3 Tesla magnetic resonance imaging measures of tumour thickness correlated highly with the corresponding histological measures. Further, 3 Tesla magnetic resonance imaging was an effective method of detecting malignant adenopathy with extracapsular spread.
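The statistics reported here — Pearson correlation between imaging and histological thickness, plus sensitivity, specificity and negative predictive value for nodal metastasis — can be computed from raw paired measurements with a short Python/SciPy helper; the variable names are hypothetical.

    import numpy as np
    from scipy import stats

    def agreement_stats(mri_mm, histo_mm, predicted_node_pos, true_node_pos):
        # Pearson correlation of the thickness measures plus diagnostic metrics
        # for nodal metastasis (boolean arrays: predicted vs. pathologically proven).
        r, p = stats.pearsonr(mri_mm, histo_mm)
        pred = np.asarray(predicted_node_pos, dtype=bool)
        true = np.asarray(true_node_pos, dtype=bool)
        tp = np.sum(pred & true)
        tn = np.sum(~pred & ~true)
        fp = np.sum(pred & ~true)
        fn = np.sum(~pred & true)
        return {"pearson_r": r, "p_value": p,
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "npv": tn / (tn + fn)}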
Laboratory Demonstration of Axicon-Lens Coronagraph
NASA Astrophysics Data System (ADS)
Choi, Jaeho; Jea, Geonho
2018-01-01
We present the results of laboratory experiments on the proposed coronagraph, which uses axicon lenses in conjunction with a method of noninterferometric quantitative phase imaging for direct imaging of exoplanets. Light passing through tiny holes drilled in a thin metal plate serves as the simulated star and its companions; the light diffracted at the edges of the holes resembles the light from a bright star. The images are evaginated about the optical axis beyond the maximum focal length of the first axicon lens, and the evaginated images are then cut off with a motorized iris, which preferentially suppresses the central stellar light. Various separations between the holes, representing different angular distances, were examined. The laboratory results show that the axicon-lens coronagraph can achieve an inner working angle (IWA) smaller than λ/D together with high-contrast direct imaging. The laboratory axicon-lens coronagraph imaging supports the symbolic computation results, indicating potential for direct imaging of exoplanets and for various other astrophysical activities. The coronagraph setup is simple to build and durable to operate; moreover, the planet images can be relayed to a broadband spectrometric instrument able to investigate the constituents of the planetary system.
Calculation of grain boundary normals directly from 3D microstructure images
Lieberman, E. J.; Rollett, A. D.; Lebensohn, R. A.; ...
2015-03-11
The determination of grain boundary normals is an integral part of the characterization of grain boundaries in polycrystalline materials. These normal vectors are difficult to quantify due to the discretized nature of available microstructure characterization techniques. The most common method to determine grain boundary normals is by generating a surface mesh from an image of the microstructure, but this process can be slow, and is subject to smoothing issues. A new technique is proposed, utilizing first order Cartesian moments of binary indicator functions, to determine grain boundary normals directly from a voxelized microstructure image. In order to validate the accuracy of this technique, the surface normals obtained by the proposed method are compared to those generated by a surface meshing algorithm. Specifically, the local divergence between the surface normals obtained by different variants of the proposed technique and those generated from a surface mesh of a synthetic microstructure constructed using a marching cubes algorithm followed by Laplacian smoothing is quantified. Next, surface normals obtained with the proposed method from a measured 3D microstructure image of a Ni polycrystal are used to generate grain boundary character distributions (GBCD) for Σ3 and Σ9 boundaries, and compared to the GBCD generated using a surface mesh obtained from the same image. Finally, the results show that the proposed technique is an efficient and accurate method to determine voxelized fields of grain boundary normals.
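The idea of reading boundary normals straight from the voxelized image can be sketched as follows: at each boundary voxel of a grain, the first-order moment (local centroid) of the grain's binary indicator function inside a small window points into the grain, so its negation estimates the outward normal. This brute-force Python sketch only illustrates the principle and is not the paper's implementation; the window size and looping strategy are assumptions, and it is far slower than a vectorized version.

    import numpy as np

    def boundary_normals(grain_ids, grain, window=2):
        # Outward-normal estimates for one grain, keyed by boundary-voxel index.
        indicator = (grain_ids == grain)
        shape = np.array(grain_ids.shape)
        offsets = np.array(np.meshgrid(*[np.arange(-window, window + 1)] * 3,
                                       indexing="ij")).reshape(3, -1).T
        normals = {}
        for idx in np.argwhere(indicator):
            nbrs = idx + offsets
            ok = np.all((nbrs >= 0) & (nbrs < shape), axis=1)
            inside = indicator[tuple(nbrs[ok].T)]
            if inside.all():          # interior voxel: no boundary in the window
                continue
            # First-order moment of the indicator points into the grain.
            moment = offsets[ok][inside].mean(axis=0)
            normals[tuple(idx)] = -moment / (np.linalg.norm(moment) + 1e-12)
        return normals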
Conductivity map from scanning tunneling potentiometry.
Zhang, Hao; Li, Xianqi; Chen, Yunmei; Durand, Corentin; Li, An-Ping; Zhang, X-G
2016-08-01
We present a novel method for extracting two-dimensional (2D) conductivity profiles from large electrochemical potential datasets acquired by scanning tunneling potentiometry of a 2D conductor. The method consists of a data preprocessing procedure to reduce/eliminate noise and a numerical conductivity reconstruction. The preprocessing procedure employs an inverse consistent image registration method to align the forward and backward scans of the same line for each image line, followed by a total variation (TV) based image restoration method to obtain a (nearly) noise-free potential from the aligned scans. The preprocessed potential is then used for numerical conductivity reconstruction, based on a TV model solved by an accelerated alternating direction method of multipliers. The method is demonstrated on a measurement of the grain boundary of a monolayer graphene, yielding a nearly 10:1 ratio for the grain boundary resistivity over the bulk resistivity.
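Of the two stages, the preprocessing step is the easier one to sketch: average the registered forward/backward line scans and apply TV restoration. The Python snippet below uses scikit-image's TV denoiser as a stand-in and assumes the registration has already been done; it does not reproduce the paper's registration step or the ADMM-based conductivity reconstruction.

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def preprocess_potential(forward_scans, backward_scans, tv_weight=0.1):
        # Assumes the forward and backward line scans have already been registered
        # to each other; average them and apply total-variation restoration.
        potential = 0.5 * (np.asarray(forward_scans, dtype=float) +
                           np.asarray(backward_scans, dtype=float))
        return denoise_tv_chambolle(potential, weight=tv_weight)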
NASA Astrophysics Data System (ADS)
Nguyen, D. T.; Bertholet, J.; Kim, J.-H.; O'Brien, R.; Booth, J. T.; Poulsen, P. R.; Keall, P. J.
2018-01-01
Increasing evidence suggests that intrafraction tumour motion monitoring needs to include both 3D translations and 3D rotations. Presently, methods to estimate the rotational motion require the 3D translation of the target to be known first. However, ideally, translation and rotation should be estimated concurrently. We present the first method to directly estimate six-degree-of-freedom (6DoF) motion from the target’s projection on a single rotating x-ray imager in real-time. This novel method is based on the linear correlations between the superior-inferior translations and the motion in the other five degrees-of-freedom. The accuracy of the method was evaluated in silico with 81 liver tumour motion traces from 19 patients with three implanted markers. The ground-truth motion was estimated using the current gold standard method, where each marker’s 3D position was first estimated using a Gaussian probability method, and the 6DoF motion was then estimated from the 3D positions using an iterative method. The 3D position of each marker was projected onto a gantry-mounted imager with an imaging rate of 11 Hz. After an initial 110° gantry rotation (200 images), a correlation model between the superior-inferior translations and the five other DoFs was built using a least squares method. The correlation model was then updated after each subsequent frame to estimate 6DoF motion in real-time. The proposed algorithm had an accuracy (±precision) of -0.03 ± 0.32 mm, -0.01 ± 0.13 mm and 0.03 ± 0.52 mm for translations in the left-right (LR), superior-inferior (SI) and anterior-posterior (AP) directions respectively; and 0.07 ± 1.18°, 0.07 ± 1.00° and 0.06 ± 1.32° for rotations around the LR, SI and AP axes respectively on this dataset. This is the first method to directly estimate real-time 6DoF target motion from segmented marker positions on a 2D imager. The algorithm was evaluated using 81 motion traces from 19 liver patients and was found to have sub-mm and sub-degree accuracy.
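The heart of the method — a per-DoF linear correlation with the superior-inferior translation, fitted by least squares and then used to predict the remaining five DoFs — can be sketched in Python as below. This toy assumes the SI translations and training DoF values are already available as arrays; the real algorithm estimates the motion from the 2D projections and refreshes the model after every frame.

    import numpy as np

    def fit_correlation_model(si_mm, other_dofs):
        # si_mm: (n,) superior-inferior translations; other_dofs: (n, 5) array of
        # the remaining translations/rotations. Fits dof_i ~ a_i * SI + b_i.
        A = np.column_stack([si_mm, np.ones_like(si_mm)])
        coef, *_ = np.linalg.lstsq(A, other_dofs, rcond=None)
        return coef[0], coef[1]          # slopes a_i, intercepts b_i

    def predict_6dof(si_now, slopes, intercepts):
        # Full 6DoF estimate from the current SI value via the fitted correlations.
        return np.concatenate([[si_now], slopes * si_now + intercepts])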
NASA Astrophysics Data System (ADS)
Fuller, Clifton David; Thomas, Charles R., Jr.; Schwartz, Scott; Golden, Nanalei; Ting, Joe; Wong, Adrian; Erdogmus, Deniz; Scarbrough, Todd J.
2006-10-01
Several measurement techniques have been developed to address the capability for target volume reduction via target localization in image-guided radiotherapy; among these have been ultrasound (US) and fiducial marker (FM) software-assisted localization. In order to assess interchangeability between methods, US and FM localization were compared using established techniques for determination of agreement between measurement methods when a 'gold-standard' comparator does not exist, after performing both techniques daily on a sequential series of patients. At least 3 days prior to CT simulation, four gold seeds were placed within the prostate. FM software-assisted localization utilized the ExacTrac X-Ray 6D (BrainLab AG, Germany) kVp x-ray image acquisition system to determine prostate position; US prostate targeting was performed on each patient using the SonArray (Varian, Palo Alto, CA). Patients were aligned daily using laser alignment of skin marks. Directional shifts were then calculated by each respective system in the X, Y and Z dimensions before each daily treatment fraction, previous to any treatment or couch adjustment, as well as a composite vector of displacement. Directional shift agreement in each axis was compared using Altman-Bland limits of agreement, Lin's concordance coefficient with Partik's grading schema, and Deming orthogonal bias-weighted correlation methodology. 1019 software-assisted shifts were suggested by US and FM in 39 patients. The 95% limits of agreement in the X, Y and Z axes were ±9.4 mm, ±11.3 mm and ±13.4 mm, respectively. Three-dimensionally, measurements agreed within 13.4 mm in 95% of all paired measures. In all axes, concordance was graded as 'poor' or 'unacceptable'. Deming regression detected proportional bias in both directional axes and three-dimensional vectors. Our data suggest substantial differences between US and FM image-guided measures and subsequent suggested directional shifts. Analysis reveals that the vast majority of all individual US and FM directional measures may be expected to agree with each other within a range of 1-1.5 cm. Since neither system represents a gold standard, clinical judgment must dictate whether such a difference is of import. As IMRT protocols seek dose escalation and PTV reduction predicated on US- and FM-guided imaging, future studies are needed to address these potential clinically relevant issues regarding the interchangeability and accuracy of novel positional verification techniques. Comparison series with multiple image-guidance systems are needed to refine comparisons between targeting methods. However, we do not advocate interchangeability of US and FM localization methods. Portions of this data were presented at the American Society of Clinical Oncology/American Society for Therapeutic Radiology and Oncology/Society of Surgical Oncology 2006 Prostate Cancer Symposium, San Francisco, CA, USA.
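The Altman-Bland analysis used for each axis reduces to the mean difference of the paired US and FM shifts and its ±1.96 SD band. A minimal Python helper with hypothetical input arrays:

    import numpy as np

    def limits_of_agreement(shift_us_mm, shift_fm_mm):
        # Mean US-FM difference (bias) and the 95% limits of agreement for one axis.
        diff = np.asarray(shift_us_mm, dtype=float) - np.asarray(shift_fm_mm, dtype=float)
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)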
Temperature imaging with ultrasonic transmission tomography for treatment control
NASA Astrophysics Data System (ADS)
Chu, Zheqi; Pinter, Stephen. Z.; Yuan, Jie; Scarpelli, Matthew L.; Kripfgans, Oliver D.; Fowlkes, J. Brian; Duric, Neb; Carson, Paul L.
2017-03-01
Hyperthermia is a promising method to enhance chemo- or radiation therapy of breast cancer, and the time-temperature profile in the target and surrounding areas is the primary monitoring quantity. Unlike thermal ablation of lesions, hyperthermia lacks good alternative quantities for treatment monitoring. However, there is less of a problem with the non-monotonic thermal coefficients of the speed of sound used in ultrasonic imaging of temperature. This paper tests a long-discussed but little-investigated method of imaging temperature using the speed of sound and proposes methods of reducing edge-enhancement artifacts in the temperature image. Normally, when directly using the speed of sound to reconstruct the temperature image around the tumor, there will be an abnormal bipolar edge enhancement along the boundary between two materials with different speeds of sound at a given temperature. This is due to partial volume effects and can be diminished by regularized, weighted deconvolution. An initial, manual deconvolution is shown, as well as an EMD (Empirical Mode Decomposition) method. Here we use continuity and other constraints to choose the coefficient, reprocess the temperature field image, and take the mean variation of the temperature in adjacent pixels as the judgment criterion. Both methods effectively reduce the edge enhancement and produce a more precise image of temperature.
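In its simplest form, the speed-of-sound-to-temperature step is a first-order linear conversion per pixel; the edge (partial-volume) artifacts discussed above then still have to be removed by the deconvolution or EMD processing. A Python sketch under an assumed locally linear, monotonic thermal coefficient dc/dT:

    import numpy as np

    def temperature_map(sos_current, sos_baseline, dc_dT, T_baseline=37.0):
        # First-order estimate: dT = (c - c0) / (dc/dT), with dc/dT in m/s per deg C.
        dT = (np.asarray(sos_current, dtype=float) -
              np.asarray(sos_baseline, dtype=float)) / dc_dT
        return T_baseline + dT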
On-Chip Imaging of Schistosoma haematobium Eggs in Urine for Diagnosis by Computer Vision
Linder, Ewert; Grote, Anne; Varjo, Sami; Linder, Nina; Lebbad, Marianne; Lundin, Mikael; Diwan, Vinod; Hannuksela, Jari; Lundin, Johan
2013-01-01
Background: Microscopy, being relatively easy to perform at low cost, is the universal diagnostic method for detection of most globally important parasitic infections. As quality control is hard to maintain, misdiagnosis is common, which affects both estimates of parasite burdens and patient care. Novel techniques for high-resolution imaging and image transfer over data networks may offer solutions to these problems through provision of education, quality assurance and diagnostics. Imaging can be done directly on image sensor chips, a technique possible to exploit commercially for the development of inexpensive “mini-microscopes”. Images can be transferred for analysis both visually and by computer vision, both at point-of-care and at remote locations. Methods/Principal Findings: Here we describe imaging of helminth eggs using mini-microscopes constructed from webcams and mobile phone cameras. The results show that an inexpensive webcam, stripped of its optics to allow direct application of the test sample on the exposed surface of the sensor, yields images of Schistosoma haematobium eggs, which can be identified visually. Using a highly specific image pattern recognition algorithm, 4 out of 5 eggs observed visually could be identified. Conclusions/Significance: As proof of concept we show that an inexpensive imaging device, such as a webcam, may be easily modified into a microscope for the detection of helminth eggs based on on-chip imaging. Furthermore, algorithms for helminth egg detection by machine vision can be generated for automated diagnostics. The results can be exploited for constructing simple imaging devices for low-cost diagnostics of urogenital schistosomiasis and other neglected tropical infectious diseases. PMID:24340107
NASA Astrophysics Data System (ADS)
Liu, Guoyan; Gao, Kun; Liu, Xuefeng; Ni, Guoqiang
2016-10-01
We report a new method, polarization parameters indirect microscopic imaging with a high-transmission infrared light source, to detect the morphology and composition of human skin. A conventional reflection microscopic system is used as the basic optical system, into which a polarization-modulation mechanism is inserted and a high-transmission infrared light source is utilized. The near-field structural characteristics of human skin can be delivered by infrared waves and material coupling. According to coupling and conduction physics, changes in the optical wave parameters can be calculated and curves of the image intensity can be obtained. By analyzing the near-field polarization parameters at the nanoscale, we can finally obtain the inversion images of human skin. Compared with a conventional direct optical microscope, this method can break the diffraction limit and achieve a super-resolution of sub-100 nm.
Depth profile measurement with lenslet images of the plenoptic camera
NASA Astrophysics Data System (ADS)
Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei
2018-03-01
An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, which avoids the need to align and decode the plenoptic image. Then, a linear depth calibration is applied based on the optical structure of the plenoptic camera for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, the resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
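The linear depth calibration step can be sketched as a straight-line fit between the raw, refocusing-based depth value and known target depths; the fitted gain and offset then convert any raw depth map to metric units. A Python sketch with hypothetical inputs, not the authors' calibration procedure:

    import numpy as np

    def calibrate_depth(raw_depth, known_depth_mm):
        # Straight-line fit mapping the refocusing-based raw depth value to mm.
        gain, offset = np.polyfit(np.asarray(raw_depth, dtype=float),
                                  np.asarray(known_depth_mm, dtype=float), 1)
        return gain, offset

    def apply_calibration(raw_depth_map, gain, offset):
        # Convert a raw depth map (e.g. best-focus index per pixel) to millimetres.
        return gain * np.asarray(raw_depth_map, dtype=float) + offset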