Image change detection systems, methods, and articles of manufacture
Jones, James L.; Lassahn, Gordon D.; Lancaster, Gregory D.
2010-01-05
Aspects of the invention relate to image change detection systems, methods, and articles of manufacture. According to one aspect, a method of identifying differences between a plurality of images is described. The method includes loading a source image and a target image into memory of a computer, constructing source and target edge images from the source and target images to enable processing of multiband images, displaying the source and target images on a display device of the computer, aligning the source and target edge images, and switching the display between the source image and the target image on the display device to enable identification of differences between the source image and the target image.
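The edge-image construction and change-highlighting steps can be sketched as follows. This is a minimal illustration assuming a simple gradient-magnitude edge operator, since the record does not specify which edge operator the patent uses:

```python
import numpy as np

def edge_image(img):
    # Gradient-magnitude edge image (a simple stand-in; the patent does not
    # specify a particular edge operator).
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:] = img[:, 1:] - img[:, :-1]   # horizontal first differences
    gy[1:, :] = img[1:, :] - img[:-1, :]   # vertical first differences
    return np.hypot(gx, gy)

# Toy source/target pair differing in one pixel region.
source = np.zeros((8, 8)); source[2:6, 2:6] = 1.0
target = source.copy();    target[6, 6] = 1.0

# Differencing the aligned edge images highlights where the scenes disagree.
diff = np.abs(edge_image(source) - edge_image(target))
changed = np.argwhere(diff > 0)
```

In the patented workflow the aligned images are switched on a display for a human observer; here the edge-image difference is simply computed directly.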
Nakahara, Hisashi; Haney, Matt
2015-01-01
Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.
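A toy 1-D illustration of the role of the PSF: imaging amounts to convolving the true source distribution with the PSF, and the PSF's decaying spectrum is what makes it act as a low-pass filter. The Gaussian PSF here is an assumption standing in for the receiver-geometry response derived in the paper:

```python
import numpy as np

# True source: two point sources on a 1-D grid.
n = 128
true_src = np.zeros(n)
true_src[40] = 1.0
true_src[50] = 0.8

# Assumed PSF: a Gaussian, standing in for the receiver-geometry response.
x = np.arange(n) - n // 2
psf = np.exp(-x**2 / (2 * 4.0**2))
psf /= psf.sum()

# The obtained image is the true source blurred by the PSF (circular here).
image = np.real(np.fft.ifft(np.fft.fft(true_src) * np.fft.fft(np.fft.ifftshift(psf))))

# Low-pass behaviour: high spatial frequencies of the PSF are attenuated.
spec = np.abs(np.fft.fft(np.fft.ifftshift(psf)))
```

The blurred image spreads the point sources while conserving their total amplitude, which is the blurring and degradation the abstract attributes to restricted receiver coverage.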
A novel method for detecting light source for digital images forensic
NASA Astrophysics Data System (ADS)
Roy, A. K.; Mitra, S. K.; Agrawal, R.
2011-06-01
Image manipulation has been practiced for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with its own limitations. Moreover, very few methods try to capitalize on the way the image was captured by the camera. We propose a new method based on light and its shade, as light and shade are the fundamental inputs that carry the information of an image. The proposed method measures the direction of the light source and uses this light-based technique to identify any intentional partial manipulation in a digital image. The method was tested on known manipulated images and correctly identified the light sources. The light source of an image is measured in terms of angle. The experimental results show the robustness of the methodology.
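The core idea of estimating a light-source angle from shading can be sketched on a synthetic image. The linear shading ramp and the gradient-averaging estimator below are illustrative assumptions, not the authors' exact technique:

```python
import numpy as np

# Synthetic Lambertian-like shading: intensity increases toward the light.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
true_angle = np.deg2rad(30)                      # light direction to recover
img = xx * np.cos(true_angle) + yy * np.sin(true_angle)

# The average intensity gradient points toward the light source.
gy, gx = np.gradient(img)                        # gradient along rows, then columns
est_angle = np.arctan2(gy.mean(), gx.mean())
est_deg = np.rad2deg(est_angle)
```

In a forensic setting, the same estimate computed over different image regions would be compared; a spliced region lit from a different direction yields an inconsistent angle.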
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
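The two baseline pixel-level fusion methods the experiment compares against, averaging and PCA-weighted fusion, can be sketched as follows (the random bands below are placeholders for registered visible and infrared imagery):

```python
import numpy as np

def fuse_average(a, b):
    # Pixel-level averaging fusion.
    return 0.5 * (a + b)

def fuse_pca(a, b):
    # Weights from the first principal component of the two-band data.
    x = np.stack([a.ravel(), b.ravel()])
    cov = np.cov(x)
    vals, vecs = np.linalg.eigh(cov)
    w = np.abs(vecs[:, np.argmax(vals)])
    w = w / w.sum()
    return w[0] * a + w[1] * b

rng = np.random.default_rng(0)
vis = rng.random((16, 16))
ir = 0.7 * vis + 0.3 * rng.random((16, 16))   # correlated second band
f_avg = fuse_average(vis, ir)
f_pca = fuse_pca(vis, ir)
```

MSSF itself operates on segmented features rather than pixels, which the abstract credits for its advantage over these pixel-level rules.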
Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R
2014-01-01
The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained from using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leakage and distorted source time-courses. © 2013.
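The two-step structure of Fast-VESTAL can be caricatured with a linear-algebra sketch. Note the L1-minimum-norm step is replaced here by an ordinary (L2) pseudoinverse purely for brevity, and the lead-field matrix is random rather than computed from a head model, so this only mirrors the pipeline's shape:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources, n_times = 32, 100, 200
L = rng.standard_normal((n_sensors, n_sources))              # lead-field (assumed known)
s_true = np.zeros((n_sources, n_times))
s_true[10] = np.sin(np.linspace(0.0, 8.0 * np.pi, n_times))  # one active source
b = L @ s_true + 0.01 * rng.standard_normal((n_sensors, n_times))

# Step 1: dominant spatial modes of the sensor-waveform covariance matrix,
# each imaged with a minimum-norm inverse (L2 here; Fast-VESTAL uses L1).
cov = b @ b.T / n_times
U, sv, _ = np.linalg.svd(cov)
modes = U[:, :3]
mode_images = np.linalg.pinv(L) @ modes       # per-mode spatial source images

# Step 2: millisecond-resolution source time-courses from an inverse
# operator applied to the full sensor data.
s_hat = np.linalg.pinv(L) @ b
strongest = int(np.argmax(np.linalg.norm(s_hat, axis=1)))   # index of strongest time-course
```

Working on a few covariance modes instead of every time sample is what makes the real method fast; the L1 norm then promotes the sparse, focal images the abstract describes.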
Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko
2018-03-21
To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield units (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
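The reported accuracy figures are root-mean-square values of the systematic error across inserts. With hypothetical measured and true CT numbers (the actual insert values are not given in the abstract), the metric is computed as:

```python
import numpy as np

# Hypothetical measured vs. true CT numbers (HU) for a set of inserts.
true_hu     = np.array([-100.0, 0.0, 50.0, 200.0, 800.0])
measured_hu = np.array([ -95.0, 4.0, 58.0, 210.0, 790.0])

errors = measured_hu - true_hu            # systematic error per insert
rms_error = float(np.sqrt(np.mean(errors**2)))   # RMS over all inserts, in HU
```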
Microseismic imaging using a source function independent full waveform inversion method
NASA Astrophysics Data System (ADS)
Wang, Hanchen; Alkhalifah, Tariq
2018-07-01
At the heart of microseismic event measurements is the task of estimating the locations of microseismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional microseismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, FWI of microseismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent FWI of microseismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
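The source-independence trick, convolving observed data with a modelled reference trace and vice versa so that the unknown source signature cancels by commutativity of convolution, can be verified numerically. The spike Green's functions and Gaussian wavelets below are illustrative assumptions:

```python
import numpy as np

nt = 100
t = np.arange(nt)
wavelet_true = np.exp(-((t - 20) / 3.0) ** 2)       # unknown true source signature
g_ref = np.zeros(nt);  g_ref[10] = 1.0              # Green's function, reference trace
g_far = np.zeros(nt);  g_far[35] = 1.0              # Green's function, another trace

d_ref_obs = np.convolve(wavelet_true, g_ref)[:nt]   # observed data
d_far_obs = np.convolve(wavelet_true, g_far)[:nt]

# Modelled data computed with a *wrong* trial wavelet:
wavelet_try = np.exp(-((t - 50) / 5.0) ** 2)
d_ref_syn = np.convolve(wavelet_try, g_ref)[:nt]
d_far_syn = np.convolve(wavelet_try, g_far)[:nt]

# obs * syn_ref and syn * obs_ref share the factor (wavelet_true * wavelet_try),
# so the mismatch in source signature cancels:
lhs = np.convolve(d_far_obs, d_ref_syn)
rhs = np.convolve(d_far_syn, d_ref_obs)
misfit = float(np.linalg.norm(lhs - rhs))
```

Even though the trial wavelet is badly wrong, the convolved objective is (up to truncation effects) insensitive to it, which is exactly what frees the inversion from the unknown ignition time.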
Multimodal Medical Image Fusion by Adaptive Manifold Filter.
Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna
2015-01-01
Medical image fusion plays an important role in the diagnosis and treatment of diseases, for example in image-guided radiotherapy and surgery. A modified local contrast measure is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced to filter the source images, yielding the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, the mutual information values obtained by the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and the values of the edge-based similarity measure obtained by the presented method are on average 13%, 33%, and 14% higher than those of the three methods for the six pairs of source images.
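The choose-max fusion rule can be sketched as follows. A box blur stands in for the adaptive manifold filter and the contrast measure is simplified, so this mirrors only the structure of the scheme:

```python
import numpy as np

def local_contrast(img, eps=1e-6):
    # Simplified local contrast: detail magnitude over a low-frequency base.
    # A 3x3 box blur stands in for the paper's adaptive manifold filter.
    pad = np.pad(img, 1, mode='edge')
    base = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    detail = np.abs(img - base)
    return detail / (base + eps)

def fuse(a, b):
    # Pick, per pixel, the source with the larger local contrast.
    ca, cb = local_contrast(a), local_contrast(b)
    return np.where(ca >= cb, a, b)

rng = np.random.default_rng(3)
mri = rng.random((12, 12))     # placeholder for one modality
ct = rng.random((12, 12))      # placeholder for the other modality
fused = fuse(mri, ct)
```

Every fused pixel is copied from one of the two sources, which is what preserves the salient structures of each modality.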
Nonuniformity correction of imaging systems with a spatially nonhomogeneous radiation source.
Gutschwager, Berndt; Hollandt, Jörg
2015-12-20
We present a novel method of nonuniformity correction of imaging systems in a wide optical spectral range by applying a radiation source with an unknown and spatially nonhomogeneous radiance or radiance temperature distribution. The benefit of this method is that it can be applied with radiation sources of arbitrary spatial radiance or radiance temperature distribution; it only requires sufficient temporal stability of this distribution during the measurement process. The method is based on the recording of several (at least three) images of a radiation source and a purposeful row and line shift of these subsequent images in relation to the first primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of a predefined nonhomogeneous radiance distribution and a thermal imager of a predefined nonuniform focal-plane-array responsivity is presented.
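The essence of the shift-based correction, that the same scene element seen by different pixels ties their responsivities together, can be demonstrated with a single row shift. The periodic wrap-around and two-image setup below are simplifications of the paper's procedure, which combines row and line shifts from at least three images:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 16
radiance = rng.random((n, n)) + 0.5                    # unknown, nonhomogeneous source
gain_true = 1.0 + 0.2 * rng.standard_normal((n, n))    # unknown pixel responsivity

def record(shift):
    # Image of the (shifted) source seen through the fixed detector gains.
    return gain_true * np.roll(radiance, shift, axis=(0, 1))

img0 = record((0, 0))
img_row = record((1, 0))   # source shifted by one row (periodic wrap, a toy assumption)

# The same scene element is now seen by vertically adjacent pixels, so the
# ratio of the two recordings isolates the ratio of neighbouring gains:
ratio = img_row / np.roll(img0, 1, axis=0)   # = gain[i, j] / gain[i - 1, j]
gain_rel = np.cumprod(ratio, axis=0)         # relative gains, up to one factor per column
```

Chaining the ratios recovers every pixel's responsivity relative to a reference pixel, without ever knowing the source distribution itself.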
Infrared and visible image fusion with spectral graph wavelet transform.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo
2015-09-01
Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and provide a reliable and accurate description of scenes. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of the different source images, but also represents irregular areas of the source images well. On the other hand, a novel weighted average method based on the bilateral filter is proposed to fuse the low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.
Micro-seismic imaging using a source function independent full waveform inversion method
NASA Astrophysics Data System (ADS)
Wang, Hanchen; Alkhalifah, Tariq
2018-03-01
At the heart of micro-seismic event measurements is the task of estimating the locations of micro-seismic events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only demands manual effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and the velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet in the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long wavelength component of the velocity model. By inverting for the source image, source wavelet and the velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and the background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods: averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
Anisotropic microseismic focal mechanism inversion by waveform imaging matching
NASA Astrophysics Data System (ADS)
Wang, L.; Chang, X.; Wang, Y.; Xue, Z.
2016-12-01
The focal mechanism is one of the most important parameters in source inversion, for both natural earthquakes and human-induced seismic events. It has been reported to be useful for understanding stress distribution and evaluating the fracturing effect. The conventional focal mechanism inversion method picks the first-arrival waveform of the P wave. This method assumes a double-couple (DC) source and an isotropic medium, which is usually not the case for induced seismic focal mechanism inversion. For induced seismic events, an inappropriate source or medium model introduces ambiguity or strong simulation errors and seriously reduces the effectiveness of the inversion. First, the focal mechanism contains a significant non-DC source component. Generally, the source contains three components: DC, isotropic (ISO) and compensated linear vector dipole (CLVD), which makes focal mechanisms more complicated. Second, the anisotropy of the medium affects traveltimes and waveforms, biasing the inversion. Focal mechanism inversion is commonly formulated as moment tensor (MT) inversion, where the MT can be decomposed into a combination of DC, ISO and CLVD components. There are two ways to achieve MT inversion. The wavefield migration method is applied to achieve moment tensor imaging; it can image the elements of the MT in 3D space without picking the first arrival, but the retrieved MT values are influenced by the imaging resolution. Alternatively, full waveform inversion is employed to retrieve the MT; the source position and MT can be reconstructed simultaneously, but this approach requires extensive numerical computation, and the source position and MT also influence each other during the inversion. In this paper, the waveform imaging matching (WIM) method is proposed, which combines source imaging with waveform inversion for seismic focal mechanism inversion. Our method uses the 3D tilted transverse isotropic (TTI) elastic wave equation to approximate wave propagation in anisotropic media. First, a source imaging procedure is employed to obtain the source position. Second, we refine a waveform inversion algorithm to retrieve the MT. We also use a microseismic data set recorded in a surface acquisition to test our method.
Open source tools for fluorescent imaging.
Hamilton, Nicholas A
2012-01-01
As microscopy becomes increasingly automated and imaging expands in the spatial and time dimensions, quantitative analysis tools for fluorescent imaging are becoming critical to remove both bottlenecks in throughput as well as fully extract and exploit the information contained in the imaging. In recent years there has been a flurry of activity in the development of bio-image analysis tools and methods with the result that there are now many high-quality, well-documented, and well-supported open source bio-image analysis projects with large user bases that cover essentially every aspect from image capture to publication. These open source solutions are now providing a viable alternative to commercial solutions. More importantly, they are forming an interoperable and interconnected network of tools that allow data and analysis methods to be shared between many of the major projects. Just as researchers build on, transmit, and verify knowledge through publication, open source analysis methods and software are creating a foundation that can be built upon, transmitted, and verified. Here we describe many of the major projects, their capabilities, and features. We also give an overview of the current state of open source software for fluorescent microscopy analysis and the many reasons to use and develop open source methods. Copyright © 2012 Elsevier Inc. All rights reserved.
Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.
Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua
2018-02-01
Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments designed to emulate image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
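The iterative framework, estimate the scatter from the current primary estimate via the physics model, subtract it from the measurement, and repeat, can be sketched with a toy 1-D scatter model (a smoothed fraction of the primary; the paper's analytical model is far more detailed):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
primary_true = rng.random(n) + 1.0        # true scatter-free signal

def scatter_model(primary):
    # Toy physics model: scatter as a smooth 30% fraction of the primary.
    kernel = np.ones(9) / 9.0
    return 0.3 * np.convolve(primary, kernel, mode='same')

measured = primary_true + scatter_model(primary_true)

# Iterative correction: re-estimate scatter from the current primary
# estimate and subtract it from the measurement.
primary = measured.copy()
for _ in range(10):
    primary = measured - scatter_model(primary)

err = float(np.max(np.abs(primary - primary_true)))
```

Because the toy scatter operator shrinks signals (30% of a local average), the fixed-point iteration contracts and the estimate converges geometrically toward the scatter-free signal, mirroring the fast convergence reported in the abstract.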
Computation of nonlinear ultrasound fields using a linearized contrast source method.
Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A
2013-08-01
Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
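The convergence problem and its remedy can be seen in miniature: a Neumann iteration for (I - K)u = u0 diverges once the contrast operator's spectral radius exceeds one, while solving the linearized system with a proper linear solver (the paper uses Bi-CGSTAB) does not. The diagonal K below is a deliberately simple stand-in for the discretized contrast-source operator:

```python
import numpy as np

n = 50
# Toy discretized integral equation (I - K) u = u0, with K the (linearized)
# contrast-source operator. Spectral radius 1.3 models a "strong" contrast.
K = np.diag(np.linspace(-1.3, 1.3, n))
u0 = np.ones(n)

# Neumann iteration u_{k+1} = u0 + K u_k diverges for spectral radius > 1...
u = u0.copy()
for _ in range(50):
    u = u0 + K @ u
neumann_norm = float(np.linalg.norm(u))

# ...while solving the linear system directly succeeds; the INCS extension
# applies Bi-CGSTAB to this step on much larger problems.
u_solve = np.linalg.solve(np.eye(n) - K, u0)
residual = float(np.linalg.norm((np.eye(n) - K) @ u_solve - u0))
```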
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1997-07-01
We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector ``ribs,'' a strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10^4 s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it to various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission).
The performance of our method on these images is satisfactory and surpasses that of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.
MIMO nonlinear ultrasonic tomography by propagation and backpropagation method.
Dong, Chengdong; Jin, Yuanwei
2013-03-01
This paper develops a fast ultrasonic tomographic imaging method in a multiple-input multiple-output (MIMO) configuration using the propagation and backpropagation (PBP) method. In this method, ultrasonic excitation signals from multiple sources are transmitted simultaneously to probe objects immersed in the medium. The scattering signals are recorded by multiple receivers. Utilizing the nonlinear ultrasonic wave propagation equation and the received time-domain scattered signals, the objects are reconstructed iteratively in three steps. First, the propagation step calculates the predicted acoustic potential data at the receivers using an initial guess. Second, the difference signal between the predicted values and the measured data is calculated. Third, the backpropagation step computes updated acoustic potential data by computationally backpropagating the difference signal through the same medium. Unlike the conventional PBP method for tomographic imaging, where each source takes turns exciting the acoustic field until all the sources are used, the developed MIMO-PBP method achieves faster image reconstruction by utilizing simultaneous multiple-source excitation. Furthermore, we develop an orthogonal waveform signaling method using a waveform delay scheme to reduce the impact of speckle patterns in the reconstructed images. Through numerical experiments we demonstrate that the proposed MIMO-PBP tomographic imaging method results in faster convergence and achieves superior imaging quality.
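The three-step PBP loop is structurally a gradient (Landweber-type) iteration. With a toy linear forward operator standing in for the nonlinear propagation step, it looks like:

```python
import numpy as np

rng = np.random.default_rng(7)
m_rx, n_pix = 40, 10
A = rng.standard_normal((m_rx, n_pix)) / np.sqrt(m_rx)  # toy forward (propagation) operator
obj_true = rng.random(n_pix)              # object (acoustic potential) to recover
data = A @ obj_true                       # receiver recordings, all sources at once

obj = np.zeros(n_pix)
step = 0.5 / np.linalg.norm(A, 2) ** 2    # stable step size for the iteration
for _ in range(5000):
    predicted = A @ obj                   # 1) propagation: predict receiver data
    diff = predicted - data               # 2) difference with the measurements
    obj -= step * (A.T @ diff)            # 3) backpropagation: update the object

err = float(np.linalg.norm(obj - obj_true))
```

In the actual method the forward and adjoint operators are nonlinear wave propagations, and the MIMO speed-up comes from all sources contributing to `data` in a single shot rather than one source at a time.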
Information theoretic approach for assessing image fidelity in photon-counting arrays.
Narravula, Srikanth R; Hayat, Majeed M; Javidi, Bahram
2010-02-01
The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source image's entropy, yielding a fidelity metric that lies between zero and unity, which respectively correspond to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in the source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases with the reduction in photon-number uncertainty. As an application of the theory, an image-classification problem is considered, showing a congruous relationship between the fidelity metric and the classifier's performance.
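The fidelity metric, mutual information between source and photon-counted images normalized by the source entropy, can be computed from histograms. The discrete 2-bit source and the 10% corruption channel below are illustrative assumptions, not the paper's Markov random field model:

```python
import numpy as np

def entropy(p):
    # Shannon entropy in bits of a probability vector.
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(8)
source = rng.integers(0, 4, size=10000)             # 2-bit source image pixels
noisy = np.where(rng.random(10000) < 0.9, source,   # photon-counted version:
                 rng.integers(0, 4, size=10000))    # ~10% of pixels corrupted

# Joint and marginal histograms -> mutual information.
joint = np.histogram2d(source, noisy, bins=[4, 4])[0] / source.size
px = joint.sum(axis=1)
py = joint.sum(axis=0)
mi = entropy(px) + entropy(py) - entropy(joint.ravel())

# Fidelity metric: MI normalized by the source entropy, in [0, 1].
fidelity = mi / entropy(px)
```

A fidelity of 1 means the photon-counted image preserves all source information; heavier photon starvation drives the metric toward 0.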
3D Seismic Imaging using Marchenko Methods
NASA Astrophysics Data System (ADS)
Lomas, A.; Curtis, A.
2017-12-01
Marchenko methods are novel, data-driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain both the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000 m/s). Along the surface of this model (z=0), in both the x and y directions, are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source at (1200 m, 500 m, 400 m) to the surface. For comparison, the true solution is given in figure (c), which shows a good match to figure (b).
While these 2D redatuming and imaging methods are still in their infancy, having first been developed in 2012, we have extended them to 3D media and wavefields. We show that while wavefield effects may be more complex in 3D, Marchenko methods remain valid, and 3D images that are free of multiple-related artefacts are a realistic possibility.
A reference estimator based on composite sensor pattern noise for source device identification
NASA Astrophysics Data System (ADS)
Li, Ruizhe; Li, Chang-Tsun; Guan, Yu
2014-02-01
It has been shown that Sensor Pattern Noise (SPN) can serve as an imaging-device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within the framework of this application. Most previous works built the reference SPN by averaging the SPNs extracted from 50 blue-sky images. However, this approach can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of both the imaging cameras and reference SPNs, meaning that only natural images with scene details, rather than blue-sky images, are available for reference SPN estimation. This is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods do not take the number of available reference images into account, as they were designed for datasets with abundant images for reference SPN estimation. To deal with these problems, a novel reference estimator is proposed in this work. Experimental results show that the proposed method achieves better performance than methods based on the averaged reference SPN, especially when few reference images are used.
Forward model with space-variant of source size for reconstruction on X-ray radiographic image
NASA Astrophysics Data System (ADS)
Liu, Jin; Liu, Jun; Jing, Yue-feng; Xiao, Bo; Wei, Cai-hua; Guan, Yong-hong; Zhang, Xuan
2018-03-01
The Forward Imaging Technique is a method for solving the inverse problem of density reconstruction in radiographic imaging. In this paper, we introduce the forward projection equation (IFP model) for a radiographic system with areal source blur and detector blur. Our forward projection equation, based on X-ray tracing, is combined with the Constrained Conjugate Gradient method to form a new method for density reconstruction. We demonstrate the effectiveness of the new technique by reconstructing density distributions from simulated and experimental images. We show that for radiographic systems with source sizes larger than the pixel size, the effect of blur on the density reconstruction is reduced by our method and can be controlled within one or two pixels. The method is also suitable for the reconstruction of non-homogeneous objects.
Super-contrast photoacoustic resonance imaging
NASA Astrophysics Data System (ADS)
Gao, Fei; Zhang, Ruochong; Feng, Xiaohua; Liu, Siyu; Zheng, Yuanjin
2018-02-01
In this paper, a new imaging modality, named photoacoustic resonance imaging (PARI), is proposed and experimentally demonstrated. In contrast to the wideband PA signal induced by a conventional single nanosecond laser pulse, the proposed PARI method utilizes a multi-burst modulated laser source to induce a PA resonant signal with enhanced signal strength and narrower bandwidth. Moreover, imaging contrast is clearly improved over conventional single-pulse laser-based PA imaging by selecting the optimum modulation frequency of the laser source, which exploits physical properties of different materials beyond the optical absorption coefficient. Specifically, the imaging steps are as follows. 1: Perform conventional PA imaging by modulating the laser source as a short pulse to identify the locations of the target and the background. 2: Shine the modulated laser beam on the background and target respectively to characterize their individual resonance frequencies by sweeping the modulation frequency of the CW laser source. 3: Select the resonance frequency of the target as the modulation frequency of the laser source, perform imaging, and obtain the first PARI image. Then choose the resonance frequency of the background as the modulation frequency of the laser source, perform imaging, and obtain the second PARI image. 4: Subtract the first PARI image from the second PARI image to obtain the contrast-enhanced PARI result relative to the conventional PA imaging of step 1. Experimental validation on phantoms has been performed to show the merits of the proposed PARI method, with much improved image contrast.
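Steps 3-4 can be sketched with a toy resonance model; the Lorentzian responses, resonance frequencies and quality factor below are illustrative assumptions, not measured tissue properties:

```python
import numpy as np

# Toy scene: 1 marks the target, 0 the background.
scene = np.zeros((8, 8))
scene[3:5, 3:5] = 1.0

def pari_image(scene, f_mod, f_target=2.0, f_bg=5.0, q=10.0):
    """Assumed Lorentzian resonance response of each pixel to modulation f_mod."""
    resp_t = 1.0 / (1.0 + q * (f_mod - f_target) ** 2)   # target response
    resp_b = 1.0 / (1.0 + q * (f_mod - f_bg) ** 2)       # background response
    return scene * resp_t + (1 - scene) * resp_b

img_t = pari_image(scene, f_mod=2.0)   # step 3a: modulate at target resonance
img_b = pari_image(scene, f_mod=5.0)   # step 3b: modulate at background resonance
diff = img_t - img_b                   # step 4: contrast-enhanced difference image
```

In the difference image the target and background push in opposite directions, so the target/background separation exceeds what either single-frequency image provides.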
NASA Astrophysics Data System (ADS)
Gutschwager, Berndt; Hollandt, Jörg
2017-01-01
We present a novel method of nonuniformity correction (NUC) of infrared cameras and focal plane arrays (FPA) in a wide optical spectral range by reading radiance temperatures and by applying a radiation source with an unknown and spatially nonhomogeneous radiance temperature distribution. The benefit of this novel method is that it works with the display and calculation of radiance temperatures, it can be applied to radiation sources of arbitrary spatial radiance temperature distribution, and it only requires sufficient temporal stability of this distribution during the measurement process. In contrast, a previously presented method calculated the NUC from readings of monitored radiance values. Both methods are based on the recording of several (at least three) images of a radiation source and a purposeful row- and line-shift of these subsequent images relative to the first primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of predefined nonhomogeneous radiance temperature distribution and a thermal imager of predefined nonuniform FPA responsivity is presented.
Photometric Calibration of Consumer Video Cameras
NASA Technical Reports Server (NTRS)
Suggs, Robert; Swift, Wesley, Jr.
2007-01-01
Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure the brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
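The response-curve idea can be sketched as follows, assuming a toy saturating camera model; the model and its parameters are illustrative, and this sketch covers only interpolation within the recorded range, not the extension beyond saturation described above:

```python
import numpy as np

def camera(b, full_well=100.0):
    """Assumed nonlinear camera response with soft saturation."""
    return full_well * (1.0 - np.exp(-b / full_well))

# Calibration pass: record a source of known, varying brightness and
# tabulate the system response curve (measured signal vs. input brightness).
brightness = np.linspace(0.0, 500.0, 200)   # known calibration brightnesses
signal = camera(brightness)                  # measured integrated signals

def calibrate(measured):
    """Invert the tabulated response curve by 1-D interpolation."""
    return np.interp(measured, signal, brightness)

# Frame-by-frame use: recover an unknown brightness from its measured signal.
est = calibrate(camera(123.0))
```

Because the calibration tabulates the measured end-to-end response rather than assuming linearity, any monotonic distortion of the signal chain is absorbed into the curve.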
Object-oriented remote sensing image classification method based on geographic ontology model
NASA Astrophysics Data System (ADS)
Chu, Z.; Liu, Z. J.; Gu, H. Y.
2016-11-01
Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of optimizing classification results through algorithmic improvements alone. To this end, the paper proposes a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment for urban feature classification. The experiment uses the Protégé software, developed by Stanford University in the United States, and the intelligent image analysis software eCognition as the experimental platform, with hyperspectral imagery and Lidar data acquired by flight over DaFeng City, JiangSu, as the main data sources. First, the hyperspectral imagery is used to obtain feature knowledge of the remote sensing image and related spectral indices. Second, the Lidar data are used to generate an nDSM (Normalized Digital Surface Model) to obtain elevation information. Finally, the image feature knowledge, spectral indices, and elevation information are combined to build the geographic ontology semantic network model that performs urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, and performs particularly well on building classification.
The method not only exploits the advantages of multi-source spatial data (e.g., remote sensing imagery and Lidar data) but also integrates knowledge from these sources and applies it to remote sensing image classification, providing an effective approach for object-oriented remote sensing image classification in the future.
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.
2017-02-01
Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and are optimized for source detection and flux measurement. We show that for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition will result in a loss of sensitivity. We argue that our method provides an increase of between a few per cent and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method that is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star-galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
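The prescription (matched-filter each image with its own PSF, then sum with appropriate weights) can be sketched in 1-D; the PSF widths, noise levels, source flux and the simple inverse-variance weighting below are illustrative assumptions, not the paper's exact weights:

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Unit-sum Gaussian PSF on an n-sample grid."""
    x = np.arange(n) - n // 2
    p = np.exp(-0.5 * (x / sigma) ** 2)
    return p / p.sum()

n = 101
rng = np.random.default_rng(1)
delta = np.zeros(n)
delta[n // 2] = 50.0                          # point source at the center pixel

psfs = [gaussian_psf(n, 2.0), gaussian_psf(n, 4.0)]   # seeing varies per epoch
noise_sigmas = (1.0, 3.0)                              # background noise varies too
images = [np.convolve(delta, p, mode="same") + rng.normal(0.0, s, n)
          for p, s in zip(psfs, noise_sigmas)]

# Matched-filter each image with ITS OWN PSF, then sum with 1/variance weights.
filtered = [np.correlate(im, p, mode="same") / s ** 2
            for im, p, s in zip(images, psfs, noise_sigmas)]
coadd = np.sum(filtered, axis=0)              # peak marks the source position
```

Filtering before coaddition lets the sharp, low-noise epoch dominate, rather than letting the worst seeing set the effective PSF of the stack.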
NASA Technical Reports Server (NTRS)
Fares, Nabil; Li, Victor C.
1986-01-01
An image method algorithm is presented for the derivation of elastostatic solutions for point sources in bonded half-spaces, assuming the corresponding infinite-space point-source solution is known. Specific cases were worked out and shown to coincide with well-known solutions in the literature.
An efficient method to compute microlensed light curves for point sources
NASA Technical Reports Server (NTRS)
Witt, Hans J.
1993-01-01
We present a method to compute microlensed light curves for point sources. This method has the general advantage that all microimages contributing to the light curve are found. While a source moves along a straight line, all microimages are located either on the primary image track or on the secondary image tracks (loops). The primary image track extends from -infinity to +infinity and is made of many segments which are continuously connected. All the secondary image tracks (loops) begin and end on the lensing point masses. The method can be applied to any microlensing situation with point masses in the deflector plane, even for the overcritical case and for surface densities close to the critical value. Furthermore, we present general rules to evaluate the light curve for a straight track arbitrarily placed in the caustic network of a sample of many point masses.
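For context, the simplest special case, a single point-mass lens producing two microimages, already has a closed-form light curve for a straight source track. The parameters below are illustrative; the paper's method handles full caustic networks of many point masses, which this sketch does not:

```python
import numpy as np

def magnification(u):
    """Total magnification of the two images of a single point-mass lens,
    with u the source-lens separation in Einstein radii."""
    return (u ** 2 + 2) / (u * np.sqrt(u ** 2 + 4))

t = np.linspace(-2.0, 2.0, 401)        # time in Einstein-crossing units
u0 = 0.3                               # impact parameter of the straight track
u = np.sqrt(u0 ** 2 + t ** 2)          # separation along the track
A = magnification(u)                   # the microlensing light curve
```

The curve peaks at closest approach (u = u0) and the magnification always exceeds unity; with many point masses the analogue of this evaluation along the track is exactly what the paper's image-track bookkeeping makes tractable.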
Method for image reconstruction of moving radionuclide source distribution
Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick
2012-12-18
A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.
Method for large and rapid terahertz imaging
Williams, Gwyn P.; Neil, George R.
2013-01-29
A method of large-scale active THz imaging using a combination of a compact high-power THz source (>1 watt), an optional optical system, and a camera for the detection of reflected or transmitted THz radiation, without the need for the burdensome power source or detector cooling systems required by similar prior-art devices. With such a system, one is able to image, for example, a whole person in seconds or less, whereas at present, using low-power sources and scanning techniques, it takes several minutes or even hours to image even a 1 cm × 1 cm area of skin.
Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D
2010-10-01
Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.
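The numerical-inversion idea can be sketched as follows: if Monte Carlo transport supplies the pinhole system's response matrix P (image = P @ source), the source is recovered by solving the linear system. P below is an assumed toy blur matrix, not an actual neutron-transport calculation, and real, noisy data would additionally require regularization:

```python
import numpy as np

n = 32
x = np.arange(n)
# Assumed pinhole response: each source pixel spreads into a Gaussian on
# the image plane; rows are normalized so P conserves total signal.
P = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 1.5) ** 2)
P /= P.sum(axis=1, keepdims=True)

source = np.zeros(n)
source[10:14] = 1.0                 # toy extended emission region
image = P @ source                  # simulated blurred pinhole image

recon = np.linalg.solve(P, image)   # noiseless inversion recovers the source
```

The quality of the recovered source is governed by the conditioning of P, which is why pinhole misalignment (which changes P away from its assumed form) is the failure mode the article tests.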
Background derivation and image flattening: getimages
NASA Astrophysics Data System (ADS)
Men'shchikov, A.
2017-11-01
Modern high-resolution images obtained with space observatories display extremely strong intensity variations across images on all spatial scales. Source extraction in such images with methods based on global thresholding may bring unacceptably large numbers of spurious sources in bright areas while failing to detect sources in low-background or low-noise areas. It would be highly beneficial to subtract background and equalize the levels of small-scale fluctuations in the images before extracting sources or filaments. This paper describes getimages, a new method of background derivation and image flattening. It is based on median filtering with sliding windows that correspond to a range of spatial scales from the observational beam size up to a maximum structure width X_λ. The latter is the single free parameter of getimages and can be evaluated manually from the observed image I_λ. The median filtering algorithm provides a background image B_λ for structures of all widths below X_λ. The same median filtering procedure applied to an image of standard deviations D_λ, derived from the background-subtracted image S_λ, results in a flattening image F_λ. Finally, a flattened detection image I_λD = S_λ/F_λ is computed, whose standard deviations are uniform outside sources and filaments. Detecting sources in such greatly simplified images results in much cleaner extractions that are more complete and reliable. As a bonus, getimages reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images.
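The flattening chain can be sketched in 1-D, assuming scipy for the sliding-window filters; the single window size and the toy image below are illustrative, not the paper's multi-scale scheme:

```python
import numpy as np
from scipy.ndimage import generic_filter, median_filter

rng = np.random.default_rng(2)
x = np.arange(200)
background = 50.0 + 0.3 * x              # large-scale background gradient
noise = rng.normal(0.0, 1.0 + 0.02 * x)  # spatially varying noise level
image = background + noise
image[100:103] += 25.0                   # a compact "source"

bkg = median_filter(image, size=41)            # background image (B)
s = image - bkg                                # background-subtracted image (S)
d = generic_filter(s, np.std, size=41)         # local standard deviations (D -> F)
detection = s / d                              # flattened detection image
```

After division by the local-fluctuation image, the noise level of `detection` is roughly unity everywhere, so a single global threshold becomes meaningful across the whole map.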
Microseismic source locations with deconvolution migration
NASA Astrophysics Data System (ADS)
Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu
2018-03-01
Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resources exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and the robustness by eliminating the squared source-wavelet term from CCM. The proposed algorithm consists of the following three steps: (1) generate the virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location; and (3) stack all of these images together to obtain the final estimated image of the source location. We test the proposed method on complex synthetic and field data sets from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method obtains a 50 per cent higher spatial-resolution image of the source location and a more robust estimate with smaller localization errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
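Step (1) can be sketched in the frequency domain: deconvolving the master trace from another trace cancels both the unknown excitation time and the source wavelet, leaving only the inter-receiver traveltime difference. The wavelet shape, water-level regularization and delays below are illustrative assumptions:

```python
import numpy as np

n = 512
t = np.arange(n)

def trace(arrival):
    """Toy recording: a Ricker-like wavelet arriving at sample `arrival`."""
    s = (t - arrival) / 6.0
    return (1.0 - s ** 2) * np.exp(-0.5 * s ** 2)

master = trace(100)   # unknown absolute excitation time baked into arrival
other = trace(140)    # same wavelet, 40 samples later at another receiver

# Water-level deconvolution: other / master in the frequency domain.
F_m, F_o = np.fft.rfft(master), np.fft.rfft(other)
eps = 1e-3 * np.max(np.abs(F_m)) ** 2
virt = np.fft.irfft(F_o * np.conj(F_m) / (np.abs(F_m) ** 2 + eps), n)
```

The virtual trace peaks at the lag equal to the traveltime difference (40 samples here), with the wavelet's squared spectrum cancelled rather than squared as in plain cross-correlation.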
Sonar Imaging of Elastic Fluid-Filled Cylindrical Shells.
NASA Astrophysics Data System (ADS)
Dodd, Stirling Scott
1995-01-01
Previously, a method of describing spherical acoustic waves in cylindrical coordinates was applied to the problem of point source scattering by an elastic infinite fluid-filled cylindrical shell (S. Dodd and C. Loeffler, J. Acoust. Soc. Am. 97, 3284(A) (1995)). This method is applied to numerically model monostatic oblique-incidence scattering from a truncated cylinder by a narrow-beam high-frequency imaging sonar. The narrow-beam solution results from integrating the point source solution over the spatial extent of a line source and line receiver. The cylinder truncation is treated by the method of images, assuming that the reflection coefficient at the truncation is unity. The scattering form functions calculated using this method are applied as filters to a narrow-bandwidth, high-ka pulse to find the time domain scattering response. The time domain pulses are further processed and displayed in the form of a sonar image. These images compare favorably to experimentally obtained images (G. Kaduchak and C. Loeffler, J. Acoust. Soc. Am. 97, 3289(A) (1995)). The impact of the s₀ and a₀ Lamb waves is vividly apparent in the images.
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used a multi-atlas label fusion strategy, which has the limitation of treating the different available image modalities equally and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
Raw data normalization for a multi source inverse geometry CT system
Baek, Jongduk; De Man, Bruno; Harrison, Daniel; Pelc, Norbert J.
2015-01-01
A multi-source inverse-geometry CT (MS-IGCT) system consists of a small 2D detector array and multiple x-ray sources. During data acquisition, each source is activated sequentially, and may have random intensity fluctuations relative to its nominal intensity. While a conventional 3rd-generation CT system uses a reference channel to monitor source intensity fluctuations, each source in the MS-IGCT system illuminates only a small portion of the entire field-of-view (FOV). Therefore, it is difficult for all sources to illuminate the reference channel, and the projection data computed by standard normalization using the flat-field data of each source contain errors that can cause significant artifacts. In this work, we present a raw data normalization algorithm to reduce the image artifacts caused by source intensity fluctuation. The proposed method was tested using computer simulations with a uniform water phantom and a Shepp-Logan phantom, and experimental data of an ice-filled PMMA phantom and a rabbit. The effects on image resolution and on robustness to noise were tested using the MTF and the standard deviation of the reconstructed noise image. With intensity fluctuation and no correction, reconstructed images from simulation and experimental data show high-frequency artifacts and ring artifacts, which are removed effectively by the proposed method. It is also observed that the proposed method does not degrade the image resolution and is very robust to the presence of noise. PMID:25837090
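The core difficulty can be illustrated with a toy model of standard flat-field normalization under per-source intensity fluctuation; the fluctuation statistics below are assumptions for demonstration, and the paper's correction algorithm itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
n_src, n_det = 8, 16
mu_t = rng.uniform(0.2, 1.0, size=(n_src, n_det))   # true line integrals
I0 = 1e5                                             # nominal flat-field intensity
f = 1.0 + rng.normal(0.0, 0.02, size=n_src)          # unknown per-source fluctuation

I = I0 * f[:, None] * np.exp(-mu_t)   # measured counts per source view
p = -np.log(I / I0)                   # standard flat-field normalization
err = p - mu_t                        # residual: a constant -ln(f_k) per source
```

The residual is a per-view DC offset of -ln(f_k), which is exactly the kind of view-consistent error that reconstructs into ring and high-frequency artifacts when left uncorrected.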
Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm
NASA Astrophysics Data System (ADS)
Sarika, G.; Unnithan, Harikuttan; Peter, Smitha
2011-10-01
When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real-world sources such as images, the use of Markov properties in the Slepian-Wolf decoder does not work well for grayscale images. In this paper we propose a method for compressing an encrypted image. In the encoder section, the image is first encrypted and then undergoes compression in resolution. The cipher function scrambles only the pixel values but does not shuffle the pixel locations. After down-sampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder section, and is recovered using local image statistics. Here the decoder receives only a lower-resolution version of the image. In addition, this method provides partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. This scheme provides better coding efficiency and lower computational complexity.
Apparatus and method for high dose rate brachytherapy radiation treatment
Macey, Daniel J.; Majewski, Stanislaw; Weisenberger, Andrew G.; Smith, Mark Frederick; Kross, Brian James
2005-01-25
A method and apparatus for the in vivo location and tracking of a radioactive seed source during and after brachytherapy treatment. The method comprises obtaining multiple views of the seed source in a living organism using: 1) a single PSPMT detector that is exposed through a multiplicity of pinholes thereby obtaining a plurality of images from a single angle; 2) a single PSPMT detector that may obtain an image through a single pinhole or a plurality of pinholes from a plurality of angles through movement of the detector; or 3) a plurality of PSPMT detectors that obtain a plurality of views from different angles simultaneously or virtually simultaneously. The plurality of images obtained from these various techniques, through angular displacement of the various acquired images, provide the information required to generate the three dimensional images needed to define the location of the radioactive seed source within the body of the living organism.
Single-random-phase holographic encryption of images
NASA Astrophysics Data System (ADS)
Tsang, P. W. M.
2017-02-01
In this paper, a method is proposed for encrypting an optical image onto a phase-only hologram, utilizing a single random phase mask as the private encryption key. The encryption process can be divided into three stages. First, the source image to be encrypted is scaled in size and pasted onto an arbitrary position in a larger global image. The remaining areas of the global image that are not occupied by the source image can be filled with randomly generated content. As such, the global image as a whole is very different from the source image, while the visual quality of the source image is preserved. Second, a digital Fresnel hologram is generated from the new image and converted into a phase-only hologram based on bi-directional error diffusion. In the final stage, a fixed random phase mask is added to the phase-only hologram as the private encryption key. In the decryption process, the global image, together with the source image it contains, can be reconstructed from the phase-only hologram if it is overlaid with the correct decryption key. The proposed method is highly resistant to various forms of plain-text attacks, which are commonly used to deduce the encryption key in existing holographic encryption processes. In addition, both the encryption and decryption processes are simple and easy to implement.
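The final stage and its inverse can be sketched as modular phase arithmetic; the hologram below is a random stand-in, not one computed by Fresnel propagation and error diffusion as in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (64, 64)
hologram_phase = rng.uniform(0.0, 2 * np.pi, shape)  # phase-only hologram
key = rng.uniform(0.0, 2 * np.pi, shape)             # private random phase mask

# Encryption: add the key phase, wrapping to [0, 2*pi).
encrypted = np.mod(hologram_phase + key, 2 * np.pi)

# Decryption: overlaying the correct key recovers the hologram phase.
decrypted = np.mod(encrypted - key, 2 * np.pi)
```

Because the key is a full-size uniform random phase, the encrypted hologram is itself uniformly random, which is what frustrates plain-text attacks that try to isolate the key from known input/output pairs.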
The Utility of the Extended Images in Ambient Seismic Wavefield Migration
NASA Astrophysics Data System (ADS)
Girard, A. J.; Shragge, J. C.
2015-12-01
Active-source 3D seismic migration and migration velocity analysis (MVA) are robust and widely used methods for imaging Earth structure. One class of migration methods uses extended images constructed by incorporating spatial and/or temporal wavefield correlation lags into the imaging condition. These extended images allow users to directly assess whether images focus better with different parameters, which leads to MVA techniques that are based on the tenets of adjoint-state theory. Under certain conditions (e.g., geographical, cultural or financial), however, active-source methods can prove impractical. Utilizing ambient seismic energy that naturally propagates through the Earth is an alternate method currently used in the scientific community. Thus, an open question is whether extended images are similarly useful for ambient seismic migration processing and verifying subsurface velocity models, and whether one can similarly apply adjoint-state methods to perform ambient migration velocity analysis (AMVA). Herein, we conduct a number of numerical experiments that construct extended images from ambient seismic recordings. We demonstrate that, similar to active-source methods, there is a sensitivity to velocity in ambient seismic recordings in the migrated extended image domain. In synthetic ambient imaging tests with varying degrees of error introduced to the velocity model, the extended images are sensitive to velocity model errors. To determine the extent of this sensitivity, we utilize acoustic wave-equation propagation and cross-correlation-based migration methods to image weak body-wave signals present in the recordings. Importantly, we have also observed scenarios where non-zero correlation lags show signal while zero-lags show none. This may be a valuable missing piece for ambient migration techniques that have yielded largely inconclusive results, and might be an important piece of information for performing AMVA from ambient seismic recordings.
Multisource least-squares reverse-time migration with structure-oriented filtering
NASA Astrophysics Data System (ADS)
Fan, Jing-Wen; Li, Zhen-Chun; Zhang, Kai; Zhang, Min; Liu, Xue-Tong
2016-09-01
Simultaneous-source acquisition, in which seismic data are excited by several sources at once, can significantly improve data-collection efficiency. However, direct imaging of simultaneous-source (blended) data may introduce crosstalk noise and degrade the imaging quality. To address this problem, we introduce a structure-oriented filtering operator as a preconditioner into multisource least-squares reverse-time migration (LSRTM). The structure-oriented filtering operator is a nonstationary filter along structural trends that suppresses crosstalk noise while preserving structural information. The proposed method uses the conjugate-gradient method to minimize the mismatch between predicted and observed data, while effectively attenuating the interference noise caused by exciting several sources simultaneously. Numerical experiments using synthetic data suggest that the proposed method can suppress the crosstalk noise and produce highly accurate images.
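The data-misfit minimization can be sketched with a generic conjugate-gradient least-squares (CGLS) loop; the `precondition` hook marks where a structure-oriented filter would be applied to the gradient. The operator and data below are toy stand-ins, not seismic quantities:

```python
import numpy as np

def cg_least_squares(L, d, n_iter=100, tol=1e-20, precondition=None):
    """CGLS: conjugate gradients on the normal equations L^T L m = L^T d."""
    m = np.zeros(L.shape[1])
    r = d - L @ m                      # data residual
    s = L.T @ r                        # gradient of 0.5 * ||L m - d||^2
    if precondition is not None:
        s = precondition(s)            # e.g. structure-oriented smoothing
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        if gamma < tol:                # converged
            break
        q = L @ p
        alpha = gamma / (q @ q)
        m += alpha * p
        r -= alpha * q
        s = L.T @ r
        if precondition is not None:
            s = precondition(s)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

# toy stand-in: an overdetermined linear system with exact data
rng = np.random.default_rng(0)
L = rng.standard_normal((40, 10))
m_true = rng.standard_normal(10)
m_est = cg_least_squares(L, L @ m_true)
```

With consistent data the loop recovers the model to numerical precision; with blended data the preconditioner would damp crosstalk in the gradient at each iteration.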
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The choice of coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
Source detection in astronomical images by Bayesian model comparison
NASA Astrophysics Data System (ADS)
Frean, Marcus; Friedlander, Anna; Johnston-Hollitt, Melanie; Hollitt, Christopher
2014-12-01
The next generation of radio telescopes will generate exabytes of data on hundreds of millions of objects, making automated methods for the detection of astronomical objects ("sources") essential. Of particular importance are faint, diffuse objects embedded in noise. There is a pressing need for source-finding software that identifies these sources, involves little manual tuning, yet remains tractable to compute. We first give a novel image discretisation method that incorporates uncertainty about how an image should be discretised. We then propose a hierarchical prior for astronomical images, which leads to a Bayes factor indicating how well a given region conforms to a deliberately unconstrained source model, compared to a model of the background. This enables the efficient localisation of regions that are "suspiciously different" from the background distribution, so our method looks not for brightness but for anomalous distributions of intensity, which is much more general. The background model can be iteratively improved by removing from it the influence of sources as they are discovered. The approach is evaluated by identifying sources in real and simulated data, and performs well: the Bayes factor is maximized at most real objects, while returning only a moderate number of false positives. In comparison to a catalogue constructed by widely used source-detection software with manual post-processing by an astronomer, our method found a number of dim sources that were missing from the "ground truth" catalogue.
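The idea of a Bayes factor comparing an unconstrained source model against a fixed background can be illustrated with a deliberately simplified conjugate toy: pixels in a region share an unknown Poisson rate with a broad Gamma prior, versus a known background rate. This is only a sketch of the model comparison, not the paper's hierarchical multiscale prior:

```python
import math

def log_bayes_factor(counts, bg_rate, a=1.0, beta=0.1):
    """Log Bayes factor: flexible source model vs. fixed-rate background.

    Background: every pixel is Poisson(bg_rate), so the total count N over
    n pixels is Poisson(n * bg_rate).  Source: the pixels share an unknown
    rate with a broad Gamma(a, beta) prior (shape a, rate beta); with the
    rate marginalised out, N follows a negative-binomial distribution.
    """
    n = len(counts)
    N = sum(counts)
    mu = n * bg_rate
    log_bg = N * math.log(mu) - mu - math.lgamma(N + 1)
    log_src = (math.lgamma(N + a) - math.lgamma(a) - math.lgamma(N + 1)
               + a * math.log(beta / (beta + n))
               + N * math.log(n / (beta + n)))
    return log_src - log_bg

bright_region = log_bayes_factor([10] * 25, bg_rate=2.0)  # source-like
flat_region = log_bayes_factor([2] * 25, bg_rate=2.0)     # background-like
```

A positive log Bayes factor flags a region as "suspiciously different" from the background; a negative one leaves it with the background model.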
Guided filter-based fusion method for multiexposure images
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei
2016-11-01
It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range camera. A weighted-sum-based image fusion (IF) algorithm is proposed to express an HDR scene with a single high-quality image. The method comprises three main parts. First, two image features, gradient and well-exposedness, are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is used as the guidance image. This step reduces noise in the initial weight maps and preserves texture consistent with the original images. Finally, the fused image is constructed as a weighted sum of the source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of guided filter-based weight-map refinement, which provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
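The three parts can be sketched for grayscale exposures: per-image weights from gradient magnitude and well-exposedness, edge-aware refinement with a single-channel guided filter, and a normalized weighted sum. The parameter values (window radius, well-exposedness width) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1) x (2r+1) window via an integral image (edge-padded)."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    size = 2 * r + 1
    return (c[size:, size:] - c[:-size, size:]
            - c[size:, :-size] + c[:-size, :-size]) / size ** 2

def guided_filter(I, p, r=8, eps=1e-3):
    """Grayscale guided filter: smooth p while following the edges of guide I."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)

def fuse(images, r=8, eps=1e-3, sigma=0.2):
    """Weighted-sum exposure fusion of grayscale images in [0, 1]."""
    weights = []
    for im in images:
        gy, gx = np.gradient(im)
        grad = np.hypot(gx, gy)                          # gradient feature
        wexp = np.exp(-0.5 * ((im - 0.5) / sigma) ** 2)  # well-exposedness
        weights.append(guided_filter(im, grad * wexp, r, eps))
    W = np.clip(np.stack(weights), 1e-12, None)          # keep weights positive
    W /= W.sum(axis=0)                                   # normalise per pixel
    return (W * np.stack(images)).sum(axis=0)

# two synthetic exposures of the same ramp scene
ramp = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
under = np.clip(0.5 * ramp, 0, 1)
over = np.clip(0.5 * ramp + 0.5, 0, 1)
fused = fuse([under, over])
```

Because the normalized weights form a convex combination at each pixel, the fused value always lies between the corresponding input exposures.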
NASA Technical Reports Server (NTRS)
Cramer, K. Elliott (Inventor); Winfree, William P. (Inventor)
1999-01-01
A method and a portable apparatus for the nondestructive identification of defects in structures. The apparatus comprises a heat source and a thermal imager that move at a constant speed past a test surface of a structure. The thermal imager is offset at a predetermined distance from the heat source. The heat source induces a constant surface temperature. The imager follows the heat source and produces a video image of the thermal characteristics of the test surface. Material defects produce deviations from the constant surface temperature that move at the inverse of the constant speed, whereas thermal noise produces deviations that move at random speeds. Computer averaging of the digitized thermal image data with respect to the constant speed minimizes noise and improves the signal of valid defects. The motion of the thermographic equipment, coupled with the high signal-to-noise ratio, renders the apparatus suitable for portable, on-site analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, X; Lei, Y; Zheng, D
2016-06-15
Purpose: High-dose-rate (HDR) brachytherapy poses a special challenge to radiation safety and quality assurance (QA) due to its high radioactivity, so it is critical to verify the HDR source location and its radioactive strength. This study demonstrates a new method for measuring HDR source location and radioactivity utilizing thermal imaging, with a potential application to HDR QA and safety improvement. Methods: Heating effects of an HDR source were studied using finite element analysis (FEA). Thermal cameras were used to visualize an HDR source inside a plastic applicator made of polyvinylidene difluoride (PVDF). Using different source dwell times, correlations between the HDR source strength and heating effects were studied, thus establishing potential daily QA criteria using thermal imaging. Results: For an Ir-192 source with a radioactivity of 10 Ci, the decay-induced heating power inside the source is ∼13.3 mW. After the HDR source was extended into the PVDF applicator and reached thermal equilibrium, thermal imaging visualized a temperature gradient of 10 K/cm along the PVDF applicator surface, which agreed with FEA modeling. For Ir-192 source activities ranging from 4.20 to 10.20 Ci, thermal imaging could verify source activity with an accuracy of 6.3% at a dwell time of 10 s, and 2.5% at 100 s. Conclusion: Thermal imaging is a feasible tool for visualizing HDR source dwell positions and verifying source integrity. Patient safety and treatment quality will be improved by integrating thermal measurements into HDR QA procedures.
Chandra, Rohit; Balasingham, Ilangko
2015-01-01
A microwave imaging-based technique for 3D localization of an in-body RF source is presented. Such a technique can be useful for localizing an RF source, as in wireless capsule endoscopy, for positioning any abnormality in the gastrointestinal tract. Microwave imaging is used to determine the dielectric properties (relative permittivity and conductivity) of the tissues, which are required for precise localization. A 2D microwave imaging algorithm is used to determine the dielectric properties. A calibration method is developed to remove errors introduced by applying the 2D imaging algorithm to imaging data from a 3D body. The developed method is tested on a simple 3D heterogeneous phantom through finite-difference time-domain simulations. Additive white Gaussian noise at a signal-to-noise ratio of 30 dB is added to the simulated data to make them more realistic. The developed calibration method improves the imaging and localization accuracy. Statistics on the localization accuracy are generated by randomly placing the RF source at various positions inside the small intestine of the phantom, and the cumulative distribution function of the localization error is plotted. In 90% of the cases, the localization accuracy was within 1.67 cm, showing the capability of the developed method for 3D localization.
Thermal image analysis using the serpentine method
NASA Astrophysics Data System (ADS)
Koprowski, Robert; Wilczyński, Sławomir
2018-03-01
Thermal imaging is an increasingly widespread alternative to other imaging methods. As a supplementary method in diagnostics, it can be used both statically and with dynamic temperature changes. The paper proposes a new image analysis method that allows for the acquisition of new diagnostic information as well as object segmentation. The proposed serpentine analysis uses known methods of image analysis and processing together with new ones proposed by the authors. Affine transformations of an image and subsequent Fourier analysis provide a new diagnostic quality. The method is fully repeatable, automatic, and independent of inter-individual variability in patients. The segmentation results are 10% better than those obtained from the watershed method and the hybrid segmentation method based on the Canny detector. The first and second harmonics of serpentine analysis make it possible to determine the type of temperature changes in the region of interest (gradient, number of heat sources, etc.). The presented serpentine method provides new quantitative information on thermal images. Since it allows for image segmentation and for locating the contact points of two or more heat sources (local minima), it can be used to support medical diagnostics in many areas of medicine.
A study on locating the sonic source of sinusoidal magneto-acoustic signals using a vector method.
Zhang, Shunqi; Zhou, Xiaoqing; Ma, Ren; Yin, Tao; Liu, Zhipeng
2015-01-01
Methods based on the magneto-acoustic effect are of great significance in studying the electrical imaging properties of biological tissues and currents. The commonly used continuous-wave method can only detect the current amplitude, not the sound source position. Although the pulse mode adopted in magneto-acoustic imaging can locate the sonic source, low measurement accuracy and low SNR have limited its application. In this study, a vector method was used to solve and analyze the magneto-acoustic signal based on the continuous sine-wave mode. The study includes theoretical modeling of the vector method, simulations of the line model, and experiments with wire samples to analyze magneto-acoustic (MA) signal characteristics. The results showed that the amplitude and phase of the MA signal contain the location information of the sonic source, and that they obey the vector theory in the complex plane. This study lays a foundation for a new technique to locate sonic sources for biomedical imaging of tissue conductivity. It also aids in studying biological current detection and reconstruction based on the magneto-acoustic effect.
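The vector picture can be illustrated with phasors: each sinusoidal source contributes A·exp(-i·k·d) at the receiver, and the measured amplitude and phase are those of the vector sum. The frequency, sound speed, and distances below are arbitrary illustrative values, not the experimental parameters:

```python
import cmath
import math

def ma_phasor(amplitudes, distances, freq=1.0e6, c=1500.0):
    """Amplitude and phase of the superposed sinusoidal signal at a receiver.

    Each source contributes a phasor A * exp(-1j * k * d); the receiver
    measures the vector sum, so amplitude and phase together carry the
    source-position information.
    """
    k = 2 * math.pi * freq / c                 # acoustic wavenumber
    total = sum(A * cmath.exp(-1j * k * d)
                for A, d in zip(amplitudes, distances))
    return abs(total), cmath.phase(total)

wavelength = 1500.0 / 1.0e6                    # 1.5 mm at 1 MHz
amp_one, _ = ma_phasor([1.0], [0.030])
# two equal sources half a wavelength apart interfere destructively
amp_null, _ = ma_phasor([1.0, 1.0], [0.030, 0.030 + wavelength / 2])
```

The half-wavelength cancellation shows why amplitude alone is ambiguous and the phase must be solved jointly, as the vector method does.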
Yan, Gang; Zhou, Li
2018-02-21
This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the viewpoint of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that most closely approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method.
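The minimum-Shannon-entropy criterion used to score candidate migration images can be sketched as the entropy of the image's intensity histogram; a well-focused image concentrates energy into few bins and scores lower. The bin count here is an arbitrary choice:

```python
import numpy as np

def image_entropy(img, bins=64):
    """Shannon entropy of the intensity histogram; a focused image
    concentrates energy into few bins and therefore has lower entropy."""
    hist, _ = np.histogram(np.abs(img), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

focused = np.zeros((32, 32))
focused[16, 16] = 1.0                                 # energy on one point
smeared = np.random.default_rng(0).random((32, 32))   # defocused stand-in
```

An optimizer such as ABC would vary the assumed occurrence time and keep the candidate image with the smallest entropy.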
NASA Astrophysics Data System (ADS)
Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo
2014-02-01
Mathematically, optical molecular imaging modalities, including bioluminescence tomography (BLT), fluorescence molecular tomography (FMT), and Cerenkov luminescence tomography (CLT), are concerned with a similar inverse source problem: they all involve reconstructing the 3D location of single or multiple internal luminescent/fluorescent sources from the 3D surface flux distribution. To achieve this, accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI, or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve the fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural-light surface reconstruction (NLSR) and the iterative closest point (ICP) algorithm is presented. It consists of an octree structure, an exact visual hull from marching cubes, and ICP. Unlike conventional limited-projection methods, it performs 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with an implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error at preset markers was improved by 0.3 and 0.2 pixels, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, H; Xing, L; Liang, Z
Purpose: To investigate a novel low-dose CT (LdCT) image reconstruction strategy for lung CT imaging in radiation therapy. Methods: The proposed approach consists of four steps: (1) use the traditional filtered back-projection (FBP) method to reconstruct the LdCT image; (2) calculate the structure similarity (SSIM) index between the FBP-reconstructed LdCT image and a set of normal-dose CT (NdCT) images, and select the NdCT image with the highest SSIM as the learning source; (3) segment the NdCT source image into lung and outside tissue regions via simple thresholding, and adopt multiple linear regression to learn a high-order Markov random field (MRF) pattern for each tissue region in the NdCT source image; (4) segment the FBP-reconstructed LdCT image into lung and outside regions as well, and apply the learnt MRF prior in each tissue region for statistical iterative reconstruction of the LdCT image following the penalized weighted least-squares (PWLS) framework. Quantitative evaluation of the reconstructed images was based on the signal-to-noise ratio (SNR), local binary pattern (LBP) and histogram of oriented gradients (HOG) metrics. Results: It was observed that lung and outside tissue regions have different MRF patterns predicted from the NdCT. Visual inspection showed that our method clearly outperformed the traditional FBP method. Compared with the region-smoothing PWLS method, our method showed, on average, a 13% increase in SNR, a 15% decrease in LBP difference, and a 12% decrease in HOG difference from the reference standard for all regions of interest, indicating the superior performance of the proposed method in terms of image resolution and texture preservation. Conclusion: We proposed a novel LdCT image reconstruction method by learning similar image characteristics from a set of NdCT images; the to-be-learnt NdCT image does not need to be a scan of the same subject. This approach is particularly important for enhancing image quality in radiation therapy.
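Step (2), selecting the learning source by SSIM, can be sketched with a single-window (global-statistics) SSIM, which suffices for ranking a candidate library; production code would use a windowed SSIM. The images below are random stand-ins, not CT data:

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Single-window SSIM computed from global image statistics."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + C1) * (2 * cxy + C2))
            / ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2)))

def pick_source(ld_img, nd_library):
    """Choose the normal-dose image most similar to the low-dose FBP image."""
    return int(np.argmax([ssim_global(ld_img, nd) for nd in nd_library]))

rng = np.random.default_rng(3)
base = rng.random((16, 16))                       # stand-in NdCT image
ld = np.clip(base + 0.05 * rng.standard_normal((16, 16)), 0, 1)  # noisy LdCT
library = [base, rng.random((16, 16))]            # matching vs. unrelated
best = pick_source(ld, library)
```

The structurally matching candidate wins even though the low-dose image is noisy, which is the property the selection step relies on.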
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rack, Alexander; Weitkamp, Timm; European Synchrotron Radiation Facility, BP 220, F-38043 Grenoble Cedex
2009-03-10
Diffraction and transmission synchrotron imaging methods have proven to be highly suitable for investigations in materials research and non-destructive evaluation. The high flux and spatial coherence of X-rays from modern synchrotron light sources allow one to work at high resolution and with different contrast modalities. This article gives a short overview of different transmission and diffraction imaging methods with high potential for industrial applications, now available for commercial access via the German light source ANKA (Forschungszentrum Karlsruhe) and its new department ANKA Commercial Service (ANKA COS, http://www.anka-cos.de).
System and method for bullet tracking and shooter localization
Roberts, Randy S [Livermore, CA; Breitfeller, Eric F [Dublin, CA
2011-06-21
A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares problem over all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.
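The trajectory-estimation step can be sketched with a textbook constant-velocity Kalman filter over 1-D streak positions; the process-noise and measurement-noise levels are illustrative assumptions, not values from the patent:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over 1-D streak positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                # only position is observed
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])     # process noise
    R = np.array([[r]])                       # measurement noise
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    history = []
    for z in measurements:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                   # update with new image data
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        history.append(x.copy())
    return np.array(history)

# noiseless straight-line track: position 2 + 3 t
track = kalman_track([2.0 + 3.0 * t for t in range(20)])
pos_final, vel_final = track[-1]
```

The recovered velocity converges to the true slope; extrapolating the fitted trajectory backwards is what localizes the source in the patented method.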
Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method
NASA Astrophysics Data System (ADS)
Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao
2016-09-01
To provide an accurate surface-defect inspection method and to automate a robust delineation strategy for image regions of interest (ROIs) in the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The presented method and the devised system apply mainly to surface quality inspection of strip, billet, slab, etc. In this work we exploit the complementary advantages of two common machine vision (MV) systems: line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively; the model introduces upper and lower approximation sets for ROI definition, with which the boundary region can be delineated by an RFC region-competition classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, allowing AI algorithms and powerful ROI delineation strategies to be applied automatically in the MV inspection field.
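The upper and lower approximation sets at the heart of the rough-set ROI definition can be sketched directly from their definitions: given a partition of the image domain into equivalence blocks, the lower approximation collects blocks fully inside the target region, the upper approximation collects blocks touching it, and their difference is the boundary region handed to RFC classification. The tiny partition below is a made-up example:

```python
def rough_approximations(blocks, target):
    """Lower/upper approximations of a target pixel set under a partition
    of the image domain into equivalence blocks (rough-set basics)."""
    target = set(target)
    lower, upper = set(), set()
    for block in blocks:
        b = set(block)
        if b <= target:
            lower |= b        # block certainly inside the ROI
        if b & target:
            upper |= b        # block possibly inside the ROI
    return lower, upper

# six pixels partitioned into three equivalence blocks; ROI = {0, 1, 2}
lower, upper = rough_approximations([{0, 1}, {2, 3}, {4, 5}], {0, 1, 2})
boundary = upper - lower      # region competed for by RFC classification
```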
MR-based source localization for MR-guided HDR brachytherapy
NASA Astrophysics Data System (ADS)
Beld, E.; Moerland, M. A.; Zijlstra, F.; Viergever, M. A.; Lagendijk, J. J. W.; Seevinck, P. R.
2018-04-01
For the purpose of MR-guided high-dose-rate (HDR) brachytherapy, a method for real-time localization of an HDR brachytherapy source was developed, which requires high spatial and temporal resolutions. MR-based localization of an HDR source serves two main aims. First, it enables real-time treatment verification by determination of the HDR source positions during treatment. Second, when using a dummy source, MR-based source localization provides an automatic detection of the source dwell positions after catheter insertion, allowing elimination of the catheter reconstruction procedure. Localization of the HDR source was conducted by simulation of the MR artifacts, followed by a phase correlation localization algorithm applied to the MR images and the simulated images, to determine the position of the HDR source in the MR images. To increase the temporal resolution of the MR acquisition, the spatial resolution was decreased, and a subpixel localization operation was introduced. Furthermore, parallel imaging (sensitivity encoding) was applied to further decrease the MR scan time. The localization method was validated by a comparison with CT, and the accuracy and precision were investigated. The results demonstrated that the described method could be used to determine the HDR source position with a high accuracy (0.4–0.6 mm) and a high precision (⩽0.1 mm), at high temporal resolutions (0.15–1.2 s per slice). This would enable real-time treatment verification as well as an automatic detection of the source dwell positions.
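The phase-correlation step between simulated artifact images and measured MR images can be sketched with the standard FFT-based estimator (integer-pixel only here; the paper adds a subpixel refinement). The images below are synthetic stand-ins, not MR data:

```python
import numpy as np

def phase_correlation_shift(reference, image):
    """Integer-pixel translation of `image` relative to `reference`,
    estimated from the phase of the cross-power spectrum."""
    F1 = np.fft.fft2(reference)
    F2 = np.fft.fft2(image)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(p - n if p > n // 2 else p   # wrap to signed shifts
                 for p, n in zip(peak, corr.shape))

rng = np.random.default_rng(7)
template = rng.random((32, 32))               # stand-in simulated artifact
measured = np.roll(template, (3, -5), axis=(0, 1))
dy, dx = phase_correlation_shift(template, measured)
```

The peak of the inverse-transformed phase spectrum lands exactly at the applied shift, which is what makes the method fast enough for real-time source tracking.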
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golosio, Bruno; Carpinelli, Massimo; Masala, Giovanni Luca
Phase contrast imaging is a technique widely used in synchrotron facilities for nondestructive analysis. Such a technique can also be implemented with microfocus x-ray tube systems. Recently, a relatively new type of compact, quasimonochromatic x-ray source based on Compton backscattering has been proposed for phase contrast imaging applications. In order to plan a phase contrast imaging system setup, evaluate the system performance, and choose the experimental parameters that optimize the image quality, it is important to have reliable software for phase contrast imaging simulation. Several software tools have been developed and tested against experimental measurements at synchrotron facilities devoted to phase contrast imaging. However, many approximations that are valid in such conditions (e.g., large source-object distance, small transverse size of the object, plane wave approximation, monochromatic beam, and Gaussian-shaped source focal spot) are not generally suitable for x-ray tubes and other compact systems. In this work we describe a general method for the simulation of phase contrast imaging using polychromatic sources, based on a spherical-wave description of the beam and on a double-Gaussian model of the source focal spot; we discuss the validity of some possible approximations, and we test the simulations against experimental measurements using a microfocus x-ray tube on three types of polymers (nylon, poly-ethylene-terephthalate, and poly-methyl-methacrylate) at varying source-object distance. It is shown that, as long as all experimental conditions are described accurately in the simulations, the described method yields results in good agreement with experimental measurements.
LEDs as light source: examining quality of acquired images
NASA Astrophysics Data System (ADS)
Bachnak, Rafic; Funtanilla, Jeng; Hernandez, Jose
2004-05-01
Recent advances in technology have made light-emitting diodes (LEDs) viable in a number of applications, including vehicle stoplights, traffic lights, machine vision inspection, illumination, and street signs. This paper presents the results of comparing images taken by a videoscope using two different light sources: the internal metal halide lamp, and an LED placed at the tip of the insertion tube. Images acquired using these two light sources were quantitatively compared using their histograms, intensity profiles along a line segment, and edge detection, and qualitatively compared using image registration and transformation. The gray-level histogram, edge detection, image profile, and image registration do not offer conclusive results. The LED light source, however, produces good images for visual inspection by an operator. The paper presents the results and discusses the usefulness and shortcomings of the various comparison methods.
Multiband super-resolution imaging of graded-index photonic crystal flat lens
NASA Astrophysics Data System (ADS)
Xie, Jianlan; Wang, Junzhong; Ge, Rui; Yan, Bei; Liu, Exian; Tan, Wei; Liu, Jianjun
2018-05-01
Multiband super-resolution imaging of a point source is achieved by a graded-index photonic crystal flat lens. From calculations of six bands in a common photonic crystal (CPC) constructed with scatterers of different refractive indices, it is found that super-resolution imaging of a point source can be realized by different physical mechanisms in three different bands. In the first band, the imaging of the point source is based on the far-field condition of the spherical wave, while in the second band it is based on a negative effective refractive index and exhibits higher imaging quality than that of the CPC. In the fifth band, the imaging of the point source is mainly based on negative refraction from anisotropic equi-frequency surfaces. This novel approach of employing different physical mechanisms to achieve multiband super-resolution imaging of a point source is highly meaningful for the field of imaging.
An image-based search for pulsars among Fermi unassociated LAT sources
NASA Astrophysics Data System (ADS)
Frail, D. A.; Ray, P. S.; Mooley, K. P.; Hancock, P.; Burnett, T. H.; Jagannathan, P.; Ferrara, E. C.; Intema, H. T.; de Gasperin, F.; Demorest, P. B.; Stovall, K.; McKinnon, M. M.
2018-03-01
We describe an image-based method that uses two radio criteria, compactness and spectral index, to identify promising pulsar candidates among Fermi Large Area Telescope (LAT) unassociated sources. These criteria are applied to those radio sources from the Giant Metrewave Radio Telescope all-sky survey at 150 MHz (TGSS ADR1) found within the error ellipses of unassociated sources from the 3FGL catalogue and a preliminary source list based on 7 yr of LAT data. After follow-up interferometric observations to identify extended or variable sources, a list of 16 compact, steep-spectrum candidates is generated. An ongoing search for pulsations in these candidates, in gamma rays and radio, has found six millisecond pulsars and one normal pulsar. A comparison of this method with existing selection criteria based on gamma-ray spectral and variability properties suggests that the pulsar discovery space using Fermi may be larger than previously thought. Radio imaging is a hitherto underutilized source selection method that can be used, along with other multiwavelength techniques, in the search for Fermi pulsars.
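The steep-spectrum criterion reduces to a two-point spectral index between survey frequencies. The flux densities below are hypothetical, and the -1.0 threshold is an illustrative cut, not the paper's selection value:

```python
import math

def spectral_index(s_low, nu_low, s_high, nu_high):
    """Two-point spectral index alpha, defined by S(nu) ~ nu**alpha."""
    return math.log(s_low / s_high) / math.log(nu_low / nu_high)

# hypothetical candidate: 200 mJy at 150 MHz, 5 mJy at 1400 MHz
alpha = spectral_index(200.0, 150.0, 5.0, 1400.0)
steep = alpha < -1.0          # illustrative steep-spectrum cut
```

Pulsars typically have much steeper spectra than most extragalactic sources, which is why a strongly negative alpha between 150 MHz and higher frequencies flags a candidate.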
DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.
Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.
Telescope for x ray and gamma ray studies in astrophysics
NASA Technical Reports Server (NTRS)
Weaver, W. D.; Desai, Upendra D.
1993-01-01
Imaging of x-rays has been achieved by various methods in astrophysics, nuclear physics, medicine, and material science. A new method for imaging x-ray and gamma-ray sources avoids the limitations of previously used imaging devices. Images are formed in optical wavelengths by using mirrors or lenses to reflect and refract the incoming photons. High energy x-ray and gamma-ray photons cannot be reflected except at grazing angles and pass through lenses without being refracted. Therefore, different methods must be used to image x-ray and gamma-ray sources. Techniques using total absorption, or shadow casting, can provide images in x-rays and gamma-rays. This new method uses a coder made of a pair of Fresnel zone plates and a detector consisting of a matrix of CsI scintillators and photodiodes. The Fresnel zone plates produce Moire patterns when illuminated by an off-axis source. These Moire patterns are deconvolved using a stepped sine wave fitting or an inverse Fourier transform. This type of coder provides the capability of an instantaneous image with sub-arcminute resolution while using a detector with only a coarse position-sensitivity. A matrix of the CsI/photodiode detector elements provides the necessary coarse position-sensitivity. The CsI/photodiode detector also allows good energy resolution. This imaging system provides advantages over previously used imaging devices in both performance and efficiency.
Temporal resolution improvement using PICCS in MDCT cardiac imaging
Chen, Guang-Hong; Tang, Jie; Hsieh, Jiang
2009-01-01
The current paradigm for temporal resolution improvement is to add more source-detector units and/or increase the gantry rotation speed. The purpose of this article is to present an innovative alternative method to potentially improve temporal resolution by approximately a factor of 2 for all MDCT scanners without requiring hardware modification. The central enabling technology is a recently developed image reconstruction method: prior image constrained compressed sensing (PICCS). Using this method, cardiac CT images can be accurately reconstructed using projection data acquired in an angular range of about 120°, which is roughly 50% of the standard short-scan angular range (∼240° for an MDCT scanner). As a result, the temporal resolution of MDCT cardiac imaging can be universally improved by approximately a factor of 2. To validate the proposed method, two in vivo animal experiments were conducted using a state-of-the-art 64-slice CT scanner (GE Healthcare, Waukesha, WI) at different gantry rotation times and different heart rates. One animal was scanned at a heart rate of 83 beats per minute (bpm) using a 400 ms gantry rotation time, and the second animal was scanned at 94 bpm using a 350 ms gantry rotation time. Cardiac coronary CT imaging can be successfully performed at high heart rates using a single-source MDCT scanner and projection data from a single heart beat with gantry rotation times of 400 and 350 ms. Using the proposed PICCS method, the temporal resolution of cardiac CT imaging can be effectively improved by approximately a factor of 2 without modifying any scanner hardware. This potentially provides a new method for single-source MDCT scanners to achieve reliable coronary CT imaging for patients at heart rates above the current limit of 70 bpm without using the well-known multisegment FBP reconstruction algorithm. This method also enables dual-source MDCT scanners to achieve higher temporal resolution without further hardware modifications. PMID:19610302
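As a rough illustration of the kind of objective PICCS minimizes (data fidelity plus a convex combination of sparsity in the prior-subtracted image and in the image itself), here is a toy sketch; the sparsifying transform, weights, and toy problem are all assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def piccs_objective(x, x_prior, A, y, alpha=0.5, lam=0.1, eps=1e-6):
    """Toy PICCS-style objective: least-squares data fidelity plus
    alpha * sparsity of (x - prior) + (1 - alpha) * sparsity of x,
    where the sparsifying transform is a smoothed finite difference (TV-like)."""
    def tv(z):
        return np.sum(np.sqrt(np.diff(z) ** 2 + eps))   # smoothed |gradient| norm
    fidelity = 0.5 * np.sum((A @ x - y) ** 2)
    return fidelity + lam * (alpha * tv(x - x_prior) + (1 - alpha) * tv(x))

# Toy problem: 12 undersampled measurements of a 32-sample piecewise-constant signal.
rng = np.random.default_rng(1)
n, m = 32, 12
x_true = np.zeros(n); x_true[10:20] = 1.0
x_prior = np.zeros(n); x_prior[9:21] = 1.0          # slightly wrong prior image
A = rng.normal(size=(m, n)) / np.sqrt(m)            # random measurement matrix
y = A @ x_true

f_prior = piccs_objective(x_prior, x_prior, A, y)
f_true = piccs_objective(x_true, x_prior, A, y)
```

In a real reconstruction this objective would be minimized over x subject to consistency with the short-scan projection data; the sketch only evaluates it.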
Reduction of background clutter in structured lighting systems
Carlson, Jeffrey J.; Giles, Michael K.; Padilla, Denise D.; Davidson, Jr., Patrick A.; Novick, David K.; Wilson, Christopher W.
2010-06-22
Methods for segmenting the reflected light of an illumination source having a characteristic wavelength from background illumination (i.e., clutter) in structured lighting systems can comprise: pulsing the light source used to illuminate a scene; pulsing the light source synchronously with the opening of a shutter in an imaging device; estimating the contribution of background clutter by interpolation of images of the scene collected at multiple spectral bands not including the characteristic wavelength and subtracting the estimated background contribution from an image of the scene comprising the wavelength of the light source; and placing a polarizing filter between the imaging device and the scene, where the illumination source can be polarized in the same orientation as the polarizing filter. Apparatus for segmenting the light of an illumination source from background illumination can comprise an illuminator, an image receiver for receiving images of multiple spectral bands, a processor for calculations and interpolations, and a polarizing filter.
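The spectral-interpolation step can be sketched as follows: clutter at the source's characteristic wavelength is estimated by linearly interpolating images taken in flanking bands that exclude that wavelength, then subtracted. Band geometry and numbers are illustrative:

```python
import numpy as np

def subtract_interpolated_clutter(img_at_lam, img_below, img_above,
                                  lam, lam_below, lam_above):
    """Estimate background clutter at the source wavelength `lam` by linear
    interpolation between images from bands below and above it (which contain
    no structured light), then subtract that estimate from the source-band image."""
    w = (lam - lam_below) / (lam_above - lam_below)
    background = (1.0 - w) * img_below + w * img_above
    return np.clip(img_at_lam - background, 0.0, None)

# Toy scene: flat clutter rising linearly with wavelength, plus a laser stripe.
def clutter(lam):
    return np.full((4, 4), lam * 0.01)

scene = clutter(650.0)
scene[2, :] += 5.0                                   # stripe from a 650 nm source
segmented = subtract_interpolated_clutter(scene, clutter(600.0), clutter(700.0),
                                          650.0, 600.0, 700.0)
```

Because the toy clutter varies linearly with wavelength, the interpolation here is exact and only the stripe survives; real clutter spectra make the estimate approximate.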
A novel method for fast imaging of brain function, non-invasively, with light
NASA Astrophysics Data System (ADS)
Chance, Britton; Anday, Endla; Nioka, Shoko; Zhou, Shuoming; Hong, Long; Worden, Katherine; Li, C.; Murray, T.; Ovetsky, Y.; Pidikiti, D.; Thomas, R.
1998-05-01
Imaging of the human body by any non-invasive technique has been an appropriate goal of physics and medicine, and great success has been obtained with both Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) in brain imaging. Non-imaging responses to functional activation using near infrared spectroscopy of brain (fNIR) obtained in 1993 (Chance, et al. [1]) and in 1994 (Tamura, et al. [2]) are now complemented with images of pre-frontal and parietal stimulation in adults and pre-term neonates in this communication (see also [3]). Prior studies used continuous [4], pulsed [3] or modulated [5] light. The amplitude and phase cancellation of optical patterns as demonstrated for single source-detector pairs affords remarkable sensitivity of small-object detection in model systems [6]. The methods have now been elaborated with multiple source-detector combinations (nine sources, four detectors). Using simple back-projection algorithms it is now possible to image sensorimotor and cognitive activation of adult and pre- and full-term neonate human brain function in times < 30 sec and with resolutions of < 1 cm in two-dimensional displays. The method can be used in the evaluation of adult and neonatal cerebral dysfunction as a simple, portable, and affordable technique that does not require immobilization, in contrast to MRI and PET.
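A minimal back-projection of source-detector readings onto an image grid might look like the sketch below; the Gaussian smearing around each pair's midpoint is an assumed simplification, not the authors' exact algorithm:

```python
import numpy as np

def backproject(measurements, src_xy, det_xy, grid, sigma=0.8):
    """Simple back projection: each source-detector reading is smeared onto
    the image grid with a Gaussian weight centered on the pair's midpoint,
    and the contributions from all pairs are summed."""
    img = np.zeros(grid.shape[:2])
    for m, s, d in zip(measurements, src_xy, det_xy):
        mid = 0.5 * (np.asarray(s, float) + np.asarray(d, float))
        r2 = (grid[..., 0] - mid[0]) ** 2 + (grid[..., 1] - mid[1]) ** 2
        img += m * np.exp(-r2 / (2.0 * sigma ** 2))
    return img

# An 8 x 8 cm grid and two illustrative source-detector pairs.
xs, ys = np.meshgrid(np.arange(8.0), np.arange(8.0), indexing="ij")
grid = np.stack([xs, ys], axis=-1)
img = backproject([1.0, 0.2], [(0, 0), (6, 6)], [(2, 2), (8, 8)], grid)
```

The strong reading of the first pair produces a peak at its midpoint (1, 1), which is the essence of localizing an activation from overlapping pair sensitivities.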
Ghost Images in Helioseismic Holography? Toy Models in a Uniform Medium
NASA Astrophysics Data System (ADS)
Yang, Dan
2018-02-01
Helioseismic holography is a powerful technique used to probe the solar interior based on estimations of the 3D wavefield. Porter-Bojarski holography, a well-established method used in acoustics to recover sources and scatterers in 3D, is also an estimation of the wavefield, and hence has the potential of being applied to helioseismology. Here we present a proof-of-concept study in which we compare helioseismic holography and Porter-Bojarski holography under the assumption that the waves propagate in a homogeneous medium. We consider the problem of locating a point source of wave excitation inside a sphere. Under these assumptions, we find that the two imaging methods have the same capability of locating the source, with the exception that helioseismic holography suffers from "ghost images" (i.e., artificial peaks away from the source location). We conclude that Porter-Bojarski holography may improve the method currently used in helioseismology.
NASA Astrophysics Data System (ADS)
Kostal, Hubert; Kreysar, Douglas; Rykowski, Ronald
2009-08-01
The color and luminance distributions of large light sources are difficult to measure because of the size of the source and the physical space required for the measurement. We describe a method for the measurement of large light sources in a limited space that efficiently overcomes the physical limitations of traditional far-field measurement techniques. This method uses a calibrated, high dynamic range imaging colorimeter and a goniometric system to move the light source through an automated measurement sequence in the imaging colorimeter's field-of-view. The measurement is performed from within the near-field of the light source, enabling a compact measurement set-up. This method generates a detailed near-field color and luminance distribution model that can be directly converted to ray sets for optical design and that can be extrapolated to far-field distributions for illumination design. The measurements obtained show excellent correlation to traditional imaging colorimeter and photogoniometer measurement methods. The near-field goniometer approach that we describe is broadly applicable to general lighting systems, can be deployed in a compact laboratory space, and provides full near-field data for optical design and simulation.
Fast neutron imaging device and method
Popov, Vladimir; Degtiarenko, Pavel; Musatov, Igor V.
2014-02-11
A fast neutron imaging apparatus and method of constructing fast neutron radiography images, the apparatus including a neutron source and a detector that provides event-by-event acquisition of position and energy deposition, and optionally timing and pulse shape for each individual neutron event detected by the detector. The method for constructing fast neutron radiography images utilizes the apparatus of the invention.
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. First, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed that combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to complete the fusion process. The experimental results show that our method is superior to state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
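The final weighted-fusion step can be sketched generically: each pixel of the fused image is a convex combination of the two inputs, weighted by their normalized saliency. This is a generic sketch, not the paper's JSR-based rule:

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis, eps=1e-8):
    """Pixel-wise weighted fusion driven by saliency maps: where the infrared
    image is more salient its pixels dominate, and vice versa for the visible
    image. eps guards against division by zero where both saliencies vanish."""
    w = sal_ir / (sal_ir + sal_vis + eps)
    return w * ir + (1.0 - w) * vis

ir = np.array([[10.0, 0.0], [0.0, 0.0]])            # hot target in one corner
vis = np.array([[0.0, 4.0], [4.0, 4.0]])            # scene texture elsewhere
sal_ir = np.array([[1.0, 0.0], [0.0, 0.0]])         # saliency only at the target
sal_vis = np.array([[0.0, 1.0], [1.0, 1.0]])
fused = saliency_weighted_fusion(ir, vis, sal_ir, sal_vis)
```

The fused result keeps the infrared target pixel and the visible background pixels, which is the behavior the integrated saliency map is designed to produce.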
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsson, Daniel H.; Lundstroem, Ulf; Burvall, Anna
Purpose: Small-animal studies require images with high spatial resolution and high contrast due to the small scale of the structures. X-ray imaging systems for small animals are often limited by the microfocus source. Here, the authors investigate the applicability of liquid-metal-jet x-ray sources for such high-resolution small-animal imaging, both in tomography based on absorption and in soft-tissue tumor imaging based on in-line phase contrast. Methods: The experimental arrangement consists of a liquid-metal-jet x-ray source, the small-animal object on a rotating stage, and an imaging detector. The source-to-object and object-to-detector distances are adjusted for the preferred contrast mechanism. Two different liquid-metal-jet sources are used, one circulating a Ga/In/Sn alloy and the other an In/Ga alloy for higher penetration through thick tissue. Both sources are operated at 40-50 W electron-beam power with ≈7 μm x-ray spots, providing high spatial resolution in absorption imaging and high spatial coherence for the phase-contrast imaging. Results: High-resolution absorption imaging is demonstrated on mice with CT, showing 50 μm bone details in the reconstructed slices. High-resolution phase-contrast soft-tissue imaging shows clear demarcation of mm-sized tumors at much lower dose than is required in absorption. Conclusions: This is the first application of liquid-metal-jet x-ray sources for whole-body small-animal x-ray imaging. In absorption, the method allows high-resolution tomographic skeletal imaging with potential for significantly shorter exposure times due to the power scalability of liquid-metal-jet sources. In phase contrast, the authors use a simple in-line arrangement to show distinct demarcation of few-mm-sized tumors. This is, to their knowledge, the first small-animal tumor visualization with a laboratory phase-contrast system.
Simultaneous acquisition of differing image types
Demos, Stavros G
2012-10-09
A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.
Magnetoacoustic Tomography with Magnetic Induction for Electrical Conductivity based Tissue imaging
NASA Astrophysics Data System (ADS)
Mariappan, Leo
Electrical conductivity imaging of biological tissue has attracted considerable interest in recent years owing to research indicating that electrical properties, especially electrical conductivity and permittivity, are indicators of underlying physiological and pathological conditions in biological tissue. The knowledge of electrical conductivity of biological tissue is also of interest to researchers conducting electromagnetic source imaging and designing devices that apply electromagnetic energy to the body, such as MRI. A non-invasive, high-resolution impedance imaging method is therefore highly desirable. To address this need we have studied the magnetoacoustic tomography with magnetic induction (MAT-MI) method. In MAT-MI, the object is placed in a static and a dynamic magnetic field, giving rise to ultrasound waves: the dynamic field induces eddy currents in the object, and the static field leads to generation of acoustic vibrations from the Lorentz force on the induced currents. The acoustic vibrations are at the same frequency as the dynamic magnetic field, which is chosen to match the ultrasound frequency range. These ultrasound signals can be measured by ultrasound probes and used to reconstruct MAT-MI acoustic source images with suitable ultrasound imaging approaches. The reconstructed high-spatial-resolution image is indicative of the object's electrical conductivity contrast. We have investigated ultrasound imaging methods to reliably reconstruct the MAT-MI image under the practical conditions of limited bandwidth and transducer geometry. The corresponding imaging algorithms, computer simulations, and experiments were developed to test the feasibility of these different methods. Also, in experiments, we have developed a system with the strong static field of an MRI magnet and a strong pulsed magnetic field to evaluate MAT-MI in biological tissue imaging.
It can be seen from these simulations and experiments that conductivity boundary images with millimeter resolution can be reliably reconstructed with MAT-MI. Further, to estimate the conductivity distribution throughout the object, we reconstruct a vector source image corresponding to the induced eddy currents. As the current source is present throughout the object, we are able to reliably estimate the internal conductivity distribution for more complete imaging. From the computer simulations and experiments it can be seen that the MAT-MI method has the potential to be a clinically applicable, high-resolution, non-invasive method for electrical conductivity imaging.
Optimization of air gap for two-dimensional imaging system using synchrotron radiation
NASA Astrophysics Data System (ADS)
Zeniya, Tsutomu; Takeda, Tohoru; Yu, Quanwen; Hyodo, Kazuyuki; Yuasa, Tetsuya; Aiyoshi, Yuji; Hiranaka, Yukio; Itai, Yuji; Akatsuka, Takao
2000-11-01
Since synchrotron radiation (SR) has several excellent properties such as high brilliance, a broad continuous energy spectrum, and small divergence, x-ray images with high contrast and high spatial resolution can be obtained by using SR. In 2D imaging with SR, the air-gap method is very effective for reducing scatter contamination. However, to use the air-gap method, the geometrical effect of the finite source size of SR must be considered, because the spatial resolution of the image is degraded by the air gap. For 2D x-ray imaging with SR, x-ray mammography was chosen to examine the effect of the air-gap method. We theoretically discussed the optimization of the air-gap distance using the effective scatter point-source model proposed by Muntz, and performed experiments with a newly manufactured monochromator with asymmetrical reflection and an imaging plate.
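The geometric trade-off described here, blur from the finite source size growing with the air gap, reduces to the familiar penumbra formula. The numbers below are illustrative, not from the experiment:

```python
def geometric_unsharpness(focal_spot_mm, source_to_object_mm, air_gap_mm):
    """Penumbral blur from a finite source: U = f * (gap / SOD). With SR the
    effective source is small and very far away, so a sizeable air gap adds
    little blur while still rejecting scatter; a conventional tube does worse."""
    return focal_spot_mm * air_gap_mm / source_to_object_mm

# Illustrative comparison: a 0.1 mm effective SR source 10 m from the object
# versus a 0.3 mm tube focal spot 0.6 m away, both with a 300 mm air gap.
u_sr = geometric_unsharpness(0.1, 10000.0, 300.0)    # millimetres of blur
u_tube = geometric_unsharpness(0.3, 600.0, 300.0)
```

The SR case blurs by only 3 μm versus 150 μm for the tube, which is why a large air gap is affordable with SR.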
Koplay, Mustafa; Celik, Mahmut; Avcı, Ahmet; Erdogan, Hasan; Demir, Kenan; Sivri, Mesut; Nayman, Alaaddin
2015-01-01
We aimed to report the image quality, the relationship between heart rate and image quality, the amount of contrast agent given to the patients, and the radiation doses in coronary CT angiography (CTA) obtained using either the high-pitch prospectively ECG-gated "Flash Spiral" technique (method A) or the retrospectively ECG-gated technique (method B) on 128×2-slice dual-source CT. A total of 110 patients evaluated with the method A or method B technique on a 128×2-detector dual-source CT device were included in the study. Patients were divided into three groups based on their heart rates during the procedure, and the relationship between heart rate and image quality was evaluated. The relationship between heart rate, gender, and the radiation dose received by the patients was also compared. A total of 1760 segments were evaluated in terms of image quality. The comparison revealed a significant difference in image quality between the <60 beats/min and >75 beats/min groups, whereas the <60 beats/min and 60-75 beats/min groups did not differ significantly. The average effective dose for coronary CTA was 1.11 mSv (0.47-2.01 mSv) for method A and 8.22 mSv (2.19-12.88 mSv) for method B. Method A provided high-quality images at doses below 1 mSv in selected patients with low heart rates, with a high negative predictive value for ruling out coronary artery disease. Although method B increases the effective dose, it provides images of high diagnostic quality for patients with high heart rates and arrhythmia, which make it difficult to obtain images.
Accumulated source imaging of brain activity with both low and high-frequency neuromagnetic signals
Xiang, Jing; Luo, Qian; Kotecha, Rupesh; Korman, Abraham; Zhang, Fawen; Luo, Huan; Fujiwara, Hisako; Hemasilpin, Nat; Rose, Douglas F.
2014-01-01
Recent studies have revealed the importance of high-frequency brain signals (>70 Hz). One challenge of high-frequency signal analysis is that the time-frequency representation of high-frequency brain signals can be larger than 1 terabyte (TB), which is beyond the upper limit of a typical computer workstation's memory (<196 GB). The aim of the present study is to develop a new method to provide greater sensitivity in detecting high-frequency magnetoencephalography (MEG) signals in a single automated and versatile interface, rather than the more traditional, time-intensive visual inspection methods, which may take up to several days. To address this aim, we developed a new method, accumulated source imaging, defined as the volumetric summation of source activity over a period of time. This method analyzes signals in both low- (1~70 Hz) and high-frequency (70~200 Hz) ranges at the source level. To extract meaningful information from MEG signals in sensor space, the signals were decomposed into a channel-cross-channel matrix (CxC) representing the spatiotemporal patterns of every possible sensor pair. A new algorithm was developed and tested by calculating the optimal CxC and source location-orientation weights for volumetric source imaging, thereby minimizing multi-source interference and reducing computational cost. The new method was implemented in C/C++ and tested with MEG data recorded from clinical epilepsy patients. The experimental results demonstrated that accumulated source imaging could effectively summarize and visualize MEG recordings within 12.7 h using approximately 10 GB of computer memory. In contrast to the conventional method of visually identifying multi-frequency epileptic activities, which traditionally took 2-3 days and used 1-2 TB of storage, the new approach can quantify epileptic abnormalities in both low- and high-frequency ranges at the source level, using much less time and computer memory. PMID:24904402
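The core "volumetric summation of source activity over time" can be sketched in a few lines. This is a deliberately simplified reading; the actual method also computes sensor-pair CxC matrices and source weights, which are not shown here:

```python
import numpy as np

def accumulated_source_image(source_movie):
    """Accumulated source imaging in its simplest form: collapse a 4D time
    series of source-activity volumes (time, x, y, z) into a single 3D
    summary volume by summing the magnitude of activity over time."""
    return np.sum(np.abs(source_movie), axis=0)

rng = np.random.default_rng(2)
movie = rng.normal(scale=0.1, size=(50, 4, 4, 4))   # 50 time points, 4x4x4 source grid
movie[:, 2, 2, 2] += 1.0                            # one persistently active voxel
acc = accumulated_source_image(movie)
```

A voxel that is active across many time points accumulates a large value even if no single time point stands out, which is why the summary is sensitive to sustained epileptic activity.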
Ptychographic imaging with partially coherent plasma EUV sources
NASA Astrophysics Data System (ADS)
Bußmann, Jan; Odstrčil, Michal; Teramoto, Yusuke; Juschkin, Larissa
2017-12-01
We report on high-resolution lens-less imaging experiments based on the ptychographic scanning coherent diffractive imaging (CDI) method, employing compact plasma sources developed for extreme ultraviolet (EUV) lithography applications. Two kinds of discharge sources were used in our experiments: a hollow-cathode-triggered pinch plasma source operated with oxygen and, for the first time, a laser-assisted discharge EUV source with a liquid tin target. Ptychographic reconstructions of different samples were achieved by applying constraint relaxation to the algorithm. Our ptychography algorithms can handle low spatial coherence and broadband illumination, as well as compensate for the residual background due to plasma radiation in the visible spectral range. Image resolution down to 100 nm is demonstrated even for sparse objects, presently limited by the sample structure contrast and the available coherent photon flux. We could extract material properties by reconstruction of the complex exit-wave field, gaining additional information compared to electron microscopy or CDI with longer-wavelength high-harmonic laser sources. Our results show that compact plasma-based EUV light sources of only partial spatial and temporal coherence can be effectively used for lens-less imaging applications. The reported methods may be applied in combination with reflectometry and scatterometry for high-resolution EUV metrology.
Fusion of infrared and visible images based on BEMD and NSDFB
NASA Astrophysics Data System (ADS)
Zhu, Pan; Huang, Zhanhua; Lei, Hai
2016-07-01
This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it relatively more suitable for non-linear signal decomposition and fusion. NSDFB provides direction filtering on the decomposition levels to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image with the larger entropy is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales using BEMD and NSDFB. In this fusion scheme, two fusion rules are applied in the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm achieves state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, remarkable target information, and rich detail information, making them well suited to human visual characteristics and machine perception.
NASA Astrophysics Data System (ADS)
Comsa, Daria Craita
2008-10-01
There is a real need for improved small animal imaging techniques to enhance the development of therapies in which animal models of disease are used. Optical methods for imaging have been extensively studied in recent years, due to their high sensitivity and specificity. Methods like bioluminescence and fluorescence tomography report promising results for 3D reconstructions of source distributions in vivo. However, no standard methodology exists for optical tomography, and various groups are pursuing different approaches. In a number of studies on small animals, the bioluminescent or fluorescent sources can be reasonably approximated as point or line sources. Examples include images of bone metastases confined to the bone marrow. Starting with this premise, we propose a simpler, faster, and inexpensive technique to quantify optical images of point-like sources. The technique avoids the computational burden of a tomographic method by using planar images and a mathematical model based on diffusion theory. The model employs in situ optical properties estimated from video reflectometry measurements. Modeled and measured images are compared iteratively using a Levenberg-Marquardt algorithm to improve estimates of the depth and strength of the bioluminescent or fluorescent inclusion. The performance of the technique to quantify bioluminescence images was first evaluated on Monte Carlo simulated data. Simulated data also facilitated a methodical investigation of the effect of errors in tissue optical properties on the retrieved source depth and strength. It was found that, for example, an error of 4% in the effective attenuation coefficient led to a 4% error in the retrieved depth for source depths of up to 12 mm, while the error in the retrieved source strength increased from 5.5% at 2 mm depth to 18% at 12 mm depth.
Experiments conducted on images from homogeneous tissue-simulating phantoms showed that depths up to 10 mm could be estimated within 8%, and the relative source strength within 20%. For sources 14 mm deep, the inaccuracy in determining the relative source strength increased to 30%. Measurements on small animals post mortem showed that the use of measured in situ optical properties to characterize heterogeneous tissue resulted in a superior estimation of the source strength and depth compared to when literature optical properties for organs or tissues were used. Moreover, it was found that regardless of the heterogeneity of the implant location or depth, our algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the emission image. Our bioluminescence algorithm was generally able to predict the source strength within a factor of 2 of the true strength, but the performance varied with the implant location and depth. In fluorescence imaging a more complex technique is required, including knowledge of tissue optical properties at both the excitation and emission wavelengths. A theoretical study using simulated fluorescence data showed that, for example, for a source 5 mm deep in tissue, errors of up to 15% in the optical properties would give rise to errors of ±0.7 mm in the retrieved depth, and the source strength would be over- or under-estimated by a factor ranging from 1.25 to 2. Fluorescent sources implanted in rats post mortem at the same depth were localized with an error just slightly higher than predicted theoretically: a root-mean-square value of 0.8 mm was obtained for all implants 5 mm deep. However, for this source depth, the source strength was assessed within a factor ranging from 1.3 to 4.2 from the value estimated in a controlled medium.
Nonetheless, similarly to the bioluminescence study, the fluorescence quantification algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the fluorescence image. Few studies have been reported in the literature that reconstruct known sources of bioluminescence or fluorescence in vivo or in heterogeneous phantoms. The few reported results show that the 3D tomographic methods have not yet reached their full potential. In this context, the simplicity of our technique emerges as a strong advantage.
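The iterative fit described in this abstract can be illustrated with a minimal sketch. The diffusion-theory model below is a simplified point-source approximation with assumed optical constants (`MU_EFF`, `D_DIFF`), not the authors' actual forward model, and scipy's Levenberg-Marquardt solver stands in for their implementation:

```python
import numpy as np
from scipy.optimize import least_squares

MU_EFF = 0.23   # effective attenuation coefficient [1/mm] (assumed value)
D_DIFF = 0.9    # diffusion coefficient [mm] (assumed value)

def surface_profile(rho, depth, strength):
    """Diffusion-theory fluence at the tissue surface from a point source at `depth`."""
    r = np.sqrt(rho ** 2 + depth ** 2)
    return strength * np.exp(-MU_EFF * r) / (4.0 * np.pi * D_DIFF * r)

def fit_depth_strength(rho, measured, d0=5.0, s0=1.0):
    """Levenberg-Marquardt fit of source depth and strength to a planar image profile."""
    res = least_squares(
        lambda p: surface_profile(rho, p[0], p[1]) - measured,
        x0=[d0, s0], method="lm")
    return res.x  # (depth, strength)

rho = np.linspace(0.0, 20.0, 80)           # radial positions on the surface [mm]
truth = surface_profile(rho, 8.0, 3.0)     # synthetic "measured" image profile
depth, strength = fit_depth_strength(rho, truth)
```

On noiseless synthetic data the fit recovers the true depth and strength; on real images the residuals would also absorb noise and model mismatch.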
Apparatus and method for a light direction sensor
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2011-01-01
The present invention provides a light direction sensor for determining the direction of a light source. The system includes an image sensor, a spacer attached to the image sensor, and a pattern mask attached to said spacer. The pattern mask has a slit pattern such that, as light passes through it, a diffraction pattern is cast onto the image sensor. The method operates by receiving a beam of light onto a patterned mask, wherein the patterned mask has a plurality of slit segments, diffracting the beam of light onto an image sensor, and determining the direction of the light source.
NASA Astrophysics Data System (ADS)
Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.
2015-07-01
We tested and compared the capability of multiple optical coherence tomography (OCT) angiography methods: phase variance, amplitude decorrelation and speckle variance, with application of the split-spectrum technique, to image the chorioretinal complex of the human eye. To test the possibility of improving OCT imaging stability, we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept-source OCT setup. In addition, we implemented a post-processing volume averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.
Imaging method for monitoring delivery of high dose rate brachytherapy
Weisenberger, Andrew G; Majewski, Stanislaw
2012-10-23
A method for in-situ monitoring of both the balloon/cavity and the radioactive source in brachytherapy treatment, using at least one pair of miniature gamma cameras to acquire separate images of: 1) the radioactive source as it is moved in the tumor volume during brachytherapy; and 2) a relatively low intensity radiation source produced either by an injected radiopharmaceutical rendering cancerous tissue visible or by a radioactive solution filling a balloon surgically implanted into the cavity formed by the surgical resection of a tumor.
Wang, Hui; Xu, Yanan; Shi, Hongli
2018-03-15
Metal artifacts severely degrade CT image quality in clinical diagnosis and are difficult to remove, especially the beam hardening artifacts. Metal artifact reduction (MAR) methods based on prior images are the most frequently used. However, most prior images contain considerable misclassification caused by the absence of prior information, such as the spectrum distribution of the X-ray beam source, especially when multiple or large metal objects are present. This work aims to identify a more accurate prior image to improve image quality. The proposed method includes four steps. First, the metal image is segmented by thresholding an initial image, and the metal traces are identified in the initial projection data using the forward projection of the metal image. Second, the accurate absorbent model of the metal image is calculated according to the spectrum distribution of the X-ray beam source and the energy-dependent attenuation coefficients of the metal. Third, a new metal image is reconstructed by a general analytical reconstruction algorithm such as filtered back projection (FBP). The prior image is obtained by segmenting the difference image between the initial image and the new metal image into air, tissue and bone. Fourth, the initial projection data are normalized by dividing them, pixel by pixel, by the projection data of the prior image. The final corrected image is obtained by interpolation, denormalization and reconstruction. Several clinical images with dental fillings and knee prostheses were used to evaluate the proposed algorithm against the normalized metal artifact reduction (NMAR) and linear interpolation (LI) methods. The results demonstrate that the artifacts were reduced efficiently by the proposed method, which obtains an exact prior image using the prior information about the X-ray beam source and the energy-dependent attenuation coefficients of the metal. As a result, better performance in reducing beam hardening artifacts can be achieved.
Moreover, the process of the proposed method is rather simple and requires little extra computational burden. It is superior to other algorithms when multiple and/or large implants are included.
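The normalize, interpolate, and denormalize steps shared by prior-image MAR methods such as NMAR can be sketched on a toy sinogram (the sinogram, prior, and metal trace below are synthetic placeholders, not the paper's data or its absorbent model):

```python
import numpy as np

def nmar_correct(sino, prior_sino, metal_trace, eps=1e-6):
    """Prior-image normalization MAR on a toy sinogram.

    sino        : measured projection data (views x detectors)
    prior_sino  : forward projection of the prior image
    metal_trace : boolean mask of detector bins shadowed by metal
    """
    # 1) normalize the measured data by the prior's projections
    norm = sino / (prior_sino + eps)
    # 2) interpolate across the metal trace, view by view
    for i in range(norm.shape[0]):
        row, mask = norm[i], metal_trace[i]
        if mask.any() and not mask.all():
            idx = np.arange(row.size)
            row[mask] = np.interp(idx[mask], idx[~mask], row[~mask])
    # 3) denormalize to recover corrected projection data
    return norm * (prior_sino + eps)

# toy demo: a smooth "prior" sinogram with a corrupted metal shadow
views, dets = 8, 32
idx = np.arange(dets, dtype=float)
prior = 10.0 + np.sin(idx / 5.0)[None, :] * np.ones((views, 1))
trace = np.zeros((views, dets), dtype=bool)
trace[:, 12:18] = True
sino = prior.copy()
sino[trace] += 5.0          # metal corruption inside the trace
fixed = nmar_correct(sino, prior, trace)
```

Because the normalized sinogram is nearly flat, linear interpolation across the trace introduces far less error than interpolating the raw data would.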
Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo
2016-01-01
Objective Combined source imaging techniques and directional connectivity analysis can provide useful information about underlying brain networks in a non-invasive fashion. Source imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source imaging algorithms both to find the network nodes (regions of interest) and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods Source imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies, in which the underlying network (nodes and connectivity pattern) is known, were performed; additionally, this approach was evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results Localization errors of network nodes were less than 5 mm, and normalized connectivity errors were ~20%, in estimating underlying brain networks in the simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion Our study indicates that combining source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node location and internodal connectivity).
Significance The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473
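The directional-connectivity step can be illustrated with a minimal pairwise Granger score, a textbook residual-variance formulation rather than the authors' pipeline (node extraction from source imaging is omitted):

```python
import numpy as np

def granger_score(x, y, order=2):
    """Score for 'y Granger-causes x': log ratio of residual variances between
    an AR model of x on its own past and one that also uses y's past."""
    def lagged(sig):
        n = len(sig)
        # rows are times t = order..n-1; column k holds sig[t-k]
        return np.column_stack([sig[order - k: n - k] for k in range(1, order + 1)])
    target = x[order:]
    A_r = lagged(x)                           # restricted: x's past only
    A_f = np.hstack([lagged(x), lagged(y)])   # full: x's and y's past
    res_var = lambda A: np.var(target - A @ np.linalg.lstsq(A, target, rcond=None)[0])
    return np.log(res_var(A_r) / res_var(A_f))

# demo: x is driven by y's past, so causality should run y -> x only
rng = np.random.default_rng(0)
n = 500
y = rng.normal(size=n)
x = np.zeros(n)
x[1:] = 0.8 * y[:-1] + 0.1 * rng.normal(size=n - 1)
score_y_to_x = granger_score(x, y)
score_x_to_y = granger_score(y, x)
```

A large positive score for one direction and a near-zero score for the other indicates directional coupling, which is the kind of internodal connectivity the study estimates.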
Marbjerg, Gerd; Brunskog, Jonas; Jeong, Cheol-Ho; Nilsson, Erling
2015-09-01
A model, combining acoustical radiosity and the image source method, including phase shifts on reflection, has been developed. The model is denoted Phased Acoustical Radiosity and Image Source Method (PARISM), and it has been developed in order to be able to model both specular and diffuse reflections with complex-valued and angle-dependent boundary conditions. This paper mainly describes the combination of the two models and the implementation of the angle-dependent boundary conditions. It furthermore describes how a pressure impulse response is obtained from the energy-based acoustical radiosity by regarding the model as being stochastic. Three methods of implementation are proposed and investigated, and finally, recommendations are made for their use. Validation of the image source method is done by comparison with finite element simulations of a rectangular room with a porous absorber ceiling. Results from the full model are compared with results from other simulation tools and with measurements. The comparisons of the full model are done for real-valued and angle-independent surface properties. The proposed model agrees well with both the measured results and the alternative theories, and furthermore shows a more realistic spatial variation than energy-based methods due to the fact that interference is considered.
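The image-source half of such a model can be sketched for a shoebox room; this minimal version uses only first-order mirror sources and ignores the phase shifts, reflection coefficients, and angle-dependent boundary conditions that PARISM models:

```python
import numpy as np

C = 343.0  # speed of sound [m/s]

def first_order_images(src, room):
    """First-order image-source positions for a shoebox room.

    src  : (x, y, z) source position
    room : (Lx, Ly, Lz) room dimensions; walls lie at 0 and L along each axis
    """
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]   # mirror the source across the wall
            images.append(tuple(img))
    return images

def arrival_times(src, rec, room):
    """Sorted direct-path and first-order reflection delays at the receiver [s]."""
    paths = [src] + first_order_images(src, room)
    return sorted(np.linalg.norm(np.subtract(p, rec)) / C for p in paths)

room = (5.0, 4.0, 3.0)
src = (1.0, 1.0, 1.0)
rec = (4.0, 2.0, 1.0)
times = arrival_times(src, rec, room)
```

Higher-order reflections follow by mirroring the images recursively; a phased model additionally attaches a complex, angle-dependent reflection factor to each image contribution.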
NASA Astrophysics Data System (ADS)
Okuwaki, R.; Kasahara, A.; Yagi, Y.
2017-12-01
The backprojection (BP) method has been one of the most powerful tools for tracking the seismic-wave sources of large/mega earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of projected images and mitigate the dummy imaging of the depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point is related to how much of the observed waveform was radiated from that point. Since the amplitude of the GF associated with the slip rate increases with depth as the rigidity increases with depth, the intensity of the BP/HBP image inherently has a depth dependence. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and to discuss the rupture properties along the fault drawn from the waveforms at high and low frequencies with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods, which image the distribution of the potency-rate density by introducing alternative normalizing factors into the conventional formulations. For the BP method, the observed waveform is normalized by the maximum amplitude of the P-phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function by the squared sum of the GF. The normalized waveforms or cross-correlation functions are then stacked over all the stations to enhance the signal-to-noise ratio.
We will present performance tests of the new formulations using synthetic waveforms and real data from the Mw 8.3 2015 Illapel, Chile earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used for exploring the rupture properties of earthquakes.
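A one-dimensional sketch of delay-and-sum backprojection with the proposed per-trace GF normalization; the Green's-function amplitude model here is a hypothetical stand-in for the P-phase maximum of a computed GF:

```python
import numpy as np

FS = 100.0  # sampling rate [Hz]
V = 3.0     # wave speed [km/s]

def gf_peak(dist):
    """Hypothetical peak GF amplitude vs. distance, used as the normalizing factor."""
    return 1.0 / (dist + 1.0)

def backproject(traces, stations, candidates):
    """Delay-and-sum backprojection of normalized waveforms over candidate points."""
    image = np.zeros(len(candidates))
    for j, xs in enumerate(candidates):
        stack = 0.0
        for xr, u in zip(stations, traces):
            dist = abs(xs - xr)
            shift = int(round(dist / V * FS))      # travel-time shift in samples
            if shift < len(u):
                stack += u[shift] / gf_peak(dist)  # normalized contribution
        image[j] = stack
    return image

# synthetic test: an impulsive source at 5 km recorded by four stations
stations = [0.0, 2.0, 8.0, 10.0]
traces = []
for xr in stations:
    u = np.zeros(1000)
    d = abs(5.0 - xr)
    u[int(round(d / V * FS))] = gf_peak(d)  # recorded amplitude decays with distance
    traces.append(u)
candidates = np.arange(0.0, 10.5, 0.5)
image = backproject(traces, stations, candidates)
```

Dividing each trace by its GF peak removes the distance (and, in the real problem, depth) dependence of the stacked amplitude, which is the idea behind imaging the potency-rate density directly.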
Three-Dimensional Passive-Source Reverse-Time Migration of Converted Waves: The Method
NASA Astrophysics Data System (ADS)
Li, Jiahang; Shen, Yang; Zhang, Wei
2018-02-01
At seismic discontinuities in the crust and mantle, part of the compressional wave energy converts to shear wave, and vice versa. These converted waves have been widely used in receiver function (RF) studies to image discontinuity structures in the Earth. While generally successful, the conventional RF method has its limitations and is suited mostly to flat or gently dipping structures. Among the efforts to overcome the limitations of the conventional RF method is the development of the wave-theory-based, passive-source reverse-time migration (PS-RTM) for imaging complex seismic discontinuities and scatterers. To date, PS-RTM has been implemented only in 2D in Cartesian coordinates for local problems and thus has limited applicability. In this paper, we introduce a 3D PS-RTM approach in spherical coordinates, which is better suited for regional and global problems. New computational procedures are developed to reduce artifacts and enhance migrated images, including back-propagating the main arrival and the coda containing the converted waves separately, using a modified Helmholtz decomposition operator to separate the P and S modes in the back-propagated wavefields, and applying an imaging condition that maintains a consistent polarity for a given velocity contrast. Our new approach allows us to use migration velocity models with realistic velocity discontinuities, improving the accuracy of the migrated images. We present several synthetic experiments to demonstrate the method, using regional and teleseismic sources. The results show that both regional and teleseismic sources can illuminate complex structures and that this method is well suited for imaging dipping interfaces and sharp lateral changes in discontinuity structures.
A smartphone-based chip-scale microscope using ambient illumination.
Lee, Seung Ah; Yang, Changhuei
2014-08-21
Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone's camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the image resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction are performed on the device using a custom-built Android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.
Edge enhancement of color images using a digital micromirror device.
Di Martino, J Matías; Flores, Jorge L; Ayubi, Gastón A; Alonso, Julia R; Fernández, Ariel; Ferrari, José A
2012-06-01
A method for orientation-selective enhancement of edges in color images is proposed. The method utilizes the capacity of digital micromirror devices to generate a positive and a negative color replica of the image used as input. When both images are slightly displaced and imaged together, one obtains an image with enhanced edges. The proposed technique does not require a coherent light source or precise alignment. The proposed method could be potentially useful for processing large image sequences in real time. Validation experiments are presented.
Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.
Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2017-05-01
Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
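The gradient/proximal decomposition described here can be illustrated with a proximal-gradient (ISTA-style) sketch on a toy problem; the paper's regularized dual averaging with source encoding and a wave-equation forward model is substantially more involved:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1; the nonsmooth penalty is never differentiated."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(A, b, lam, step, n_iter=200):
    """Each iteration: a gradient step on the data-fidelity term
    0.5*||Ax - b||^2, followed by a proximal step for the lam*||x||_1 penalty."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                       # data-fidelity gradient
        x = soft_threshold(x - step * grad, step * lam)  # proximal update
    return x

# demo on a trivial system (A = I), where the minimizer is soft_threshold(b, lam)
A = np.eye(4)
b = np.array([3.0, 0.2, -2.0, 0.05])
x = prox_gradient(A, b, lam=0.5, step=0.5)
```

Keeping the regularizer inside a proximal operator is what lets nonsmooth penalties such as total variation or the l1 norm be incorporated without subgradients.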
Systems and methods for thermal imaging technique for measuring mixing of fluids
Booten, Charles; Tomerlin, Jeff; Winkler, Jon
2016-06-14
Systems and methods for thermal imaging for measuring mixing of fluids are provided. In one embodiment, a method for measuring mixing of gaseous fluids using thermal imaging comprises: positioning a thermal test medium parallel to the direction of gaseous fluid flow from an outlet vent of a momentum source, wherein when the source is operating, the fluid flows across a surface of the medium; obtaining an ambient temperature value from a baseline thermal image of the surface; obtaining at least one operational thermal image of the surface when the fluid is flowing from the outlet vent across the surface, wherein the fluid has a temperature different from the ambient temperature; and calculating at least one temperature-difference fraction associated with at least a first position on the surface based on a difference between temperature measurements obtained from the at least one operational thermal image and the ambient temperature value.
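The temperature-difference fraction can be sketched per pixel; normalizing by the supply temperature at the outlet vent is an assumption made for this illustration, as the patent abstract specifies only a difference against the ambient value:

```python
import numpy as np

def temperature_difference_fraction(baseline_img, operational_img, supply_temp):
    """Per-pixel temperature-difference fraction on the thermal test surface.

    baseline_img    : thermal image with the momentum source off
    operational_img : thermal image with conditioned air flowing over the surface
    supply_temp     : fluid temperature at the outlet vent (assumed normalizer)
    """
    t_amb = baseline_img.mean()  # ambient reference from the baseline image
    return (operational_img - t_amb) / (supply_temp - t_amb)

baseline = np.full((4, 4), 20.0)   # 20 C ambient surface
operational = np.full((4, 4), 20.0)
operational[0, 0] = 27.5           # one pixel warmed by the jet
frac = temperature_difference_fraction(baseline, operational, supply_temp=35.0)
```

A fraction near 1 marks surface locations dominated by unmixed supply air; a fraction near 0 marks fully mixed (ambient) regions.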
Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images
Wang, John D.; Solo-Gabriele, Helena M.; Abdelzaher, Amir M.; Fleming, Lora E.
2010-01-01
Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We estimated their respective source functions by developing a counting methodology for individuals, to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of people and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent a larger source of enterococci than humans and birds. PMID:20381094
Time reversal imaging and cross-correlations techniques by normal mode theory
NASA Astrophysics Data System (ADS)
Montagner, J.; Fink, M.; Capdeville, Y.; Phung, H.; Larmat, C.
2007-12-01
Time-reversal methods were successfully applied in the past to acoustic waves in many fields, such as medical imaging, underwater acoustics and nondestructive testing, and recently to seismic waves in seismology for earthquake imaging. The increasing power of computers and numerical methods (such as spectral element methods) enables one to simulate more and more accurately the propagation of seismic waves in heterogeneous media and to develop new applications, in particular time reversal in the three-dimensional Earth. Generalizing the scalar approach of Draeger and Fink (1999), the theoretical understanding of the time-reversal method can be addressed for the 3D elastic Earth by using normal mode theory. It is shown how to relate time-reversal methods, on the one hand, with auto-correlation of seismograms for source imaging and, on the other hand, with cross-correlation between receivers for structural imaging and retrieving the Green function. The loss of information will be discussed. In the case of source imaging, automatic location in time and space of earthquakes and unknown sources is obtained by the time-reversal technique. In the case of large earthquakes such as the Sumatra-Andaman earthquake of December 2004, we were able to reconstruct the spatio-temporal history of the rupture. We present here some new applications of these techniques at the global scale, on synthetic tests and on real data.
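The receiver-to-receiver cross-correlation step can be sketched on synthetic records, with impulses standing in for a diffuse seismic wavefield:

```python
import numpy as np

def cross_correlation(u1, u2):
    """Cross-correlation of two receiver records. For a diffuse wavefield, the
    peak lag approximates the inter-receiver travel time, which is the basis of
    Green's-function retrieval by cross-correlation."""
    cc = np.correlate(u1, u2, mode="full")
    lags = np.arange(-(len(u2) - 1), len(u1))
    return lags, cc

# demo: receiver 2 records the same pulse 30 samples after receiver 1
u1 = np.zeros(200)
u1[50] = 1.0
u2 = np.zeros(200)
u2[80] = 1.0
lags, cc = cross_correlation(u1, u2)
travel = abs(int(lags[np.argmax(cc)]))
```

In practice the correlation is stacked over long noise records and many source positions so that the causal and anticausal parts of the Green function emerge above the fluctuations.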
Grossman, Mark W.; George, William A.; Pai, Robert Y.
1985-01-01
A technique for opening an evacuated and sealed glass capsule containing a material that is to be dispensed which has a relatively high vapor pressure such as mercury. The capsule is typically disposed in a discharge tube envelope. The technique involves the use of a first light source imaged along the capsule and a second light source imaged across the capsule substantially transversely to the imaging of the first light source. Means are provided for constraining a segment of the capsule along its length with the constraining means being positioned to correspond with the imaging of the second light source. These light sources are preferably incandescent projection lamps. The constraining means is preferably a multiple looped wire support.
Computer simulation of reconstructed image for computer-generated holograms
NASA Astrophysics Data System (ADS)
Yasuda, Tomoki; Kitamura, Mitsuru; Watanabe, Masachika; Tsumuta, Masato; Yamaguchi, Takeshi; Yoshikawa, Hiroshi
2009-02-01
This report presents the results of computer simulation of images for image-type Computer-Generated Holograms (CGHs) observable under white light, fabricated with an electron beam lithography system. The simulated image is obtained by calculating the wavelength and intensity of diffracted light traveling toward the viewing point from the CGH. The wavelength and intensity of the diffracted light are calculated using an FFT image generated from the interference fringe data. A parallax image of the CGH corresponding to the viewing point can easily be obtained using this simulation method. The simulated image from the interference fringe data was compared with the reconstructed image of a real CGH fabricated with an Electron Beam (EB) lithography system. According to the results, the simulated image closely resembled the reconstructed image of the CGH in shape, parallax, coloring and shade. In addition, depending on the shape of the light sources, the simulated images changed in chroma saturation and blur under two kinds of simulation: the several-light-sources method and the smoothing method. Furthermore, as applications of the CGH, a full-color CGH and a CGH with multiple images were simulated. The simulated images of those CGHs also closely resembled the reconstructed images of the real CGHs.
NASA Astrophysics Data System (ADS)
Wei, Qingyang; Ma, Tianyu; Wang, Shi; Liu, Yaqiang; Gu, Yu; Dai, Tiantian
2016-11-01
Positron emission tomography/computed tomography (PET/CT) is an important tool for clinical studies and pre-clinical research, providing both functional and anatomical images. To achieve high-quality co-registered PET/CT images, alignment calibration of the PET and CT scanners is a critical procedure. Existing methods use positron source phantoms imaged by both the PET and CT scanners and then derive the transformation matrix from the reconstructed images of the two modalities. In this paper, a novel PET/CT alignment calibration method using a non-radioactive phantom and the intrinsic 176Lu radiation of the PET detector was developed. First, a multi-tungsten-alloy-sphere phantom without a positron source was designed and imaged by the CT scanner and by the PET scanner using the intrinsic 176Lu radiation of its LYSO crystals. Second, the centroids of the spheres were derived and matched by an automatic program. Lastly, the rotation matrix and the translation vector were calculated by least-squares fitting of the centroid data. The proposed method was employed in an animal PET/CT system (InliView-3000) developed in our lab. Experimental results showed that the proposed method achieves high accuracy and is feasible as a replacement for the conventional positron-source-based methods.
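The least-squares fitting of matched sphere centroids is a standard rigid-registration (Kabsch/SVD) problem; the sketch below illustrates that step only, not the InliView-3000 calibration code:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping matched centroids
    src (e.g. PET sphere centers) onto dst (e.g. CT sphere centers)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# demo: recover a known 90-degree rotation about z plus a translation
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                [0., 0., 1.], [1., 1., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([1., 2., 3.])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
```

With noiseless, correctly matched centroids the recovery is exact; with measurement noise the SVD solution remains the least-squares optimum.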
Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael
2018-06-01
To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. 
In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner, consisting of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using the open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact on image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open-source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets. This article is protected by copyright. All rights reserved.
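The coordinate-descent optimization at the heart of FreeCT_ICD can be illustrated on a toy unregularized least-squares problem; the real system matrix, regularization, and rotating-slice model are omitted:

```python
import numpy as np

def icd_least_squares(A, b, n_sweeps=300):
    """Iterative coordinate descent for 0.5*||Ax - b||^2: each coefficient
    (voxel) is updated in turn while all the others stay fixed."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                      # running residual, updated incrementally
    col_norm2 = (A * A).sum(axis=0)    # ||A[:, j]||^2, precomputed per column
    for _ in range(n_sweeps):
        for j in range(A.shape[1]):
            step = (A[:, j] @ r) / col_norm2[j]  # exact minimizer along voxel j
            x[j] += step
            r -= step * A[:, j]
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 3))
b = rng.normal(size=6)
x = icd_least_squares(A, b)
```

Because only one column of the system matrix is touched per update, a column-wise matrix layout (as in FreeCT_ICD) keeps each step cheap and cache-friendly.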
Motionless phase stepping in X-ray phase contrast imaging with a compact source
Miao, Houxun; Chen, Lei; Bennett, Eric E.; Adamo, Nick M.; Gomella, Andrew A.; DeLuca, Alexa M.; Patel, Ajay; Morgan, Nicole Y.; Wen, Han
2013-01-01
X-ray phase contrast imaging offers a way to visualize the internal structures of an object without the need to deposit significant radiation, and thereby alleviate the main concern in X-ray diagnostic imaging procedures today. Grating-based differential phase contrast imaging techniques are compatible with compact X-ray sources, which is a key requirement for the majority of clinical X-ray modalities. However, these methods are substantially limited by the need for mechanical phase stepping. We describe an electromagnetic phase-stepping method that eliminates mechanical motion, thus removing the constraints in speed, accuracy, and flexibility. The method is broadly applicable to both projection and tomography imaging modes. The transition from mechanical to electromagnetic scanning should greatly facilitate the translation of X-ray phase contrast techniques into mainstream applications. PMID:24218599
Ion source and beam guiding studies for an API neutron generator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sy, A.; Ji, Q.; Persaud, A.
2013-04-19
Recently developed neutron imaging methods require high neutron yields for fast imaging times and small beam widths for good imaging resolution. For ion sources with low current density to be viable for these types of imaging methods, large extraction apertures and beam focusing must be used. We present recent work on the optimization of a Penning-type ion source for neutron generator applications. Two multi-cusp magnet configurations have been tested and are shown to increase the extracted ion current density over operation without multi-cusp magnetic fields. The use of multi-cusp magnetic confinement and gold electrode surfaces has resulted in increased ion current density, up to 2.2 mA/cm^2. Passive beam focusing using tapered dielectric capillaries has been explored due to its potential for beam compression without the cost and complexity issues associated with active focusing elements. Initial results from first experiments indicate the possibility of beam compression. Further work is required to evaluate the viability of such focusing methods for associated particle imaging (API) systems.
Design of system calibration for effective imaging
NASA Astrophysics Data System (ADS)
Varaprasad Babu, G.; Rao, K. M. M.
2006-12-01
A CCD based characterization setup comprising of a light source, CCD linear array, Electronics for signal conditioning/ amplification, PC interface has been developed to generate images at varying densities and at multiple view angles. This arrangement is used to simulate and evaluate images by Super Resolution technique with multiple overlaps and yaw rotated images at different view angles. This setup also generates images at different densities to analyze the response of the detector port wise separately. The light intensity produced by the source needs to be calibrated for proper imaging by the high sensitive CCD detector over the FOV. One approach is to design a complex integrating sphere arrangement which costs higher for such applications. Another approach is to provide a suitable intensity feed back correction wherein the current through the lamp is controlled in a closed loop arrangement. This method is generally used in the applications where the light source is a point source. The third method is to control the time of exposure inversely to the lamp variations where lamp intensity is not possible to control. In this method, light intensity during the start of each line is sampled and the correction factor is applied for the full line. The fourth method is to provide correction through Look Up Table where the response of all the detectors are normalized through the digital transfer function. The fifth method is to have a light line arrangement where the light through multiple fiber optic cables are derived from a single source and arranged them in line. This is generally applicable and economical for low width cases. In our applications, a new method wherein an inverse multi density filter is designed which provides an effective calibration for the full swath even at low light intensities. 
The light intensity along the length is measured, an inverse density is computed, and a correction filter is generated and implemented in the CCD-based characterization setup. This paper describes novel techniques for the design and implementation of system calibration for effective imaging, to produce a better quality data product, especially while handling high-resolution data.
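The look-up-table calibration described above (the fourth method) is essentially a per-pixel flat-field correction. A minimal sketch in Python follows; it is illustrative only, not the authors' setup, and all values are synthetic:

```python
import numpy as np

def build_gain_lut(flat_field, reference=None):
    """Per-pixel gain factors from a flat-field exposure of a uniform source."""
    flat = np.asarray(flat_field, dtype=float)
    if reference is None:
        reference = flat.mean()          # normalize to the mean detector response
    return reference / flat              # gain factor (the digital transfer function)

def apply_lut(raw, gain):
    return raw * gain

# Simulated 1-D CCD line with a 10% sensitivity roll-off toward the edges.
pixels = np.arange(8)
true_scene = np.full(8, 100.0)
response = 1.0 - 0.1 * np.abs(pixels - 3.5) / 3.5    # non-uniform sensitivity
flat = 200.0 * response                              # exposure of a uniform source
raw = true_scene * response                          # distorted scene readout

gain = build_gain_lut(flat)
corrected = apply_lut(raw, gain)                     # flat response restored
```

After correction, the uniform scene reads out uniformly across all pixels, which is what the LUT normalization is meant to achieve.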
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nose, Takayuki, E-mail: nose-takayuki@nms.ac.jp; Chatani, Masashi; Otani, Yuki
Purpose: High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of an HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Methods and Materials: Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed at quarter frame rates. Results: Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. Conclusions: With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use.
Open-source image registration for MRI-TRUS fusion-guided prostate interventions.
Fedorov, Andriy; Khallaghi, Siavash; Sánchez, C Antonio; Lasso, Andras; Fels, Sidney; Tuncali, Kemal; Sugar, Emily Neubauer; Kapur, Tina; Zhang, Chenxi; Wells, William; Nguyen, Paul L; Abolmaesumi, Purang; Tempany, Clare
2015-06-01
We propose two software tools for non-rigid registration of MRI and transrectal ultrasound (TRUS) images of the prostate. Our ultimate goal is to develop an open-source solution to support MRI-TRUS fusion image guidance of prostate interventions, such as targeted biopsy for prostate cancer detection and focal therapy. It is widely hypothesized that image registration is an essential component in such systems. The two non-rigid registration methods are: (1) a deformable registration of the prostate segmentation distance maps with B-spline regularization and (2) a finite element-based deformable registration of the segmentation surfaces in the presence of partial data. We evaluate the methods retrospectively using clinical patient image data collected during standard clinical procedures. Computation time and target registration error (TRE) calculated at expert-identified anatomical landmarks were used as quantitative measures for the evaluation. The presented image registration tools were capable of completing deformable registration computation within 5 min. Average TRE was approximately 3 mm for both methods, which is comparable with the slice thickness in our MRI data. Both tools are available under a nonrestrictive open-source license. We release open-source tools that may be used for registration during MRI-TRUS-guided prostate interventions. Our tools implement novel registration approaches and produce acceptable registration results. We believe these tools will lower the barriers in development and deployment of interventional research solutions and facilitate comparison with similar tools.
Exploiting Fission Chain Reaction Dynamics to Image Fissile Materials
NASA Astrophysics Data System (ADS)
Chapman, Peter Henry
Radiation imaging is one potential method to verify nuclear weapons dismantlement. The neutron coded aperture imager (NCAI), jointly developed by Oak Ridge National Laboratory (ORNL) and Sandia National Laboratories (SNL), is capable of imaging sources of fast (e.g., fission spectrum) neutrons using an array of organic scintillators. This work presents a method developed to discriminate between non-multiplying (i.e., non-fissile) neutron sources and multiplying (i.e., fissile) neutron sources using the NCAI. This method exploits the dynamics of fission chain-reactions; it applies time-correlated pulse-height (TCPH) analysis to identify neutrons in fission chain reactions. TCPH analyzes the neutron energy deposited in the organic scintillator vs. the apparent neutron time-of-flight. Energy deposition is estimated from light output, and time-of-flight is estimated from the time between the neutron interaction and the immediately preceding gamma interaction. Neutrons that deposit more energy than can be accounted for by their apparent time-of-flight are identified as fission chain-reaction neutrons, and the image is reconstructed using only these neutron detection events. This analysis was applied to measurements of weapons-grade plutonium (WGPu) metal and 252Cf performed at the Nevada National Security Site (NNSS) Device Assembly Facility (DAF) in July 2015. The results demonstrate it is possible to eliminate the non-fissile 252Cf source from the image while preserving the fissile WGPu source. TCPH analysis was also applied to additional scenes in which the WGPu and 252Cf sources were measured individually. The results of these separate measurements further demonstrate the ability to remove the non-fissile 252Cf source and retain the fissile WGPu source.
Simulations performed using MCNPX-PoliMi indicate that in a one-hour measurement, solid spheres of WGPu are retained at a 1-sigma level for neutron multiplications M ≈ 3.0 and above, while hollow WGPu spheres are retained for M ≈ 2.7 and above.
Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I
2017-08-15
Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provides an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
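The core numerical ingredient above, a sparsity-regularized least-squares problem solved with the alternating direction method of multipliers (ADMM), can be illustrated on a generic ℓ1-regularized problem. This is a minimal sketch of the optimization pattern, not the SISSY algorithm itself; the operator, data, and parameters below are invented:

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam=0.1, rho=1.0, iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM (generic sketch)."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))  # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)           # sparsity-promoting step
        u = u + x - z                                  # dual (scaled) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))           # stand-in "leadfield" operator
x_true = np.zeros(20)
x_true[[2, 7]] = [1.5, -2.0]                # two active "sources"
b = A @ x_true
x_hat = lasso_admm(A, b, lam=0.05)
```

The ℓ1 term drives inactive coefficients to exactly zero, which is why such penalties yield spatially concentrated source estimates.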
Ping Gong; Pengfei Song; Shigao Chen
2017-06-01
The development of ultrafast ultrasound imaging offers great opportunities to improve imaging technologies such as shear wave elastography and ultrafast Doppler imaging. In ultrafast imaging, there are tradeoffs among image signal-to-noise ratio (SNR), resolution, and post-compounded frame rate. Various approaches have been proposed to address this tradeoff, such as multiplane wave imaging or attempts to implement synthetic transmit aperture imaging. In this paper, we propose an ultrafast synthetic transmit aperture (USTA) imaging technique using Hadamard-encoded virtual sources with overlapping sub-apertures to enhance both image SNR and resolution without sacrificing frame rate. This method includes three steps: 1) create virtual sources using sub-apertures; 2) encode virtual sources using a Hadamard matrix; and 3) add short time intervals (a few microseconds) between transmissions of different virtual sources to allow overlapping sub-apertures. The USTA was tested experimentally with a point target, a B-mode phantom, and in vivo human kidney micro-vessel imaging. Compared with standard coherent diverging wave compounding at the same frame rate, improvements in image SNR, lateral resolution (+33%, with B-mode phantom imaging), and contrast ratio (+3.8 dB, with in vivo human kidney micro-vessel imaging) have been achieved. The f-number of the virtual sources, the number of virtual sources used, and the number of elements used in each sub-aperture can be flexibly adjusted to enhance resolution and SNR. This allows very flexible optimization of USTA for different applications.
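The SNR benefit of Hadamard encoding can be sketched numerically: firing all virtual sources simultaneously with ±1 polarities from a Hadamard matrix, then decoding, averages the receiver noise down by √N. This toy model (synthetic echoes, not real beamforming) is an illustration of the encoding step only:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(1)
n_src, n_samp = 4, 256
responses = rng.standard_normal((n_src, n_samp))   # per-virtual-source echoes

H = hadamard(n_src)
encoded = H @ responses                            # each shot fires all sources (+/-1)
noisy = encoded + 0.1 * rng.standard_normal(encoded.shape)  # receiver noise
decoded = (H.T @ noisy) / n_src                    # Hadamard decoding (H^T H = n I)
```

Each decoded channel sums N noisy shots, so the residual noise standard deviation drops from 0.1 to roughly 0.1/√4 = 0.05, the usual √N encoding gain.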
Center determination for trailed sources in astronomical observation images
NASA Astrophysics Data System (ADS)
Du, Jun Ju; Hu, Shao Ming; Chen, Xu; Guo, Di Fu
2014-11-01
Images with trailed sources can be obtained when observing near-Earth objects, such as small asteroids, space debris, and major planets and their satellites, whether the telescope tracks at the sidereal rate or at the rate of the target. The low centering accuracy of these trailed sources is one of the most important sources of astrometric uncertainty, but accurately determining the central positions of trailed sources remains a significant challenge for image processing techniques, especially in the study of faint or fast-moving objects. According to the conditions of the one-meter telescope at Weihai Observatory of Shandong University, moment analysis and point-spread-function (PSF) fitting were chosen to develop the image processing pipeline for space debris. The principles and implementations of both methods are introduced in this paper, and some simulated images containing trailed sources are analyzed with each technique. The results show that the two methods are comparable in obtaining accurate central positions of trailed sources when the signal-to-noise ratio (SNR) is high, but the moment method tends to fail for objects with low SNR. Compared with the moment method, PSF fitting is more robust and versatile; however, it is quite time-consuming. Therefore, if there are enough bright stars in the field, or high astrometric accuracy is not required, the moment method is sufficient. Otherwise, the combination of moment analysis and PSF fitting is recommended.
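The moment method mentioned above is a simple intensity-weighted first moment (center of mass) of the background-subtracted image. A minimal sketch on a synthetic trail (illustrative, not the Weihai pipeline):

```python
import numpy as np

def moment_centroid(img, background=0.0):
    """First-moment (center-of-mass) centroid of a background-subtracted image."""
    data = np.clip(np.asarray(img, dtype=float) - background, 0.0, None)
    total = data.sum()
    ys, xs = np.indices(data.shape)
    return (ys * data).sum() / total, (xs * data).sum() / total

# Synthetic trailed source: a short streak of uniform brightness along x.
img = np.zeros((9, 9))
img[4, 2:7] = 100.0            # trail on row 4, columns 2..6, centered at x = 4
cy, cx = moment_centroid(img)
```

For a noise-free symmetric trail the moment gives the exact center; with noise, low-SNR pixels pull the estimate off, which is the failure mode the abstract describes.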
Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging
NASA Astrophysics Data System (ADS)
Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.
2014-04-01
The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), the measurement angular positions, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions, and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. Additional angular measurements only spread the total dose across the measurements without improving or worsening the CRLB, but the added measurements may improve parametric images by reducing estimation bias.
Next, using CRLB we evaluate the multiple-image radiography, diffraction enhanced imaging and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.
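The CRLB machinery used above can be illustrated on the simplest case: estimating the amplitude of a known sampled profile under additive Gaussian noise, where the bound is 1/Fisher information and the linear least-squares estimator attains it. This is a generic numerical sketch, not the paper's ABI model; all values are invented:

```python
import numpy as np

# Model: y_i = A * g_i + n_i, with n_i ~ N(0, sigma^2) and g known.
# Fisher information I(A) = sum_i g_i^2 / sigma^2, so CRLB = 1 / I(A).
rng = np.random.default_rng(2)
g = np.exp(-0.5 * np.linspace(-3, 3, 11) ** 2)   # sampled angular profile
A_true, sigma = 2.0, 0.1
fisher = (g ** 2).sum() / sigma ** 2
crlb = 1.0 / fisher

# Monte Carlo check: the least-squares estimator A_hat = <y, g>/<g, g>
# is unbiased and its variance matches the CRLB for this linear model.
trials = 20000
y = A_true * g + sigma * rng.standard_normal((trials, g.size))
A_hat = y @ g / (g ** 2).sum()
```

The empirical variance of `A_hat` over many trials matches the analytic bound, demonstrating what "best achievable noise performance" means in the abstract.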
Zhou, Lian; Li, Xu; Zhu, Shanan; He, Bin
2011-01-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) was recently introduced as a noninvasive electrical conductivity imaging approach with high spatial resolution, close to that of ultrasound imaging. In the present study, we test the feasibility of the MAT-MI method for breast tumor imaging using numerical modeling and computer simulation. Using the finite element method, we have built three-dimensional numerical breast models with a variety of embedded tumors for this simulation study. In order to obtain an accurate and stable forward solution that does not have numerical errors caused by singular MAT-MI acoustic sources at conductivity boundaries, we first derive an integral forward method for calculating MAT-MI acoustic sources over the entire imaging volume. An inverse algorithm for reconstructing the MAT-MI acoustic source is also derived for a spherical measurement aperture, which simulates a practical setup for breast imaging. With the numerical breast models, we have conducted computer simulations under different imaging parameter setups, and all the results suggest that breast tumors with large conductivity contrast to their surrounding tissues, as reported in the literature, may be readily detected in the reconstructed MAT-MI images. In addition, our simulations suggest that the sensitivity of imaging breast tumors using the presented MAT-MI setup depends more on the tumor location and the conductivity contrast between the tumor and its surrounding tissues than on the tumor size. PMID:21364262
Profile fitting in crowded astronomical images
NASA Astrophysics Data System (ADS)
Manish, Raja
Around 18,000 known objects currently populate near-Earth space. These comprise active space assets as well as space debris objects. The tracking and cataloging of such objects relies on observations, most of which are ground based. Because of the great distance to the objects, only non-resolved object images can be obtained from the observations. Optical systems consist of telescope optics and a detector; nowadays, CCD detectors are usually used. The information to be extracted from the frames is each object's astrometric position. In order to do so, the center of the object's image on the CCD frame has to be found. However, the observation frames that are read out of the detector are subject to noise from three different sources: celestial background sources, the object signal itself, and the sensor noise. The noise statistics are usually modeled as Gaussian or Poisson distributed, or their combined distribution. In order to achieve near real-time processing, computationally fast and reliable methods for the so-called centroiding are desired; analytical methods are preferred over numerical ones of comparable accuracy. In this work, an analytic method for centroiding is investigated and compared to numerical methods. Though the work focuses mainly on astronomical images, the same principle could be applied to non-celestial images containing similar data. The method is based on minimizing the weighted least-squares (LS) error between observed data and the theoretical model of point sources in a novel yet simple way. Synthetic image frames have been simulated. The newly developed method is tested in both crowded and non-crowded fields, where the former needs additional image handling procedures to separate closely packed objects. Subsequent analysis of real celestial images corroborates the effectiveness of the approach.
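One classical route to an analytic (closed-form) LS centroid, sketched here as a generic illustration rather than the author's specific method: taking the logarithm of Gaussian PSF samples turns the profile into a parabola, so an ordinary least-squares quadratic fit yields the center in closed form (center = −b/2a). All values below are synthetic:

```python
import numpy as np

def parabola_centroid(x, counts):
    """Closed-form centroid from a quadratic LS fit to log(counts)."""
    a, b, _ = np.polyfit(x, np.log(counts), 2)
    return -b / (2.0 * a)

# Noise-free samples of a 1-D Gaussian point-source profile.
x = np.arange(7, dtype=float)
center, width, amp = 3.2, 1.1, 500.0
counts = amp * np.exp(-0.5 * ((x - center) / width) ** 2)
est = parabola_centroid(x, counts)
```

For noise-free data the fit is exact; with photon noise one would weight the fit (bright pixels have more reliable logarithms), which is where the weighted-LS formulation of the abstract comes in.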
Three-dimensional digital projection in neurosurgical education: technical note.
Martins, Carolina; Ribas, Eduardo Carvalhal; Rhoton, Albert L; Ribas, Guilherme Carvalhal
2015-10-01
Three-dimensional images have become an important tool in teaching surgical anatomy, and their didactic power is enhanced when combined with 3D surgical images and videos. This paper describes the method used by the last author (G.C.R.) since 2002 to project 3D anatomical and surgical images using a computer source. Projecting 3D images requires the superposition of 2 similar but slightly different images of the same object. The set of images, one mimicking the view of the left eye and the other mimicking the view of the right eye, constitutes the stereoscopic pair and can be processed using anaglyphic or horizontal-vertical polarization of light for individual use or presentation to larger audiences. Classically, 3D projection could be obtained by using a double set of slides, projected through 2 slide projectors, each equipped with complementary filters, shooting over a medium that keeps light polarized (a silver screen), and having the audience wear appropriate glasses. More recently, a digital method of 3D projection has been perfected. In this method, a personal computer is used as the source of the images, which are arranged in a Microsoft PowerPoint presentation. A beam splitter device is used to connect the computer source to 2 digital, portable projectors. Filters, a silver screen, and glasses are used, similar to the classic method. Among other advantages, this method brings flexibility to 3D presentations by allowing the combination of 3D anatomical and surgical still images and videos. It eliminates the need for film and film developing, lowering the costs of the process. By using small, powerful digital projectors, this method replaces the previous technology without incurring a loss of quality, and enhances portability.
Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei
2011-04-01
Online blind source separation (BSS) has been proposed to overcome the high computational cost that limits the practical application of traditional batch BSS algorithms. However, existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) has shown great potential for separating correlated sources, where constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with a volume constraint is derived and applied to online BSS. The volume constraint on the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural-gradient-based multiplicative update rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and highly correlated face images show the validity of the proposed method.
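The multiplicative update rule underlying such NMF-based separation can be shown in its plain batch form (the classic Lee-Seung updates); the paper's incremental, volume-constrained variant builds on this. A minimal sketch with synthetic nonnegative data:

```python
import numpy as np

def nmf(V, r, iters=1000, seed=0):
    """Basic multiplicative-update NMF: V ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12                       # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update sources
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update mixing matrix
    return W, H

rng = np.random.default_rng(3)
V = rng.random((20, 4)) @ rng.random((4, 30))  # nonnegative rank-4 "mixtures"
W, H = nmf(V, 4)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The updates keep both factors nonnegative by construction and monotonically decrease the reconstruction error; the volume constraint of the paper additionally shrinks the simplex spanned by the columns of W to make the factorization identifiable.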
NASA Astrophysics Data System (ADS)
Lacki, Brian C.; Kochanek, Christopher S.; Stanek, Krzysztof Z.; Inada, Naohisa; Oguri, Masamune
2009-06-01
Difference imaging provides a new way to discover gravitationally lensed quasars because few nonlensed sources will show spatially extended, time-variable flux. We test the method on the fields of lens candidates in the Sloan Digital Sky Survey (SDSS) Supernova Survey region from the SDSS Quasar Lens Search (SQLS) and one serendipitously discovered lensed quasar. Starting from 20,536 sources, including 49 SDSS quasars, 32 candidate lenses/lensed images, and one known lensed quasar, we find that 174 sources, including 35 SDSS quasars, 16 candidate lenses/lensed images, and the known lensed quasar, are nonperiodic variable sources. We can measure the spatial structure of the variable flux for 119 of these variable sources and identify only eight as candidate extended variables, including the known lensed quasar. Only the known lensed quasar appears as a close pair of sources on the difference images. Inspection of the remaining seven suggests they are false positives, and only two were spectroscopically identified quasars. One of the lens candidates from the SQLS survives our cuts, but only as a single image instead of a pair. This indicates a false positive rate of order ~1/4000 for the method, or given our effective survey area of order 0.82 deg², ~5 per deg² in the SDSS Supernova Survey. The fraction of quasars not found to be variable and the false positive rate would both fall if we had analyzed the full, later data releases for the SDSS fields. While application of the method to the SDSS is limited by the resolution, depth, and sampling of the survey, several future surveys such as Pan-STARRS, LSST, and SNAP will significantly improve on these limitations.
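The basic principle is that constant sources cancel in the difference of two epochs while variable sources leave residual flux. A toy illustration (synthetic Gaussian point sources, not the SDSS pipeline):

```python
import numpy as np

def gauss2d(shape, cy, cx, sigma, amp):
    """Simple circular Gaussian point-source model."""
    ys, xs = np.indices(shape)
    return amp * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

shape = (32, 32)
static = gauss2d(shape, 8, 8, 1.5, 200.0)             # constant star
epoch1 = static + gauss2d(shape, 20, 20, 1.5, 150.0)  # quasar, bright epoch
epoch2 = static + gauss2d(shape, 20, 20, 1.5, 90.0)   # quasar, faded epoch
diff = np.abs(epoch1 - epoch2)                        # star cancels, quasar remains
```

Only the variable source survives in `diff`; for lensed quasars, the residual flux additionally appears as a close pair (or extended structure) of images, which is the spatial-structure cut used in the abstract.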
Image steganalysis using Artificial Bee Colony algorithm
NASA Astrophysics Data System (ADS)
Sajedi, Hedieh
2017-09-01
Steganography is the science of secure communication in which the presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of secret communication. Processing a huge amount of information usually takes extensive execution time and computational resources. As a result, a preprocessing phase is needed, which can moderate the execution time and computational resources. In this paper, we propose a new feature-based blind steganalysis method for detecting stego images among cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC) algorithm. The ABC algorithm is inspired by honeybees' social behaviour in their search for food sources. In the proposed method, classifier performance and the dimension of the selected feature vector are evaluated using wrapper-based methods. The experiments are performed using two large datasets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to other existing techniques.
Optical Imaging of Ionizing Radiation from Clinical Sources.
Shaffer, Travis M; Drain, Charles Michael; Grimm, Jan
2016-11-01
Nuclear medicine uses ionizing radiation for both in vivo diagnosis and therapy. Ionizing radiation comes from a variety of sources, including x-rays, beam therapy, brachytherapy, and various injected radionuclides. Although PET and SPECT remain clinical mainstays, optical readouts of ionizing radiation offer numerous benefits and complement these standard techniques. Furthermore, for ionizing radiation sources that cannot be imaged using these standard techniques, optical imaging offers a unique imaging alternative. This article reviews optical imaging of both radionuclide- and beam-based ionizing radiation from high-energy photons and charged particles through mechanisms including radioluminescence, Cerenkov luminescence, and scintillation. Therapeutically, these visible photons have been combined with photodynamic therapeutic agents preclinically for increasing therapeutic response at depths difficult to reach with external light sources. Last, new microscopy methods that allow single-cell optical imaging of radionuclides are reviewed. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
NASA Astrophysics Data System (ADS)
Fard, Ali M.; Gardecki, Joseph A.; Ughi, Giovanni J.; Hyun, Chulho; Tearney, Guillermo J.
2016-02-01
Intravascular optical coherence tomography (OCT) is a high-resolution catheter-based imaging method that provides three-dimensional microscopic images of the coronary artery in vivo, facilitating coronary artery disease treatment decisions based on detailed morphology. Near-infrared spectroscopy (NIRS) has proven to be a powerful tool for identification of lipid-rich plaques inside the coronary walls. We have recently demonstrated a dual-modality intravascular imaging technology that integrates OCT and NIRS into one imaging catheter using a two-fiber arrangement and a custom-made dual-channel fiber rotary junction. It thereby enables simultaneous acquisition of microstructural and compositional information at 100 frames/second for improved diagnosis of coronary lesions. The dual-modality OCT-NIRS system employs a single wavelength-swept light source for both the OCT and NIRS modalities and uses a high-speed photoreceiver to detect the NIRS spectrum in the time domain. Although the use of one light source greatly simplifies the system configuration, such a light source exhibits pulse-to-pulse wavelength and intensity variation due to mechanical scanning of the wavelength. This can be particularly problematic for the NIRS modality and compromises the reliability of the acquired spectra. In order to address this challenge, we developed a robust data acquisition and processing method that compensates for the spectral variations of the wavelength-swept light source. The proposed method extracts the properties of the light source, i.e., the variation period and amplitude, from a reference spectrum and subsequently calibrates the NIRS datasets. We have applied this method to datasets obtained from cadaver human coronary arteries using a polygon-scanning (1230-1350 nm) OCT system operating at 100,000 sweeps per second. The results suggest that our algorithm accurately and robustly compensates the spectral variations and visualizes the dual-modality OCT-NIRS images.
These findings are therefore crucial for the practical application and clinical translation of dual-modality intravascular OCT-NIRS imaging when the same swept source is used for both OCT and spectroscopy.
Nema, Shubham; Hasan, Whidul; Bhargava, Anamika; Bhargava, Yogesh
2016-09-15
Behavioural neuroscience relies on software-driven methods for behavioural assessment, but the field lacks cost-effective, robust, open-source software for behavioural analysis. Here we propose a novel method which we call ZebraTrack. It includes a cost-effective imaging setup for distraction-free behavioural acquisition, automated tracking using the open-source ImageJ software, and a workflow for extraction of behavioural endpoints. Our ImageJ algorithm provides control to users at key steps while maintaining automation in tracking, without requiring the installation of external plugins. We have validated this method by testing novelty-induced anxiety behaviour in adult zebrafish. Our results, in agreement with established findings, showed that during state anxiety, zebrafish show reduced distance travelled, increased thigmotaxis, and more freezing events. Furthermore, we propose a method to represent both the spatial and temporal distribution of choice-based behaviour, which is currently not possible with simple videograms. The ZebraTrack method is simple and economical, yet robust enough to give results comparable with those obtained from costly proprietary software such as Ethovision XT. We have developed and validated a novel cost-effective method for behavioural analysis of adult zebrafish using open-source ImageJ software. Copyright © 2016 Elsevier B.V. All rights reserved.
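Tracking pipelines of this kind typically reduce to background subtraction, thresholding, and per-frame centroiding. A minimal numpy sketch of that loop (illustrative only; the actual ZebraTrack workflow runs inside ImageJ, and the frames below are simulated):

```python
import numpy as np

def track(frames, background, thresh=20.0):
    """Per-frame centroid of pixels that differ from a static background."""
    path = []
    for frame in frames:
        diff = np.abs(frame.astype(float) - background)
        mask = diff > thresh              # segment the moving animal
        ys, xs = np.nonzero(mask)
        path.append((ys.mean(), xs.mean()))  # centroid of the blob
    return path

# Simulated subject: a bright 2x2 blob moving diagonally across 5 frames.
bg = np.zeros((16, 16))
frames = []
for t in range(5):
    f = bg.copy()
    f[2 + t : 4 + t, 3 + t : 5 + t] = 100.0
    frames.append(f)
path = track(frames, bg)
```

From the centroid path, endpoints such as total distance travelled, thigmotaxis (time near walls), and freezing (frames with negligible displacement) follow directly.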
Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.
Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B
2015-09-01
Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased from 0.72±0.30 and 0.87±0.11 for both reference methods to with normalization. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased to . The third experiment was detection of tuberculosis related abnormalities in the lung fields. The average area under the Receiver Operating Curve increased significantly from 0.72±0.14 and 0.79±0.06 using the reference methods to with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
Phase contrast imaging using a micro focus x-ray source
NASA Astrophysics Data System (ADS)
Zhou, Wei; Majidi, Keivan; Brankov, Jovan G.
2014-09-01
Phase contrast x-ray imaging, a technique for increasing imaging contrast between tissues with similar attenuation coefficients, has been studied since the mid-1990s. It offers the possibility of revealing clear details of soft tissues and tumors at small-scale resolution. A compact and low-cost phase contrast imaging system using a conventional x-ray source is described in this paper. Using a conventional x-ray source is of great importance because it makes the method usable in hospitals and clinical offices. Simple materials and components are used in the setup to keep the cost in a reasonable and affordable range. The tungsten Kα1 line, with a photon energy of 59.3 keV, was used for imaging. Some of the system design details are discussed, and the method used to stabilize the system is introduced. A chicken thigh bone tissue sample was imaged, followed by a discussion of image quality, image acquisition time, and potential clinical applications. Because a high-energy x-ray beam can be used in phase contrast imaging, the radiation dose to patients can be greatly decreased compared with traditional x-ray radiography.
Fusion of infrared polarization and intensity images based on improved toggle operator
NASA Astrophysics Data System (ADS)
Zhu, Pan; Ding, Lei; Ma, Xiaoqing; Huang, Zhanhua
2018-01-01
The integration of infrared polarization and intensity images is a new topic in infrared image understanding and interpretation. The abundant infrared detail and target information from the intensity image, and the salient edge and shape information from the polarization image, should be preserved or even enhanced in the fused result. In this paper, a new fusion method for infrared polarization and intensity images is proposed, based on an improved multi-scale toggle operator with spatial scale, which can effectively extract the feature information of the source images and greatly reduce redundancy among different scales. Firstly, the multi-scale image features of the infrared polarization and intensity images are extracted at different scale levels by the improved multi-scale toggle operator. Secondly, the redundancy of the features among different scales is reduced by using the spatial scale. Thirdly, the final image features are combined by simply adding the feature images of all scales together, and a base image is calculated by applying a mean-value weighting to the smoothed source images. Finally, the fused image is obtained by importing the combined image features into the base image with a suitable strategy. Both objective assessment and subjective visual inspection of the experimental results indicate that the proposed method performs better at preserving detail and edge information as well as improving image contrast.
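The classical toggle contrast operator underlying the method above can be sketched as follows. This is a generic illustration of a multi-scale toggle operator plus a simple max-fusion rule, not the paper's improved operator or its spatial-scale redundancy reduction; all names, scale choices, and the fusion strategy are assumptions:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def toggle_features(img, sizes=(3, 5, 7)):
    """Multi-scale contrast features from the classical toggle operator:
    each pixel snaps to the nearer of its local dilation/erosion, and the
    distance moved is the feature at that scale."""
    img = np.asarray(img, dtype=float)
    feats = []
    for k in sizes:
        d = grey_dilation(img, size=(k, k))
        e = grey_erosion(img, size=(k, k))
        toggled = np.where(d - img < img - e, d, e)  # classic toggle contrast
        feats.append(np.abs(img - toggled))          # feature map at scale k
    return feats

def fuse(img_a, img_b, sizes=(3, 5, 7)):
    """Toy fusion: mean base image plus the stronger of the two images'
    summed multi-scale toggle features (a stand-in for the paper's
    'suitable strategy')."""
    base = 0.5 * (np.asarray(img_a, float) + np.asarray(img_b, float))
    fa = sum(toggle_features(img_a, sizes))
    fb = sum(toggle_features(img_b, sizes))
    return base + np.maximum(fa, fb)
```

Because the feature maps are non-negative, the fused image never falls below the mean base image; stronger edges in either source survive into the result.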
NASA Technical Reports Server (NTRS)
Shulman, A. R. (Inventor)
1971-01-01
A method and apparatus for substantially eliminating noise in a coherent energy imaging system, and specifically in a light imaging system of the type having a coherent light source and at least one image lens disposed between an input signal plane and an output image plane, are discussed. The input signal plane is illuminated with the light source while the lens is rotated about its optical axis. In this manner, the energy density of coherent noise diffraction patterns produced by imperfections such as dust and/or bubbles on and/or in the lens is distributed over a ring-shaped area of the output image plane and reduced to a point where it can be ignored. The spatial filtering capability of the coherent imaging system is not affected by this noise elimination technique.
Image Size Scalable Full-parallax Coloured Three-dimensional Video by Electronic Holography
NASA Astrophysics Data System (ADS)
Sasaki, Hisayuki; Yamamoto, Kenji; Ichihashi, Yasuyuki; Senoh, Takanori
2014-02-01
In electronic holography, various methods have been considered for using multiple spatial light modulators (SLM) to increase the image size. In a previous work, we used a monochrome light source for a method that located an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image.
TH-EF-207A-05: Feasibility of Applying SMEIR Method On Small Animal 4D Cone Beam CT Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Y; Zhang, Y; Shao, Y
Purpose: Small animal cone beam CT imaging has been widely used in preclinical research. Due to the higher respiratory and heart rates of small animals, motion blurring is inevitable and needs to be corrected in the reconstruction. The simultaneous motion estimation and image reconstruction (SMEIR) method, which uses projection images of all phases, has proved effective in motion model estimation and is able to reconstruct motion-compensated images. We demonstrate the application of SMEIR to small animal 4D cone beam CT imaging by computer simulations on a digital rat model. Methods: The small animal CBCT imaging system was simulated with a source-to-detector distance of 300 mm and a source-to-object distance of 200 mm. A sequence of rat phantoms was generated with 0.4 mm³ voxel size. The respiratory cycle was taken as 1.0 second, and the motions were simulated with a diaphragm motion of 2.4 mm and an anterior-posterior expansion of 1.6 mm. The projection images were calculated using a ray-tracing method, and 4D-CBCT images were reconstructed using the SMEIR and FDK methods. The SMEIR method iterates over two alternating steps: 1) motion-compensated iterative image reconstruction using projections from all respiration phases, and 2) motion model estimation directly from projections through a 2D-3D deformable registration of the image obtained in the first step to the projection images of the other phases. Results: The images reconstructed using the SMEIR method reproduced the features of the original phantom. Projections from the same phase were also reconstructed using the FDK method. Compared with the FDK results, the images from the SMEIR method substantially improve image quality with minimal artifacts. Conclusion: We demonstrate that it is viable to apply the SMEIR method to reconstruct small animal 4D-CBCT images.
Imaging strategies using focusing functions with applications to a North Sea field
NASA Astrophysics Data System (ADS)
da Costa Filho, C. A.; Meles, G. A.; Curtis, A.; Ravasi, M.; Kritski, A.
2018-04-01
Seismic methods are used in a wide variety of contexts to investigate subsurface Earth structures, and to explore and monitor resources and waste-storage reservoirs in the upper ˜100 km of the Earth's subsurface. Reverse-time migration (RTM) is one widely used seismic method which constructs high-frequency images of subsurface structures. Unfortunately, RTM shares certain disadvantages with other conventional single-scattering-based methods, such as not being able to correctly migrate multiply scattered arrivals. In principle, the recently developed Marchenko methods can be used to migrate all orders of multiples correctly. In practice, however, Marchenko methods are costlier to compute than RTM: for a single imaging location, the cost of performing the Marchenko method is several times that of standard RTM, and performing RTM itself requires dedicated use of some of the largest computers in the world for individual data sets. A different imaging strategy is therefore required. We propose a new set of imaging methods which use so-called focusing functions to obtain images with few artifacts from multiply scattered waves, while greatly reducing the number of points across the image at which the Marchenko method need be applied. Focusing functions are outputs of the Marchenko scheme: they are solutions of wave equations that focus in time and space at particular surface or subsurface locations. However, they are mathematical rather than physical entities, being defined only in reference media that are equal to the true Earth above their focusing depths but homogeneous below. Here, we use these focusing functions as virtual source/receiver surface seismic surveys, the upgoing focusing function being the virtual received wavefield that is created when the downgoing focusing function acts as a spatially distributed source. These source/receiver wavefields are used in three imaging schemes: one allows specific individual reflectors to be selected and imaged.
The other two schemes provide either targeted or complete images with distinct advantages over current RTM methods, such as fewer artifacts and artifacts that occur in different locations. The latter property allows the recently published `combined imaging' method to remove almost all artifacts. We show several examples to demonstrate the methods: acoustic 1-D and 2-D synthetic examples, and a 2-D line from an ocean bottom cable field data set. We discuss an extension to elastic media, which is illustrated by a 1.5-D elastic synthetic example.
Grossman, M.W.; George, W.A.; Pai, R.Y.
1985-08-13
A technique is disclosed for opening an evacuated and sealed glass capsule containing a material that is to be dispensed which has a relatively high vapor pressure such as mercury. The capsule is typically disposed in a discharge tube envelope. The technique involves the use of a first light source imaged along the capsule and a second light source imaged across the capsule substantially transversely to the imaging of the first light source. Means are provided for constraining a segment of the capsule along its length with the constraining means being positioned to correspond with the imaging of the second light source. These light sources are preferably incandescent projection lamps. The constraining means is preferably a multiple looped wire support. 6 figs.
Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack
NASA Astrophysics Data System (ADS)
Nalegaev, S. S.; Petrov, N. V.
Known techniques for breaking Double Random Phase Encoding (DRPE) that bypass the resource-intensive brute-force method require at least two conditions: the attacker knows the encryption algorithm, and there is access to pairs of source and encoded images. Our numerical results show that for accurate recovery by a numerical brute-force attack, one needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments with optical data encryption by DRPE with digital holography, we propose four simple criteria for guaranteed and accurate data recovery. These criteria can be applied if grayscale, binary (including QR-code), or color images are used as a source.
Nose, Takayuki; Chatani, Masashi; Otani, Yuki; Teshima, Teruki; Kumita, Shinichirou
2017-03-15
High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results; even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of the HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed at quarter-frame rates. Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. With the use of this modified C-arm fluoroscopic verification method, treatment errors that would otherwise have been overlooked were detected in real time. This method should be given consideration for widespread use. Copyright © 2016 Elsevier Inc. All rights reserved.
Rakić, Aleksandar D; Taimre, Thomas; Bertling, Karl; Lim, Yah Leng; Dean, Paul; Indjin, Dragan; Ikonić, Zoran; Harrison, Paul; Valavanis, Alexander; Khanna, Suraj P; Lachab, Mohammad; Wilson, Stephen J; Linfield, Edmund H; Davies, A Giles
2013-09-23
The terahertz (THz) frequency quantum cascade laser (QCL) is a compact source of high-power radiation with a narrow intrinsic linewidth. As such, THz QCLs are extremely promising sources for applications including high-resolution spectroscopy, heterodyne detection, and coherent imaging. We exploit the remarkable phase-stability of THz QCLs to create a coherent swept-frequency delayed self-homodyning method for both imaging and materials analysis, using laser feedback interferometry. Using our scheme we obtain amplitude-like and phase-like images with minimal signal processing. We determine the physical relationship between the operating parameters of the laser under feedback and the complex refractive index of the target and demonstrate that this coherent detection method enables extraction of complex refractive indices with high accuracy. This establishes an ultimately compact and easy-to-implement THz imaging and materials analysis system, in which the local oscillator, mixer, and detector are all combined into a single laser.
NASA Astrophysics Data System (ADS)
Liu, Guoyan; Gao, Kun; Liu, Xuefeng; Ni, Guoqiang
2016-10-01
We report a new method, polarization-parameter indirect microscopic imaging with a high-transmission infrared light source, to detect the morphology and composition of human skin. A conventional reflection microscopic system is used as the basic optical system, into which a polarization-modulation mechanism is inserted and a high-transmission infrared light source is utilized. The near-field structural characteristics of human skin can be delivered by infrared waves and material coupling. According to coupling and conduction physics, changes of the optical wave parameters can be calculated and curves of the image intensity can be obtained. By analyzing the near-field polarization parameters at the nanoscale, we can finally obtain the inversion images of human skin. Compared with a conventional direct optical microscope, this method can break the diffraction limit and achieve a super-resolution of sub-100 nm. Besides, the method is more sensitive to edges, wrinkles, boundaries, and impurity particles.
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the large amount of missing data across subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all the classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve similar estimated mean differences between the two classes (under classification) for the shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains different classification tasks to choose a common feature subset for the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects.
We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method.
PMID:24820966
Devices, systems, and methods for imaging
Appleby, David; Fraser, Iain; Watson, Scott
2008-04-15
Certain exemplary embodiments comprise a system, which can comprise an imaging plate. The imaging plate can be exposable by an x-ray source. The imaging plate can be configured to be used in digital radiographic imaging. The imaging plate can comprise a phosphor-based image storage device configured to convert an image stored therein into light.
NASA Astrophysics Data System (ADS)
Pena-Verdeal, Hugo; Garcia-Resua, Carlos; Yebra-Pimentel, Eva; Giraldez, Maria J.
2017-08-01
Purpose: Different lower tear meniscus parameters can be clinically assessed in dry eye diagnosis. The aim of this study was to propose, and analyse the variability of, a semi-automatic method for measuring the lower tear meniscus central area (TMCA) using open-source software. Material and methods: In a group of 105 subjects, one video of the lower tear meniscus after fluorescein instillation was recorded by a digital camera attached to a slit lamp. A short light beam (3x5 mm) with moderate illumination in the central portion of the meniscus (6 o'clock) was used. Images were extracted from each video by a masked observer. Using open-source software based on Java (NIH ImageJ), a further observer measured, in a masked and randomized order, the TMCA in the area illuminated by the short light beam with two methods: (1) a manual method, in which the TMCA was measured manually in the images; (2) a semi-automatic method, in which the TMCA images were converted to 8-bit binary images, holes inside the resulting shape were filled, and the area of the isolated shape was obtained. Finally, the manual and semi-automatic measurements were compared. Results: A paired t-test showed no statistically significant difference between the results of the two techniques (p = 0.102). Pearson correlation between the techniques showed a significant, near-perfect positive correlation (r = 0.99; p < 0.001). Conclusions: This study presented a useful tool to objectively measure the frontal central area of the meniscus in photographs using free open-source software.
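The semi-automatic steps described (binarize, fill holes, measure the area) translate directly into a few lines of image processing. A minimal sketch, assuming a simple mean-split threshold rather than ImageJ's default thresholding method; the function and parameter names are hypothetical:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def meniscus_area(img, thresh=None, pixel_area_mm2=1.0):
    """Binarize an 8-bit image, fill interior holes, and count foreground
    pixels, mirroring the described ImageJ steps.  The mean-split
    threshold is a simplification of ImageJ's default method."""
    img = np.asarray(img, dtype=float)
    if thresh is None:
        thresh = img.mean()                    # crude automatic threshold
    filled = binary_fill_holes(img > thresh)   # close the shape's interior
    return filled.sum() * pixel_area_mm2       # pixel count -> physical area
```

Hole filling matters here because the fluorescein-stained meniscus can image as a bright rim around a darker interior; without it, only the rim pixels would be counted.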
Development and validation of an open source quantification tool for DSC-MRI studies.
Gordaliza, P M; Mateos-Pérez, J M; Montesinos, P; Guzmán-de-Villoria, J A; Desco, M; Vaquero, J J
2015-03-01
This work presents the development of an open source tool for the quantification of dynamic susceptibility-weighted contrast-enhanced (DSC) perfusion studies. The development of this tool is motivated by the lack of open source tools implemented on open platforms to allow external developers to implement their own quantification methods easily and without the need of paying for a development license. This quantification tool was developed as a plugin for the ImageJ image analysis platform using the Java programming language. A modular approach was used in the implementation of the components, in such a way that the addition of new methods can be done without breaking any of the existing functionalities. For the validation process, images from seven patients with brain tumors were acquired and quantified with the presented tool and with a widely used clinical software package. The resulting perfusion parameters were then compared. Perfusion parameters and the corresponding parametric images were obtained. When no gamma-fitting is used, an excellent agreement with the tool used as a gold-standard was obtained (R² > 0.8 and values are within 95% CI limits in Bland-Altman plots). An open source tool that performs quantification of perfusion studies using magnetic resonance imaging has been developed and validated using a clinical software package. It works as an ImageJ plugin and the source code has been published with an open source license. Copyright © 2015 Elsevier Ltd. All rights reserved.
Obtaining the phase in the star test using genetic algorithms
NASA Astrophysics Data System (ADS)
Salazar Romero, Marcos A.; Vazquez-Montiel, Sergio; Cornejo-Rodriguez, Alejandro
2004-10-01
The star test is conceptually perhaps the most basic and simplest of all methods of testing image-forming optical systems: the irradiance distribution at the image of a point source (such as a star) is given by the point spread function (PSF). The PSF is very sensitive to aberrations. One way to quantify the PSF is to measure the irradiance distribution in the image of the point source. On the other hand, if we know the aberrations introduced by the optical system, then using diffraction theory we can calculate the PSF. In this work we propose a method to find the wavefront aberrations starting from the PSF, transforming the problem of fitting an aberration polynomial into an optimization problem solved with a genetic algorithm. We also show that this method is robust to noise introduced in the recording of the image. Results of this method are shown.
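The fitting strategy described (PSF computed from aberrations via diffraction, coefficients found by a genetic algorithm) can be illustrated with a toy model. This sketch uses only two even radial phase terms in the pupil, a defocus-like and a spherical-like term, instead of a full aberration polynomial, and a bare-bones genetic algorithm with rank selection and Gaussian mutation; all names and parameters are assumptions, not the authors' implementation:

```python
import numpy as np

def psf_from_aberration(coeffs, n=64):
    """PSF as |FFT of pupil|^2 with phase W = c0*rho^2 + c1*rho^4
    (a toy two-term aberration basis, not full Zernike polynomials)."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho2 = x ** 2 + y ** 2
    pupil = rho2 <= 1.0                      # unit circular aperture
    phase = 2 * np.pi * (coeffs[0] * rho2 + coeffs[1] * rho2 ** 2)
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

def fit_ga(target_psf, pop=60, gens=60, sigma=0.05, seed=0):
    """Bare-bones genetic algorithm: keep the best quarter of the
    population, refill with mutated copies, minimize PSF mismatch."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-1, 1, size=(pop, 2))    # initial coefficient guesses
    def cost(c):
        return np.sum((psf_from_aberration(c) - target_psf) ** 2)
    for _ in range(gens):
        order = np.argsort([cost(c) for c in P])
        elite = P[order[:pop // 4]]
        children = elite[rng.integers(len(elite), size=pop - len(elite))]
        P = np.vstack([elite, children + rng.normal(0, sigma, children.shape)])
    return min(P, key=cost)
```

Because the pupil phase here is even, the coefficient signs are not uniquely recoverable from the PSF alone; a fit should be judged by the residual PSF mismatch rather than by the coefficients themselves.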
NASA Astrophysics Data System (ADS)
Weng, Jiawen; Clark, David C.; Kim, Myung K.
2016-05-01
A numerical reconstruction method based on compressive sensing (CS) for self-interference incoherent digital holography (SIDH) is proposed to achieve sectional imaging from a single-shot in-line self-interference incoherent hologram. The sensing operator is built up based on the physical mechanism of SIDH according to CS theory, and a recovery algorithm is employed for image restoration. Numerical simulation and experimental studies, employing LEDs as discrete point sources and resolution targets as extended sources, are performed to demonstrate the feasibility and validity of the method. The intensity distribution and the axial resolution along the propagation direction of SIDH by the angular spectrum method (ASM) and by CS are discussed. The analysis shows that, compared to ASM, reconstruction by CS can improve the axial resolution of SIDH and achieve sectional imaging. The proposed method may be useful for 3D analysis of dynamic systems.
A Precise Visual Method for Narrow Butt Detection in Specular Reflection Workpiece Welding
Zeng, Jinle; Chang, Baohua; Du, Dong; Hong, Yuxiang; Chang, Shuhe; Zou, Yirong
2016-01-01
During complex-path workpiece welding, it is important to keep the welding torch aligned with the groove center using a visual seam detection method, so that the deviation between the torch and the groove can be corrected automatically. However, when detecting the narrow butt of a specular reflection workpiece, existing methods may fail because of the extremely small groove width and the poor imaging quality. This paper proposes a novel detection method to solve these issues. We design a uniform surface light source to obtain high signal-to-noise ratio images against the specular reflection effect, and a double-line laser light source is used to obtain the workpiece surface equation relative to the torch. The two light sources are switched on alternately and the camera is synchronized to capture images when each light is on; the position and pose between the torch and the groove can then be obtained nearly at the same time. Experimental results show that our method can detect the groove effectively and efficiently during the welding process. The image resolution is 12.5 μm and the processing time is less than 10 ms per frame. This indicates our method can be applied to real-time narrow butt detection during high-speed welding processes. PMID:27649173
Jones, Ryan M.; O’Reilly, Meaghan A.; Hynynen, Kullervo
2015-01-01
Purpose: Experimentally verify a previously described technique for performing passive acoustic imaging through an intact human skull using noninvasive, computed tomography (CT)-based aberration corrections Jones et al. [Phys. Med. Biol. 58, 4981–5005 (2013)]. Methods: A sparse hemispherical receiver array (30 cm diameter) consisting of 128 piezoceramic discs (2.5 mm diameter, 612 kHz center frequency) was used to passively listen through ex vivo human skullcaps (n = 4) to acoustic emissions from a narrow-band fixed source (1 mm diameter, 516 kHz center frequency) and from ultrasound-stimulated (5 cycle bursts, 1 Hz pulse repetition frequency, estimated in situ peak negative pressure 0.11–0.33 MPa, 306 kHz driving frequency) Definity™ microbubbles flowing through a thin-walled tube phantom. Initial in vivo feasibility testing of the method was performed. The performance of the method was assessed through comparisons to images generated without skull corrections, with invasive source-based corrections, and with water-path control images. Results: For source locations at least 25 mm from the inner skull surface, the modified reconstruction algorithm successfully restored a single focus within the skull cavity at a location within 1.25 mm from the true position of the narrow-band source. The results obtained from imaging single bubbles are in good agreement with numerical simulations of point source emitters and the authors’ previous experimental measurements using source-based skull corrections O’Reilly et al. [IEEE Trans. Biomed. Eng. 61, 1285–1294 (2014)]. In a rat model, microbubble activity was mapped through an intact human skull at pressure levels below and above the threshold for focused ultrasound-induced blood–brain barrier opening. 
During bursts that led to coherent bubble activity, the location of maximum intensity in images generated with CT-based skull corrections was found to deviate by less than 1 mm, on average, from the position obtained using source-based corrections. Conclusions: Taken together, these results demonstrate the feasibility of using the method to guide bubble-mediated ultrasound therapies in the brain. The technique may also have application in ultrasound-based cerebral angiography. PMID:26133635
Improving signal-to-noise in the direct imaging of exoplanets and circumstellar disks with MLOCI
NASA Astrophysics Data System (ADS)
Wahhaj, Zahed; Cieza, Lucas A.; Mawet, Dimitri; Yang, Bin; Canovas, Hector; de Boer, Jozua; Casassus, Simon; Ménard, François; Schreiber, Matthias R.; Liu, Michael C.; Biller, Beth A.; Nielsen, Eric L.; Hayward, Thomas L.
2015-09-01
We present a new algorithm designed to improve the signal-to-noise ratio (S/N) of point and extended source detections around bright stars in direct imaging data. One of our innovations is that we insert simulated point sources into the science images, which we then try to recover with maximum S/N. This improves the S/N of real point sources elsewhere in the field. The algorithm, based on the locally optimized combination of images (LOCI) method, is called Matched LOCI or MLOCI. We show with Gemini Planet Imager (GPI) data on HD 135344 B and Near-Infrared Coronagraphic Imager (NICI) data on several stars that the new algorithm can improve the S/N of point source detections by 30-400% over past methods. We also find no increase in false detection rates. No prior knowledge of candidate companion locations is required to use MLOCI. On the other hand, while non-blind applications may yield linear combinations of science images that seem to increase the S/N of true sources by a factor >2, they can also yield false detections at high rates. This is a potential pitfall when trying to confirm marginal detections or to redetect point sources found in previous epochs. These findings are relevant to any method where the coefficients of the linear combination are considered tunable, e.g., LOCI and principal component analysis (PCA). Thus we recommend that false detection rates be analyzed when using these techniques. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (USA), the Science and Technology Facilities Council (UK), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).
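At its core, LOCI-type processing chooses the coefficients of a linear combination of reference frames by least squares, so that the combination reproduces (and can be subtracted from) the stellar pattern in the science frame; MLOCI changes the objective to maximize recovery S/N of injected sources. The plain least-squares step can be sketched as follows. This is a generic LOCI-style subtraction over the whole frame, not MLOCI itself, and the names are assumptions:

```python
import numpy as np

def loci_subtract(science, refs):
    """LOCI-style PSF subtraction sketch: find least-squares coefficients
    so a linear combination of reference frames best reproduces the
    science frame, then subtract that model.  Real LOCI solves this in
    local annular subsections rather than globally."""
    A = np.stack([np.ravel(r) for r in refs], axis=1)   # (npix, nref)
    b = np.ravel(science)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)        # tunable coefficients
    model = (A @ coef).reshape(np.shape(science))
    return science - model, coef
```

The coefficients being freely tunable is exactly the property the abstract warns about: a non-blind fit can absorb or amplify real sources, so detection statistics must be checked on the residual.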
3-dimensional imaging system using crystal diffraction lenses
Smither, R.K.
1999-02-09
A device for imaging a plurality of sources of x-ray and gamma-ray radiation is provided. Diffracting crystals are used for focusing the radiation and directing the radiation to a detector which is used for analyzing the radiation to collect data as to the location of the source of radiation. A computer is used for converting the data to an image. The invention also provides for a method for imaging x-ray and gamma radiation by supplying a plurality of sources of radiation; focusing the radiation onto a detector; analyzing the focused radiation to collect data as to the type and location of the radiation; and producing an image using the data. 18 figs.
High power THz sources for nonlinear imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tekavec, Patrick F.; Kozlov, Vladimir G.
2014-02-18
Many biological and chemical compounds have unique absorption features in the THz (0.1 - 10 THz) region, making the use of THz waves attractive for imaging in defense, security, biomedical imaging, and monitoring of industrial processes. Unlike optical radiation, THz frequencies can pass through many substances such as paper, clothing, ceramic, etc. with little attenuation. The use of currently available THz systems is limited by the lack of high-power sources as well as sensitive detectors and detector arrays operating at room temperature. Here we present a novel, high power THz source based on intracavity downconversion of optical pulses. The source delivers 6 ps pulses at 1.5 THz, with an average power of >300 μW and peak powers >450 mW. We propose an imaging method based on frequency upconversion that is ideally suited to use the narrow bandwidth and high peak powers produced by the source. By upconverting the THz image to the infrared, commercially available detectors can be used for real-time imaging.
Combination of acoustical radiosity and the image source method.
Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho; Jacobsen, Finn
2013-06-01
A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part. The model is based on conservation of acoustical energy. Losses are taken into account by the energy absorption coefficient, and the diffuse reflections are controlled via the scattering coefficient, which defines the portion of energy that has been diffusely reflected. The way the model is formulated allows for a dynamic control of the image source production, so that no fixed maximum reflection order is required. The model is optimized for energy impulse response predictions in arbitrary polyhedral rooms. The predictions are validated by comparison with published measured data for a real music studio hall. The proposed model turns out to be promising for acoustic predictions providing a high level of detail and accuracy.
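The image source part of the combined model can be illustrated with a minimal sketch. The shoebox geometry, first-order-only reflections, uniform absorption coefficient, and sampling values below are illustrative assumptions, not the paper's implementation (which handles arbitrary polyhedral rooms and a dynamically controlled reflection order):

```python
import numpy as np

def image_sources_1st_order(src, room, alpha):
    """First-order image sources for a shoebox room.

    src: (x, y, z) source position; room: (Lx, Ly, Lz) dimensions;
    alpha: energy absorption coefficient, assumed uniform over all walls.
    Returns a list of (position, reflected_energy_fraction) pairs.
    """
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]  # mirror the source across the wall
            images.append((tuple(img), 1.0 - alpha))
    return images

def energy_impulse_response(src, rcv, room, alpha, fs=8000, c=343.0, n=4000):
    """Energy impulse response: direct sound plus first-order specular reflections."""
    h = np.zeros(n)
    contributions = [(src, 1.0)] + image_sources_1st_order(src, room, alpha)
    for pos, e in contributions:
        r = np.linalg.norm(np.subtract(pos, rcv))
        k = int(round(fs * r / c))            # arrival sample index
        if k < n:
            h[k] += e / (4.0 * np.pi * r**2)  # spherical spreading of energy
    return h
```

For a 6 m × 5 m × 3 m room this produces the direct arrival and up to six first-order specular arrivals; higher orders would mirror the image sources recursively.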
Imaging Young Stellar Objects with VLTi/PIONIER
NASA Astrophysics Data System (ADS)
Kluska, J.; Malbet, F.; Berger, J.-P.; Benisty, M.; Lazareff, B.; Le Bouquin, J.-B.; Baron, F.; Dominik, C.; Isella, A.; Juhasz, A.; Kraus, S.; Lachaume, R.; Ménard, F.; Millan-Gabet, R.; Monnier, J.; Pinte, C.; Soulez, F.; Tallon, M.; Thi, W.-F.; Thiébaut, É.; Zins, G.
2014-04-01
Optical interferometric imaging is designed to reveal complex astronomical sources without a prior model. Among these complex objects are young stars and their environments, which have a typical morphology of a point-like source surrounded by circumstellar material of unknown shape. To image them, we have developed a numerical method that completely removes the stellar point source and reconstructs the rest of the image, using the differences in spectral behavior between the star and its circumstellar material. We aim to reveal the first astronomical units of these objects, where many physical phenomena could interplay: dust sublimation causing a puffed-up inner rim, a dusty halo, a dusty wind, or an inner gaseous component. To investigate these regions more deeply, we carried out the first Large Program survey of HAeBe stars with two main goals: statistics on the geometry of these objects at the first-astronomical-unit scale, and imaging of their very close environments. The images reveal the environment without pollution from the star and allow us to derive the best fit for the flux ratio and the spectral slope. We present the first images from this survey and the application of the imaging method to other astronomical objects.
s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography
Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai
2016-01-01
EEG source imaging enables us to reconstruct current density in the brain from electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than the number of potential dipole locations, as well as noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortical surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity of the current density itself. We demonstrate that ℓ1−2 regularization is able to enhance sparsity and accelerate computations compared to ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM).
Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
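The DCA treatment of the ℓ1−2 penalty mentioned above can be sketched for a generic linear inverse problem. This is a minimal illustration of the difference-of-convex splitting only: it uses ISTA as the inner convex solver rather than the paper's ADMM, omits the TGV term, and all parameter values are assumptions:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of t*||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_minus_l2_dca(A, b, lam, outer=10, inner=200):
    """Minimize 0.5||Ax-b||^2 + lam*(||x||_1 - ||x||_2) by DCA.

    Each outer step linearizes the concave part -||x||_2 at the current
    iterate (subgradient v = x/||x||), leaving a convex l1-regularized
    subproblem that ISTA solves in the inner loop.
    """
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    for _ in range(outer):
        nx = np.linalg.norm(x)
        v = x / nx if nx > 0 else np.zeros_like(x)  # subgradient of ||x||_2
        for _ in range(inner):                 # ISTA on the convex subproblem
            grad = A.T @ (A @ x - b) - lam * v
            x = soft(x - grad / L, lam / L)
    return x
```

On a small noiseless sparse-recovery problem the iterate converges to the true sparse vector up to the (small) regularization bias.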
Salas-Gonzalez, D; Górriz, J M; Ramírez, J; Padilla, P; Illán, I A
2013-01-01
A procedure to improve the convergence rate of affine registration methods for medical brain images when the images differ greatly from the template is presented. The methodology is based on a histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. A sum of squared differences between the source images and the template is considered as the objective function, and a Gauss-Newton optimization algorithm is used to find the minimum of the cost function. Using histogram equalization as a preprocessing step improves the convergence rate of the affine registration algorithm for brain images, as we show in this work using SPECT and PET brain images.
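The histogram-matching preprocessing step can be sketched as a quantile mapping from the source image's intensity distribution onto the template's. This is a generic implementation sketch, not the authors' code:

```python
import numpy as np

def histogram_match(source, template):
    """Remap source intensities so their histogram matches the template's.

    Works by mapping each source intensity through the source CDF and
    then through the inverse of the template CDF (quantile mapping).
    """
    s_shape = source.shape
    src = np.asarray(source).ravel()
    tmp = np.asarray(template).ravel()
    s_vals, s_idx, s_cnt = np.unique(src, return_inverse=True, return_counts=True)
    t_vals, t_cnt = np.unique(tmp, return_counts=True)
    s_cdf = np.cumsum(s_cnt).astype(float) / src.size
    t_cdf = np.cumsum(t_cnt).astype(float) / tmp.size
    matched = np.interp(s_cdf, t_cdf, t_vals)   # inverse-CDF lookup
    return matched[s_idx].reshape(s_shape)
```

For example, matching `[[0, 1], [2, 3]]` against the template `[[10, 20], [30, 40]]` returns `[[10, 20], [30, 40]]`, since both images have uniform four-level histograms.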
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jozsef, G
Purpose: To build a test device for HDR afterloaders capable of checking source positions and times at positions, and of estimating the activity of the source. Methods: A catheter is taped to a plastic scintillation sheet. When a source travels through the catheter, the scintillator sheet lights up around the source. The sheet is monitored with a video camera, which records the movement of the light spot. The center of the spot on each image of the video provides the source location, and the time stamps of the images provide the dwell time the source spends in each location. Finally, the brightness of the light spot is related to the activity of the source. A code was developed to remove noise, calibrate the scale of the image to centimeters, eliminate the distortion caused by the oblique viewing angle, identify the boundaries of the light spot, transform the image into binary form, and detect and calculate the source motion, positions, and times. The images are much less noisy if the camera is shielded. That requires that the light spot is monitored in a mirror, rather than directly. The whole assembly is covered from external light and has a size of approximately 17×35×25 cm (H×L×W). Results: A cheap camera in BW mode proved to be sufficient with a plastic scintillator sheet. The best images resulted from a 3 mm thick sheet with a ZnS:Ag surface coating. Shielding the camera decreased the noise but could not eliminate it. A test run even in noisy conditions resulted in approximately 1 mm and 1 sec difference from the planned positions and dwell times. Activity tests are in progress. Conclusion: The proposed method is feasible. It might simplify the monthly QA process of HDR brachytherapy units.
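The spot-tracking step described in the Methods can be sketched as thresholding each video frame, taking the centroid of the bright region, and grouping consecutive frames whose centroid stays put to obtain dwell positions and dwell times. The threshold and grouping tolerance below are illustrative assumptions:

```python
import numpy as np

def spot_centroid(frame, thresh):
    """Binarize a frame and return the bright-spot centroid, or None if dark."""
    mask = frame > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (ys.mean(), xs.mean())

def dwell_times(frames, timestamps, thresh, tol=2.0):
    """Group consecutive frames whose centroid moves less than `tol` pixels
    and report (centroid, dwell_seconds) for each dwell position."""
    dwells = []
    for frame, t in zip(frames, timestamps):
        c = spot_centroid(frame, thresh)
        if c is None:
            continue
        if dwells and np.hypot(c[0] - dwells[-1][0][0],
                               c[1] - dwells[-1][0][1]) < tol:
            dwells[-1][1].append(t)           # same dwell position: extend it
        else:
            dwells.append([c, [t]])           # new dwell position
    return [(c, ts[-1] - ts[0]) for c, ts in dwells]
```

A pixel-to-centimeter calibration (and the distortion correction mentioned in the abstract) would then convert the centroids to physical source positions.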
Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction
NASA Astrophysics Data System (ADS)
Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun
2018-07-01
People are increasingly becoming accustomed to taking photos of everyday life in modern cities and uploading them to major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. The geo-localization of crowd-sourced pictures enriches the information contained therein, and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces huge technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy of coarse geo-localization by image retrieval, selecting reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In the study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographical information of the crowd-sourced pictures, resulting in the proposed method improving the median error from 256.7 m to 69.0 m, and the percentage of geo-localized query pictures under a 50 m error from 17.2% to 43.2% compared with the previous method. Another discovery using the proposed method is that, with respect to the causes of reconstruction error, closer distances from the cameras to the main objects in query pictures tend to produce lower errors, and the component of error parallel to the road makes a more significant contribution to the total error. The proposed method is not limited to small areas, and could be expanded to cities and larger areas owing to its flexible parameters.
NASA Astrophysics Data System (ADS)
Lin, Ye; Zhang, Haijiang; Jia, Xiaofeng
2018-03-01
For microseismic monitoring of hydraulic fracturing, microseismic migration can be used to image the fracture network with scattered microseismic waves. Compared with conventional microseismic location-based fracture characterization methods, microseismic migration can better constrain the stimulated reservoir volume regardless of the completeness of detected and located microseismic sources. However, the imaging results from microseismic migration may suffer from the contamination of other structures and thus the target fracture zones may not be illuminated properly. To solve this issue, in this study we propose a target-oriented staining algorithm for microseismic reverse-time migration. In the staining algorithm, the target area is first stained by constructing an imaginary velocity field and then a synchronized source wavefield only concerning the target structure is produced. As a result, a synchronized image from imaging with the synchronized source wavefield mainly contains the target structures. Synthetic tests based on a downhole microseismic monitoring system show that the target-oriented microseismic reverse-time migration method improves the illumination of target areas.
NASA Astrophysics Data System (ADS)
Tourin, A.; Fink, M.
2010-12-01
The concept of time-reversal (TR) focusing was introduced in acoustics by Mathias Fink in the early nineties: a pulsed wave is sent from a source, propagates in an unknown medium and is captured at a transducer array termed a “Time Reversal Mirror” (TRM). Then the waveforms received at each transducer are flipped in time and sent back, resulting in a wave converging at the original source regardless of the complexity of the propagation medium. TRMs have now been implemented in a variety of physical scenarios from GHz microwaves to MHz ultrasonics and to hundreds of Hz in ocean acoustics. Common to this broad range of scales is a remarkable robustness exemplified by observations that the more complex the medium (random or chaotic), the sharper the focus. A TRM acts as an antenna that uses complex environments to appear wider than it is, resulting, for a broadband pulse, in a refocusing quality that does not depend on the TRM aperture. We show that the time-reversal concept is also at the heart of very active research fields in seismology and applied geophysics: imaging of seismic sources, passive imaging based on noise correlations, seismic interferometry, and monitoring of CO2 storage using the virtual source method. All these methods can indeed be viewed in a unified framework as an application of the so-called time-reversal cavity approach. That approach uses the fact that a wave field can be predicted at any location inside a volume (without source) from the knowledge of both the field and its normal derivative on the surrounding surface S, which for acoustic scalar waves is mathematically expressed in the Helmholtz-Kirchhoff (HK) integral. Thus in the first step of an ideal TR process, the field coming from a point-like source as well as its normal derivative should be measured on S. In a second step, the initial source is removed and monopole and dipole sources reemit the time reversal of the components measured in the first step.
Instead of directly computing the resulting HK integral along S, physical arguments can be used to straightforwardly predict that the time-reversed field in the cavity can be written as the difference of advanced and retarded Green’s functions centred on the initial source position. This result is in some way disappointing because it means that reversing a field using a closed TRM is not enough to realize a perfect time-reversal experiment. In practical applications, the converging wave is always followed by a diverging one (see figure). However we will show that this result is of great importance since it furnishes the basis for imaging methods in media with no active source. We will focus more especially on the virtual source method, showing that it can be used for implementing the DORT method (Decomposition of the time reversal operator) in a passive way. The passive DORT method could be interesting for monitoring changes in a complex scattering medium, for example in the context of CO2 storage.
Time-reversal imaging applied to the giant Sumatra earthquake
LED-based endoscopic light source for spectral imaging
NASA Astrophysics Data System (ADS)
Browning, Craig M.; Mayes, Samuel; Favreau, Peter; Rich, Thomas C.; Leavesley, Silas J.
2016-03-01
Colorectal cancer has the third highest cancer death rate in the United States [1]. The current screening for colorectal cancer is an endoscopic procedure using white light endoscopy (WLE). Multiple new methods are being tested to replace WLE, for example narrow band imaging and autofluorescence imaging [2]. However, these methods do not meet the need for higher specificity or sensitivity. The goal of this project is to modify the presently used endoscope light source to house 16 narrow-wavelength LEDs for spectral imaging in real time while increasing sensitivity and specificity. The approach was to take an Olympus CLK-4 light source and replace the lamp and electronics with 16 LEDs and new circuitry, allowing control of the power and intensity of the LEDs. This required a larger enclosure to house a bracket system for the solid light guide (lightpipe), three new circuit boards, a power source, and National Instruments hardware/software for computer control. The result was a successfully designed retrofit with all the new features. LED testing demonstrated the ability to control each wavelength's intensity, and the intensity measured over the voltage range provides the information needed to couple the camera for imaging. Overall the project was successful: the modifications to the light source added the controllable LEDs. This brings the research one step closer to the main goal of spectral imaging for early detection of colorectal cancer. Future goals are to connect the camera and test the imaging process.
Improved Ultrasonic Imaging of the Breast
2003-08-01
benign and malignant masses often exhibit only subtle image differences. We have invented a new technique that uses modified ultrasound equipment to form images of ultrasonic angular scatter. This method provides a new source of image contrast and should enhance the detectability of MCs and improve the differentiation of benign and malignant lesions. This method yields high resolution images with minimal statistical variability. In this first year of funding, we have formed images in tissue-mimicking phantoms and found that
Time reversal imaging, Inverse problems and Adjoint Tomography}
NASA Astrophysics Data System (ADS)
Montagner, J.; Larmat, C. S.; Capdeville, Y.; Kawakatsu, H.; Fink, M.
2010-12-01
With the increasing power of computers and numerical techniques (such as spectral element methods), it is possible to address a new class of seismological problems. The propagation of seismic waves in heterogeneous media is simulated more and more accurately and new applications are being developed, in particular time-reversal methods and adjoint tomography in the three-dimensional Earth. Since the pioneering work of J. Claerbout, theorized by A. Tarantola, many similarities have been found between time-reversal methods, cross-correlation techniques, inverse problems and adjoint tomography. By using normal mode theory, we generalize the scalar approach of Draeger and Fink (1999) and Lobkis and Weaver (2001) to the 3D elastic Earth, to theoretically understand the time-reversal method on the global scale. It is shown how to relate time-reversal methods, on one hand, to auto-correlations of seismograms for source imaging and, on the other hand, to cross-correlations between receivers for structural imaging and retrieving the Green function. Time-reversal methods were successfully applied in the past to acoustic waves in many fields such as medical imaging, underwater acoustics and nondestructive testing, and to seismic waves in seismology for earthquake imaging. In the case of source imaging, time-reversal techniques make possible an automatic location in time and space as well as the retrieval of the focal mechanism of earthquakes or unknown environmental sources. We present here some applications at the global scale of these techniques on synthetic tests and on real data, such as Sumatra-Andaman (Dec. 2004), Haiti (Jan. 2010), as well as glacial earthquakes and seismic hum.
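The refocusing property underlying time-reversal source imaging can be demonstrated with a toy one-dimensional model, where propagation through a complex medium is modeled as convolution with a multipath Green's function: re-emitting the time-reversed record makes the second passage an autocorrelation of the Green's function, which peaks at a time fixed by the original source time. The medium and source below are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Green's function of a complex (multipath) medium: random arrivals
# with a decaying envelope, standing in for a scattering Earth model.
g = rng.normal(size=256) * np.exp(-np.arange(256) / 60.0)

# Step 1: a pulse fired at t = 10 propagates and is recorded at the mirror.
source = np.zeros(64)
source[10] = 1.0
recorded = np.convolve(source, g)

# Step 2: the mirror flips the record in time and re-emits it through the
# same medium; the back-propagated field is the reversed record * g.
refocused = np.convolve(recorded[::-1], g)

# The field refocuses: refocused[m] equals the autocorrelation of g at lag
# (len(recorded) - 1 - 10) - m, so its peak sits exactly at the sample
# corresponding to the original firing time (mirrored in the record).
peak = int(np.argmax(np.abs(refocused)))
```

The sharper and longer the coda of `g`, the more energy the autocorrelation concentrates at zero lag, which is the discrete analogue of "the more complex the medium, the sharper the focus."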
Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments
Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun
2017-01-01
In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139
Imaging alpha particle detector
Anderson, David F.
1985-01-01
A method and apparatus for detecting and imaging alpha particle sources is described. A conducting coated high voltage electrode (1) and a tungsten wire grid (2) constitute a diode configuration discharge generator for electrons dislodged from atoms or molecules located in between these electrodes when struck by alpha particles from a source (3) to be quantitatively or qualitatively analyzed. A thin polyester film window (4) allows the alpha particles to pass into the gas enclosure and the combination of the glass electrode, grid and window is light transparent such that the details of the source which is imaged with high resolution and sensitivity by the sparks produced can be observed visually as well. The source can be viewed directly, electronically counted or integrated over time using photographic methods. A significant increase in sensitivity over other alpha particle detectors is observed, and the device has very low sensitivity to gamma or beta emissions which might otherwise appear as noise on the alpha particle signal.
Imaging alpha particle detector
Anderson, D.F.
1980-10-29
A method and apparatus for detecting and imaging alpha particle sources is described. A dielectric coated high voltage electrode and a tungsten wire grid constitute a diode configuration discharge generator for electrons dislodged from atoms or molecules located in between these electrodes when struck by alpha particles from a source to be quantitatively or qualitatively analyzed. A thin polyester film window allows the alpha particles to pass into the gas enclosure and the combination of the glass electrode, grid and window is light transparent such that the details of the source which is imaged with high resolution and sensitivity by the sparks produced can be observed visually as well. The source can be viewed directly, electronically counted or integrated over time using photographic methods. A significant increase in sensitivity over other alpha particle detectors is observed, and the device has very low sensitivity to gamma or beta emissions which might otherwise appear as noise on the alpha particle signal.
Harmonic source wavefront aberration correction for ultrasound imaging
Dianis, Scott W.; von Ramm, Olaf T.
2011-01-01
A method is proposed which uses a lower-frequency transmit to create a known harmonic acoustical source in tissue suitable for wavefront correction without a priori assumptions of the target or requiring a transponder. The measurement and imaging steps of this method were implemented on the Duke phased array system with a two-dimensional (2-D) array. The method was tested with multiple electronic aberrators [0.39π to 1.16π radians root-mean-square (rms) at 4.17 MHz] and with a physical aberrator (0.17π radians rms at 4.17 MHz) in a variety of imaging situations. Corrections were quantified in terms of peak beam amplitude compared to the unaberrated case, with restoration between 0.6 and 36.6 dB of peak amplitude with a single correction. Standard phantom images before and after correction were obtained and showed both visible improvement and 14 dB contrast improvement after correction. This method, when combined with previous phase correction methods, may be an important step that leads to improved clinical images. PMID:21303031
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan
2015-07-15
Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT.
Conclusions: Substantial differences in the performance of today’s DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
NASA Astrophysics Data System (ADS)
Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.
2018-06-01
The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
A robust hidden Markov Gauss mixture vector quantizer for a noisy source.
Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M
2009-07-01
Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and Salt and Pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information distortion (MDI). In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with Salt and Pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure has better performance than image restoration-based techniques and closely matches the performance of HMGMM for clean images in terms of both visual segmentation results and error rate.
Image transmission system using adaptive joint source and channel decoding
NASA Astrophysics Data System (ADS)
Liu, Weiliang; Daut, David G.
2005-03-01
In this paper, an adaptive joint source and channel decoding method is designed to accelerate the convergence of the iterative log-domain sum-product decoding procedure of LDPC codes as well as to improve the reconstructed image quality. Error resilience modes are used in the JPEG2000 source codec, which makes it possible to provide useful source-decoded information to the channel decoder. After each iteration, a tentative decoding is made and the channel-decoded bits are sent to the JPEG2000 decoder. Due to the error resilience modes, some bits are known to be either correct or in error. The positions of these bits are then fed back to the channel decoder, and the log-likelihood ratios (LLR) of these bits are modified by a weighting factor for the next iteration. Based on the observed statistics of the decoding procedure, the weighting factor is designed as a function of the channel condition: the lower the channel SNR, the larger the factor, and vice versa. Results show that the proposed joint decoding method can greatly reduce the number of iterations, and thereby reduce the decoding delay considerably. At the same time, this method outperforms non-source-controlled decoding by up to 5 dB in PSNR for various reconstructed images.
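The LLR re-weighting step described above can be sketched as follows; the flag convention and the SNR-to-factor mapping here are illustrative assumptions, not the authors' exact design:

```python
def weight_llrs(llrs, flags, snr_db):
    """Adjust channel-decoder LLRs using feedback from the JPEG2000
    source decoder.  flags[i] is +1 if bit i is known to be correct,
    -1 if known to be in error, 0 if unknown.  The weighting factor
    grows as the channel SNR drops (hypothetical mapping)."""
    factor = 1.5 + 0.5 * max(0.0, 3.0 - snr_db)
    out = []
    for llr, flag in zip(llrs, flags):
        if flag == +1:       # known correct: reinforce current decision
            out.append(llr * factor)
        elif flag == -1:     # known in error: flip and reinforce
            out.append(-llr * factor)
        else:                # no source-side information
            out.append(llr)
    return out

# at 5 dB the factor is 1.5: the first LLR is boosted, the second flipped
print(weight_llrs([2.0, -1.0, 0.5], [+1, -1, 0], snr_db=5.0))  # [3.0, 1.5, 0.5]
```

The modified LLRs would then be passed back into the next sum-product iteration in place of the raw channel values.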
NASA Astrophysics Data System (ADS)
Poudel, Joemini; Matthews, Thomas P.; Mitsuhashi, Kenji; Garcia-Uribe, Alejandro; Wang, Lihong V.; Anastasio, Mark A.
2017-03-01
Photoacoustic computed tomography (PACT) is an emerging computed imaging modality that exploits optical contrast and ultrasonic detection principles to form images of the photoacoustically induced initial pressure distribution within tissue. The PACT reconstruction problem corresponds to a time-domain inverse source problem, where the initial pressure distribution is recovered from the measurements recorded on an aperture outside the support of the source. A major challenge in transcranial PACT brain imaging is to compensate for aberrations in the measured data due to the propagation of the photoacoustic wavefields through the skull. To properly account for these effects, a wave equation-based inversion method should be employed that can model the heterogeneous elastic properties of the medium. In this study, an iterative image reconstruction method for 3D transcranial PACT is developed based on the elastic wave equation. To accomplish this, a forward model based on a finite-difference time-domain discretization of the elastic wave equation is established. Subsequently, gradient-based methods are employed for computing penalized least squares estimates of the initial source distribution that produced the measured photoacoustic data. The developed reconstruction algorithm is validated and investigated through computer-simulation studies.
Zhu, Chengcheng; Patterson, Andrew J; Thomas, Owen M; Sadat, Umar; Graves, Martin J; Gillard, Jonathan H
2013-04-01
Luminal stenosis is used to select the optimal management strategy for patients with carotid artery disease. The aim of this study is to evaluate the reproducibility of carotid stenosis quantification using manual and automated segmentation methods on submillimeter through-plane resolution multi-detector CT angiography (MDCTA). Thirty-five patients with carotid artery disease and >30% luminal stenosis, as identified by carotid duplex imaging, underwent contrast-enhanced MDCTA. Two experienced CT readers quantified carotid stenosis using NASCET criteria from axial source images, reconstructed maximum intensity projections (MIP), and 3D carotid geometry automatically segmented by an open-source toolkit (Vascular Modelling Toolkit, VMTK). Good agreement was observed among the measurements using axial images, MIP, and automatic segmentation. The automatic segmentation method shows better inter-observer agreement between the readers (intra-class correlation coefficient (ICC): 0.99 for diameter stenosis measurement) than manual measurement on axial (ICC = 0.82) and MIP (ICC = 0.86) images. Carotid stenosis quantification using an automatic segmentation method has higher reproducibility than manual methods.
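As a reference point for the NASCET criterion mentioned above, percent diameter stenosis reduces to a one-line calculation from two diameter measurements; the sketch below assumes diameters in millimetres:

```python
def nascet_stenosis(d_residual_mm, d_distal_mm):
    """Percent diameter stenosis per NASCET: narrowest residual lumen
    diameter relative to the normal distal internal carotid diameter."""
    if d_distal_mm <= 0:
        raise ValueError("distal diameter must be positive")
    return 100.0 * (1.0 - d_residual_mm / d_distal_mm)

# a 2 mm residual lumen against a 5 mm distal diameter -> ~60% stenosis
print(round(nascet_stenosis(2.0, 5.0), 1))  # 60.0
```

Both the manual readers and the automated pipeline apply the same criterion; what differs between them is how reproducibly the two diameters are measured.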
A wavelet-based adaptive fusion algorithm of infrared polarization imaging
NASA Astrophysics Data System (ADS)
Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang
2011-08-01
The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish targets from backgrounds with different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency part of the signal; for the low-frequency part, the conventional weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of each source image is extracted by wavelet transform; then the signal strength over a 3×3 window is calculated, and the ratio of regional signal intensities between the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion result depends closely on the threshold set in this module. Instead of the commonly used trial-and-error approach, a quadratic interpolation optimization algorithm is proposed to obtain this threshold: the endpoints and midpoint of the threshold search interval are taken as initial interpolation nodes, the quadratic interpolation function is minimized, and the best threshold is obtained by comparing the minima. A series of image quality evaluations shows that this method improves the fusion effect, and that it is effective not only for individual images but also for large numbers of images.
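The high-frequency fusion rule can be sketched as follows; the 3×3 regional energy follows the description above, while the normalized match measure, the select/blend rule, and the default threshold are assumptions in the spirit of classic wavelet fusion (the wavelet decomposition itself is omitted):

```python
import numpy as np

def region_energy(c, k=3):
    """Sum of squared subband coefficients over a k x k window."""
    c = np.asarray(c, dtype=float)
    p = k // 2
    cp = np.pad(c ** 2, p, mode="edge")
    out = np.zeros_like(c)
    for dy in range(k):
        for dx in range(k):
            out += cp[dy:dy + c.shape[0], dx:dx + c.shape[1]]
    return out

def fuse_highband(hA, hB, thresh=0.6):
    """Fuse one high-frequency subband from two source images: select the
    locally stronger coefficient where the regions are dissimilar, blend
    the two where they are similar."""
    hA, hB = np.asarray(hA, float), np.asarray(hB, float)
    eA, eB = region_energy(hA), region_energy(hB)
    match = 2.0 * np.sqrt(eA * eB) / (eA + eB + 1e-12)  # in [0, 1]
    w = 0.5 + 0.5 * (1.0 - match) / (1.0 - thresh)      # majority weight
    strong_A = eA >= eB
    select = np.where(strong_A, hA, hB)
    blend = np.where(strong_A, w * hA + (1 - w) * hB,
                               w * hB + (1 - w) * hA)
    return np.where(match < thresh, select, blend)
```

In a full pipeline this rule would be applied per subband and per level, with the low-frequency approximation fused by weighted averaging as the abstract describes.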
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golshan, Maryam, E-mail: maryam.golshan@bccancer.bc.ca; Spadinger, Ingrid; Chng, Nick
2016-06-15
Purpose: Current methods of low dose rate brachytherapy source strength verification for sources preloaded into needles consist of either assaying a small number of seeds from a separate sample belonging to the same lot used to load the needles or performing batch assays of a subset of the preloaded seed trains. Both of these methods are cumbersome and have the limitations inherent to sampling. The purpose of this work was to investigate an alternative approach that uses an image-based, autoradiographic system capable of the rapid and complete assay of all sources without compromising sterility. Methods: The system consists of a flat panel image detector, an autoclavable needle holder, and software to analyze the detected signals. The needle holder was designed to maintain a fixed vertical spacing between the needles and the image detector, and to collimate the emissions from each seed. It also provides a sterile barrier between the needles and the imager. The image detector has a sufficiently large image capture area to allow several needles to be analyzed simultaneously. Several tests were performed to assess the accuracy and reproducibility of source strengths obtained using this system. Three different seed models (Oncura 6711 and 9011 {sup 125}I seeds, and IsoAid Advantage {sup 103}Pd seeds) were used in the evaluations. Seeds were loaded into trains with at least 1 cm spacing. Results: Using our system, it was possible to obtain linear calibration curves with coverage factor k = 1 prediction intervals of less than ±2% near the centre of their range for the three source models. The uncertainty budget calculated from a combination of type A and type B estimates of potential sources of error was somewhat larger, yielding (k = 1) combined uncertainties for individual seed readings of 6.2% for {sup 125}I 6711 seeds, 4.7% for {sup 125}I 9011 seeds, and 11.0% for Advantage {sup 103}Pd seeds.
Conclusions: This study showed that a flat panel detector dosimetry system is a viable option for source strength verification in preloaded needles, as it is capable of measuring all of the sources intended for implantation. Such a system has the potential to directly and efficiently estimate individual source strengths, the overall mean source strength, and the positions within the seed-spacer train.
Exploring three faint source detections methods for aperture synthesis radio images
NASA Astrophysics Data System (ADS)
Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.
2015-04-01
Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity-to-noise ratio, these objects can be easily missed by automated detection methods, which have classically been based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique consists of combining wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state-of-the-art methods such as SEXTRACTOR, SAD and DUCHAMP at detecting faint sources in radio interferometric images.
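The classical baseline mentioned above, thresholding after local noise estimation, can be sketched as follows; the tile size, the MAD-based sigma estimate, and the 5-sigma threshold are illustrative choices, not the settings of any of the cited packages:

```python
import numpy as np

def detect_sources(img, box=8, k=5.0):
    """Flag pixels exceeding a local threshold median + k * sigma, with
    robust background statistics (median and MAD) estimated per tile."""
    img = np.asarray(img, dtype=float)
    mask = np.zeros(img.shape, dtype=bool)
    for y in range(0, img.shape[0], box):
        for x in range(0, img.shape[1], box):
            tile = img[y:y + box, x:x + box]
            med = np.median(tile)
            sigma = 1.4826 * np.median(np.abs(tile - med))  # MAD -> sigma
            mask[y:y + box, x:x + box] = tile > med + k * max(sigma, 1e-12)
    return mask

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, (32, 32))
noise[10, 10] += 50.0          # plant one bright compact source
print(detect_sources(noise)[10, 10])  # True
```

Sources whose peaks fall below the local threshold are exactly the ones the paper's wavelet, structural, and boosting-based detectors aim to recover.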
Ryberg, T.; Haberland, C.H.; Fuis, G.S.; Ellsworth, W.L.; Shelly, D.R.
2010-01-01
Non-volcanic tremor (NVT) has been observed at several subduction zones and at the San Andreas Fault (SAF). Tremor locations are commonly derived by cross-correlating envelope-transformed seismic traces in combination with source-scanning techniques. Recently, they have also been located by using relative relocations with master events, that is, low-frequency earthquakes that are part of the tremor; locations are derived by conventional traveltime-based methods. Here we present a method to locate the sources of NVT using an imaging approach for multiple array data. The performance of the method is checked with synthetic tests and the relocation of earthquakes. We also applied the method to tremor occurring near Cholame, California. A set of small-aperture arrays (i.e. an array of arrays) installed around Cholame provided the data set for this study. We observed several tremor episodes and located tremor sources in the vicinity of the SAF. During individual tremor episodes, we observed a systematic change of source location, indicating rapid migration of the tremor source along the SAF. © 2010 The Authors, Geophysical Journal International © 2010 RAS.
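The envelope cross-correlation at the heart of such tremor-location schemes can be sketched as follows; the FFT-based Hilbert transform is standard, while the traces and sampling interval are hypothetical:

```python
import numpy as np

def envelope(x):
    """Envelope of a seismic trace via the analytic signal
    (FFT-based Hilbert transform)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def envelope_lag(x, y, dt):
    """Lag (seconds) of trace x relative to trace y that maximizes the
    cross-correlation of their envelopes."""
    ex = envelope(x) - envelope(x).mean()
    ey = envelope(y) - envelope(y).mean()
    cc = np.correlate(ex, ey, mode="full")
    return (int(np.argmax(cc)) - (ey.size - 1)) * dt
```

For a trace that is a delayed copy of another, the recovered lag equals the delay; in practice the lags from many station pairs feed the source-scanning or imaging step.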
Temporal resolution and motion artifacts in single-source and dual-source cardiac CT.
Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas
2013-03-01
The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, taking either the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as the basis of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting and a parallel-beam rebinning step is considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality.
The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used in the reconstruction process. The concept of assessing temporal resolution by means of the data employed for reconstruction can nicely be extended from single-source to dual-source CT. However, for advanced (possibly nonlinear iterative) reconstruction algorithms the examined approach fails to deliver accurate results. New methods and measures to assess the temporal resolution of CT images need to be developed to be able to accurately compare the performance of such algorithms.
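The two data-based measures reviewed here can be made concrete for a sampled weighting function; the sketch below assumes a simple triangular weighting for illustration:

```python
import numpy as np

def temporal_resolution(w, dt):
    """Return (FWHM-TR, total TR) of a sampled data weighting function:
    the width at half of its maximum, and the full width of its support."""
    w = np.asarray(w, dtype=float)
    above = np.nonzero(w >= 0.5 * w.max())[0]
    support = np.nonzero(w > 0.0)[0]
    return (above[-1] - above[0]) * dt, (support[-1] - support[0]) * dt

# triangular weighting over a 200 ms window, sampled at 1 ms
t = np.linspace(-0.1, 0.1, 201)
w = np.maximum(1.0 - np.abs(t) / 0.1, 0.0)
fwhm_tr, total_tr = temporal_resolution(w, dt=0.001)
```

For this weighting the FWHM-TR is roughly half of the total TR (~100 ms vs ~198 ms), which illustrates how the two measures can rank the same reconstruction differently.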
From synchrotron radiation to lab source: advanced speckle-based X-ray imaging using abrasive paper
NASA Astrophysics Data System (ADS)
Wang, Hongchang; Kashyap, Yogesh; Sawhney, Kawal
2016-02-01
X-ray phase and dark-field imaging techniques provide information that is complementary to, and inaccessible by, conventional X-ray absorption or visible light imaging. However, such methods typically require sophisticated experimental apparatus or X-ray beams with specific properties. Recently, an X-ray speckle-based technique has shown great potential for X-ray phase and dark-field imaging using a simple experimental arrangement. However, it still suffers from either poor resolution or the time-consuming process of collecting a large number of images. To overcome these limitations, in this report we demonstrate that absorption, dark-field, phase contrast, and two orthogonal differential phase contrast images can be generated simultaneously by scanning a piece of abrasive paper in only one direction. We propose a novel theoretical approach to quantitatively extract the above five images by utilising the remarkable properties of speckles. Importantly, the technique has been extended from a synchrotron light source to a lab-based microfocus X-ray source and flat panel detector. Removing the need to raster the optics in two directions significantly reduces the acquisition time and absorbed dose, which can be of vital importance for many biological samples. This new imaging method could potentially provide a breakthrough for numerous practical imaging applications in biomedical research and materials science.
NASA Astrophysics Data System (ADS)
Sanger, Demas S.; Haneishi, Hideaki; Miyake, Yoichi
1995-08-01
This paper proposes a simple, automatic method for recognizing the light source used with color negative films of various brands by means of digital image processing. First, the image obtained from a negative is stretched based on standardized scaling factors; then the dominant color component among the red, green, and blue components of the stretched image is extracted. The dominant color component serves as the discriminator for the recognition. The experimental results verified that any one of the three techniques could recognize the light source from negatives of any single film brand and of all brands, with greater than 93.2% and 96.6% correct recognition, respectively. This method is significant for automating color quality control in color reproduction from color negative film in mass processing and printing machines.
Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.
Koprowski, Robert
2015-11-01
The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. A sample application and example results of hyperspectral image analysis are also presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Jianbing, E-mail: yijianbing8@163.com; Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn
2015-10-15
Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performance of the authors’ method is evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions.
For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points in each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors’ method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors’ method ranks 24th of 39. According to the index of the maximum shear stretch, the authors’ method is also effective in describing the discontinuous motion at the lung boundaries. Conclusions: By establishing the correspondence of the landmark points in the source phase and the other target phases, combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors’ method, with consideration of sliding conditions, can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.
Mariappan, Leo; Li, Xu; He, Bin
2011-01-01
We present in this study an acoustic source reconstruction method using a focused transducer with B-mode imaging for magnetoacoustic tomography with magnetic induction (MAT-MI). MAT-MI is an imaging modality proposed for non-invasive conductivity imaging with high spatial resolution. In MAT-MI, acoustic sources are generated in a conductive object by placing it in a static and a time-varying magnetic field. The acoustic waves from these sources propagate in all directions and are collected with transducers placed around the object. The collected signal is then used to reconstruct the acoustic source distribution and to further estimate the electrical conductivity distribution of the object. A flat piston transducer acting as a point receiver has been used in previous MAT-MI systems to collect acoustic signals. In the present study we propose to use a B-mode scan scheme with a focused transducer, which gives a signal gain in its focal region and improves the MAT-MI signal quality. A simulation protocol that can take into account different transducer designs and scan schemes for MAT-MI imaging is developed and used in our evaluation of different MAT-MI system designs. It is shown in our computer simulations that, compared to the previous approach, the MAT-MI system using a B-scan with a focused transducer allows MAT-MI imaging at a closer distance and has improved system sensitivity. In addition, the B-scan imaging technique allows reconstruction of the MAT-MI acoustic sources with a discrete number of scanning locations, which greatly increases the applicability of the MAT-MI approach, especially when a continuous acoustic window is not available in real clinical applications. We have also conducted phantom experiments to evaluate the proposed method, and the reconstructed image shows good agreement with the target phantom. PMID:21097372
Intensity correlation imaging with sunlight-like source
NASA Astrophysics Data System (ADS)
Wang, Wentao; Tang, Zhiguo; Zheng, Huaibin; Chen, Hui; Yuan, Yuan; Liu, Jinbin; Liu, Yanyan; Xu, Zhuo
2018-05-01
We demonstrate, both theoretically and experimentally, a method of intensity correlation imaging of targets illuminated by a sunlight-like source. With a Faraday anomalous dispersion optical filter (FADOF), we modulated the coherence time of a thermal source up to 0.167 ns. We then carried out measurements of temporal and spatial correlations with an intensity interferometer setup. By applying even Fourier fitting to the very sparse sampling data, images of the targets are successfully reconstructed from the low signal-to-noise ratio (SNR) interference pattern using an iterative phase retrieval algorithm. The resulting image quality matches that obtained by theoretical fitting. Realizing such a scheme brings this technique closer to imaging geostationary satellites illuminated by sunlight.
A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1996-02-01
The authors have developed a method based on wavelet transforms (WT) to efficiently detect sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to derive upper limits to the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background plus sources were used to test the overall algorithm performance, to assess the frequency of spurious detections (vs. detection threshold), and to measure the algorithm sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to have good performance even in cases of extended sources and crowded fields.
Wu, Jian; Murphy, Martin J
2010-06-01
The aim was to assess the precision and robustness of patient setup corrections computed from 3D/3D intensity-based rigid registration methods when no ground-truth validation is possible. Fifteen pairs of male pelvic CTs were rigidly registered using four different in-house registration methods. Registration results were compared for different resolutions and image content by varying the image down-sampling ratio and by thresholding out soft tissue to isolate bony landmarks. Intrinsic registration precision was investigated by comparing the different methods and by reversing the source and target roles of the two images being registered. The translational reversibility errors for successful registrations ranged from 0.0 to 1.69 mm. Rotations were less than 1 degree. Mutual information failed in most registrations that used only bony landmarks. The magnitude of the reversibility error was strongly correlated with the success/failure of each algorithm in finding the global minimum. Rigid image registrations have an intrinsic uncertainty and robustness that depends on the imaging modality, the registration algorithm, the image resolution, and the image content. In the absence of an absolute ground truth, the variation in the shifts calculated by several different methods provides a useful estimate of that uncertainty. The difference observed by reversing the source and target images can be used as an indication of robust convergence.
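The source/target reversal check can be sketched as follows for rigid transforms represented as 4×4 homogeneous matrices (an assumed representation; the study's algorithms output equivalent translation and rotation parameters):

```python
import numpy as np

def reversibility_error(T_ab, T_ba):
    """Compose the forward (A->B) and reverse (B->A) registration results;
    for a perfectly consistent algorithm the residual is the identity.
    Returns (translation error, rotation error in degrees)."""
    resid = T_ba @ T_ab
    trans_err = float(np.linalg.norm(resid[:3, 3]))
    cos_t = np.clip((np.trace(resid[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return trans_err, float(np.degrees(np.arccos(cos_t)))

T_ab = np.eye(4); T_ab[:3, 3] = [1.0, 2.0, 3.0]     # forward: pure shift
T_ba = np.eye(4); T_ba[:3, 3] = [-1.0, -2.0, -2.5]  # reverse, 0.5 mm off
print(reversibility_error(T_ab, T_ba))  # (0.5, 0.0)
```

A large residual flags a registration that converged to different optima in the two directions, which is the correlation with success/failure the abstract reports.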
Analysis of spectrally resolved autofluorescence images by support vector machines
NASA Astrophysics Data System (ADS)
Mateasik, A.; Chorvat, D.; Chorvatova, A.
2013-02-01
Spectral analysis of the autofluorescence images of isolated cardiac cells was performed to evaluate and classify the metabolic state of the cells with respect to their responses to metabolic modulators. The classification was done using a machine learning approach based on a support vector machine with a set of automatically calculated features from the recorded spectral profile of the autofluorescence images. This classification method was compared with the classical approach, in which the individual spectral components contributing to cell autofluorescence are estimated by spectral analysis, namely by blind source separation using non-negative matrix factorization. Comparison of both methods showed that machine learning can effectively classify the spectrally resolved autofluorescence images without the need for detailed knowledge about the sources of autofluorescence and their spectral properties.
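The blind-source-separation baseline, non-negative matrix factorization of a pixels-by-wavelengths spectral matrix, can be sketched with Lee-Seung multiplicative updates; the matrix sizes, rank, and iteration count below are illustrative:

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Factor a non-negative matrix V (pixels x wavelengths) as W @ H,
    with H holding r non-negative spectral components, via Lee-Seung
    multiplicative updates minimizing the Frobenius loss."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 0.1
    H = rng.random((r, V.shape[1])) + 0.1
    eps = 1e-12
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# exactly rank-2 non-negative data is recovered almost perfectly
rng = np.random.default_rng(1)
V = rng.random((10, 2)) @ rng.random((2, 8))
W, H = nmf(V, 2)
```

On real autofluorescence data the rows of `H` would be the estimated component spectra and `W` their per-pixel abundances; the SVM approach in the paper sidesteps this decomposition entirely.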
An Improved Method of AGM for High Precision Geolocation of SAR Images
NASA Astrophysics Data System (ADS)
Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.
2018-05-01
In order to take full advantage of SAR images, it is necessary to obtain high-precision geolocation for each image. During geometric correction, precise image geolocation is important both to ensure the accuracy of the correction and to extract effective mapping information from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. The method builds on the analytical geolocation method (AGM) proposed by X. K. Yuan for solving the range-Doppler (RD) model. Tests are conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocations with positions determined from a high-precision orthophoto, the results indicate that an accuracy of 50 m is attainable with this method. Error sources are analyzed and recommendations for improving image location accuracy in future spaceborne SARs are given.
NASA Astrophysics Data System (ADS)
Kim, Moon S.; Cho, Byoung-Kwan; Yang, Chun-Chieh; Chao, Kaunglin; Lefcourt, Alan M.; Chen, Yud-Ren
2006-10-01
We have developed nondestructive opto-electronic imaging techniques for rapid assessment of the safety and wholesomeness of foods. A recently developed fast hyperspectral line-scan imaging system integrated with a commercial apple-sorting machine was evaluated for rapid detection of animal fecal matter on apples. Apples obtained from a local orchard were artificially contaminated with cow feces. For the online trial, hyperspectral images with 60 spectral channels, reflectance in the visible to near-infrared regions and fluorescence emissions with UV-A excitation, were acquired from apples moving at a sorting-line speed of three apples per second. Reflectance and fluorescence imaging required a passive light source, and each method used independent continuous wave (CW) light sources. In this paper, integration of the hyperspectral imaging system with the commercial apple-sorting machine and preliminary results for detection of fecal contamination on apples, mainly based on the fluorescence method, are presented.
Monolithic focused reference beam X-ray holography
Geilhufe, J.; Pfau, B.; Schneider, M.; Büttner, F.; Günther, C. M.; Werner, S.; Schaffert, S.; Guehrs, E.; Frömmel, S.; Kläui, M.; Eisebitt, S.
2014-01-01
Fourier transform holography is a highly efficient and robust imaging method, suitable for single-shot imaging at coherent X-ray sources. In its common implementation, the image contrast is limited by the reference signal generated by a small pinhole aperture. Increased pinhole diameters improve the signal, whereas the resolution is diminished. Here we report a new concept to decouple the spatial resolution from the image contrast by employing a Fresnel zone plate to provide the reference beam. Superimposed on-axis images of distinct foci are separated with a novel algorithm. Our method is insensitive to mechanical drift or vibrations and allows for long integration times common at low-flux facilities like high harmonic generation sources. The application of monolithic focused reference beams improves the efficiency of high-resolution X-ray Fourier transform holography beyond all present approaches and paves the way towards sub-10 nm single-shot X-ray imaging. PMID:24394675
Research on multi-source image fusion technology in haze environment
NASA Astrophysics Data System (ADS)
Ma, GuoDong; Piao, Yan; Li, Bing
2017-11-01
In a haze environment, a visible image collected by a single sensor can express the shape, color, and texture details of a target very well, but because of the haze, its sharpness is low and parts of the target are lost. An infrared image collected by a single sensor, by virtue of its expression of thermal radiation and strong penetration ability, can clearly express the target subject but loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. First, the improved dark channel prior algorithm is used to preprocess the hazy visible image. Second, the improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight occluded infrared targets for target recognition.
Some selected quantitative methods of thermal image analysis in Matlab.
Koprowski, Robert
2016-05-01
The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin of a human foot and of a face. The full source code of the developed application is provided as an attachment, and the main window of the program during dynamic analysis of a foot thermal image is shown. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Kelly, C. L.; Lawrence, J. F.
2014-12-01
During October 2012, 51 geophones and 6 broadband seismometers were deployed in an ~50x50m region surrounding a periodically erupting columnar geyser in the El Tatio Geyser Field, Chile. The dense array served as the seismic framework for a collaborative project to study the mechanics of complex hydrothermal systems. Contemporaneously, complementary geophysical measurements (including down-hole temperature and pressure, discharge rates, thermal imaging, water chemistry, and video) were also collected. Located on the western flanks of the Andes Mountains at an elevation of 4200m, El Tatio is the third largest geyser field in the world. Its non-pristine condition makes it an ideal location to perform minimally invasive geophysical studies. The El Jefe Geyser was chosen for its easily accessible conduit and extremely periodic eruption cycle (~120s). During approximately 2 weeks of continuous recording, we recorded ~2500 nighttime eruptions which lack cultural noise from tourism. With ample data, we aim to study how the source varies spatially and temporally during each phase of the geyser's eruption cycle. We are developing a new back-projection processing technique to improve source imaging for diffuse signals. Our method was previously applied to the Sierra Negra Volcano system, which also exhibits repeating harmonic and diffuse seismic sources. We back-project correlated seismic signals from the receivers back to their sources, assuming linear source-to-receiver paths and a known velocity model (obtained from ambient noise tomography). We apply polarization filters to isolate individual and concurrent geyser energy associated with P and S phases. We generate 4D, time-lapsed images of the geyser source field that illustrate how the source distribution changes through the eruption cycle. We compare images for pre-eruption, co-eruption, post-eruption and quiescent periods. We use our images to assess eruption mechanics in the system (i.e. top-down vs.
bottom-up) and determine variations in source depth and distribution in the conduit and larger geyser field over many eruption cycles.
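The back-projection idea above — correlated signals mapped to candidate source locations assuming straight ray paths and a known velocity — can be sketched as a simple delay-and-stack. The uniform velocity, the peak-energy imaging condition, and all names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def backproject(traces, dt, rec_pos, grid, velocity):
    """Delay-and-stack back-projection onto candidate source points.

    traces: (n_receivers, n_samples); rec_pos: (n_receivers, 2);
    grid: (n_points, 2). Assumes straight rays at a uniform velocity,
    a simplification of a velocity model from noise tomography.
    """
    n_samp = traces.shape[1]
    image = np.zeros(len(grid))
    for gi, g in enumerate(grid):
        power = np.zeros(n_samp)
        for r, pos in enumerate(rec_pos):
            delay = int(round(np.linalg.norm(g - pos) / velocity / dt))
            delay = min(delay, n_samp)        # clamp to trace length
            # align each trace to a common origin time for this grid point
            power[:n_samp - delay] += traces[r, delay:]
        image[gi] = np.max(power ** 2)        # peak stacked energy
    return image

# toy test: a spike emitted at (2, 0) arrives 2 samples later at both receivers
dt, v = 1.0, 1.0
rec = np.array([[0.0, 0.0], [4.0, 0.0]])
traces = np.zeros((2, 8)); traces[:, 2] = 1.0
grid = np.array([[2.0, 0.0], [0.0, 0.0]])
image = backproject(traces, dt, rec, grid, v)
```

The stacked energy peaks at the true source location because only there do the delay-corrected arrivals align constructively.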
Passive synthetic aperture radar imaging of ground moving targets
NASA Astrophysics Data System (ADS)
Wacks, Steven; Yazici, Birsen
2012-05-01
In this paper we present a method for imaging ground moving targets using passive synthetic aperture radar. A passive radar imaging system uses small, mobile receivers that do not radiate any energy. For these reasons, passive imaging systems offer significant cost, manufacturing, and stealth advantages. The received signals are obtained by multiple airborne receivers collecting scattered waves due to illuminating sources of opportunity such as commercial television, radio, and cell phone towers. We describe a novel forward model and a corresponding filtered-backprojection type image reconstruction method combined with entropy optimization. Our method determines the location and velocity of multiple targets moving at different velocities. Furthermore, it can accommodate arbitrary imaging geometries. We present numerical simulations to verify the imaging method.
Pixel-based parametric source depth map for Cerenkov luminescence imaging
NASA Astrophysics Data System (ADS)
Altabella, L.; Boschi, F.; Spinelli, A. E.
2016-01-01
Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (i.e., 32P, 18F, 90Y) has been investigated using both 3D multispectral approaches and multiview methods. Difficulty in the convergence of 3D algorithms can discourage use of this technique to obtain depth and intensity information about the source. For these reasons, we developed a faster 2D corrected approach based on multispectral acquisitions to obtain the source depth and intensity using pixel-based fitting of the source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method for obtaining the parametric map of source depth. With this approach, we obtain parametric source depth maps with a precision between 3% and 7% for MC simulations and 5-6% for experimental data. Using this method, we can obtain reliable information about the depth of the Cerenkov luminescence source with a simple and flexible procedure.
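A pixel-based multispectral depth fit can be illustrated with a toy Beer-Lambert model. The actual paper uses a diffusion-corrected spectral model, so the exponential model, the `depth_map` name, and the linearized least-squares fit below are assumptions for illustration only:

```python
import numpy as np

def depth_map(images, mu):
    """Per-pixel estimate of source depth d and source intensity I0.

    Assumes a simple Beer-Lambert model I(lambda) = I0 * exp(-mu(lambda) * d),
    a stand-in for the diffusion-based spectral model used in practice.
    images: (n_wavelengths, H, W); mu: (n_wavelengths,) attenuation.
    """
    n, h, w = images.shape
    logs = np.log(images.reshape(n, -1))          # (n, H*W)
    # linearize: ln I = ln I0 - mu * d, solved per pixel in one lstsq call
    A = np.column_stack([np.ones(n), -mu])
    coef, *_ = np.linalg.lstsq(A, logs, rcond=None)
    i0 = np.exp(coef[0]).reshape(h, w)
    d = coef[1].reshape(h, w)
    return d, i0

# synthetic check: uniform source at depth 0.5 with intensity 2.0
mu = np.array([1.0, 2.0, 3.0])
imgs = np.stack([2.0 * np.exp(-m * 0.5) * np.ones((2, 2)) for m in mu])
d, i0 = depth_map(imgs, mu)
```

Because the model is linear in (ln I0, d) after taking logs, all pixels are fitted simultaneously with a single least-squares solve.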
NASA Technical Reports Server (NTRS)
Zissa, D. E.; Korsch, D.
1986-01-01
A test method particularly suited for X-ray telescopes was evaluated experimentally. The method makes use of a focused ring formed by an annular aperture when using a point source at a finite distance. This would supplement measurements of the best-focus image, which is blurred when the test source is at a finite distance. The telescope used was the Technology Mirror Assembly of the Advanced X-ray Astrophysics Facility (AXAF) program. Observed ring image defects could be related to the azimuthal location of their sources in the telescope, even though in this case the predicted sharp ring was obscured by scattering, finite source size, and residual figure errors.
Automated detection of extended sources in radio maps: progress from the SCORPIO survey
NASA Astrophysics Data System (ADS)
Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.
2016-08-01
Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.
Multi-Source Learning for Joint Analysis of Incomplete Multi-Modality Neuroimaging Data
Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping
2013-01-01
Incomplete data present serious problems when integrating large-scale brain imaging data sets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. We address this problem by proposing two novel learning methods where all the samples (with at least one available data source) can be used. In the first method, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. Our second method learns a base classifier for each data source independently, based on which we represent each source using a single column of prediction scores; we then estimate the missing prediction scores, which, combined with the existing prediction scores, are used to build a multi-source fusion model. To illustrate the proposed approaches, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 Normal) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithms. Comprehensive experiments show that our proposed methods yield stable and promising results. PMID:24014189
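The second method's score-fusion step can be sketched roughly as follows. The paper estimates the missing prediction scores with a learned model, whereas this minimal stand-in (the `fuse_scores` name, column-mean imputation, and plain averaging are all simplifying assumptions) only illustrates the idea of completing per-source score columns and combining them:

```python
import numpy as np

def fuse_scores(score_matrix):
    """Fuse per-source prediction scores with missing entries.

    score_matrix: (n_subjects, n_sources), NaN where a subject lacks a
    data source. Missing scores are imputed with the column mean and the
    completed columns are averaged -- a simplified stand-in for the
    paper's estimation and fusion model.
    """
    S = score_matrix.copy()
    col_mean = np.nanmean(S, axis=0)       # per-source mean over observed scores
    idx = np.where(np.isnan(S))
    S[idx] = np.take(col_mean, idx[1])     # fill each gap from its own column
    return S.mean(axis=1)                  # fused score per subject

# 3 subjects, 2 sources; subject 0 is missing source 1
scores = np.array([[0.9, np.nan], [0.2, 0.1], [0.8, 0.7]])
fused = fuse_scores(scores)
```

No subject is discarded: subject 0 still receives a fused score built from its observed source plus an estimate for the missing one, which is the point of the approach.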
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos-Villalobos, Hector J; Gregor, Jens; Bingham, Philip R
2014-01-01
At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps around 50 μm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because it models the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Ryan L., E-mail: ryan.smith@wbrc.org.au; Millar, Jeremy L.; Franich, Rick D.
Purpose: Verification of high dose rate (HDR) brachytherapy treatment delivery is an important step, but is generally difficult to achieve. A technique is required to monitor the treatment as it is delivered, allowing comparison with the treatment plan and error detection. In this work, we demonstrate a method for monitoring the treatment as it is delivered and directly comparing the delivered treatment with the treatment plan in the clinical workspace. This treatment verification system is based on a flat panel detector (FPD) used for both pre-treatment imaging and source tracking. Methods: A phantom study was conducted to establish the resolution and precision of the system. A pretreatment radiograph of a phantom containing brachytherapy catheters is acquired and registration between the measurement and treatment planning system (TPS) is performed using implanted fiducial markers. The measured catheter paths immediately prior to treatment were then compared with the plan. During treatment delivery, the position of the ¹⁹²Ir source is determined at each dwell position by measuring the exit radiation with the FPD and directly compared to the planned source dwell positions. Results: The registration between the two corresponding sets of fiducial markers in the TPS and radiograph yielded a registration error (residual) of 1.0 mm. The measured catheter paths agreed with the planned catheter paths on average to within 0.5 mm. The source positions measured with the FPD matched the planned source positions for all dwells on average within 0.6 mm (s.d. 0.3, min. 0.1, max. 1.4 mm). Conclusions: We have demonstrated a method for directly comparing the treatment plan with the delivered treatment that can be easily implemented in the clinical workspace. Pretreatment imaging was performed, enabling visualization of the implant before treatment delivery and identification of possible catheter displacement.
Treatment delivery verification was performed by measuring the source position as each dwell was delivered. This approach using an FPD for imaging and source tracking provides a noninvasive method of acquiring extensive information for verification in HDR prostate brachytherapy.
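The fiducial-marker registration step, whose residual is the quoted "registration error", can be illustrated with a standard Procrustes/Kabsch rigid fit in 2-D. The clinical system's actual registration procedure is not specified in the abstract, so this sketch is an assumption:

```python
import numpy as np

def register_fiducials(plan_pts, image_pts):
    """Rigid 2-D registration of planned vs. imaged fiducial markers.

    Returns rotation R, translation t mapping plan -> image, and the RMS
    residual. Standard Kabsch solution via SVD of the cross-covariance.
    """
    pc, ic = plan_pts.mean(0), image_pts.mean(0)
    H = (plan_pts - pc).T @ (image_pts - ic)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection solution
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = ic - R @ pc
    resid = np.sqrt(np.mean(np.sum((plan_pts @ R.T + t - image_pts) ** 2, axis=1)))
    return R, t, resid

# synthetic check: rotate planned markers by 90 degrees and translate
plan = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Rt = np.array([[0.0, -1.0], [1.0, 0.0]])
img = plan @ Rt.T + np.array([2.0, 3.0])
R, t, resid = register_fiducials(plan, img)
```

With real radiographs the residual is nonzero (here 1.0 mm), and it bounds how precisely planned and measured dwell positions can be compared.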
Yi, Jianbing; Yang, Xuan; Chen, Guoliang; Li, Yan-Ran
2015-10-01
Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on robust point matching is proposed in this paper. An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images centered at those points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and the shifted target point set are used to estimate the transformation function between the source image and the target image. The performance of the authors' method is evaluated on two publicly available lung datasets, DIR-lab and POPI-model. For target registration errors computed on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation with the authors' method are 1.11 and 1.11 mm; they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions.
For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors' method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors' method ranks 24 of 39. According to the index of the maximum shear stretch, the authors' method is also efficient to describe the discontinuous motion at the lung boundaries. By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors' method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.
Development of a stationary chest tomosynthesis system using carbon nanotube x-ray source array
NASA Astrophysics Data System (ADS)
Shan, Jing
X-ray imaging systems provide quick and easy imaging access in both clinical settings and emergency situations, greatly improving hospital workflow. However, conventional radiography systems lack 3D information: tissue overlap in 2D projection images results in low sensitivity and specificity. Computed tomography and digital tomosynthesis, the two conventional 3D imaging modalities, both require a complex gantry to mechanically translate the x-ray source to various positions. Over the past decade, our research group has developed a carbon nanotube (CNT) based x-ray source technology. CNT x-ray sources allow multiple x-ray sources to be packed into a single x-ray tube, and each individual source in the array can be switched electronically. This technology allows the development of stationary tomographic imaging modalities without complex mechanical gantries. The goal of this work is to develop a stationary digital chest tomosynthesis (s-DCT) system and implement it in a clinical trial. The feasibility of s-DCT was investigated: the CNT source array can provide sufficient x-ray output for chest imaging, and phantom images have shown image quality comparable to conventional DCT. The s-DCT system was then used to study the effect of source array configuration on tomosynthesis image quality, and the feasibility of physiologically gated s-DCT. Using physical measures of spatial resolution, the 2D source configuration was shown to have improved depth resolution and comparable in-plane resolution. Prospectively gated tomosynthesis images showed substantial reduction of the image blur associated with lung motion. The system was also used to investigate the feasibility of using s-DCT as a diagnosis and monitoring tool for cystic fibrosis patients. A new scatter reduction method for s-DCT was also studied.
Finally, an s-DCT system was constructed by retrofitting the source array to a Carestream digital radiography system. The system passed the electrical and radiation safety tests and was installed in Marsico Hall. The patient trial started in March 2015, and the first patient was successfully imaged.
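Tomosynthesis reconstruction from a stationary source array can be illustrated with a toy shift-and-add over a single plane. Integer-pixel shifts and no magnification correction are used here, so this is a geometric sketch, not the clinical reconstruction algorithm:

```python
import numpy as np

def shift_and_add(projections, src_x, z, sdd, pixel):
    """Shift-and-add reconstruction of the plane at height z above the
    detector for a stationary source array.

    projections: (n_views, H, W); src_x: lateral source positions;
    sdd: source-to-detector distance; pixel: detector pixel pitch.
    A point at lateral x on the plane projects to x*m + xs*(1 - m) with
    magnification m = sdd / (sdd - z), so rolling each view by
    xs*(m - 1)/pixel realigns that plane across views.
    """
    m = sdd / (sdd - z)
    out = np.zeros_like(projections[0], dtype=float)
    for p, xs in zip(projections, src_x):
        shift = int(round(xs * (m - 1) / pixel))   # undo source parallax
        out += np.roll(p, shift, axis=1)
    return out / len(projections)

# point object at x=3 on the z=50 plane, seen from 3 sources:
# detector columns 6 - xs for xs in (-2, 0, 2) -> columns 8, 6, 4
proj = np.zeros((3, 1, 12))
for v, col in zip(range(3), (8, 6, 4)):
    proj[v, 0, col] = 1.0
rec = shift_and_add(proj, [-2.0, 0.0, 2.0], z=50.0, sdd=100.0, pixel=1.0)
```

Features on the chosen plane add coherently while features at other heights are smeared across columns, which is the depth-selection principle tomosynthesis relies on.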
An Open Source Agenda for Research Linking Text and Image Content Features.
ERIC Educational Resources Information Center
Goodrum, Abby A.; Rorvig, Mark E.; Jeong, Ki-Tai; Suresh, Chitturi
2001-01-01
Proposes methods to utilize image primitives to support term assignment for image classification. Proposes to release code for image analysis in a common tool set for other researchers to use. Of particular focus is the expansion of work by researchers in image indexing to include image content-based feature extraction capabilities in their work.…
Laser applications and system considerations in ocular imaging
Elsner, Ann E.; Muller, Matthew S.
2009-01-01
We review laser applications for primarily in vivo ocular imaging techniques, describing their constraints based on biological tissue properties, safety, and the performance of the imaging system. We discuss the need for cost effective sources with practical wavelength tuning capabilities for spectral studies. Techniques to probe the pathological changes of layers beneath the highly scattering retina and diagnose the onset of various eye diseases are described. The recent development of several optical coherence tomography based systems for functional ocular imaging is reviewed, as well as linear and nonlinear ocular imaging techniques performed with ultrafast lasers, emphasizing recent source developments and methods to enhance imaging contrast. PMID:21052482
Pinton, Gianmarco F.; Trahey, Gregg E.; Dahl, Jeremy J.
2015-01-01
A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain. This numerical method is used to simulate propagation of a diagnostic ultrasound pulse through a measured representation of the human abdomen with heterogeneities in speed of sound, attenuation, density, and nonlinearity. Conventional delay-and-sum beamforming is used to generate point spread functions (PSFs) that display the effects of these heterogeneities. For the particular imaging configuration that is modeled, these PSFs reveal that the primary source of degradation in fundamental imaging is due to reverberation from near-field structures. Compared with fundamental imaging, reverberation clutter in harmonic imaging is 27.1 dB lower. Simulated tissue with uniform velocity but unchanged impedance characteristics indicates that for harmonic imaging, the primary source of degradation is phase aberration. PMID:21693410
System for uncollimated digital radiography
Wang, Han; Hall, James M.; McCarrick, James F.; Tang, Vincent
2015-08-11
The inversion algorithm based on the maximum entropy method (MEM) removes unwanted effects in high energy imaging resulting from an uncollimated source interacting with a finitely thick scintillator. The algorithm takes as input the image from the thick scintillator (TS) and the radiography setup geometry. The algorithm then outputs a restored image which appears as if taken with an infinitesimally thin scintillator (ITS). Inversion is accomplished by numerically generating a probabilistic model relating the ITS image to the TS image and then inverting this model on the TS image through MEM. This reconstruction technique can reduce the exposure time or the required source intensity without undesirable object blurring on the image by allowing the use of both thicker scintillators with higher efficiencies and closer source-to-detector distances to maximize incident radiation flux. The technique is applicable in radiographic applications including fast neutron, high-energy gamma and x-ray radiography using thick scintillators.
Choudhry, Priya
2016-01-01
Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849
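The core colony-counting step — labeling connected foreground blobs in a thresholded image — can be sketched in pure Python. This is a generic connected-component stand-in, not the Cell Colony Edge macro or the CellProfiler pipeline themselves:

```python
import numpy as np
from collections import deque

def count_colonies(mask):
    """Count 4-connected foreground blobs in a binary image via BFS flood
    fill -- a minimal stand-in for automated colony counting."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                      # new blob found
                q = deque([(i, j)])
                seen[i, j] = True
                while q:                        # flood-fill this blob
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

# two separate blobs in a toy 3x4 mask
mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 0, 1]], dtype=bool)
n = count_colonies(mask)
```

Per-blob area, morphology, and intensity statistics follow naturally by accumulating pixel lists during the same flood fill.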
Device and Method of Scintillating Quantum Dots for Radiation Imaging
NASA Technical Reports Server (NTRS)
Burke, Eric R. (Inventor); DeHaven, Stanton L. (Inventor); Williams, Phillip A. (Inventor)
2017-01-01
A radiation imaging device includes a radiation source and a microstructured detector comprising a material defining a surface that faces the radiation source. The material includes a plurality of discrete cavities having openings in the surface. The detector also includes a plurality of quantum dots disposed in the cavities. The quantum dots are configured to interact with radiation from the radiation source and to emit visible photons that indicate the presence of radiation. A digital camera and optics may be used to capture images formed by the detector in response to exposure to radiation.
Assessment of spatial information for hyperspectral imaging of lesion
NASA Astrophysics Data System (ADS)
Yang, Xue; Li, Gang; Lin, Ling
2016-10-01
Diseases such as breast tumors pose a great threat to women's health and lives, while traditional detection methods are complex, costly, and unsuitable for frequent self-examination; an inexpensive, convenient, and efficient method for tumor self-inspection is therefore urgently needed, and lesion localization is an important step. This paper proposes a self-examination method for locating a lesion. The method uses transillumination to acquire hyperspectral images and to assess the spatial information of the lesion. First, multi-wavelength sources are modulated by frequency division, which makes it easy to separate images at different wavelengths; meanwhile, the sources serve as fill light for each other, improving sensitivity in low-light-level imaging. Second, the signal-to-noise ratio of the demodulated transmitted images is improved by frame accumulation. Next, the gray-level distributions of the transmitted images are analyzed. Gray-level differences are formed between the actual transmitted images and fitted transmitted images of tissue without a lesion, ruling out individual differences. Because of scattering, there are transition zones between tissue and lesion, and these zones change with wavelength, which helps identify the structural details of the lesion. Finally, image segmentation is used to extract the lesion and the transition zones, and the spatial features of the lesion are confirmed from the transition zones and the differences in transmitted light intensity distributions. An experiment using flat-shaped tissue as an example shows that the proposed method can extract the spatial information of a lesion.
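The frequency-division scheme can be illustrated with a per-pixel lock-in demodulation: each wavelength's source is amplitude-modulated at its own frequency, so multiplying the frame stack by reference sinusoids and averaging separates the wavelength images, with frame accumulation implicit in the averaging. The function below is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def demodulate(frames, fps, freqs):
    """Separate images of sources modulated at distinct frequencies.

    frames: (n_frames, H, W) stack. For each modulation frequency, a
    per-pixel lock-in (quadrature multiply and average) recovers that
    source's amplitude image.
    """
    n = frames.shape[0]
    t = np.arange(n) / fps
    out = []
    for f in freqs:
        ref_c = np.cos(2 * np.pi * f * t)
        ref_s = np.sin(2 * np.pi * f * t)
        I = np.tensordot(ref_c, frames, axes=1) * 2 / n   # in-phase
        Q = np.tensordot(ref_s, frames, axes=1) * 2 / n   # quadrature
        out.append(np.hypot(I, Q))                        # amplitude map
    return out

# two sources at 5 Hz and 10 Hz, imaged at 100 fps for 1 s
n, fps = 100, 100.0
t = np.arange(n) / fps
img1 = np.zeros((1, 2)); img1[0, 0] = 1.0
img2 = np.zeros((1, 2)); img2[0, 1] = 2.0
frames = (img1[None] * np.cos(2 * np.pi * 5 * t)[:, None, None]
          + img2[None] * np.cos(2 * np.pi * 10 * t)[:, None, None])
maps = demodulate(frames, fps, [5.0, 10.0])
```

Because the references are orthogonal over an integer number of modulation periods, each recovered map contains only its own source, with no crosstalk from the other wavelength.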
SU-F-J-183: Interior Region-Of-Interest Tomography by Using Inverse Geometry System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K; Kim, D; Kang, S
2016-06-15
Purpose: Inverse geometry computed tomography (IGCT), composed of multiple sources and a small detector, has several merits compared to conventional cone-beam computed tomography (CBCT), such as a reduced scatter effect and large volumetric imaging within one rotation without cone-beam artifact. Using this multi-source characteristic, we present a selective, multiple interior region-of-interest (ROI) imaging method based on a designed source on-off sequence for IGCT. Methods: All IGCT sources are operated sequentially one by one, and each projection, shaped as a narrow cone beam, covers its own partial volume of the full field of view (FOV) determined by the system geometry. Thus, by controlling the multi-source operation, irradiation can be limited to the ROI and selective radon-space data for ROI imaging can be acquired without additional x-ray filtration. With this feature, we designed a source on-off sequence for multi-ROI IGCT imaging and generated ROI-IGCT projections using this sequence. Multi-ROI IGCT images were reconstructed using a filtered back-projection algorithm. The imaging process was simulated using a digital phantom and patient CT data. ROI-IGCT images of the phantom were compared to the CBCT image and to the phantom data for image quality evaluation. Results: The image quality of ROI-IGCT was comparable to that of CBCT. Moreover, in axial planes distant from the FOV center (large cone-angle regions), ROI-IGCT showed uniform image quality without the significant cone-beam artifact seen in CBCT. Conclusion: ROI-IGCT showed comparable image quality and can provide multiple ROI images within one rotation. ROI-IGCT projection uses selective irradiation, hence unnecessary imaging dose to regions of no interest can be reduced.
In this regard, it appears useful for diagnostic or image-guidance purposes in radiotherapy, such as low-dose target localization and patient alignment. This research was supported by the Mid-career Researcher Program through NRF funded by the Ministry of Science, ICT & Future Planning of Korea (NRF-2014R1A2A1A10050270) and by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291)
NASA Astrophysics Data System (ADS)
Saager, Rolf B.; Baldado, Melissa L.; Rowland, Rebecca A.; Kelly, Kristen M.; Durkin, Anthony J.
2018-04-01
With recent proliferation in compact and/or low-cost clinical multispectral imaging approaches and commercially available components, questions remain whether they adequately capture the requisite spectral content of their applications. We present a method to emulate the spectral range and resolution of a variety of multispectral imagers, based on in-vivo data acquired from spatial frequency domain spectroscopy (SFDS). This approach simulates spectral responses over 400 to 1100 nm. Comparing emulated data with full SFDS spectra of in-vivo tissue affords the opportunity to evaluate whether the sparse spectral content of these imagers can (1) account for all sources of optical contrast present (completeness) and (2) robustly separate and quantify sources of optical contrast (crosstalk). We validate the approach over a range of tissue-simulating phantoms, comparing the SFDS-based emulated spectra against measurements from an independently characterized multispectral imager. Emulated results match the imager across all phantoms (<3 % absorption, <1 % reduced scattering). In-vivo test cases (burn wounds and photoaging) illustrate how SFDS can be used to evaluate different multispectral imagers. This approach provides an in-vivo measurement method to evaluate the performance of multispectral imagers specific to their targeted clinical applications and can assist in the design and optimization of new spectral imaging devices.
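Emulating a multispectral imager from full SFDS spectra amounts to integrating each spectrum against the imager's band response functions. The Gaussian band shape below is an assumption (real instrument responses differ), as are the function and parameter names:

```python
import numpy as np

def emulate_bands(wavelengths, spectrum, centers, fwhm):
    """Emulate a multispectral imager's band values from a full spectrum.

    Each band is modeled as a normalized Gaussian response of the given
    FWHM centered at each entry of `centers` -- an assumed band shape.
    Returns the response-weighted mean of the spectrum per band.
    """
    sigma = fwhm / 2.355                     # FWHM -> Gaussian sigma
    bands = []
    for c in centers:
        r = np.exp(-0.5 * ((wavelengths - c) / sigma) ** 2)
        bands.append((r * spectrum).sum() / r.sum())
    return np.array(bands)

# sanity check: a flat spectrum must emulate to the same flat value
wl = np.linspace(400.0, 1100.0, 701)       # the 400-1100 nm range above
flat = np.full_like(wl, 3.0)
b = emulate_bands(wl, flat, [500.0, 800.0], 20.0)
```

Comparing such sparse band values against the full spectrum is exactly what lets one quantify completeness and crosstalk for a given imager design.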
Cerenkov luminescence tomography based on preconditioning orthogonal matching pursuit
NASA Astrophysics Data System (ADS)
Liu, Haixiao; Hu, Zhenhua; Wang, Kun; Tian, Jie; Yang, Xin
2015-03-01
Cerenkov luminescence imaging (CLI) is a novel optical imaging method and has proved to be a potential substitute for traditional radionuclide imaging such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). This imaging method inherits the high sensitivity of nuclear medicine and the low cost of optical molecular imaging. To obtain depth information about the radioactive isotope, Cerenkov luminescence tomography (CLT) is established and the 3D distribution of the isotope is reconstructed. However, because of strong absorption and scattering, reconstruction of the CLT sources reduces to an ill-posed linear system that is difficult to solve. In this work, the sparse nature of the light source was taken into account and a preconditioning orthogonal matching pursuit (POMP) method was established to effectively reduce the ill-posedness and obtain better reconstruction accuracy. To demonstrate the accuracy and speed of this algorithm, a heterogeneous numerical phantom experiment and an in vivo mouse experiment were conducted. Both the simulation and the mouse experiment showed that our reconstruction method provides more accurate results than the traditional Tikhonov regularization method and the ordinary orthogonal matching pursuit (OMP) method. Our reconstruction method will provide technical support for biological applications of Cerenkov luminescence.
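The ordinary orthogonal matching pursuit at the core of the method can be sketched as follows; POMP additionally applies a preconditioner to the system matrix and measurements before running this greedy loop, which is omitted here:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse x
    with y ~ A @ x. At each step the column most correlated with the
    residual joins the support, then coefficients are refit by least
    squares over the whole support.
    """
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonalize
    x[support] = coef
    return x

# trivially well-conditioned check: identity dictionary, 2-sparse signal
A = np.eye(5)
y = np.array([0.0, 2.0, 0.0, -1.0, 0.0])
x = omp(A, y, 2)
```

In CLT the columns of A come from the light-propagation forward model, so the recovered sparse x localizes the few voxels containing the source.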
Image fusion based on Bandelet and sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi
2018-04-01
The Bandelet transform can capture geometric regular directions and geometric flow, and sparse representation can represent signals with as few atoms as possible over an over-complete dictionary; both are applicable to image fusion. A new fusion method based on Bandelet and sparse representation is therefore proposed, which fuses the Bandelet coefficients of multi-source images to obtain high-quality fusion results. Tests are performed on remote sensing images and simulated multi-focus images; experimental results show that the new method outperforms the compared methods in terms of objective evaluation indexes and subjective visual effects.
Larsson, Daniel H; Lundström, Ulf; Westermark, Ulrica K; Arsenian Henriksson, Marie; Burvall, Anna; Hertz, Hans M
2013-02-01
Small-animal studies require images with high spatial resolution and high contrast due to the small scale of the structures. X-ray imaging systems for small animals are often limited by the microfocus source. Here, the authors investigate the applicability of liquid-metal-jet x-ray sources for such high-resolution small-animal imaging, both in tomography based on absorption and in soft-tissue tumor imaging based on in-line phase contrast. The experimental arrangement consists of a liquid-metal-jet x-ray source, the small-animal object on a rotating stage, and an imaging detector. The source-to-object and object-to-detector distances are adjusted for the preferred contrast mechanism. Two different liquid-metal-jet sources are used, one circulating a Ga∕In∕Sn alloy and the other an In∕Ga alloy for higher penetration through thick tissue. Both sources are operated at 40-50 W electron-beam power with ∼7 μm x-ray spots, providing high spatial resolution in absorption imaging and high spatial coherence for the phase-contrast imaging. High-resolution absorption imaging is demonstrated on mice with CT, showing 50 μm bone details in the reconstructed slices. High-resolution phase-contrast soft-tissue imaging shows clear demarcation of mm-sized tumors at much lower dose than is required in absorption. This is the first application of liquid-metal-jet x-ray sources for whole-body small-animal x-ray imaging. In absorption, the method allows high-resolution tomographic skeletal imaging with potential for significantly shorter exposure times due to the power scalability of liquid-metal-jet sources. In phase contrast, the authors use a simple in-line arrangement to show distinct tumor demarcation of few-mm-sized tumors. This is, to their knowledge, the first small-animal tumor visualization with a laboratory phase-contrast system.
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners using a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package; images were reconstructed with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE) OSMAPOSL, ordered subsets separable paraboloidal surrogate (OSSPS), median root prior (MRP), and OSMAPOSL-with-quadratic-prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets with various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations, and MTF also improved with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful for the resolution assessment of PET scanners.
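The MTF estimation from the reconstructed plane-source images follows the usual line-spread-function route. A minimal sketch of that final step, assuming a 1-D intensity profile across the imaged plane has already been extracted (this is a generic illustration, not the authors' STIR/GATE pipeline; function and parameter names are ours):

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_mm):
    """Estimate the MTF as the normalized Fourier magnitude of the line spread function."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf - lsf.min()          # remove the background pedestal
    lsf = lsf / lsf.sum()          # unit area, so MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)  # spatial frequency in cycles/mm
    return freqs, mtf
```

For a Gaussian-shaped LSF this yields a monotonically decreasing, Gaussian-shaped MTF, which is the behavior the iteration/beta comparisons in the abstract are quantifying.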
Noise properties and task-based evaluation of diffraction-enhanced imaging
Brankov, Jovan G.; Saiz-Herranz, Alejandro; Wernick, Miles N.
2014-01-01
Abstract. Diffraction-enhanced imaging (DEI) is an emerging x-ray imaging method that simultaneously yields x-ray attenuation and refraction images and holds great promise for soft-tissue imaging. DEI has been studied mainly using synchrotron sources, but efforts have been made to transition the technology to more practical implementations using conventional x-ray sources. The main technical challenge of this transition is the relatively low x-ray flux obtained from conventional sources, leading to photon-limited data contaminated by Poisson noise. We address several issues that must be understood in order to design and optimize DEI systems with respect to noise performance. Specifically, we: (a) develop equations describing the noise properties of DEI images, (b) derive the conditions under which the DEI algorithm is statistically optimal, (c) characterize the imaging performance that can be obtained as measured by task-based metrics, and (d) consider image-processing steps that may be employed to mitigate noise effects. PMID:26158056
PLUS: open-source toolkit for ultrasound-guided intervention systems.
Lasso, Andras; Heffter, Tamas; Rankin, Adam; Pinter, Csaba; Ungi, Tamas; Fichtinger, Gabor
2014-10-01
A variety of advanced image analysis methods have been under development for ultrasound-guided interventions. Unfortunately, the transition from an image analysis algorithm to clinical feasibility trials as part of an intervention system requires integration of many components, such as imaging and tracking devices, data processing algorithms, and visualization software. The objective of our paper is to provide a freely available open-source software platform, PLUS (Public software Library for Ultrasound), to facilitate rapid prototyping of ultrasound-guided intervention systems for translational clinical research. PLUS provides a variety of methods for interventional tool pose and ultrasound image acquisition from a wide range of tracking and imaging devices, spatial and temporal calibration, volume reconstruction, simulated image generation, and recording and live streaming of the acquired data. This paper introduces PLUS, explains its functionality and architecture, and presents typical uses and performance in ultrasound-guided intervention systems. PLUS fulfills the essential requirements for the development of ultrasound-guided intervention systems and aspires to become a widely used translational research prototyping platform. PLUS is freely available as open source software under the BSD license and can be downloaded from http://www.plustoolkit.org.
SIMA: Python software for analysis of dynamic fluorescence imaging data.
Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila
2014-01-01
Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.
Pattern-projected schlieren imaging method using a diffractive optics element
NASA Astrophysics Data System (ADS)
Min, Gihyeon; Lee, Byung-Tak; Kim, Nac Woo; Lee, Munseob
2018-04-01
We propose a novel schlieren imaging method in which a random dot pattern is projected from a light source module containing a diffractive optical element. All apparatus is located on the source side, enabling one-body sensor applications. The projected pattern is distorted by the deflections of schlieren objects, so the displacement vectors of the random dots can be obtained using a particle image velocimetry (PIV) algorithm. Air turbulence induced by a burning candle, a boiling pot, a heater, and a gas torch was successfully imaged, and imaging of areas up to 0.7 m × 0.57 m was shown to be possible. An algorithm to correct the non-uniform sensitivity with respect to the position of a schlieren object was analytically derived and applied to schlieren images of lenses. Comparing the corrected versions with the original schlieren images, the correction improved sensitivity uniformity by a factor of 14.15 on average.
NASA Astrophysics Data System (ADS)
Li, J.; Wen, G.; Li, D.
2018-04-01
To build background information on grassland resource utilization and ecological conditions in Yunnan province and to improve grassland management capacity, the Yunnan province agriculture department carried out a grassland resource investigation in 2017. The traditional investigation method is ground-based survey, which is time-consuming and inefficient, and especially unsuitable for large-scale and hard-to-reach areas. Remote sensing, by contrast, is low-cost, wide-ranging, and efficient, and can objectively reflect the present state of grassland resources; it has become an indispensable grassland monitoring technology and data source and has gained increasing recognition and application in grassland resource monitoring research. This paper studies the application of multi-source remote sensing imagery to the Yunnan province grassland resource investigation. First, grassland thematic information is extracted, and field investigation conducted, through segmentation of high-spatial-resolution BJ-2 imagery. Second, grassland types are classified and grassland degradation is evaluated using Landsat 8 imagery. Third, a grass yield model and quality classification are obtained from wide-swath MODIS imagery together with field sample data. Finally, qualitative field analysis of grassland is performed with UAV remote sensing imagery. Project implementation shows that multi-source remote sensing data can be applied to the grassland resource investigation in Yunnan province and is an indispensable method.
NASA Astrophysics Data System (ADS)
Bradu, Adrian; Kapinchev, Konstantin; Barnes, Fred; Garway-Heath, David F.; Rajendram, Ranjan; Keane, Pearce; Podoleanu, Adrian G.
2015-03-01
Recently, we introduced a novel Optical Coherence Tomography (OCT) method, termed Master Slave OCT (MS-OCT), specialized for delivering en-face images. The method uses principles of spectral-domain interferometry in two stages. MS-OCT operates like a time-domain OCT, selecting only signals from a chosen depth while scanning the laser beam across the eye. Time-domain OCT allows real-time production of an en-face image, although relatively slowly. As a major advance, the Master Slave method allows collection of signals from any number of depths, as required by the user. However, this tremendous advantage of parallel provision of data from numerous depths cannot be fully exploited with commodity multicore processors alone. Here we report the major improvement in processing and display brought about by using graphics cards. We demonstrate images obtained with a swept source at 100 kHz (which determines an acquisition time Ta = 1.6 s for a frame of 200 × 200 pixels). By the end of the acquired frame being scanned, using our computing capacity, 4 simultaneous en-face images could be created in T = 0.8 s. We demonstrate that by using graphics cards, 32 en-face images can be displayed in Td = 0.3 s. Faster swept-source engines can be used with no difference in Td. With 32 images (or more), volumes can be created for 3D display using en-face images, as opposed to the current technology, where volumes are created from cross-sectional OCT images.
Jones, Ryan M; O'Reilly, Meaghan A; Hynynen, Kullervo
2015-07-01
We experimentally verify a previously described technique for performing passive acoustic imaging through an intact human skull using noninvasive, computed tomography (CT)-based aberration corrections [Jones et al., Phys. Med. Biol. 58, 4981-5005 (2013)]. A sparse hemispherical receiver array (30 cm diameter) consisting of 128 piezoceramic discs (2.5 mm diameter, 612 kHz center frequency) was used to passively listen through ex vivo human skullcaps (n = 4) to acoustic emissions from a narrow-band fixed source (1 mm diameter, 516 kHz center frequency) and from ultrasound-stimulated (5 cycle bursts, 1 Hz pulse repetition frequency, estimated in situ peak negative pressure 0.11-0.33 MPa, 306 kHz driving frequency) Definity™ microbubbles flowing through a thin-walled tube phantom. Initial in vivo feasibility testing of the method was performed. The performance of the method was assessed through comparisons to images generated without skull corrections, with invasive source-based corrections, and with water-path control images. For source locations at least 25 mm from the inner skull surface, the modified reconstruction algorithm successfully restored a single focus within the skull cavity at a location within 1.25 mm of the true position of the narrow-band source. The results obtained from imaging single bubbles are in good agreement with numerical simulations of point source emitters and the authors' previous experimental measurements using source-based skull corrections [O'Reilly et al., IEEE Trans. Biomed. Eng. 61, 1285-1294 (2014)]. In a rat model, microbubble activity was mapped through an intact human skull at pressure levels below and above the threshold for focused ultrasound-induced blood-brain barrier opening. During bursts that led to coherent bubble activity, the location of maximum intensity in images generated with CT-based skull corrections was found to deviate by less than 1 mm, on average, from the position obtained using source-based corrections.
Taken together, these results demonstrate the feasibility of using the method to guide bubble-mediated ultrasound therapies in the brain. The technique may also have application in ultrasound-based cerebral angiography.
Pixel-based image fusion with false color mapping
NASA Astrophysics Data System (ADS)
Zhao, Wei; Mao, Shiyi
2003-06-01
In this paper, we propose a pixel-based image fusion algorithm that combines gray-level image fusion with false color mapping. The algorithm integrates two gray-level images from different sensor modalities, or acquired at different frequencies, and produces a fused false-color image with higher information content than either original image; objects in the fused color image are easier to recognize. The algorithm has three steps: first, obtain the fused gray-level image of the two original images; second, compute generalized high-boost filtered images between the fused gray-level image and each of the two source images; third, generate the fused false-color image. We use a hybrid averaging-and-selection fusion method to obtain the fused gray-level image, which provides better detail than either original image while reducing noise. However, the fused gray-level image cannot contain all the detail present in the two source images, and details in a gray-level image are harder to discern than in a color image, so a color fused image is necessary. To create color variation and enhance detail in the final fused image, we produce three generalized high-boost filtered images and display them through the red, green, and blue channels, respectively, producing the final fused color image. The method is applied to two SAR images acquired over the San Francisco area (California, USA). The result shows that the fused false-color image enhances the visibility of certain details, and the resolution of the final false-color image is the same as that of the input images.
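The three steps above can be sketched as follows. This is our own loose interpretation: the agreement threshold, the high-boost factor, and the assignment of the three results to the R, G, and B channels are all assumptions, not the paper's exact choices:

```python
import numpy as np

def false_color_fuse(a, b, k=1.2, t=0.1):
    """Sketch of the three-step pipeline for two gray images a, b in [0, 1]."""
    a = a.astype(float)
    b = b.astype(float)
    # Step 1: hybrid averaging/selection fusion; average where the sources
    # agree, select the brighter pixel where they differ strongly
    diff = np.abs(a - b)
    fused = np.where(diff < t, 0.5 * (a + b), np.maximum(a, b))
    # Step 2: generalized high-boost images between the fused image and
    # each source (k > 1 amplifies detail differences)
    hb_a = np.clip(k * fused - a, 0.0, 1.0)
    hb_b = np.clip(k * fused - b, 0.0, 1.0)
    # Step 3: display the three results through the color channels
    # (channel assignment here is purely illustrative)
    return np.dstack([hb_a, fused, hb_b])
```

The point of the construction is that pixels where the two sensors disagree acquire a distinct hue, while agreeing regions stay near gray.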
Seismoelectric imaging of shallow targets
Haines, S.S.; Pride, S.R.; Klemperer, S.L.; Biondi, B.
2007-01-01
We have undertaken a series of controlled field experiments to develop seismoelectric experimental methods for near-surface applications and to improve our understanding of seismoelectric phenomena. In a set of off-line geometry surveys (source separated from the receiver line), we place seismic sources and electrode array receivers on opposite sides of a man-made target (two sand-filled trenches) to record separately two previously documented seismoelectric modes: (1) the electromagnetic interface response signal created at the target and (2) the coseismic electric fields located within a compressional seismic wave. With the seismic source point in the center of a linear electrode array, we identify the previously undocumented seismoelectric direct field, and the Lorentz field of the metal hammer plate moving in the earth's magnetic field. We place the seismic source in the center of a circular array of electrodes (radial and circumferential orientations) to analyze the source-related direct and Lorentz fields and to establish that these fields can be understood in terms of simple analytical models. Using an off-line geometry, we create a multifold, 2D image of our trenches as dipping layers, and we also produce a complementary synthetic image through numerical modeling. These images demonstrate that off-line geometry (e.g., crosswell) surveys offer a particularly promising application of the seismoelectric method because they effectively separate the interface response signal from the (generally much stronger) coseismic and source-related fields. © 2007 Society of Exploration Geophysicists.
Design of small confocal endo-microscopic probe working under multiwavelength environment
NASA Astrophysics Data System (ADS)
Kim, Young-Duk; Ahn, MyoungKi; Gweon, Dae-Gab
2010-02-01
Recently, optical imaging systems have come into wide medical use. With an optical imaging system, specific diseases can be diagnosed at an early stage, because such systems offer high resolution and a variety of imaging methods. These methods are used to obtain high-resolution images of the human body and can verify whether cells are infected by a virus. The confocal microscope is one of the best-known imaging systems used for in vivo imaging. Because most diseases are accompanied by cellular-level changes, doctors can make early diagnoses by observing cellular images of human organs. Current research focuses on the development of endo-microscopes, which offer a great advantage in accessibility to the human body. In this research, we designed a small probe that connects to a confocal microscope through an optical fiber bundle and works as an endo-microscope. The probe is designed mainly to correct chromatic aberration, so that various laser sources can be used for both fluorescence-type and reflection-type confocal imaging. By using two kinds of laser sources at the same time, we demonstrated a multi-modality confocal endo-microscope.
Implementation issues of the nearfield equivalent source imaging microphone array
NASA Astrophysics Data System (ADS)
Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen
2011-01-01
This paper revisits a nearfield microphone array technique, nearfield equivalent source imaging (NESI), proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity, and sound power are calculated using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom for far-field arrays. For applications in which only a patch array with sparse sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using ERA and the frequency-domain NESI, compared with the traditional method. The NESI technique was also experimentally validated using practical sources, including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside, and proved effective in identifying the broadband and non-stationary signals they produced.
Analytic reconstruction algorithms for triple-source CT with horizontal data truncation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ming; Yu, Hengyong, E-mail: hengyong-yu@ieee.org
2015-10-15
Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for large objects. Methods: The study is conducted using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB: the basic platform is constructed in MATLAB, and the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, in which three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation not only keeps most of the advantages of a traditional multisource system but also covers a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.
A review of multivariate methods in brain imaging data fusion
NASA Astrophysics Data System (ADS)
Sui, Jing; Adali, Tülay; Li, Yi-Ou; Yang, Honghui; Calhoun, Vince D.
2010-03-01
On joint analysis of multi-task brain imaging data sets, a variety of multivariate methods have shown their strengths and have been applied for different purposes based on their respective assumptions. In this paper, we provide a comprehensive review of the optimization assumptions of six data fusion models, including 1) four blind methods: joint independent component analysis (jICA), multimodal canonical correlation analysis (mCCA), CCA for blind source separation (sCCA), and partial least squares (PLS); and 2) two semi-blind methods: parallel ICA and coefficient-constrained ICA (CC-ICA). We also propose a novel model for joint blind source separation (BSS) of two datasets using a combination of sCCA and jICA, i.e., 'CCA+ICA', which, compared with other joint BSS methods, achieves higher decomposition accuracy as well as correct automatic source linking. Applications of the proposed model to real multi-task fMRI data are compared to joint ICA and mCCA; CCA+ICA further shows its advantages in capturing both shared and distinct information, differentiating groups, and interpreting duration of illness in schizophrenia patients, hence promising applicability to a wide variety of medical imaging problems.
Multiple source associated particle imaging for simultaneous capture of multiple projections
Bingham, Philip R; Hausladen, Paul A; McConchi, Seth M; Mihalczo, John T; Mullens, James A
2013-11-19
Disclosed herein are representative embodiments of methods, apparatus, and systems for performing neutron radiography. For example, in one exemplary method, an object is interrogated with a plurality of neutrons. The plurality of neutrons includes a first portion of neutrons generated from a first neutron source and a second portion of neutrons generated from a second neutron source. Further, at least some of the first portion and the second portion are generated during a same time period. In the exemplary method, one or more neutrons from the first portion and one or more neutrons from the second portion are detected, and an image of the object is generated based at least in part on the detected neutrons from the first portion and the detected neutrons from the second portion.
Babaloukas, Georgios; Tentolouris, Nicholas; Liatis, Stavros; Sklavounou, Alexandra; Perrea, Despoina
2011-12-01
Correction of vignetting in images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for retrospective correction of vignetting in medical microscopy images and compare them with a prospective correction method. One digital image from each of four different tissues was used, and a vignetting effect was applied to each of these images. The resulting vignetted image was replicated four times, and in each replica a different method for vignetting correction was applied with the Fiji and GIMP software tools. The highest peak signal-to-noise ratio from the comparison of each method to the original image was obtained from the prospective method in all tissues. The morphological filtering method provided the highest peak signal-to-noise ratio value among the retrospective methods. The prospective method is suggested as the method of choice for correction of vignetting; if it is not applicable, then morphological filtering may be suggested as the retrospective alternative. © 2011 The Authors. Journal of Microscopy © 2011 Royal Microscopical Society.
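The divide-out-the-background idea behind retrospective correction can be sketched as follows. This is a simplified illustration that estimates the illumination with a Gaussian low-pass filter; the paper's best-performing retrospective variant builds the estimate with morphological filtering instead, and the sigma here is an arbitrary assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retrospective_flatfield(img, sigma=8):
    """Estimate the slowly varying illumination from the image itself and divide it out.

    A morphological-filtering variant would replace the Gaussian low-pass below
    with a grayscale opening (e.g., scipy.ndimage.grey_opening).
    """
    img = np.asarray(img, dtype=float)
    background = gaussian_filter(img, sigma)
    background = np.maximum(background, 1e-6)   # guard against division by zero
    # rescale so the corrected image keeps the original mean intensity
    return img / background * background.mean()
```

On a uniformly bright field with a radial falloff, dividing by the low-pass estimate largely flattens the intensity profile, which is what the PSNR comparison in the study measures.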
NASA Astrophysics Data System (ADS)
Qin, Xinqiang; Hu, Gang; Hu, Kai
2018-01-01
The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces bidimensional intrinsic mode functions that are mismatched in number or in frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and the residue component. Second, for the IMF component, a selection and weighted-average strategy based on local area energy is used to obtain the high-frequency fusion component. Third, for the residue component, a selection and weighted-average strategy based on local average gray difference is used to obtain the low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm outperforms methods based on the wavelet transform, line- and column-based EMD, and complex empirical mode decomposition, both in visual quality and in objective evaluation criteria.
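Assuming the BEMD decomposition (IMF plus residue) is already available, the two fusion rules might look like the sketch below. The window size, the selection thresholds, and the exact definitions of "local area energy" and "local average gray difference" are our assumptions, not the paper's:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_imf(imf_a, imf_b, win=3, lo=0.3, hi=0.7):
    """High-frequency rule: select or weight by local region energy."""
    ea = uniform_filter(imf_a ** 2, win)           # local energy of each IMF
    eb = uniform_filter(imf_b ** 2, win)
    w = ea / (ea + eb + 1e-12)
    # hard selection where one energy clearly dominates, weighted average otherwise
    return np.where(w > hi, imf_a,
                    np.where(w < lo, imf_b, w * imf_a + (1 - w) * imf_b))

def fuse_residue(res_a, res_b, win=3):
    """Low-frequency rule: weight by local average gray difference."""
    da = uniform_filter(np.abs(res_a - uniform_filter(res_a, win)), win)
    db = uniform_filter(np.abs(res_b - uniform_filter(res_b, win)), win)
    w = da / (da + db + 1e-12)
    return w * res_a + (1 - w) * res_b
```

The fused image would then be the sum of the fused IMF and the fused residue (the inverse of the one-IMF decomposition).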
Assessment of using ultrasound images as prior for diffuse optical tomography regularization matrix
NASA Astrophysics Data System (ADS)
Althobaiti, Murad; Vavadi, Hamed; Zhu, Quing
2017-02-01
Ultrasound-guided diffuse optical tomography (DOT) is an emerging technique that maps hemoglobin concentrations within tissue for breast cancer detection and diagnosis. Near-infrared (NIR) optical imaging has received much research attention for this purpose, since DOT image contrast is closely related to hemoglobin oxygenation and deoxygenation, an important factor in differentiating malignant from benign tumors. DOT probes deep scattering tissue (1-5 cm) with an NIR source-detector probe and detects NIR photons in the diffusive regime. Such photons usually reach the detector without significant information about their source direction or propagation path, so optical reconstruction of the medium characteristics is ill-posed even with tomographic and back-projection techniques, and accurate image recovery requires an effective reconstruction method. Here, we illustrate a method in which ultrasound images are encoded as a prior for regularization of the inversion matrix. Results were evaluated using phantom experiments with low- and high-absorption contrasts. The method improves differentiation between low- and high-contrast targets, and could ultimately improve differentiation of malignant from benign cases by increasing the reconstructed absorption ratio of malignant to benign lesions. The phantom results also show improvements in target shape and in the spatial resolution of the reconstructed DOT images.
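One common way to encode an ultrasound region of interest as a prior is a spatially varying Tikhonov penalty that is relaxed inside the ROI, so reconstructed absorption changes are encouraged to concentrate there. A small sketch under that assumption (the penalty structure, names, and parameters are ours, not necessarily the authors' exact regularization matrix):

```python
import numpy as np

def dot_reconstruct(A, b, roi_mask, lam=0.1, relax=10.0):
    """Tikhonov reconstruction with an ultrasound-derived prior.

    Solves x = argmin ||A x - b||^2 + ||L x||^2, where L is diagonal and
    voxels inside the ultrasound ROI are penalized `relax` times less.
    """
    n = A.shape[1]
    d = np.full(n, lam)
    d[roi_mask] = lam / relax          # lighter penalty inside the ROI
    # normal equations: (A^T A + L^T L) x = A^T b, with L^T L = diag(d^2)
    return np.linalg.solve(A.T @ A + np.diag(d ** 2), A.T @ b)
```

With an underdetermined forward matrix, the prior-weighted solution concentrates the recovered absorption in the ROI, whereas a uniform penalty spreads it over the whole volume.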
Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad
2018-02-01
Renal allograft rejection diagnosis depends on assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability in interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods and with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (the Aperio nuclear algorithm for CD3+ cells/mm² and the Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+%) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm²). Based on visual inspection of "markup" images, the CD3 quantitation algorithms produced adequate accuracy. Additionally, the CD3 quantitation algorithms correlated with each other and with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 to < 0.0001). Methods for assessing inflammation suggested a progression through the tubulointerstitial ACR grades, with statistically different results in borderline versus other ACR types, in all but the custom methods. Assessment of CD3-stained slides using various open source image analysis algorithms shows salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.
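In the spirit of the two custom ImageJ-style measures (custom CD3+% and custom CD3+ cells/mm²), a thresholding-and-counting sketch might look like the following; the stain threshold, the connectivity, and the treatment of one connected component as one cell are illustrative assumptions, not the paper's calibrated pipeline:

```python
import numpy as np
from scipy import ndimage

def cd3_measures(stain, pixel_area_mm2, threshold=0.3):
    """Return (percent positive pixels, positive objects per mm^2).

    `stain` is a 2-D array of CD3 stain intensity in [0, 1];
    `pixel_area_mm2` is the physical area of one pixel.
    """
    mask = np.asarray(stain, dtype=float) > threshold
    percent_positive = 100.0 * mask.mean()          # custom CD3+% analogue
    labels, n_objects = ndimage.label(mask)         # each component ~ one cell
    cells_per_mm2 = n_objects / (mask.size * pixel_area_mm2)
    return percent_positive, cells_per_mm2
```

A production pipeline would first separate the DAB stain by color deconvolution and apply maxima finding, as the abstract describes; this sketch only shows the counting arithmetic.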
Multifocus watermarking approach based on discrete cosine transform.
Waheed, Safa Riyadh; Alkawaz, Mohammed Hazim; Rehman, Amjad; Almazyad, Abdulaziz S; Saba, Tanzila
2016-05-01
Image fusion consolidates data from multiple images of the same scene into a single image. Each source image may represent only a partial view of the scene and contains both relevant and irrelevant information. In this study, a new image fusion method is proposed that uses the Discrete Cosine Transform (DCT) to join the source images into a single compact image containing a more accurate depiction of the scene than any individual source image, without distortion or loss of data. The DCT is considered efficient for image fusion. The proposed scheme has five steps: (1) each RGB input image is split into its R, G, and B channels; (2) the DCT is applied to each channel; (3) variance values are computed for the corresponding 8 × 8 blocks of each channel; (4) corresponding blocks of the source images are compared by variance value, and the block with the maximum variance is selected as the block in the new image, repeated for all channels; (5) the inverse DCT is applied to each fused channel to convert coefficients back to pixel values, and the channels are combined to generate the fused image. The proposed technique can mitigate unwanted side effects, such as blurring or blocking artifacts, that degrade the quality of the fused image. The approach is evaluated using three measures: the average Q(abf), standard deviation, and peak signal-to-noise ratio. Experimental results show good performance compared with older techniques. © 2016 Wiley Periodicals, Inc.
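Steps (2) to (5) can be sketched per channel as a block-DCT, variance-based selection. This is a minimal illustration under our own simplifications (image dimensions divisible by the block size, SciPy's dctn as the transform), not the authors' exact implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse_channel(a, b, block=8):
    """Per 8x8 block: apply the 2-D DCT to both sources, keep the block whose
    coefficients have the larger variance (the sharper block), then invert.
    Assumes image dimensions are divisible by `block`."""
    h, w = a.shape
    out = np.empty((h, w), dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            ca = dctn(a[i:i + block, j:j + block], norm='ortho')
            cb = dctn(b[i:i + block, j:j + block], norm='ortho')
            chosen = ca if ca.var() >= cb.var() else cb
            out[i:i + block, j:j + block] = idctn(chosen, norm='ortho')
    return out

def fuse_rgb(img_a, img_b):
    """Steps 1-5: split channels, fuse each, restack into an RGB image."""
    return np.dstack([fuse_channel(img_a[..., c], img_b[..., c])
                      for c in range(3)])
```

For multi-focus inputs, the in-focus block carries more high-frequency energy and hence a larger coefficient variance, so each output block comes from whichever source was sharper there.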
Global high-frequency source imaging accounting for complexity in Green's functions
NASA Astrophysics Data System (ADS)
Lambert, V.; Zhan, Z.
2017-12-01
The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.
Application of a multiscale maximum entropy image restoration algorithm to HXMT observations
NASA Astrophysics Data System (ADS)
Guan, Ju; Song, Li-Ming; Huo, Zhuo-Xi
2016-08-01
This paper introduces a multiscale maximum entropy (MSME) algorithm for image restoration of the Hard X-ray Modulation Telescope (HXMT), which is a collimated scan X-ray satellite mainly devoted to a sensitive all-sky survey and pointed observations in the 1-250 keV range. The novelty of the MSME method is to use wavelet decomposition and multiresolution support to control noise amplification at different scales. Our work is focused on the application and modification of this method to restore diffuse sources detected by HXMT scanning observations. An improved method, the ensemble multiscale maximum entropy (EMSME) algorithm, is proposed to alleviate the problem of mode mixing existing in MSME. Simulations have been performed on the detection of the diffuse source Cen A by HXMT in all-sky survey mode. The results show that the MSME method is adapted to the deconvolution task of HXMT for diffuse source detection and the improved method could suppress noise and improve the correlation and signal-to-noise ratio, thus proving itself a better algorithm for image restoration. Through one all-sky survey, HXMT could reach a capacity of detecting a diffuse source with a maximum differential flux of 0.5 mCrab. Supported by Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (XDA04010300) and National Natural Science Foundation of China (11403014)
Photoacoustic image reconstruction: a quantitative analysis
NASA Astrophysics Data System (ADS)
Sperl, Jonathan I.; Zell, Karin; Menzenbach, Peter; Haisch, Christoph; Ketzer, Stephan; Marquart, Markus; Koenig, Hartmut; Vogel, Mika W.
2007-07-01
Photoacoustic imaging is a promising new way to generate unprecedented contrast in ultrasound diagnostic imaging. It differs from other medical imaging approaches in that it provides spatially resolved information about the optical absorption of targeted tissue structures. Because the data acquisition process deviates from standard clinical ultrasound, choice of the proper image reconstruction method is crucial for successful application of the technique. In the literature, multiple approaches have been advocated, and the purpose of this paper is to compare four reconstruction techniques, focusing on resolution limits, stability, reconstruction speed, and SNR. We generated experimental and simulated data and reconstructed images of the pressure distribution using four different methods: delay-and-sum (DnS), circular backprojection (CBP), generalized 2D Hough transform (HTA), and Fourier transform (FTA). All methods were able to depict the point sources properly. DnS and CBP produce blurred images containing typical superposition artifacts. The HTA provides excellent SNR and allows good point-source separation. The FTA is the fastest and shows the best FWHM. In our study, we found the FTA to show the best overall performance. It allows a very fast and theoretically exact reconstruction. Only a hardware-implemented DnS might be faster and enable real-time imaging. A commercial system may also implement several methods to fully utilize the new contrast mechanism and guarantee optimal resolution and fidelity.
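Of the four methods compared, delay-and-sum is the simplest to sketch. Below is a minimal 2-D version with a hypothetical helper name, assuming a homogeneous speed of sound and point-like sensors; it sums each sensor trace at the time of flight from each trial pixel:

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, grid, c, fs):
    """signals: (n_sensors, n_samples) pressure traces; sensor_pos: (n_sensors, 2);
    grid: (n_pixels, 2) trial pixel coordinates; c: speed of sound; fs: sampling rate."""
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(grid))
    for s in range(n_sensors):
        # time of flight from each pixel to this sensor, as a sample index
        d = np.linalg.norm(grid - sensor_pos[s], axis=1)
        idx = np.clip(np.round(d / c * fs).astype(int), 0, n_samples - 1)
        image += signals[s, idx]   # stack the appropriately delayed samples
    return image / n_sensors
```

Signals from a true source add coherently at its pixel, while other pixels accumulate the superposition artifacts the abstract mentions.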
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewer, Brendon J.; Foreman-Mackey, Daniel; Hogg, David W., E-mail: bj.brewer@auckland.ac.nz
We present and implement a probabilistic (Bayesian) method for producing catalogs from images of stellar fields. The method is capable of inferring the number of sources N in the image and can also handle the challenges introduced by noise, overlapping sources, and an unknown point-spread function. The luminosity function of the stars can also be inferred, even when the precise luminosity of each star is uncertain, via the use of a hierarchical Bayesian model. The computational feasibility of the method is demonstrated on two simulated images with different numbers of stars. We find that our method successfully recovers the input parameter values along with principled uncertainties even when the field is crowded. We also compare our results with those obtained from the SExtractor software. While the two approaches largely agree about the fluxes of the bright stars, the Bayesian approach provides more accurate inferences about the faint stars and the number of stars, particularly in the crowded case.
Grid-based precision aim system and method for disrupting suspect objects
Gladwell, Thomas Scott; Garretson, Justin; Hobart, Clinton G.; Monda, Mark J.
2014-06-10
A system and method for disrupting at least one component of a suspect object is provided. The system has a source for passing radiation through the suspect object, a grid board positionable adjacent the suspect object (the grid board having a plurality of grid areas, the radiation from the source passing through the grid board), a screen for receiving the radiation passing through the suspect object and generating at least one image, a weapon for deploying a discharge, and a targeting unit for displaying the image of the suspect object and aiming the weapon according to a disruption point on the displayed image and deploying the discharge into the suspect object to disable the suspect object.
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
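The core idea of source encoding can be illustrated on a toy linear problem standing in for the wave-equation forward model (everything here is a schematic stand-in, not the WISE method itself): at each iteration the per-source data are collapsed into one randomly encoded "supershot", and a stochastic gradient step is taken on the encoded misfit, so only one forward operator is applied per iteration instead of one per source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-in for the per-source forward model: one matrix per source.
n_src, n_rec, n_model = 16, 24, 10
A = rng.normal(size=(n_src, n_rec, n_model))
m_true = rng.normal(size=n_model)
data = A @ m_true                                # (n_src, n_rec) noiseless data

m = np.zeros(n_model)
step = 5e-4
for _ in range(3000):
    w = rng.choice([-1.0, 1.0], size=n_src)      # random encoding vector
    A_enc = np.tensordot(w, A, axes=1)           # one encoded "supershot" operator
    r = A_enc @ m - w @ data                     # residual of the encoded problem
    m -= step * A_enc.T @ r                      # stochastic gradient descent step
```

In the actual WISE method the matrix products are replaced by wave-equation solves and the data are noisy, so in practice a decaying step size or averaging is used; the toy problem above converges to the true model because the encoded residual vanishes there.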
K-edge subtraction synchrotron X-ray imaging in bio-medical research.
Thomlinson, W; Elleaume, H; Porra, L; Suortti, P
2018-05-01
High contrast in X-ray medical imaging, while maintaining acceptable radiation dose levels to the patient, has long been a goal. One of the most promising methods is that of K-edge subtraction imaging. This technique, first advanced as long ago as 1953 by B. Jacobson, uses the large difference in the absorption coefficient of elements at energies above and below the K-edge. Two images, one taken above the edge and one below the edge, are subtracted leaving, ideally, only the image of the distribution of the target element. This paper reviews the development of the KES techniques and technology as applied to bio-medical imaging from the early low-power tube sources of X-rays to the latest high-power synchrotron sources. Applications to coronary angiography, functional lung imaging and bone growth are highlighted. A vision of possible imaging with new compact sources is presented. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
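The subtraction step itself is simple: with transmission images taken just above and just below the K-edge, Beer-Lambert log-subtraction isolates the target element. The sketch below uses a hypothetical helper name and assumes attenuation by all other tissues is equal at the two (close) energies, so it cancels in the difference:

```python
import numpy as np

def kes_image(img_above, img_below, mu_above, mu_below):
    """Log-subtract transmission images (I/I0) acquired above and below the
    target element's K-edge; dividing by the jump in its mass attenuation
    coefficient yields an areal-density map of that element alone."""
    t_above = -np.log(img_above)   # Beer-Lambert: -ln(I/I0) = line integral
    t_below = -np.log(img_below)
    return (t_above - t_below) / (mu_above - mu_below)
```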
Mouthaan, Brian E; Rados, Matea; Barsi, Péter; Boon, Paul; Carmichael, David W; Carrette, Evelien; Craiu, Dana; Cross, J Helen; Diehl, Beate; Dimova, Petia; Fabo, Daniel; Francione, Stefano; Gaskin, Vladislav; Gil-Nagel, Antonio; Grigoreva, Elena; Guekht, Alla; Hirsch, Edouard; Hecimovic, Hrvoje; Helmstaedter, Christoph; Jung, Julien; Kalviainen, Reetta; Kelemen, Anna; Kimiskidis, Vasilios; Kobulashvili, Teia; Krsek, Pavel; Kuchukhidze, Giorgi; Larsson, Pål G; Leitinger, Markus; Lossius, Morten I; Luzin, Roman; Malmgren, Kristina; Mameniskiene, Ruta; Marusic, Petr; Metin, Baris; Özkara, Cigdem; Pecina, Hrvoje; Quesada, Carlos M; Rugg-Gunn, Fergus; Rydenhag, Bertil; Ryvlin, Philippe; Scholly, Julia; Seeck, Margitta; Staack, Anke M; Steinhoff, Bernhard J; Stepanov, Valentin; Tarta-Arsene, Oana; Trinka, Eugen; Uzan, Mustafa; Vogt, Viola L; Vos, Sjoerd B; Vulliémoz, Serge; Huiskamp, Geertjan; Leijten, Frans S S; Van Eijsden, Pieter; Braun, Kees P J
2016-05-01
In 2014 the European Union-funded E-PILEPSY project was launched to improve awareness of, and accessibility to, epilepsy surgery across Europe. We aimed to investigate the current use of neuroimaging, electromagnetic source localization, and imaging postprocessing procedures in participating centers. A survey on the clinical use of imaging, electromagnetic source localization, and postprocessing methods in epilepsy surgery candidates was distributed among the 25 centers of the consortium. A descriptive analysis was performed, and results were compared to existing guidelines and recommendations. Response rate was 96%. Standard epilepsy magnetic resonance imaging (MRI) protocols are acquired at 3 Tesla by 15 centers and at 1.5 Tesla by 9 centers. Three centers perform 3T MRI only if indicated. Twenty-six different MRI sequences were reported. Six centers follow all guideline-recommended MRI sequences with the proposed slice orientation and slice thickness or voxel size. Additional sequences are used by 22 centers. MRI postprocessing methods are used in 16 centers. Interictal positron emission tomography (PET) is available in 22 centers; all using 18F-fluorodeoxyglucose (FDG). Seventeen centers perform PET postprocessing. Single-photon emission computed tomography (SPECT) is used by 19 centers, of which 15 perform postprocessing. Four centers perform neither PET nor SPECT in children. Seven centers apply magnetoencephalography (MEG) source localization, and nine apply electroencephalography (EEG) source localization. Fourteen combinations of inverse methods and volume conduction models are used. We report a large variation in the presurgical diagnostic workup among epilepsy surgery centers across Europe. This diversity underscores the need for high-quality systematic reviews, evidence-based recommendations, and harmonization of available diagnostic presurgical methods. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.
Full range line-field parallel swept source imaging utilizing digital refocusing
NASA Astrophysics Data System (ADS)
Fechtig, Daniel J.; Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A.
2015-12-01
We present geometric optics-based refocusing applied to a novel off-axis line-field parallel swept source imaging (LPSI) system. LPSI is an imaging modality based on line-field swept source optical coherence tomography, which permits 3-D imaging at acquisition speeds of up to 1 MHz. The digital refocusing algorithm applies a defocus-correcting phase term to the Fourier representation of complex-valued interferometric image data, which is based on the geometrical optics information of the LPSI system. We introduce the off-axis LPSI system configuration, the digital refocusing algorithm and demonstrate the effectiveness of our method for refocusing volumetric images of technical and biological samples. An increase of effective in-focus depth range from 255 μm to 4.7 mm is achieved. The recovery of the full in-focus depth range might be especially valuable for future high-speed and high-resolution diagnostic applications of LPSI in ophthalmology.
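The defocus-correcting phase term applied to the Fourier representation of the complex interferometric data can be sketched with a standard angular-spectrum propagator. This is an illustrative stand-in for the LPSI algorithm, which derives its phase term from the system's geometrical-optics parameters; the function name is hypothetical:

```python
import numpy as np

def digital_refocus(field, dz, wavelength, dx):
    """Refocus a complex-valued field by multiplying its 2-D Fourier
    transform (angular spectrum) with a free-space defocus phase over dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    # axial wavenumber; evanescent components are clamped to zero phase
    kz = np.sqrt(np.maximum(0.0, k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2))
    H = np.exp(1j * kz * dz)                     # defocus-correcting phase term
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the phase factor is unitary for propagating components, refocusing by -dz exactly undoes a defocus of +dz, which is what extends the usable in-focus depth range.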
Efficient image enhancement using sparse source separation in the Retinex theory
NASA Astrophysics Data System (ADS)
Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik
2017-11-01
Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
Gaia Data Release 1. Pre-processing and source list creation
NASA Astrophysics Data System (ADS)
Fabricius, C.; Bastian, U.; Portell, J.; Castañeda, J.; Davidson, M.; Hambly, N. C.; Clotet, M.; Biermann, M.; Mora, A.; Busonero, D.; Riva, A.; Brown, A. G. A.; Smart, R.; Lammers, U.; Torra, J.; Drimmel, R.; Gracia, G.; Löffler, W.; Spagna, A.; Lindegren, L.; Klioner, S.; Andrei, A.; Bach, N.; Bramante, L.; Brüsemeister, T.; Busso, G.; Carrasco, J. M.; Gai, M.; Garralda, N.; González-Vidal, J. J.; Guerra, R.; Hauser, M.; Jordan, S.; Jordi, C.; Lenhardt, H.; Mignard, F.; Messineo, R.; Mulone, A.; Serraller, I.; Stampa, U.; Tanga, P.; van Elteren, A.; van Reeven, W.; Voss, H.; Abbas, U.; Allasia, W.; Altmann, M.; Anton, S.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Bombrun, A.; Bouquillon, S.; Bourda, G.; Bucciarelli, B.; Butkevich, A.; Buzzi, R.; Cancelliere, R.; Carlucci, T.; Charlot, P.; Collins, R.; Comoretto, G.; Cross, N.; Crosta, M.; de Felice, F.; Fienga, A.; Figueras, F.; Fraile, E.; Geyer, R.; Hernandez, J.; Hobbs, D.; Hofmann, W.; Liao, S.; Licata, E.; Martino, M.; McMillan, P. J.; Michalik, D.; Morbidelli, R.; Parsons, P.; Pecoraro, M.; Ramos-Lerate, M.; Sarasso, M.; Siddiqui, H.; Steele, I.; Steidelmüller, H.; Taris, F.; Vecchiato, A.; Abreu, A.; Anglada, E.; Boudreault, S.; Cropper, M.; Holl, B.; Cheek, N.; Crowley, C.; Fleitas, J. M.; Hutton, A.; Osinde, J.; Rowell, N.; Salguero, E.; Utrilla, E.; Blagorodnova, N.; Soffel, M.; Osorio, J.; Vicente, D.; Cambras, J.; Bernstein, H.-H.
2016-11-01
Context. The first data release from the Gaia mission contains accurate positions and magnitudes for more than a billion sources, and proper motions and parallaxes for the majority of the 2.5 million Hipparcos and Tycho-2 stars. Aims: We describe three essential elements of the initial data treatment leading to this catalogue: the image analysis, the construction of a source list, and the near real-time monitoring of the payload health. We also discuss some weak points that set limitations for the attainable precision at the present stage of the mission. Methods: Image parameters for point sources are derived from one-dimensional scans, using a maximum likelihood method, under the assumption of a line spread function constant in time, and a complete modelling of bias and background. These conditions are, however, not completely fulfilled. The Gaia source list is built starting from a large ground-based catalogue, but even so a significant number of new entries have been added, and a large number have been removed. The autonomous onboard star image detection will pick up many spurious images, especially around bright sources, and such unwanted detections must be identified. Another key step of the source list creation consists in arranging the more than 10^10 individual detections in spatially isolated groups that can be analysed individually. Results: Complete software systems have been built for the Gaia initial data treatment, which manage approximately 50 million focal plane transits daily, giving transit times and fluxes for 500 million individual CCD images to the astrometric and photometric processing chains. The software also carries out a successful and detailed daily monitoring of Gaia health.
Simulation of a compact analyzer-based imaging system with a regular x-ray source
NASA Astrophysics Data System (ADS)
Caudevilla, Oriol; Zhou, Wei; Stoupin, Stanislav; Verman, Boris; Brankov, J. G.
2017-03-01
Analyzer-based Imaging (ABI) belongs to a broader family of phase-contrast (PC) X-ray techniques. PC techniques measure the deflection of X-rays as they interact with a sample, and are known to provide higher-contrast images of soft tissue than other X-ray methods. This is of high interest in the medical field, in particular for mammography applications. This paper presents a simulation tool for table-top ABI systems using a conventional polychromatic X-ray source.
Electronic imaging system and technique
Bolstad, J.O.
1984-06-12
A method and system for viewing objects obscured by intense plasmas or flames (such as a welding arc) includes a pulsed light source to illuminate the object, the peak brightness of the light reflected from the object being greater than the brightness of the intense plasma or flame; an electronic image sensor for detecting a pulsed image of the illuminated object, the sensor being operated as a high-speed shutter; and electronic means for synchronizing the shutter operation with the pulsed light source.
Electronic imaging system and technique
Bolstad, Jon O.
1987-01-01
A method and system for viewing objects obscured by intense plasmas or flames (such as a welding arc) includes a pulsed light source to illuminate the object, the peak brightness of the light reflected from the object being greater than the brightness of the intense plasma or flame; an electronic image sensor for detecting a pulsed image of the illuminated object, the sensor being operated as a high-speed shutter; and electronic means for synchronizing the shutter operation with the pulsed light source.
NASA Astrophysics Data System (ADS)
Gu, N.; Zhang, H.
2017-12-01
Seismic imaging of fault zones generally involves seismic velocity tomography using first arrival times or full waveforms from earthquakes occurring around the fault zones. However, in most cases seismic velocity tomography only gives a smooth image of the fault zone structure. To get high-resolution structure of the fault zones, seismic migration using active seismic data needs to be used. But it is generally too expensive to conduct active seismic surveys, even in 2D. Here we propose to apply a passive seismic imaging method based on seismic interferometry to image detailed fault zone structures. Seismic interferometry generally refers to the construction of new seismic records for virtual sources and receivers by cross-correlating and stacking the seismic records on physical receivers from physical sources. In this study, we utilize seismic waveforms recorded on surface seismic stations for each earthquake to construct a zero-offset seismic record at each earthquake location, as if there were a virtual receiver at each earthquake location. We have applied this method to image the fault zone structure around the 2013 Mw6.6 Lushan earthquake. After the occurrence of the mainshock, a 29-station temporary array was installed to monitor aftershocks. In this study, we first select aftershocks along several vertical cross sections approximately normal to the fault strike. Then we create several zero-offset seismic reflection sections by seismic interferometry with seismic waveforms from aftershocks around each section. Finally we migrate these zero-offset sections to create seismic structures around the fault zones. From these migration images, we can clearly identify strong reflectors, which correspond to the major reverse fault where the mainshock occurred. This application shows that it is possible to image detailed fault zone structures with passive seismic sources.
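The zero-offset construction can be sketched as autocorrelation-plus-stack. This is a schematic of the interferometric step only (the function name is hypothetical, and real processing adds windowing, spectral whitening, and migration): correlating a trace with itself turns the lag between the direct arrival and a reflection into an apparent two-way reflection time at the virtual receiver:

```python
import numpy as np
from scipy.signal import fftconvolve

def virtual_zero_offset(records):
    """records: (n_sta, n_samples) waveforms of one earthquake. Autocorrelate
    each station's trace and stack; the positive lags approximate a zero-offset
    reflection response tied to the source location."""
    n_sta, n = records.shape
    stack = np.zeros(n)
    for r in records:
        ac = fftconvolve(r, r[::-1])   # full autocorrelation, length 2n - 1
        stack += ac[n - 1:]            # keep zero and positive lags
    return stack / n_sta
```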
NASA Astrophysics Data System (ADS)
Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele
2018-05-01
A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency description formalism used to quantify partially coherent fields, and assumes the object under study is composed of a single material. A priori knowledge of the object's complex refractive index and information obtained by characterizing the spatial coherence of the source are required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm, and is also numerically stable in the presence of noise.
Backprojection of volcanic tremor
Haney, Matthew M.
2014-01-01
Backprojection has become a powerful tool for imaging the rupture process of global earthquakes. We demonstrate the ability of backprojection to illuminate and track volcanic sources as well. We apply the method to the seismic network from Okmok Volcano, Alaska, at the time of an escalation in tremor during the 2008 eruption. Although we are able to focus the wavefield close to the location of the active cone, the network array response lacks sufficient resolution to reveal kilometer-scale changes in tremor location. By deconvolving the response in successive backprojection images, we enhance resolution and find that the tremor source moved toward an intracaldera lake prior to its escalation. The increased tremor therefore resulted from magma-water interaction, in agreement with the overall phreatomagmatic character of the eruption. Imaging of eruption tremor shows that time reversal methods, such as backprojection, can provide new insights into the temporal evolution of volcanic sources.
NASA Astrophysics Data System (ADS)
Heleno, Sandra; Matias, Magda; Pina, Pedro
2015-04-01
Visual interpretation of satellite imagery remains extremely demanding in terms of resources and time, especially when dealing with numerous multi-scale landslides affecting wide areas, as is the case for rainfall-induced shallow landslides. Applying automated methods can contribute to more efficient landslide mapping and updating of existing inventories, and in recent years the number and variety of approaches has been rapidly increasing. Very High Resolution (VHR) images, acquired by space-borne sensors with sub-metric precision, such as Ikonos, Quickbird, GeoEye and Worldview, are increasingly being considered the best option for landslide mapping, but these new levels of spatial detail also present new challenges to state-of-the-art image analysis tools, calling for automated methods specifically suited to mapping landslide events on VHR optical images. In this work we develop and test a methodology for semi-automatic landslide recognition and mapping of landslide source and transport areas. The method combines object-based image analysis and a Support Vector Machine supervised learning algorithm, and was tested using a GeoEye-1 multispectral image, sensed 3 days after a damaging landslide event in Madeira Island, together with a pre-event LiDAR DEM. Our approach proved successful in the recognition of landslides on a 15 km²-wide study area, with 81 out of 85 landslides detected in its validation regions. The classifier also showed reasonable performance (true positive rate above 60% and false positive rate below 36% in both validation regions) in the internal mapping of landslide source and transport areas, in particular on the sunnier east-facing slopes. In the less illuminated areas the classifier is still able to accurately map the source areas, but performs poorly in the mapping of landslide transport areas.
Collimator-free photon tomography
Dilmanian, F. Avraham; Barbour, Randall L.
1998-10-06
A method of uncollimated single photon emission computed tomography includes administering a radioisotope to a patient for producing gamma ray photons from a source inside the patient. Emissivity of the photons is measured externally of the patient with an uncollimated gamma camera at a plurality of measurement positions surrounding the patient for obtaining corresponding energy spectrums thereat. Photon emissivity at the plurality of measurement positions is predicted using an initial prediction of an image of the source. The predicted and measured photon emissivities are compared to obtain differences therebetween. Prediction and comparison are iterated by updating the image prediction until the differences are below a threshold for obtaining a final prediction of the source image.
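The predict/compare/update loop described in this patent abstract can be sketched with a linear system matrix standing in for the physical model of uncollimated detection. All names are hypothetical, and the patent's actual forward model predicts energy spectra rather than a simple matrix product; this is a minimal gradient-style version of the iteration:

```python
import numpy as np

def iterative_reconstruct(measured, forward, image0, n_iter=2000, tol=1e-9):
    """Predict emissivities from the current image estimate, compare with
    measurements, and update the image until the difference is below tol.
    `forward` is a (n_meas, n_pix) system matrix."""
    img = image0.astype(float).copy()
    step = 1.0 / np.linalg.norm(forward, ord=2) ** 2   # stable gradient step size
    for _ in range(n_iter):
        predicted = forward @ img           # predict emissivity at each position
        diff = predicted - measured         # compare with measured emissivity
        if np.max(np.abs(diff)) < tol:      # stop when differences are below threshold
            break
        img -= step * forward.T @ diff      # update the image prediction
        np.clip(img, 0, None, out=img)      # emission images are non-negative
    return img
```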
High resolution x-ray and gamma ray imaging using diffraction lenses with mechanically bent crystals
Smither, Robert K [Hinsdale, IL
2008-12-23
A method for high spatial resolution imaging of a plurality of sources of x-ray and gamma-ray radiation is provided. High quality mechanically bent diffracting crystals of 0.1 mm radial width are used for focusing the radiation and directing the radiation to an array of detectors, which is used for analyzing the radiation to collect data as to the location of the source of radiation. A computer is used for converting the data to an image. The invention also provides for the use of a multi-component high resolution detector array and for narrow source and detector apertures.
NASA Astrophysics Data System (ADS)
Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji
2004-06-01
Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and the interval of measurement points for topographic imaging on the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and the spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method, improving the results of the one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping of spatial sensitivity profiles indicates that the reconstruction method may be effective in improving the spatial resolution of a two-dimensional reconstruction of the topographic image obtained with a larger interval of measurement points. Near-infrared topography with the reconstruction method can potentially obtain an accurate distribution of absorption change in the brain even if the size of the absorption change is less than 10 mm.
Park, Jae-Hyeung; Kim, Hak-Rin; Kim, Yunhee; Kim, Joohwan; Hong, Jisoo; Lee, Sin-Doo; Lee, Byoungho
2004-12-01
A depth-enhanced three-dimensional-two-dimensional convertible display that uses a polymer-dispersed liquid crystal based on the principle of integral imaging is proposed. In the proposed method, a lens array is located behind a transmission-type display panel to form an array of point-light sources, and a polymer-dispersed liquid crystal is electrically controlled to pass or to scatter light coming from these point-light sources. Therefore, three-dimensional-two-dimensional conversion is accomplished electrically without any mechanical movement. Moreover, the nonimaging structure of the proposed method increases the expressible depth range considerably. We explain the method of operation and present experimental results.
SOURCE EXPLORER: Towards Web Browser Based Tools for Astronomical Source Visualization and Analysis
NASA Astrophysics Data System (ADS)
Young, M. D.; Hayashi, S.; Gopu, A.
2014-05-01
As a new generation of large format, high-resolution imagers comes online (ODI, DECAM, LSST, etc.) we are faced with the daunting prospect of astronomical images containing upwards of hundreds of thousands of identifiable sources. Visualizing and interacting with such large datasets using traditional astronomical tools appears to be unfeasible, and a new approach is required. We present here a method for the display and analysis of arbitrarily large source datasets using dynamically scaling levels of detail, enabling scientists to rapidly move from large-scale spatial overviews down to the level of individual sources and everything in between. Based on the recognized standards of HTML5+JavaScript, we enable observers and archival users to interact with their images and sources from any modern computer without having to install specialized software. We demonstrate the ability to produce large-scale source lists from the images themselves, as well as overlaying data from publicly available source catalogs (2MASS, GALEX, SDSS, etc.) or user-provided source lists. A high-availability cluster of computational nodes allows us to produce these source maps on demand, customized based on user input. User-generated source lists and maps are persistent across sessions and are available for further plotting, analysis, refinement, and culling.
New Techniques for High-Contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline
NASA Technical Reports Server (NTRS)
Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Grady, C. A.;
2012-01-01
We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the Strategic Exploration of Exoplanets and Disks (SEEDS) survey. We implement several new algorithms, including a method to centroid saturated images, a trimmed mean for combining an image sequence that reduces noise by up to approx. 20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field of view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI is freely available for download at www.github.com/t-brandt/acorns_adi under a BSD license.
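The trimmed-mean combination mentioned above can be sketched as follows; this is an illustrative NumPy implementation under an assumed 10% trim fraction, not the ACORNS-ADI code itself. Discarding the extreme values at each pixel suppresses outliers (e.g., cosmic-ray hits) that would bias a plain mean.

```python
import numpy as np

def trimmed_mean_combine(stack, trim_frac=0.1):
    """Combine a sequence of registered frames by averaging each pixel
    after discarding the lowest and highest trim_frac of its values."""
    stack = np.sort(stack, axis=0)                 # sort along the frame axis
    n = stack.shape[0]
    k = int(n * trim_frac)                         # frames dropped at each end
    return stack[k:n - k].mean(axis=0)

frames = np.ones((20, 4, 4))                       # 20 registered unit frames
frames[0, 0, 0] = 100.0                            # a single hot-pixel outlier
combined = trimmed_mean_combine(frames, trim_frac=0.1)
```

The outlier is rejected: the combined pixel stays at 1.0, whereas a plain mean would have been pulled to nearly 6.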
System and method for image registration of multiple video streams
Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton
2018-02-06
Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.
High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.
Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John
2017-02-01
The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images, and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
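The Youden-index cutoff selection described above can be sketched as follows (an illustrative implementation, not the authors' code): the binary threshold is the score value maximizing J = sensitivity + specificity - 1 over a set of candidate thresholds.

```python
import numpy as np

def youden_cutoff(scores, labels, thresholds):
    """Pick the score threshold maximizing the Youden index
    J = sensitivity + specificity - 1."""
    best_j, best_t = -1.0, None
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Toy validation scores: two laterals (label 0), two frontals (label 1)
scores = np.array([0.1, 0.2, 0.8, 0.9])
labels = np.array([0, 0, 1, 1])
t, j = youden_cutoff(scores, labels, np.linspace(0.0, 1.0, 11))
```

With perfectly separable scores the selected cutoff achieves J = 1, i.e., 100% sensitivity and specificity, matching the ideal case reported above.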
An automated multi-scale network-based scheme for detection and location of seismic sources
NASA Astrophysics Data System (ADS)
Poiata, N.; Aden-Antoniow, F.; Satriano, C.; Bernard, P.; Vilotte, J. P.; Obara, K.
2017-12-01
We present a recently developed method - BackTrackBB (Poiata et al. 2016) - for imaging energy radiation from different seismic sources (e.g., earthquakes, LFEs, tremors) in different tectonic environments using continuous seismic records. The method exploits multi-scale frequency-selective coherence in the wave field recorded by regional seismic networks or local arrays. The detection and location scheme is based on space-time reconstruction of the seismic sources through an imaging function built from the sum of station-pair time-delay likelihood functions, projected onto theoretical 3D time-delay grids. This imaging function is interpreted as the location likelihood of the seismic source. A signal pre-processing step constructs a multi-band statistical representation of the non-stationary time series by means of higher-order statistics or energy-envelope characteristic functions. Such signal processing is designed to detect signal transients in time - of different scales and a priori unknown predominant frequency - potentially associated with a variety of sources (e.g., earthquakes, LFEs, tremors), and to improve the performance and robustness of the detection-and-location step. The initial detection-location, based on a single-phase analysis with the P- or S-phase only, can then be improved recursively in a station selection scheme. This scheme - exploiting the 3-component records - makes use of P- and S-phase characteristic functions, extracted after a polarization analysis of the event waveforms, and combines the single-phase imaging functions with the S-P differential imaging functions. The performance of the method is demonstrated here in different tectonic environments: (1) analysis of the one-year-long precursory phase of the 2014 Iquique earthquake in Chile; (2) detection and location of tectonic tremor sources and low-frequency earthquakes during multiple episodes of tectonic tremor activity in southwestern Japan.
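A toy 1-D sketch of the station-pair backprojection idea follows (this is not the BackTrackBB implementation: the geometry, wave speed, and idealized delta-like characteristic functions are all assumptions). For each trial source position, station-pair characteristic functions are aligned at their theoretical time delay and stacked; the imaging function peaks at the true source location.

```python
import numpy as np

v = 3.0                                      # assumed wave speed (km/s)
stations = np.array([0.0, 40.0, 80.0])       # station coordinates (km)
grid = np.linspace(0.0, 100.0, 201)          # trial source positions (km)
dt, nt = 0.1, 400                            # sample interval (s), trace length

true_x, t0 = 55.0, 10.0                      # synthetic source position/time
cf = np.zeros((len(stations), nt))           # characteristic functions
for i, s in enumerate(stations):
    arrival = t0 + abs(true_x - s) / v
    cf[i, int(round(arrival / dt))] = 1.0    # idealized transient onset

# Imaging function: stack station-pair CF coherence at theoretical delays
img = np.zeros(len(grid))
for gi, x in enumerate(grid):
    tt = np.abs(x - stations) / v            # predicted travel times
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            lag = int(round((tt[i] - tt[j]) / dt))
            shifted = np.roll(cf[j], lag)    # align pair (i, j)
            img[gi] += np.max(cf[i] * shifted)

best = grid[np.argmax(img)]                  # location likelihood maximum
```

Only the true position aligns all three station pairs simultaneously, which is why summing pairwise delay likelihoods localizes the source even without picking phases.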
Range determination for scannerless imaging
Muguira, Maritza Rosa; Sackos, John Theodore; Bradley, Bart Davis; Nellums, Robert
2000-01-01
A new method of operating a scannerless range imaging system (e.g., a scannerless laser radar) has been developed. This method is designed to compensate for nonlinear effects which appear in many real-world components. The system operates by determining, for each pixel of an image, the phase shift of the laser modulation, a quantity physically related to the path length between the laser source and the detector.
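The phase-to-range relation can be written out explicitly. This small sketch assumes ideal sinusoidal modulation at frequency f_mod and ignores the nonlinear component effects the patent addresses: a round trip of length 2r at modulation frequency f_mod produces a phase shift phi = 2*pi*f_mod*(2r/c), so r = c*phi / (4*pi*f_mod).

```python
import math

C = 299_792_458.0                    # speed of light (m/s)

def range_from_phase(phase_shift_rad, mod_freq_hz):
    """Invert the modulation phase shift to a one-way range:
    phi = 2*pi*f_mod * (2r/c)  =>  r = c*phi / (4*pi*f_mod)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# A pi/2 phase shift at 10 MHz modulation corresponds to roughly 3.75 m
r = range_from_phase(math.pi / 2, 10e6)
```

The unambiguous range is set by the modulation wavelength: phase wraps every 2*pi, i.e., every c/(2*f_mod) of one-way distance.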
Method and apparatus for imaging a sample on a device
Trulson, Mark; Stern, David; Fiekowsky, Peter; Rava, Richard; Walton, Ian; Fodor, Stephen P. A.
1996-01-01
The present invention provides methods and systems for detecting a labeled marker on a sample located on a support. The imaging system comprises a body for immobilizing the support, an excitation radiation source and excitation optics to generate and direct the excitation radiation at the sample. In response, labeled material on the sample emits radiation which has a wavelength that is different from the excitation wavelength, which radiation is collected by collection optics and imaged onto a detector which generates an image of the sample.
Systems and methods for optically measuring properties of hydrocarbon fuel gases
Adler-Golden, S.; Bernstein, L.S.; Bien, F.; Gersh, M.E.; Goldstein, N.
1998-10-13
A system and method for optical interrogation and measurement of a hydrocarbon fuel gas includes a light source generating light at near-visible wavelengths. A cell containing the gas is optically coupled to the light source which is in turn partially transmitted by the sample. A spectrometer disperses the transmitted light and captures an image thereof. The image is captured by a low-cost silicon-based two-dimensional CCD array. The captured spectral image is processed by electronics for determining energy or BTU content and composition of the gas. The innovative optical approach provides a relatively inexpensive, durable, maintenance-free sensor and method which is reliable in the field and relatively simple to calibrate. In view of the above, accurate monitoring is possible at a plurality of locations along the distribution chain leading to more efficient distribution.
Smith, Ryan L; Haworth, Annette; Panettieri, Vanessa; Millar, Jeremy L; Franich, Rick D
2016-05-01
Verification of high dose rate (HDR) brachytherapy treatment delivery is an important step, but is generally difficult to achieve. A technique is required to monitor the treatment as it is delivered, allowing comparison with the treatment plan and error detection. In this work, we demonstrate a method for monitoring the treatment as it is delivered and directly comparing the delivered treatment with the treatment plan in the clinical workspace. This treatment verification system is based on a flat panel detector (FPD) used for both pre-treatment imaging and source tracking. A phantom study was conducted to establish the resolution and precision of the system. A pretreatment radiograph of a phantom containing brachytherapy catheters is acquired and registration between the measurement and treatment planning system (TPS) is performed using implanted fiducial markers. The measured catheter paths immediately prior to treatment were then compared with the plan. During treatment delivery, the position of the (192)Ir source is determined at each dwell position by measuring the exit radiation with the FPD and directly compared to the planned source dwell positions. The registration between the two corresponding sets of fiducial markers in the TPS and radiograph yielded a registration error (residual) of 1.0 mm. The measured catheter paths agreed with the planned catheter paths on average to within 0.5 mm. The source positions measured with the FPD matched the planned source positions for all dwells on average within 0.6 mm (s.d. 0.3, min. 0.1, max. 1.4 mm). We have demonstrated a method for directly comparing the treatment plan with the delivered treatment that can be easily implemented in the clinical workspace. Pretreatment imaging was performed, enabling visualization of the implant before treatment delivery and identification of possible catheter displacement. Treatment delivery verification was performed by measuring the source position as each dwell was delivered. 
This approach using a FPD for imaging and source tracking provides a noninvasive method of acquiring extensive information for verification in HDR prostate brachytherapy.
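The pretreatment registration step between planned and measured fiducial markers can be sketched as a least-squares rigid fit (an illustrative 2-D Kabsch-style implementation, not the authors' software); the returned mean residual corresponds to the ~1.0 mm registration error quoted above.

```python
import numpy as np

def register_rigid_2d(planned, measured):
    """Least-squares rigid (rotation + translation) fit of measured
    fiducial positions onto planned ones; returns the mean residual."""
    pc, mc = planned.mean(axis=0), measured.mean(axis=0)
    H = (measured - mc).T @ (planned - pc)     # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    aligned = (measured - mc) @ R.T + pc
    return np.linalg.norm(aligned - planned, axis=1).mean()

planned = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
theta = 0.3                                    # synthetic misalignment (rad)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
measured = planned @ rot.T + np.array([5.0, -3.0])
residual = register_rigid_2d(planned, measured)
```

With a purely rigid misalignment the residual is numerically zero; in practice marker localization noise and catheter displacement leave the millimetre-scale residuals reported above.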
Instrumentation in Diffuse Optical Imaging
Zhang, Xiaofeng
2014-01-01
Diffuse optical imaging is highly versatile and has a very broad range of applications in biology and medicine. It covers diffuse optical tomography, fluorescence diffuse optical tomography, bioluminescence, and a number of other new imaging methods. These methods of diffuse optical imaging have diversified instrument configurations but share the same core physical principle – light propagation in highly diffusive media, i.e., the biological tissue. In this review, the author summarizes the latest development in instrumentation and methodology available to diffuse optical imaging in terms of system architecture, light source, photo-detection, spectral separation, signal modulation, and lastly imaging contrast. PMID:24860804
NASA Astrophysics Data System (ADS)
van Haver, Sven; Janssen, Olaf T. A.; Braat, Joseph J. M.; Janssen, Augustus J. E. M.; Urbach, H. Paul; Pereira, Silvania F.
2008-03-01
In this paper we introduce a new mask imaging algorithm that is based on the source point integration method (or Abbe method). The method presented here distinguishes itself from existing methods by exploiting the through-focus imaging feature of the Extended Nijboer-Zernike (ENZ) theory of diffraction. An introduction to ENZ theory and its application in general imaging is provided, after which we describe the mask imaging scheme that can be derived from it. The remainder of the paper is devoted to illustrating the advantages of the new method over existing (Hopkins-based) methods. To this end, several simulation results are included that illustrate the advantages arising from the accurate incorporation of isolated structures, the rigorous treatment of the object (mask topography), and the fully vectorial through-focus image formation of the ENZ-based algorithm.
Automated source classification of new transient sources
NASA Astrophysics Data System (ADS)
Oertel, M.; Kreikenbohm, A.; Wilms, J.; DeLuca, A.
2017-10-01
The EXTraS project harvests the hitherto unexplored temporal-domain information buried in the serendipitous data collected by the European Photon Imaging Camera (EPIC) onboard the ESA XMM-Newton mission since its launch. This includes a search for fast transients missed by standard image analysis, and a search for and characterization of variability in hundreds of thousands of sources. We present an automated classification scheme for new transient sources in the EXTraS project. The method is as follows: source classification features of a training sample are used to train machine learning algorithms (performed in R; randomForest (Breiman, 2001) in supervised mode), which are then tested on a sample of known source classes and used for classification.
Raspberry Pi-powered imaging for plant phenotyping.
Tovar, Jose C; Hoyer, J Steen; Lin, Andy; Tielking, Allison; Callen, Steven T; Elizabeth Castillo, S; Miller, Michael; Tessman, Monica; Fahlgren, Noah; Carrington, James C; Nusinow, Dmitri A; Gehan, Malia A
2018-03-01
Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data. We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi-controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (e.g., shape, area, height, color) en masse using open-source image processing software such as PlantCV. This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.
Feng, Bing; Zeng, Gengsheng L
2014-04-10
A challenge for the pixelated detector is that the detector response of a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting between a point-source and the centers of sub-pixels inside each crystal pitch area. For each line ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
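A much-simplified sketch of the ray-traced ideal flood computation follows (illustrative only: the geometry, attenuation coefficient, and crystal depth are assumed values, and the sub-pixel subdivision and reflector attenuation are omitted). Each crystal pixel is weighted by inverse-square distance and obliquity, times the photo-peak absorption probability along the oblique path through the crystal.

```python
import numpy as np

def ideal_flood(source, pixel_centers, crystal_mu, crystal_depth):
    """Ray-traced ideal flood for a point source: inverse-square and
    obliquity geometric weighting times Beer-Lambert absorption along
    the oblique path through the crystal."""
    flood = np.empty(len(pixel_centers))
    for i, c in enumerate(pixel_centers):
        d = np.linalg.norm(c - source)
        cos_t = abs(source[2] - c[2]) / d          # incidence obliquity
        path = crystal_depth / cos_t               # oblique crystal path
        flood[i] = cos_t / d**2 * (1.0 - np.exp(-crystal_mu * path))
    return flood / flood.max()                     # normalized flood map

pitch = 2.46                                       # illustrative pitch (mm)
centers = np.array([[x * pitch, 0.0, 0.0] for x in range(-10, 11)])
source = np.array([0.0, 0.0, 360.0])               # flood source at 360 mm
flood = ideal_flood(source, centers, crystal_mu=0.3, crystal_depth=5.0)
```

The on-axis pixel receives the strongest response; the ratio of such ideal floods at the measurement distance and at the pinhole focal point is what converts a measured flood into a per-pinhole normalization.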
A Fabry-Perot interferometric imaging spectrometer in LWIR
NASA Astrophysics Data System (ADS)
Zhang, Fang; Gao, Jiaobo; Wang, Nan; Wu, Jianghui; Meng, Hemin; Zhang, Lei; Gao, Shan
2017-02-01
With applications ranging from the desktop to remote sensing, the long wave infrared (LWIR) interferometric spectral imaging system is always with huge volume and large weight. In order to miniaturize and light the instrument, a new method of LWIR spectral imaging system based on a variable gap Fabry-Perot (FP) interferometer is researched. With the system working principle analyzed, theoretically, it is researched that how to make certain the primary parameter, such as, wedge angle of interferometric cavity, f-number of the imaging lens and the relationship between the wedge angle and the modulation of the interferogram. A prototype is developed and a good experimental result of a uniform radiation source, a monochromatic source, is obtained. The research shows that besides high throughput and high spectral resolution, the advantage of miniaturization is also simultaneously achieved in this method.
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to improve the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images combining the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT), and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images at different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of the different frequencies. Thirdly, LSF is applied to enhance the regional features of the DCT coefficients, which aids image feature extraction. Several frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
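The local-spatial-frequency step can be sketched as follows (an illustrative implementation with an assumed 3x3 window; the actual method applies LSF to DCT coefficients of DSWT sub-images rather than raw pixels, as described above). LSF measures local activity via row and column first differences, and the fusion rule keeps the coefficient from whichever source is more active.

```python
import numpy as np

def local_spatial_frequency(img, win=3):
    """Per-pixel spatial frequency: squared row/column first differences
    averaged over a win x win neighbourhood, square-rooted at the end."""
    g = np.zeros_like(img, dtype=float)
    g[:, 1:] += (img[:, 1:] - img[:, :-1]) ** 2   # row-frequency terms
    g[1:, :] += (img[1:, :] - img[:-1, :]) ** 2   # column-frequency terms
    pad = win // 2
    padded = np.pad(g, pad, mode="edge")
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sqrt(padded[i:i + win, j:j + win].mean())
    return out

# Fusion rule sketch: keep the coefficient from the more "active" source
a = np.zeros((8, 8)); a[3:5, 3:5] = 1.0            # textured source
b = np.zeros((8, 8))                               # flat source
fused = np.where(local_spatial_frequency(a) >= local_spatial_frequency(b), a, b)
```

Here the textured source wins everywhere its activity exceeds the flat one, so the fused result preserves the detail region intact.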
Method and apparatus for the simultaneous display and correlation of independently generated images
Vaitekunas, Jeffrey J.; Roberts, Ronald A.
1991-01-01
An apparatus and method for location by location correlation of multiple images from Non-Destructive Evaluation (NDE) and other sources. Multiple images of a material specimen are displayed on one or more monitors of an interactive graphics system. Specimen landmarks are located in each image and mapping functions from a reference image to each other image are calculated using the landmark locations. A location selected by positioning a cursor in the reference image is mapped to the other images and location identifiers are simultaneously displayed in those images. Movement of the cursor in the reference image causes simultaneous movement of the location identifiers in the other images to positions corresponding to the location of the reference image cursor.
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular, with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with a presentation classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts.
The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four classes, namely: optical-aberration-based, sensor-camera-fingerprint-based, processing-statistics-based and processing-regularity-based. Furthermore, this paper investigates the challenging problems and proposed strategies of such schemes based on the suggested taxonomy, to plot an evolution of source camera attribution approaches with respect to subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution.
NASA Astrophysics Data System (ADS)
Ma, Xibo; Tian, Jie; Zhang, Bo; Zhang, Xing; Xue, Zhenwen; Dong, Di; Han, Dong
2011-03-01
Among the many optical molecular imaging modalities, bioluminescence imaging (BLI) has found increasingly wide application in tumor detection and in the evaluation of pharmacodynamics, toxicity, and pharmacokinetics because of its noninvasive molecular- and cellular-level detection ability, high sensitivity, and low cost in comparison with other imaging technologies. However, BLI cannot present the accurate location and intensity of inner bioluminescence sources, such as those in the bone, liver, or lung. Bioluminescence tomography (BLT) shows its advantage in determining the bioluminescence source distribution inside a small animal or phantom. Considering the deficiency of two-dimensional imaging modalities, we developed three-dimensional tomography to reconstruct the bioluminescence source distribution in transgenic mOC-Luc mouse bone from boundary measured data. In this paper, to study osteocalcin (OC) accumulation in transgenic mOC-Luc mouse bone, a BLT reconstruction method based on a multilevel adaptive finite element (FEM) algorithm was used for localizing and quantifying multiple bioluminescence sources. Optical and anatomical information of the tissues is incorporated as a priori knowledge in this method, which can reduce the ill-posedness of BLT. The data were acquired by a dual-modality BLT and micro-CT prototype system that we developed. Through temperature control and absolute intensity calibration, relatively accurate intensities could be calculated. The location of the OC accumulation was reconstructed, which was consistent with the principle of bone differentiation. This result was also verified by an ex vivo experiment in a black 96-well plate using the BLI system and a chemiluminescence apparatus.
NASA Technical Reports Server (NTRS)
Camci, C.; Kim, K.; Hippensteele, S. A.
1992-01-01
A new image processing based color capturing technique for the quantitative interpretation of liquid crystal images used in convective heat transfer studies is presented. This method is highly applicable to the surfaces exposed to convective heating in gas turbine engines. It is shown that, in the single-crystal mode, many of the colors appearing on the heat transfer surface correlate strongly with the local temperature. A very accurate quantitative approach using an experimentally determined linear hue vs temperature relation is found to be possible. The new hue-capturing process is discussed in terms of the strength of the light source illuminating the heat transfer surface, the effect of the orientation of the illuminating source with respect to the surface, crystal layer uniformity, and the repeatability of the process. The present method is more advantageous than the multiple filter method because of its ability to generate many isotherms simultaneously from a single-crystal image at a high resolution in a very time-efficient manner.
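The linear hue-vs-temperature calibration can be sketched as a simple least-squares fit; the calibration points below are invented for illustration, not measured values from the study. Once fitted on a few reference temperatures, the relation converts every hue pixel of a single-crystal image into a temperature, yielding many isotherms from one image.

```python
import numpy as np

# Hypothetical calibration: hue values observed at known temperatures
hue_cal = np.array([40.0, 80.0, 120.0, 160.0])     # hue (arbitrary units)
temp_cal = np.array([30.0, 32.0, 34.0, 36.0])      # temperature (deg C)

# Fit the experimentally determined linear hue-vs-temperature relation
slope, intercept = np.polyfit(hue_cal, temp_cal, 1)

def hue_to_temperature(hue):
    """Convert captured hue values to local surface temperature."""
    return slope * hue + intercept

temps = hue_to_temperature(np.array([40.0, 100.0, 160.0]))
```

Applied pixelwise to a hue image, the same two fitted coefficients map the whole surface to a temperature field in one pass.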
NASA Astrophysics Data System (ADS)
Mezgebo, Biniyam; Nagib, Karim; Fernando, Namal; Kordi, Behzad; Sherif, Sherif
2018-02-01
Swept Source optical coherence tomography (SS-OCT) is an important imaging modality for both medical and industrial diagnostic applications. A cross-sectional SS-OCT image is obtained by applying an inverse discrete Fourier transform (DFT) to axial interferograms measured in the frequency domain (k-space). This inverse DFT is typically implemented as a fast Fourier transform (FFT) that requires the data samples to be equidistant in k-space. As the frequency of light produced by a typical wavelength-swept laser is nonlinear in time, the recorded interferogram samples will not be uniformly spaced in k-space. Many image reconstruction methods have been proposed to overcome this problem. Most such methods rely on oversampling the measured interferogram then use either hardware, e.g., Mach-Zhender interferometer as a frequency clock module, or software, e.g., interpolation in k-space, to obtain equally spaced samples that are suitable for the FFT. To overcome the problem of nonuniform sampling in k-space without any need for interferogram oversampling, an earlier method demonstrated the use of the nonuniform discrete Fourier transform (NDFT) for image reconstruction in SS-OCT. In this paper, we present a more accurate method for SS-OCT image reconstruction from nonuniform samples in k-space using a scaled nonuniform Fourier transform. The result is demonstrated using SS-OCT images of Axolotl salamander eggs.
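The nonuniform DFT idea can be sketched directly (illustrative only; the paper's scaled nonuniform Fourier transform is more refined than this brute-force adjoint NDFT). The transform is evaluated at the true nonuniform wavenumbers, so no interpolation or resampling in k-space is needed before forming the A-scan.

```python
import numpy as np

def ndft_adjoint(k, signal, depths):
    """Evaluate the adjoint nonuniform DFT of an interferogram sampled
    at nonuniform wavenumbers k, at requested depths z:
    A(z) = sum_n s(k_n) * exp(i * k_n * z)."""
    return np.exp(1j * np.outer(depths, k)) @ signal

# Synthetic single-reflector interferogram on a nonlinear k sweep
k = np.sort(np.random.default_rng(1).uniform(5.0, 6.0, 512))
z_true = 30.0
signal = np.cos(k * z_true)                  # fringe from one reflector
depths = np.linspace(0.0, 60.0, 601)
a_scan = np.abs(ndft_adjoint(k, signal, depths))
z_peak = depths[np.argmax(a_scan)]           # recovered reflector depth
```

All terms of the sum align in phase only at the true reflector depth, so the A-scan peaks there despite the nonuniform k sampling; an FFT applied to the same raw samples would blur the peak.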
Modeling of the laser device for the stress therapy
NASA Astrophysics Data System (ADS)
Matveev, Nikolai V.; Shcheglov, Sergey A.; Romanova, Galina E.; Koneva, Tatiana A.
2017-05-01
Recently there has been great interest in drug-free methods for treating various diseases. For example, audiovisual therapy is used for stress therapy. The main purpose of the method is health care and well-being. Visual content in the given case is formed when laser radiation passes through optical media and elements. The therapeutic effect is achieved owing to the varying color and complicated structure of the picture, which is produced by refraction, dispersion, diffraction, and interference effects. As the laser source we use three laser sources with wavelengths of 445 nm, 520 nm, and 640 nm and optical power up to 1 W. The beam is guided to the optical element which is responsible for the final image on the dome surface. A dynamic image can be achieved by rotating the optical element while the laser beam is static, or by scanning the surface of the element. Previous research has shown that the complexity of the image is connected to the therapeutic effect. The image was chosen experimentally in practice, with the evaluation performed using a fractal dimension calculation for the produced image. In this work we model the optical image formed on the surface by the laser sources together with the optical elements. Modeling is performed in two stages: in the first stage we perform simple modeling taking into account basic geometrical effects, and then we specify the optical models of the sources.
Automatic specular reflections removal for endoscopic images
NASA Astrophysics Data System (ADS)
Tan, Ke; Wang, Bin; Gao, Yuan
2017-07-01
Endoscopic imaging provides a realistic view of the surfaces of organs inside the human body. Owing to the damp internal environment, these surfaces usually have a glossy appearance showing specular reflections. For many computer vision algorithms, the highlights created by specular reflections may become a significant source of error. In this paper, we present a novel method for restoring specular reflection regions from a single image. The specular restoration process starts by generating a substitute specular-free image with the RPCA method. The specular-removed image is then obtained by taking the binary weighting template of the highlight regions as the weighting for merging the original specular image and the substitute image. A modified template is furthermore discussed for concealing artificial effects at the edges of the specular regions. Experimental results on endoscopic images with specular reflections demonstrate the efficiency of the proposed method compared to existing methods.
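The template-weighted merging step can be sketched as below; for illustration the RPCA-generated substitute image is simply taken as given, and the intensity threshold for the binary highlight template is an assumed value.

```python
import numpy as np

def remove_specular(img, substitute, thresh=0.9):
    """Replace highlight pixels: build a binary template of specular
    regions by intensity thresholding, then merge the original image
    with a substitute specular-free image using that template."""
    template = (img > thresh).astype(float)          # 1 inside highlights
    return template * substitute + (1.0 - template) * img

img = np.full((5, 5), 0.4)                           # diffuse tissue intensity
img[2, 2] = 1.0                                      # a specular highlight
substitute = np.full((5, 5), 0.4)                    # specular-free estimate
restored = remove_specular(img, substitute)
```

Outside the template the original pixels pass through unchanged, which is why edge treatment of the template (the "modified template" above) matters for avoiding visible seams.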
Applying Standard Interfaces to a Process-Control Language
NASA Technical Reports Server (NTRS)
Berthold, Richard T.
2005-01-01
A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.
Method for localizing and isolating an errant process step
Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.
2003-01-01
A method for localizing and isolating an errant process includes the steps of retrieving, from a defect image database, a selection of images, each having image content similar to that extracted from a query image depicting a defect, with each image in the selection having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. The process step that is the most probable source of the defect according to the derived conditional probability distribution is then identified. A method for process-step defect identification includes the steps of characterizing anomalies in a product, the anomalies being detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.
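A minimal sketch of the probability step, assuming for illustration that the retrieved similar images simply vote with their process-step labels (the step names below are hypothetical):

```python
from collections import Counter

def defect_step_posterior(retrieved_step_labels):
    """Estimate P(process step | defect) from the process-step labels of
    database images retrieved as visually similar to the query defect."""
    counts = Counter(retrieved_step_labels)
    total = sum(counts.values())
    return {step: n / total for step, n in counts.items()}

# Hypothetical retrieval result: step labels of the 10 most similar images.
labels = ["etch", "etch", "litho", "etch", "deposit",
          "etch", "litho", "etch", "etch", "etch"]
posterior = defect_step_posterior(labels)
most_probable = max(posterior, key=posterior.get)  # highest-probability step
```

The step with the largest posterior mass is flagged as the most probable source of the defect.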
Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin
2016-12-01
Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source-imaging algorithms both to find the network nodes [regions of interest (ROI)] and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of the underlying brain networks. Computer simulation studies, in which the underlying network (nodes and connectivity pattern) is known, were performed; additionally, the approach was evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or magnetoencephalography (MEG). Localization errors of network nodes were less than 5 mm, and normalized connectivity errors were approximately 20%, when estimating the underlying brain networks in the simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combining source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node location and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.
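The Granger idea applied to the extracted time-courses can be sketched in its simplest bivariate form: past samples of a driving node should reduce the prediction error of the driven node. The toy autoregressive example below is illustrative only; the study's actual pipeline uses source-imaging-derived ROI series and full multivariate models with statistical testing:

```python
import numpy as np

def granger_improvement(x, y, ):
    """Does the past of y help predict x? Ratio of residual sums of
    squares of an order-1 AR model of x without vs. with lagged y
    (bare-bones bivariate Granger idea; real analyses use proper model
    order selection and significance tests)."""
    X_restricted = np.column_stack([x[:-1]])          # own past only
    X_full = np.column_stack([x[:-1], y[:-1]])        # plus past of y
    target = x[1:]
    def rss(M):
        coef, *_ = np.linalg.lstsq(M, target, rcond=None)
        r = target - M @ coef
        return float(r @ r)
    return rss(X_restricted) / rss(X_full)            # >> 1: y drives x

rng = np.random.default_rng(1)
y = rng.normal(size=500)
x = np.zeros(500)
for t in range(1, 500):
    # x is strongly driven by the past of y (synthetic ground truth)
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.normal()
ratio_xy = granger_improvement(x, y)   # large: y Granger-causes x
ratio_yx = granger_improvement(y, x)   # near 1: x does not drive y
```

The asymmetry of the two ratios is what delineates directional connectivity between a pair of nodes.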
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Peng; Hutton, Brian F.; Holstensson, Maria
2015-12-15
Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for 99mTc/123I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using 99mTc and 123I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies. The performance of the proposed method was also compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter, while the proposed method provided more accurate scatter estimation by considering the low energy tail effect.
In the phantom study, improved defect contrasts were observed with both correction methods compared to no correction, especially for the images of 99mTc in dual-radionuclide imaging where there is heavy contamination from 123I. In this case, the nontransmural defect contrast was improved from 0.39 to 0.47 with the TEW method and to 0.51 with the proposed method, and the transmural defect contrast was improved from 0.62 to 0.74 with the TEW method and to 0.73 with the proposed method. In the patient study, the proposed method provided higher myocardium-to-blood pool contrast than the TEW method. Similar to the phantom experiment, the improvement was most substantial for the images of 99mTc in dual-radionuclide imaging. In this case, the myocardium-to-blood pool ratio was improved from 7.0 to 38.3 with the TEW method and to 63.6 with the proposed method. Compared to the TEW method, the proposed method also provided higher count levels in the reconstructed images in both phantom and patient studies, indicating reduced overestimation of scatter. Using the proposed method, consistent reconstruction results were obtained for both single-radionuclide data with scatter correction and dual-radionuclide data with scatter and crosstalk corrections, in both phantom and human studies. Conclusions: The authors demonstrate that the TEW method leads to overestimation of scatter and crosstalk for the CZT-based imaging system, while the proposed scatter and crosstalk correction method can provide more accurate self-scatter and down-scatter estimations for quantitative single-radionuclide and dual-radionuclide imaging.
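For reference, the conventional TEW estimate that the proposed method improves upon interpolates the scatter under the photopeak window from two narrow side windows; a minimal sketch (the window widths and counts below are illustrative):

```python
def tew_scatter_estimate(c_left, c_right, w_left, w_right, w_peak):
    """Conventional triple-energy-window (TEW) scatter estimate:
    trapezoidal interpolation of the two narrow side-window counts
    across the photopeak window."""
    return (c_left / w_left + c_right / w_right) * w_peak / 2.0

# Hypothetical counts: 200 in a 3 keV lower window, 50 in a 3 keV upper
# window, with a 20 keV wide photopeak window.
scatter = tew_scatter_estimate(200, 50, 3.0, 3.0, 20.0)
# For a CZT detector the low-energy tail inflates c_left, which is why
# this estimate is biased high, as the abstract reports.
```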
Techniques of noninvasive optical tomographic imaging
NASA Astrophysics Data System (ADS)
Rosen, Joseph; Abookasis, David; Gokhler, Mark
2006-01-01
Recently invented methods of optical tomographic imaging through scattering and absorbing media are presented. In one method, the three-dimensional structure of an object hidden between two biological tissues is recovered from many noisy speckle pictures obtained at the output of a multi-channeled optical imaging system. Objects are recovered from many speckled images observed by a digital camera through two stereoscopic microlens arrays. Each microlens in each array generates a speckle image of the object buried between the layers. In the computer, each image is Fourier transformed jointly with an image of the speckled point-like source captured under the same conditions. A set of the squared magnitudes of the Fourier-transformed pictures is accumulated to form a single average picture. This final picture is again Fourier transformed, resulting in the three-dimensional reconstruction of the hidden object. In the other method, the effect of spatial longitudinal coherence is used for imaging through an absorbing layer whose thickness, or index of refraction, varies along the layer. The technique is based on the synthesis of a multiple-peak spatial degree of coherence. This degree of coherence enables us to scan simultaneously different sample points at different altitudes, and thus decreases the acquisition time. The same multi-peak degree of coherence is also used for imaging through the absorbing layer. All our experiments were performed with a quasi-monochromatic light source; therefore, problems of dispersion and inhomogeneous absorption are avoided.
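The accumulation step of the first method (Fourier transform each frame, average the squared magnitudes, transform again) can be sketched as follows; the joint transform with the point-source reference image is omitted for brevity:

```python
import numpy as np

def average_power_spectrum(speckle_frames):
    """Accumulate |FFT|^2 over many speckle frames; averaging suppresses
    the random speckle phase, and the inverse transform of the averaged
    power spectrum yields the ensemble-averaged autocorrelation of the
    frames (the principle behind the recovery described above)."""
    acc = np.zeros(speckle_frames[0].shape, dtype=float)
    for frame in speckle_frames:
        acc += np.abs(np.fft.fft2(frame)) ** 2
    avg = acc / len(speckle_frames)
    return np.real(np.fft.ifft2(avg))

# Degenerate check: a single bright point has a flat power spectrum, so
# the averaged reconstruction is a point back at the origin.
frame = np.zeros((4, 4))
frame[0, 0] = 1.0
out = average_power_spectrum([frame, frame])
```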
LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources
NASA Astrophysics Data System (ADS)
Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin
2017-12-01
Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. Both are discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method for finding the locations of point sources in a continuum, without imposing a grid. The continuous formulation makes the FRI recovery performance dependent only on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) to verify that it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) to show that sources can be found using less data than would otherwise be required to find them, and (iv) to show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities.
In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable reconstruction quality compared to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended-sources, and that false-positives can be addressed by choosing an adequate model order to match the noise level.
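The core FRI machinery, the annihilating filter, can be sketched for the simplest case of K diracs on the unit interval. This illustrates only the off-grid principle; LEAP's robust denoising and interferometry-specific forward model are not reproduced:

```python
import numpy as np

def fri_dirac_locations(X, K):
    """Annihilating-filter step at the heart of FRI: given 2K+1
    consecutive Fourier coefficients X[m], m = -K..K, of a stream of K
    diracs on [0, 1), recover their off-grid locations. The filter
    h(z) = prod_k (1 - u_k z^-1) satisfies sum_l h[l] X[m-l] = 0, and
    its roots u_k = exp(-2j*pi*t_k) encode the locations."""
    X = np.asarray(X, dtype=complex)
    idx = lambda m: m + K  # array index of coefficient X[m]
    A = np.array([[X[idx(m - l)] for l in range(1, K + 1)] for m in range(K)])
    b = np.array([-X[idx(m)] for m in range(K)])
    h = np.concatenate(([1.0], np.linalg.solve(A, b)))
    roots = np.roots(h)
    return np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))

# Two diracs at t = 0.2 and 0.7 with unit amplitudes (illustrative).
true_t = np.array([0.2, 0.7])
m = np.arange(-2, 3)
X = np.exp(-2j * np.pi * np.outer(m, true_t)).sum(axis=1)
t_hat = fri_dirac_locations(X, K=2)
```

Note that the recovered locations are continuous values, not grid indices, which is exactly the property that lets FRI position sources below the perceived instrument resolution.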
Attique, Muhammad; Gilanie, Ghulam; Hafeez-Ullah; Mehmood, Malik S.; Naweed, Muhammad S.; Ikram, Masroor; Kamran, Javed A.; Vitkin, Alex
2012-01-01
Characterization of tissues like brain using magnetic resonance (MR) images, and colorization of the gray-scale image, have been reported in the literature, along with their advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of biological tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray-scale brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates voxel classification by matching the luminance of voxels of the source MR image and the provided color image, measuring the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new auto centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to gray-scale images from other imaging modalities, bringing out the additional diagnostic tissue information contained in the colorized image as described. PMID:22479421
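The hard-segmentation step can be sketched as intensity clustering into three regions; the auto centroid selection below (centroids evenly spaced over the intensity range) is a simplification for illustration, not the paper's method:

```python
import numpy as np

def three_region_kmeans(intensities, iters=20):
    """Hard segmentation of voxel intensities into three clusters, a
    stand-in for the CSF/GM/WM split described above. Centroids are
    initialized evenly across the intensity range and refined by
    standard k-means updates."""
    x = np.asarray(intensities, dtype=float)
    centroids = np.linspace(x.min(), x.max(), 3)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        for k in range(3):
            if np.any(labels == k):
                centroids[k] = x[labels == k].mean()
    return labels, centroids

# Hypothetical T2 intensities forming three well-separated tissue bands.
vox = np.array([10, 12, 11, 100, 105, 98, 200, 210, 205])
labels, cents = three_region_kmeans(vox)
```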
A source-channel coding approach to digital image protection and self-recovery.
Sarreshtedari, Saeed; Akhaee, Mohammad Ali
2015-07-01
Watermarking algorithms have recently been widely applied in the field of image forensics. One such forensic application is the protection of images against tampering, for which we need to design a watermarking algorithm fulfilling two purposes in case of tampering: 1) detecting the tampered area of the received image and 2) recovering the lost information in the tampered zones. State-of-the-art techniques accomplish these tasks using watermarks consisting of check bits and reference bits. Check bits are used for tampering detection, whereas reference bits carry information about the whole image. The problem of recovering lost reference bits still stands. This paper aims to show that, with the tampering location known, image tampering can be modeled and dealt with as an erasure error. Therefore, an appropriate design of channel code can protect the reference bits against tampering. In the proposed method, the total watermark bit-budget is dedicated to three groups: 1) source encoder output bits; 2) channel code parity bits; and 3) check bits. In the watermark embedding phase, the original image is source coded and the output bit stream is protected using an appropriate channel encoder. For image recovery, erasure locations detected by the check bits help the channel erasure decoder retrieve the original source-encoded image. Experimental results show that our proposed scheme significantly outperforms recent techniques in terms of image quality for both the watermarked and recovered images. The watermarked image quality gain is achieved by spending less of the bit-budget on the watermark, while image recovery quality is considerably improved as a consequence of the consistent performance of the designed source and channel codes.
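The erasure-channel view can be illustrated with the simplest possible channel code, a single XOR parity block: once the check bits have located a tampered (erased) block, it is rebuilt from the survivors. The actual scheme uses a proper source coder and stronger channel codes; this is only the principle:

```python
def xor_parity(blocks):
    """One parity block over the reference-bit blocks: any single
    erased block can be rebuilt by XOR-ing parity with the survivors."""
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

def recover_erased(blocks_with_none, parity):
    """Rebuild the single block marked None (the tampered zone, located
    by the check bits in the scheme above)."""
    missing = blocks_with_none.index(None)
    rec = parity
    for i, b in enumerate(blocks_with_none):
        if i != missing:
            rec = bytes(x ^ y for x, y in zip(rec, b))
    out = list(blocks_with_none)
    out[missing] = rec
    return out

blocks = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]
p = xor_parity(blocks)
restored = recover_erased([blocks[0], None, blocks[2]], p)  # erase block 1
```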
Quantum ghost imaging through turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, P. Ben; Howland, Gregory A.; Howell, John C.
2011-05-15
We investigate the effect of turbulence on quantum ghost imaging. We use entangled photons and demonstrate that, for a specific experimental configuration, the effect of turbulence can be greatly diminished. By decoupling the entangled photon source from the ghost-imaging central image plane, we are able to dramatically increase the ghost-image quality. When imaging a test pattern through turbulence, this method increases the imaged pattern visibility from V = 0.15 ± 0.04 to V = 0.42 ± 0.04.
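The visibility figures quoted above follow the standard definition V = (Imax − Imin)/(Imax + Imin); the intensity values below are illustrative:

```python
def visibility(i_max, i_min):
    """Pattern visibility V = (Imax - Imin) / (Imax + Imin), the metric
    used above to quantify ghost-image quality."""
    return (i_max - i_min) / (i_max + i_min)

# Intensities chosen so the visibilities match the reported improvement,
# V = 0.15 before and V = 0.42 after decoupling the source.
v_before = visibility(1.15, 0.85)
v_after = visibility(1.42, 0.58)
```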
Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun
2018-03-01
The aim of this work is to develop a gamma-ray/neutron dual-particle imager, based on rotational modulation collimators (RMCs) and pulse shape discrimination (PSD)-capable scintillators, for possible applications in radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm, utilizing the maximum-likelihood expectation-maximization method based on analytical modeling of source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio, showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators.
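The maximum-likelihood expectation-maximization update used for the reconstruction can be sketched generically; the 2 × 2 system matrix below is a toy stand-in for the analytically modeled RMC response:

```python
import numpy as np

def mlem(system_matrix, measured, n_iter=200):
    """MLEM image update, x <- x * (A^T (y / Ax)) / (A^T 1), iterated
    from a uniform initial image. Non-negativity is preserved
    automatically because every factor is non-negative."""
    A = np.asarray(system_matrix, dtype=float)
    y = np.asarray(measured, dtype=float)
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image (A^T 1)
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12        # guard against division by zero
        x = x * (A.T @ (y / proj)) / sens
    return x

# Tiny hypothetical system: 2 detector bins observing 2 source pixels.
A = np.array([[0.9, 0.1], [0.2, 0.8]])
y = A @ np.array([3.0, 1.0])           # noise-free measurement
x_hat = mlem(A, y)
```

On noise-free data from an invertible system the iteration converges to the true source distribution.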
3D-SIFT-Flow for atlas-based CT liver image segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Yan, E-mail: xuyan04@gmail.com; Xu, Chenchao, E-mail: chenchaoxu33@gmail.com; Kuang, Xiao, E-mail: kuangxiao.ace@gmail.com
Purpose: In this paper, the authors proposed a new 3D registration algorithm, 3D-scale invariant feature transform (SIFT)-Flow, for multiatlas-based liver segmentation in computed tomography (CT) images. Methods: In the registration work, the authors developed a new registration method that takes advantage of dense correspondence using the informative and robust SIFT feature. The authors computed the dense SIFT features for the source image and the target image and designed an objective function to obtain the correspondence between these two images. Labeling of the source image was then mapped to the target image according to this correspondence, resulting in accurate segmentation. In the fusion work, the 2D-based nonparametric label transfer method was extended to 3D for fusing the registered 3D atlases. Results: Compared with existing registration algorithms, 3D-SIFT-Flow has a particular advantage in matching anatomical structures (such as the liver) that exhibit large variation/deformation. The authors observed consistent improvement over widely adopted state-of-the-art registration methods such as ELASTIX, ANTS, and multiatlas fusion methods such as joint label fusion. Experimental results of liver segmentation on the MICCAI 2007 Grand Challenge are encouraging, e.g., a Dice overlap ratio of 96.27% ± 0.96% by our method compared with the previous state-of-the-art result of 94.90% ± 2.86%. Conclusions: Experimental results show that 3D-SIFT-Flow is robust for segmenting the liver from CT images, which involves large tissue deformation and blurry boundaries, and that 3D label transfer is effective and efficient for improving the registration accuracy.
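The Dice overlap ratio used to report accuracy is straightforward to compute; a minimal sketch:

```python
import numpy as np

def dice_overlap(a, b):
    """Dice overlap ratio 2|A∩B| / (|A| + |B|) between two binary
    segmentation masks, the accuracy metric reported above."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

seg = np.array([1, 1, 1, 0, 0])   # predicted mask (toy example)
ref = np.array([1, 1, 0, 0, 0])   # reference mask
d = dice_overlap(seg, ref)        # 2*2 / (3 + 2)
```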
Dissociation of item and source memory in rhesus monkeys.
Basile, Benjamin M; Hampton, Robert R
2017-09-01
Source memory, or memory for the context in which a memory was formed, is a defining characteristic of human episodic memory and source memory errors are a debilitating symptom of memory dysfunction. Evidence for source memory in nonhuman primates is sparse despite considerable evidence for other types of sophisticated memory and the practical need for good models of episodic memory in nonhuman primates. A previous study showed that rhesus monkeys confused the identity of a monkey they saw with a monkey they heard, but only after an extended memory delay. This suggests that they initially remembered the source - visual or auditory - of the information but forgot the source as time passed. Here, we present a monkey model of source memory that is based on this previous study. In each trial, monkeys studied two images, one that they simply viewed and touched and the other that they classified as a bird, fish, flower, or person. In a subsequent memory test, they were required to select the image from one source but avoid the other. With training, monkeys learned to suppress responding to images from the to-be-avoided source. After longer memory intervals, monkeys continued to show reliable item memory, discriminating studied images from distractors, but made many source memory errors. Monkeys discriminated source based on study method, not study order, providing preliminary evidence that our manipulation of retention interval caused errors due to source forgetting instead of source confusion. Finally, some monkeys learned to select remembered images from either source on cue, showing that they did indeed remember both items and both sources. This paradigm potentially provides a new model to study a critical aspect of episodic memory in nonhuman primates. Copyright © 2017 Elsevier B.V. All rights reserved.
An adaptive block-based fusion method with LUE-SSIM for multi-focus images
NASA Astrophysics Data System (ADS)
Zheng, Jianing; Guo, Yongcai; Huang, Yukun
2016-09-01
Because of lenses' limited depth of field, digital cameras cannot acquire an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem, but block-based multi-focus image fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed, which utilizes characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm, with LUE-SSIM as the objective function, is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM in quality assessment of Gaussian defocus blur images. In addition, a multi-focus image fusion experiment was carried out to verify the proposed fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing blocking artifacts in the fused image, and that it effectively preserves undistorted edge details in the in-focus regions of the source images.
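For context, the base SSIM metric that LUE-SSIM extends can be computed over a single window as follows; the HVS-based undistorted-edge term of LUE-SSIM is not reproduced here:

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window structural similarity (SSIM) between two blocks,
    using the standard stabilizing constants c1 = (0.01 L)^2 and
    c2 = (0.03 L)^2."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.random.default_rng(0).uniform(0, 255, (8, 8))
perfect = ssim_global(a, a)   # identical blocks give SSIM = 1
```

A block-based fusion method would evaluate such a metric per block to decide which source image is better focused there; LUE-SSIM additionally weights edge fidelity to penalize blocking artifacts.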
NASA Astrophysics Data System (ADS)
Forte, Paulo M. F.; Felgueiras, P. E. R.; Ferreira, Flávio P.; Sousa, M. A.; Nunes-Pereira, Eduardo J.; Bret, Boris P. J.; Belsley, Michael S.
2017-01-01
An automatic optical inspection system for detecting local defects on specular surfaces is presented. The system uses an image display to produce a sequence of structured diffuse illumination patterns and a digital camera to acquire the corresponding sequence of images. An image enhancement algorithm, which measures the local intensity variations between bright- and dark-field illumination conditions, yields a final image in which the defects are revealed with high contrast. Subsequently, an image segmentation algorithm, which statistically compares the enhanced image of the inspected surface with the corresponding image for a defect-free template, separates defects from non-defects using an adjustable decision threshold. The method can be applied to shiny surfaces of any material, including metal, plastic, and glass. The described method was tested on the plastic surface of a car dashboard; we were able to detect not only scratches but also dust and fingerprints. In our experiment, we observed a detection contrast increase from about 40%, when using an extended light source, to more than 90% when using a structured light source. The presented method is simple, robust, and can be carried out with short cycle times, making it appropriate for industrial environments.
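The two stages can be sketched with a plain absolute-difference stand-in for the statistical comparison (the array values and threshold below are illustrative):

```python
import numpy as np

def enhance_and_segment(bright, dark, ref_bright, ref_dark, thresh):
    """Sketch of the pipeline above: (1) enhancement as the per-pixel
    intensity variation between bright- and dark-field frames, then
    (2) segmentation by comparing against the same map for a defect-free
    template, with an adjustable decision threshold. A simple absolute
    difference replaces the paper's statistical comparison."""
    enhanced = np.abs(np.asarray(bright, float) - np.asarray(dark, float))
    reference = np.abs(np.asarray(ref_bright, float) - np.asarray(ref_dark, float))
    return np.abs(enhanced - reference) > thresh

# A defect at the second pixel scatters light into the dark field,
# shrinking its bright/dark variation relative to the clean template.
defect_map = enhance_and_segment([[100, 100]], [[10, 80]],
                                 [[100, 100]], [[10, 10]], thresh=30)
```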
Coupled dictionary learning for joint MR image restoration and segmentation
NASA Astrophysics Data System (ADS)
Yang, Xuesong; Fan, Yong
2018-03-01
To achieve better segmentation of MR images, image restoration is typically used as a preprocessing step, especially for low-quality MR images. Recent studies have demonstrated that dictionary learning methods can achieve promising performance for both image restoration and image segmentation. These methods typically learn paired dictionaries of image patches from different sources and use a common sparse representation to characterize paired image patches, such as low-quality image patches and their corresponding high-quality counterparts for image restoration, and image patches and their corresponding segmentation labels for image segmentation. Since learning these dictionaries jointly in a unified framework may improve the image restoration and segmentation simultaneously, we propose a coupled dictionary learning method to concurrently learn dictionaries for joint image restoration and image segmentation based on sparse representations in a multi-atlas image segmentation framework. In particular, three dictionaries, including a dictionary of low-quality image patches, a dictionary of high-quality image patches, and a dictionary of segmentation label patches, are learned in a unified framework so that the learned dictionaries for image restoration and segmentation can benefit each other. Our method has been evaluated for segmenting the hippocampus in MR T1 images collected with scanners of different magnetic field strengths. The experimental results demonstrate that our method achieved better image restoration and segmentation performance than state-of-the-art dictionary learning and sparse representation based image restoration and image segmentation methods.
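The coupled-dictionary idea can be illustrated in miniature: find a sparse code for a low-quality patch over the low-quality dictionary, then reuse the same code with the paired high-quality dictionary to synthesize the restored patch. The crude best-atom selection below stands in for a proper sparse coder such as OMP, and the toy dictionaries are purely illustrative:

```python
import numpy as np

def restore_patch(y, D_low, D_high, k=1):
    """Shared-sparse-code restoration: pick the k atoms of D_low that
    best match the observed patch y, fit their coefficients by least
    squares, then reuse those coefficients with the paired atoms of
    D_high to synthesize the restored patch."""
    corr = np.abs(D_low.T @ y)
    support = np.argsort(corr)[-k:]
    coef, *_ = np.linalg.lstsq(D_low[:, support], y, rcond=None)
    return D_high[:, support] @ coef

# Toy paired dictionaries: each low-quality atom is an attenuated copy
# of the corresponding high-quality atom.
D_high = np.eye(3)
D_low = 0.5 * np.eye(3)
y = np.array([0.0, 0.5, 0.0])        # low-quality observation of atom 1
x_restored = restore_patch(y, D_low, D_high)
```

The same mechanism, with a third label dictionary sharing the code, is what couples restoration and segmentation in the proposed framework.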
Imaging of neural oscillations with embedded inferential and group prevalence statistics.
Donhauser, Peter W; Florin, Esther; Baillet, Sylvain
2018-02-01
Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors for source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages.
The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience.
High-resolution reconstruction for terahertz imaging.
Xu, Li-Min; Fan, Wen-Hui; Liu, Jia
2014-11-20
We present a high-resolution (HR) reconstruction model and algorithms for terahertz imaging, taking advantage of super-resolution methodology and algorithms. The algorithms used include the projection onto convex sets (POCS) approach, the iterative backprojection approach, Lucy-Richardson iteration, and 2D wavelet decomposition reconstruction. Using the first two HR reconstruction methods, we successfully obtain HR terahertz images with improved definition and lower noise from four low-resolution (LR) 22×24 terahertz images acquired with our homemade THz-TDS system under the same experimental conditions with a 1.0 mm pixel size. Using the last two HR reconstruction methods, we transform one relatively LR terahertz image into an HR terahertz image with decreased noise. This indicates the potential application of HR reconstruction methods in terahertz imaging with pulsed and continuous-wave terahertz sources.
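Of the four approaches, Lucy-Richardson iteration is the most compact to sketch; a 1-D version under an assumed known PSF (the signal and PSF below are illustrative):

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, n_iter=100):
    """Lucy-Richardson deconvolution for a 1-D signal: multiplicative
    updates est <- est * (psf* ⋆ (blurred / (psf ⋆ est))), which keep
    the estimate non-negative and progressively sharpen the image."""
    est = np.full_like(blurred, 0.5, dtype=float)
    psf_flip = psf[::-1]                       # correlation kernel
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        conv[conv == 0] = 1e-12                # avoid division by zero
        est = est * np.convolve(blurred / conv, psf_flip, mode="same")
    return est

# Blur a single spike with a small symmetric PSF, then deconvolve.
signal = np.zeros(9)
signal[4] = 1.0
psf = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(signal, psf, mode="same")
est = richardson_lucy_1d(blurred, psf)
```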
Collimator-free photon tomography
Dilmanian, F.A.; Barbour, R.L.
1998-10-06
A method of uncollimated single photon emission computed tomography includes administering a radioisotope to a patient for producing gamma ray photons from a source inside the patient. Emissivity of the photons is measured externally of the patient with an uncollimated gamma camera at a plurality of measurement positions surrounding the patient for obtaining corresponding energy spectrums thereat. Photon emissivity at the plurality of measurement positions is predicted using an initial prediction of an image of the source. The predicted and measured photon emissivities are compared to obtain differences therebetween. Prediction and comparison is iterated by updating the image prediction until the differences are below a threshold for obtaining a final prediction of the source image.
Wideband RELAX and wideband CLEAN for aeroacoustic imaging
NASA Astrophysics Data System (ADS)
Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu
2004-02-01
Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.
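The baseline point-source CLEAN loop that WB-CLEAN generalizes can be sketched in 1-D (the beam, gain, and fluxes below are illustrative; the wideband processing is not reproduced):

```python
import numpy as np

def clean_1d(dirty, beam, gain=0.1, n_iter=500, threshold=1e-3):
    """Textbook CLEAN: repeatedly find the brightest pixel of the
    residual, subtract a scaled, shifted copy of the dirty beam, and
    record the removed flux as a point-source component."""
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    half = len(beam) // 2
    for _ in range(n_iter):
        peak = int(np.argmax(np.abs(residual)))
        if abs(residual[peak]) < threshold:
            break
        flux = gain * residual[peak]
        components[peak] += flux
        for j, b in enumerate(beam):      # subtract beam centred on peak
            k = peak + j - half
            if 0 <= k < len(residual):
                residual[k] -= flux * b
    return components, residual

# One point source of flux 2 at index 5, observed through a toy beam.
beam = np.array([0.3, 1.0, 0.3])
dirty = np.zeros(11)
for j, b in enumerate(beam):
    dirty[5 + j - 1] += 2.0 * b
components, residual = clean_1d(dirty, beam)
```

The loop gathers nearly all the flux into the correct pixel and drives the residual below the stopping threshold.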
A modified JPEG-LS lossless compression method for remote sensing images
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua
2015-12-01
Like many variable-length source coders, JPEG-LS is highly vulnerable to channel errors, which occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affects its robustness. The common method of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to each image, and the compression efficiency is close to that of the conventional JPEG-LS.
NASA Astrophysics Data System (ADS)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman
2017-02-01
Fluorescence molecular tomography (FMT) is a promising tool for real-time in vivo quantification of neurotransmission (NT), as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied because the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied to the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves the quantitation through accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm⁻¹, absorption coefficient: 0.1 cm⁻¹) and tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method reconstructed a substantially more accurate fluorescence distribution overall. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
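The first two steps of this pipeline can be sketched on a toy linear system. Here ISTA stands in for the paper's homotopy solver, and the matrix sizes, rank, and source values are all invented for illustration:

```python
import numpy as np

def tsvd_precondition(A, y, rank):
    """Step 1: truncated SVD. Dropping small singular values and whitening
    reduces the ill-conditioning of the system matrix before the sparse
    solve (a simplification of the paper's first step)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_t = Vt[:rank]                       # rows are orthonormal
    y_t = (U[:, :rank].T @ y) / s[:rank]  # data mapped into whitened space
    return A_t, y_t

def ista(A, y, lam, n_iter=300):
    """Step 2: l1-regularized least squares. ISTA is used here as a simple
    stand-in for the homotopy solver in the paper."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# A 3-sparse "fluorophore distribution" recovered from 40 coherent measurements.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [2.0, 1.5, 3.0]
y = A @ x_true
A_t, y_t = tsvd_precondition(A, y, rank=35)
x_hat = ista(A_t, y_t, lam=0.05)
```

The whitened system has orthonormal rows, which is what makes the subsequent sparse solve well behaved; the paper's third MLEM step (Poisson noise modeling) is omitted here.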
A line-source method for aligning on-board and other pinhole SPECT systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Susu; Bowsher, James; Yin, Fang-Fang
2013-12-15
Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources.
When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of the Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with the limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
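The (α, ρ) normal-form parameterization of a line that the Radon transform recovers can be illustrated with a total-least-squares fit on synthetic points. This is a simplified stand-in for Radon-space peak detection, and the line parameters below are invented:

```python
import numpy as np

def line_normal_form(points):
    """Fit a line to 2-D points and return its normal form (alpha, rho),
    i.e. x*cos(alpha) + y*sin(alpha) = rho, the same parameterization the
    Radon transform recovers. Total least squares via SVD/PCA is used here
    as a simple stand-in for peak finding in Radon space."""
    c = points.mean(axis=0)
    # Direction of smallest variance of the centered cloud = line normal.
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]
    alpha = np.arctan2(n[1], n[0])
    rho = float(n @ c)
    if rho < 0:                      # keep rho >= 0 by flipping the normal
        alpha, rho = alpha + np.pi, -rho
    return alpha % (2 * np.pi), rho

# Noisy samples of the line x*cos(30 deg) + y*sin(30 deg) = 5.
rng = np.random.default_rng(2)
alpha_true, rho_true = np.deg2rad(30.0), 5.0
t = np.linspace(-10, 10, 200)
d = np.array([-np.sin(alpha_true), np.cos(alpha_true)])   # line direction
pts = (rho_true * np.array([np.cos(alpha_true), np.sin(alpha_true)])
       + t[:, None] * d + 0.01 * rng.normal(size=(200, 2)))
alpha_hat, rho_hat = line_normal_form(pts)
```

In the paper's setting, several such (α, ρ) pairs feed the nonlinear least-squares solve for the six alignment parameters.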
Breast ultrasound computed tomography using waveform inversion with source encoding
NASA Astrophysics Data System (ADS)
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm² reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
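The source-encoding idea can be sketched on a linear toy problem: instead of accumulating one gradient per source, a random ±1 encoding collapses all sources into a single encoded gradient whose expectation equals the full gradient. The wave-equation misfit of the actual WISE method is replaced here by per-source least-squares terms, and all sizes and values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
K, m, n = 16, 8, 5                      # sources, data per source, unknowns
As = [rng.normal(size=(m, n)) for _ in range(K)]
c_true = rng.normal(size=n)             # shared unknown ("sound speed")
ds = [A @ c_true for A in As]           # noiseless per-source data

def encoded_sgd(As, ds, n_iter=1500, lr=0.002, seed=4):
    """Each iteration draws random +/-1 encoding weights w, forms a single
    encoded operator and encoded data vector, and takes one gradient step.
    Because E[w_j * w_k] = delta_jk, the encoded gradient is an unbiased
    estimate of the full sum-over-sources gradient."""
    rng = np.random.default_rng(seed)
    c = np.zeros(As[0].shape[1])
    for _ in range(n_iter):
        w = rng.choice([-1.0, 1.0], size=len(As))
        Aw = sum(wk * A for wk, A in zip(w, As))
        dw = sum(wk * d for wk, d in zip(w, ds))
        c -= lr * (Aw.T @ (Aw @ c - dw))   # one encoded gradient step
    return c

c_hat = encoded_sgd(As, ds)
```

One encoded gradient costs roughly 1/K of a full-cost gradient, which is the source of the acceleration reported in the paper.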
NASA Astrophysics Data System (ADS)
Allman, Derek; Reiter, Austin; Bell, Muyinatu
2018-02-01
We previously proposed a method of removing reflection artifacts in photoacoustic images that uses deep learning. Our approach generally relies on using simulated photoacoustic channel data to train a convolutional neural network (CNN) that is capable of distinguishing sources from artifacts based on unique differences in their spatial impulse responses (manifested as depth-based differences in wavefront shapes). In this paper, we directly compare a CNN trained with our previous continuous transducer model to a CNN trained with an updated discrete acoustic receiver model that more closely matches an experimental ultrasound transducer. These two CNNs were trained with simulated data and tested on experimental data. The CNN trained using the continuous receiver model correctly classified 100% of sources and 70.3% of artifacts in the experimental data. In contrast, the CNN trained using the discrete receiver model correctly classified 100% of sources and 89.7% of artifacts in the experimental images. The 19.4% increase in artifact classification accuracy indicates that an acoustic receiver model that closely mimics the experimental transducer plays an important role in improving the classification of artifacts in experimental photoacoustic data. Results are promising for developing a method to display CNN-based images that remove artifacts in addition to only displaying network-identified sources as previously proposed.
Lp-Norm Regularization in Volumetric Imaging of Cardiac Current Sources
Rahimi, Azar; Xu, Jingjia; Wang, Linwei
2013-01-01
Advances in computer vision have substantially improved our ability to analyze the structure and mechanics of the heart. In comparison, our ability to observe and analyze cardiac electrical activities is much limited. The progress to computationally reconstruct cardiac current sources from noninvasive voltage data sensed on the body surface has been hindered by the ill-posedness and the lack of a unique solution of the reconstruction problem. Common L2- and L1-norm regularizations tend to produce a solution that is either too diffused or too scattered to reflect the complex spatial structure of current source distribution in the heart. In this work, we propose a general regularization with Lp-norm (1 < p < 2) constraint to bridge the gap and balance between an overly smeared and overly focal solution in cardiac source reconstruction. In a set of phantom experiments, we demonstrate the superiority of the proposed Lp-norm method over its L1 and L2 counterparts in imaging cardiac current sources with increasing extents. Through computer-simulated and real-data experiments, we further demonstrate the feasibility of the proposed method in imaging the complex structure of excitation wavefront, as well as current sources distributed along the postinfarction scar border. This ability to preserve the spatial structure of source distribution is important for revealing the potential disruption to the normal heart excitation. PMID:24348735
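An Lp penalty with 1 < p < 2 is convex and can be minimized by iteratively reweighted ridge regression, a common approach that plausibly matches the abstract's setting; the problem sizes and source values below are invented for illustration:

```python
import numpy as np

def lp_irls(A, b, lam, p, n_iter=200, eps=1e-8):
    """Lp-regularized least squares, 1 < p < 2, via iteratively reweighted
    ridge regression: each |x_i|^p term is majorized by a quadratic around
    the current iterate, so every step solves a weighted ridge system."""
    AtA, Atb = A.T @ A, A.T @ b
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # minimum-norm start
    for _ in range(n_iter):
        # Quadratic majorizer weights; eps guards the singularity at 0.
        w = p * np.maximum(np.abs(x), eps) ** (p - 2.0)
        x = np.linalg.solve(2.0 * AtA + lam * np.diag(w), 2.0 * Atb)
    return x

# Underdetermined toy problem with a concentrated "current source" vector:
# p = 1.5 sits between the overly focal L1 and overly diffuse L2 solutions.
rng = np.random.default_rng(5)
A = rng.normal(size=(20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 31, 44]] = [1.5, 2.0, 1.0, 2.5]
b = A @ x_true
x_hat = lp_irls(A, b, lam=1e-2, p=1.5)
```

Varying p between 1 and 2 trades off focality against smoothness, which is the balance the abstract describes for extended cardiac sources.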
AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source
NASA Astrophysics Data System (ADS)
Nightingale, J. W.; Dye, S.; Massey, Richard J.
2018-05-01
This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies, and lensing geometries. The method's performance is excellent, with accurate light, mass, and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
Discrete frequency infrared microspectroscopy and imaging with a tunable quantum cascade laser
Kole, Matthew R.; Reddy, Rohith K.; Schulmerich, Matthew V.; Gelber, Matthew K.; Bhargava, Rohit
2012-01-01
Fourier-transform infrared imaging (FT-IR) is a well-established modality but requires the acquisition of a spectrum over a large bandwidth, even in cases where only a few spectral features may be of interest. Discrete frequency infrared (DF-IR) methods are now emerging in which a small number of measurements may provide all the analytical information needed. The DF-IR approach is enabled by the development of new sources integrating frequency selection, in particular of tunable, narrow-bandwidth sources with enough power at each wavelength to successfully make absorption measurements. Here, we describe a DF-IR imaging microscope that uses an external cavity quantum cascade laser (QCL) as a source. We present two configurations, one with an uncooled bolometer as a detector and another with a liquid nitrogen cooled Mercury Cadmium Telluride (MCT) detector and compare their performance to a commercial FT-IR imaging instrument. We examine the consequences of the coherent properties of the beam with respect to imaging and compare these observations to simulations. Additionally, we demonstrate that the use of a tunable laser source represents a distinct advantage over broadband sources when using a small aperture (narrower than the wavelength of light) to perform high-quality point mapping. The two advances highlight the potential application areas for these emerging sources in IR microscopy and imaging. PMID:23113653
NASA Astrophysics Data System (ADS)
Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun
2018-06-01
Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. 
This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.
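The core idea of biasing an ill-posed EEG inverse toward fMRI-supported locations can be sketched with a weighted minimum-norm estimate, a static-prior simplification of the spatiotemporally varying priors used by DBTN. The lead field, source index, and prior region below are all invented:

```python
import numpy as np

def weighted_mne(L, y, w, lam=1e-6):
    """Weighted minimum-norm estimate: source i gets prior variance w_i,
    so locations with strong fMRI support are favored when inverting the
    underdetermined EEG forward model y = L x."""
    W = np.diag(w)
    G = L @ W @ L.T + lam * np.eye(L.shape[0])
    return W @ L.T @ np.linalg.solve(G, y)

rng = np.random.default_rng(7)
n_sensors, n_sources = 24, 40
L = rng.normal(size=(n_sensors, n_sources))   # toy lead-field matrix
true_idx = 12
y = L[:, true_idx]                            # scalp data from one source
w = np.ones(n_sources)
w[[10, 11, 12, 13]] = 5.0                     # hypothetical fMRI prior region
x_hat = weighted_mne(L, y, w)
```

With uniform weights this reduces to the ordinary minimum-norm estimate; the fMRI-derived weights are what pull the reconstruction toward the correct (possibly deep) source.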
NASA Astrophysics Data System (ADS)
Wright, L.; Coddington, O.; Pilewskie, P.
2015-12-01
Current challenges in Earth remote sensing require improved instrument spectral resolution, spectral coverage, and radiometric accuracy. Hyperspectral instruments, deployed on both aircraft and spacecraft, are a growing class of Earth observing sensors designed to meet these challenges. They collect large amounts of spectral data, allowing thorough characterization of both atmospheric and surface properties. The higher accuracy and increased spectral and spatial resolutions of new imagers require new numerical approaches for processing imagery and separating surface and atmospheric signals. One potential approach is source separation, which allows us to determine the underlying physical causes of observed changes. Improved signal separation will allow hyperspectral instruments to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. In this work, we investigate a Non-negative Matrix Factorization (NMF) method for the separation of atmospheric and land surface signal sources. NMF offers marked benefits over other commonly employed techniques, including non-negativity, which avoids physically impossible results, and adaptability, which allows the method to be tailored to hyperspectral source separation. We adapt our NMF algorithm to distinguish between contributions from different physically distinct sources by introducing constraints on spectral and spatial variability and by using library spectra to inform separation. We evaluate our NMF algorithm with simulated hyperspectral images as well as hyperspectral imagery from several instruments, including the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the NASA Hyperspectral Imager for the Coastal Ocean (HICO), and the National Ecological Observatory Network (NEON) Imaging Spectrometer.
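The NMF core that such a method builds on can be sketched with the basic Lee-Seung multiplicative updates; the paper's spectral/spatial constraints and library spectra are extra terms not shown here, and the data sizes are invented:

```python
import numpy as np

def nmf(V, r, n_iter=2000, seed=0):
    """Basic multiplicative-update NMF: V ~= W @ H with W, H >= 0.
    Multiplying by ratios of non-negative quantities preserves
    non-negativity, the physical-plausibility property cited above."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(0.1, 1.0, size=(m, r))
    H = rng.uniform(0.1, 1.0, size=(r, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update abundances
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update endmember spectra
    return W, H

# Mix two known non-negative "spectra" into 30 pixels and unmix them.
rng = np.random.default_rng(6)
S = np.abs(rng.normal(size=(8, 2)))          # 8 bands, 2 endmembers
abund = np.abs(rng.normal(size=(2, 30)))     # per-pixel abundances
V = S @ abund
W, H = nmf(V, r=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the data are exactly rank-2 and non-negative, the factorization reaches a small relative error; real hyperspectral scenes additionally need the constraints described in the abstract to make the separation physically identifiable.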
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
van Doorn, Andrea
2017-01-01
Generic red, green, and blue images can be regarded as data sources of coarse (three-bin) local spectra; typical data volumes are 10^4 to 10^7 spectra. Image databases often yield hundreds or thousands of images, yielding data sources of 10^9 to 10^10 spectra. There is usually no calibration, and there often are various nonlinear image transformations involved. However, we argue that sheer numbers make up for such ambiguity. We propose a model of spectral data mining that applies to the sublunar realm: spectra due to the scattering of daylight by objects from the generic terrestrial environment. The model involves colorimetry and ecological physics. Whereas the colorimetry is readily dealt with, one needs to handle the ecological physics with heuristic methods. The results suggest evolutionary causes of the human visual system. We also suggest effective methods to generate red, green, and blue color gamuts for various terrains. PMID:28989697
The Southern Hemisphere VLBI experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, R.A.; Meier, D.L.; Louie, A.P.
1989-07-01
Six radio telescopes were operated as the first Southern Hemisphere VLBI array in April and May 1982. Observations were made at 2.3 and 8.4 GHz. This array provided VLBI modeling and hybrid imaging of celestial radio sources in the Southern Hemisphere, high-accuracy VLBI geodesy between Southern Hemisphere sites, and subarcsecond radio astrometry of celestial sources south of declination -45 deg. The goals and implementation of the array are discussed, the methods of modeling and hybrid image production are explained, and the VLBI structure of the sources that were observed is summarized. 36 refs.
Scaled SFS method for Lambertian surface 3D measurement under point source lighting.
Ma, Long; Lyu, Yi; Pei, Xin; Hu, Yan Min; Sun, Feng Ming
2018-05-28
A Lambertian surface is a very important assumption in shape from shading (SFS), which is widely used in many measurement cases. In this paper, a novel scaled SFS method is developed to measure the shape of a Lambertian surface with dimensions. A more accurate light source model is investigated under the illumination of a simple point light source, and the relationship between the surface depth map and the recorded image grayscale is established by introducing the camera matrix into the model. Together with the constraints of brightness, smoothness, and integrability, the surface shape with dimensions can be obtained by analyzing only one image using the scaled SFS method. Algorithm simulations show close agreement between the simulated structures and the results; the reconstruction root mean square error (RMSE) is below 0.6 mm. A further experiment is performed by measuring the internal surface of a PVC tube, where the overall measurement error lies below 2%.
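The forward image-formation model that such an SFS method inverts can be sketched as Lambertian shading under a near point source, I ∝ albedo · max(0, n·l) / r², with inverse-square falloff. This is a simplified sketch (the paper additionally folds the full camera matrix into the model), and the scene parameters are invented:

```python
import numpy as np

def lambertian_image(depth, albedo, light_pos):
    """Render image irradiance of a Lambertian surface lit by a point
    source: I = albedo * max(0, n . l) / r^2, where n is the surface
    normal and l the unit direction toward the source at distance r."""
    # Surface normals from the depth-map gradients.
    gy, gx = np.gradient(depth)
    n = np.dstack([-gx, -gy, np.ones_like(depth)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    # Direction and squared distance from each surface point to the source.
    h, w = depth.shape
    X, Y = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    pts = np.dstack([X, Y, depth])
    to_light = light_pos - pts
    r2 = np.sum(to_light ** 2, axis=2)
    l = to_light / np.sqrt(r2)[..., None]
    ndotl = np.clip(np.sum(n * l, axis=2), 0.0, None)
    return albedo * ndotl / r2          # inverse-square falloff

# Flat surface: brightness peaks directly beneath the point source.
depth = np.zeros((21, 21))
img = lambertian_image(depth, albedo=1.0, light_pos=np.array([10.0, 10.0, 5.0]))
```

SFS inverts this map: given `img` (plus smoothness and integrability constraints), recover `depth`; the inverse-square term is what makes the recovered shape carry absolute scale.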
NASA Astrophysics Data System (ADS)
Poddar, Raju; Migacz, Justin V.; Schwartz, Daniel M.; Werner, John S.; Gorczynska, Iwona
2017-10-01
We present noninvasive, three-dimensional, depth-resolved imaging of human retinal and choroidal blood circulation with a swept-source optical coherence tomography (OCT) system at 1065-nm center wavelength. Motion contrast OCT imaging was performed with the phase-variance OCT angiography method. A Fourier-domain mode-locked light source was used to enable an imaging rate of 1.7 MHz. We experimentally demonstrate the challenges and advantages of wide-field OCT angiography (OCTA). In the discussion, we consider acquisition time, scanning area, scanning density, and their influence on visualization of selected features of the retinal and choroidal vascular networks. The OCTA imaging was performed with a field of view of 16 deg (5 mm×5 mm) and 30 deg (9 mm×9 mm). Data were presented in en face projections generated from single volumes and in en face projection mosaics generated from up to 4 datasets. OCTA imaging at 1.7 MHz A-scan rate was compared with results obtained from a commercial OCTA instrument and with conventional ophthalmic diagnostic methods: fundus photography, fluorescein, and indocyanine green angiography. Comparison of images obtained from all methods is demonstrated using the same eye of a healthy volunteer. For example, imaging of retinal pathology is presented in three cases of advanced age-related macular degeneration.
Coherent diffractive imaging methods for semiconductor manufacturing
NASA Astrophysics Data System (ADS)
Helfenstein, Patrick; Mochi, Iacopo; Rajeev, Rajendran; Fernandez, Sara; Ekinci, Yasin
2017-12-01
The paradigm shift of the semiconductor industry moving from deep ultraviolet to extreme ultraviolet lithography (EUVL) brought about new challenges in the fabrication of illumination and projection optics, which constitute one of the core sources of cost of ownership for many of the metrology tools needed in the lithography process. For this reason, lensless imaging techniques based on coherent diffractive imaging started to raise interest in the EUVL community. This paper presents an overview of currently on-going research endeavors that use a number of methods based on lensless imaging with coherent light.
Reconstruction of coded aperture images
NASA Technical Reports Server (NTRS)
Bielefeld, Michael J.; Yin, Lo I.
1987-01-01
The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM has advantages over the balanced correlation method, it is computationally time-consuming because of the iterative nature of its solution. The Massively Parallel Processor (MPP), with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM in future coded-aperture experiments with the help of the MPP.
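Balanced-correlation decoding can be illustrated in one dimension with a quadratic-residue mask, a cyclic relative of the 2-D URA: correlating the detector pattern with a ±1 version of the mask yields the source plus a perfectly flat background. The scene below is invented for illustration:

```python
import numpy as np

def circ_conv(x, h):
    """Circular convolution via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

def circ_corr(g, r):
    """Circular cross-correlation: z[tau] = sum_j g[j] * r[j + tau]."""
    return np.real(np.fft.ifft(np.fft.fft(r) * np.conj(np.fft.fft(g))))

p = 11                                    # prime with p % 4 == 3
qr = {(i * i) % p for i in range(1, p)}   # quadratic residues mod p
a = np.array([1.0 if i in qr else 0.0 for i in range(p)])  # open/closed mask
g = 2 * a - 1                             # balanced (+1/-1) decoding array

o = np.zeros(p)                           # toy 1-D "sky": two point sources
o[2], o[7] = 3.0, 1.0
r = circ_conv(o, a)                       # detector pattern behind the mask
d = circ_corr(g, r)                       # balanced-correlation reconstruction
# For this mask, d = (k+1)*o - sum(o) exactly, with k = (p-1)/2 open
# elements: every source appears as a sharp peak on a constant background.
```

The flat off-peak response is the defining URA property; MEM trades this closed-form decoding for an iterative (and on the MPP, parallelizable) estimate with better noise handling.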
USDA-ARS?s Scientific Manuscript database
A high-throughput Raman chemical imaging method was developed for direct inspection of benzoyl peroxide (BPO) mixed in wheat flour. A 5 W 785 nm line laser (240 mm long and 1 mm wide) was used as a Raman excitation source in a push-broom Raman imaging system. Hyperspectral Raman images were collecte...
Scannerless loss modulated flash color range imaging
Sandusky, John V [Albuquerque, NM; Pitts, Todd Alan [Rio Rancho, NM
2008-09-02
Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.
Scannerless loss modulated flash color range imaging
Sandusky, John V [Albuquerque, NM; Pitts, Todd Alan [Rio Rancho, NM
2009-02-24
Scannerless loss modulated flash color range imaging methods and apparatus are disclosed for producing three dimensional (3D) images of a target within a scene. Apparatus and methods according to the present invention comprise a light source providing at least three wavelengths (passbands) of illumination that are each loss modulated, phase delayed and simultaneously directed to illuminate the target. Phase delayed light backscattered from the target is spectrally filtered, demodulated and imaged by a planar detector array. Images of the intensity distributions for the selected wavelengths are obtained under modulated and unmodulated (dc) illumination of the target, and the information contained in the images combined to produce a 3D image of the target.
Bindu, G; Semenov, S
2013-01-01
This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using a thin-wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods were employed for image reconstruction, with the extremity imaging performed using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method to solve the time-domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm was tested with 10% added noise, and the successful image reconstruction demonstrates its robustness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chi, E-mail: chizheung@gmail.com; Xu, Yiqing; Wei, Xiaoming
2014-07-28
Time-stretch microscopy has emerged as an ultrafast optical imaging concept offering an unprecedented combination of imaging speed and sensitivity. However, a dedicated wideband, coherent optical pulse source with high shot-to-shot stability has been required for time-wavelength mapping, the enabling process for ultrahigh-speed wavelength-encoded image retrieval. From a practical point of view, methods that relax the stringent source requirements (e.g., temporal stability and coherence) for time-stretch microscopy are thus of great value. In this paper, we demonstrate time-stretch microscopy by reconstructing the time-wavelength mapping sequence from a wideband incoherent source. Utilizing the time-lens focusing mechanism mediated by a narrow-band pulse source, this approach allows generation of a wideband incoherent source, with the spectral efficiency enhanced by a factor of 18. As a proof-of-principle demonstration, time-stretch imaging with a scan rate as high as MHz and diffraction-limited resolution is achieved based on the wideband incoherent source. We note that the concept of time-wavelength sequence reconstruction from a wideband incoherent source can also be generalized to any high-speed optical real-time measurement in which wavelength acts as the information carrier.
Resolving a z ~ 2 galaxy using adaptive coadded source-plane reconstruction
NASA Astrophysics Data System (ADS)
Sharma, Soniya; Richard, Johan; Kewley, Lisa; Yuan, Tiantian
2018-06-01
Natural magnification provided by gravitational lensing, coupled with integral field spectrographic (IFS) observations and adaptive optics (AO) imaging techniques, has become the frontier of spatially resolved studies of high-redshift galaxies (z>1). Mass models of gravitational lenses hold the key to understanding the spatially resolved source-plane (unlensed) physical properties of the background lensed galaxies. Lensing mass models very sensitively control the accuracy and precision of source-plane reconstructions of the observed lensed arcs. The effective source-plane resolution defined by the image-plane (observed) point spread function (PSF) makes it challenging to recover the unlensed (source-plane) surface brightness distribution. We conduct a detailed study to recover the source-plane physical properties of a z=2 lensed galaxy using spatially resolved observations from two different multiple images of the lensed target. To deal with PSFs from two data sets on different multiple images of the galaxy, we employ a forward (source to image) approach to merge these independent observations. Using our novel technique, we are able to present a detailed analysis of the source-plane dynamics at scales much finer than previously attainable through traditional image inversion methods. Moreover, our technique adapts to magnification, allowing us to achieve higher resolution in highly magnified regions of the source. We find strong evidence that this lensed system is a minor merger. In my talk, I present this case study of a z=2 lensed galaxy and also discuss the applications of our algorithm to the study of a plethora of lensed systems that will become available through future telescopes like JWST and GMT.
High-performance compression of astronomical images
NASA Technical Reports Server (NTRS)
White, Richard L.
1993-01-01
Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
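The H-transform at the core of this compression scheme is closely related to a two-dimensional Haar transform: each 2x2 pixel block is replaced by its sum and three difference coefficients, using only integer arithmetic so the lossless mode is exactly reversible. One decomposition level might be sketched as follows (function names and the exact scaling convention are ours, not necessarily those of the cited implementation):

```python
import numpy as np

def h_forward_level(img):
    """One level of a 2-D H-transform: sum plus three differences per 2x2 block."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    s  = a + b + c + d   # smoothed (sum) plane, recursed on at the next level
    hx = a - b + c - d   # horizontal differences
    hy = a + b - c - d   # vertical differences
    hd = a - b - c + d   # diagonal differences
    return s, hx, hy, hd

def h_inverse_level(s, hx, hy, hd):
    """Exact integer inverse of h_forward_level (each sum is a multiple of 4)."""
    a = (s + hx + hy + hd) // 4
    b = (s - hx + hy - hd) // 4
    c = (s + hx - hy - hd) // 4
    d = (s - hx - hy + hd) // 4
    out = np.empty((2 * s.shape[0], 2 * s.shape[1]), dtype=s.dtype)
    out[0::2, 0::2] = a; out[0::2, 1::2] = b
    out[1::2, 0::2] = c; out[1::2, 1::2] = d
    return out
```

Lossy operation, as the abstract describes, amounts to quantizing or thresholding the difference coefficients (mostly noise in flat sky regions) before inverting.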
NASA Astrophysics Data System (ADS)
Ihsani, Alvin; Farncombe, Troy
2016-02-01
The modelling of the projection operator in tomographic imaging is of critical importance, especially when working with algebraic methods of image reconstruction. This paper proposes a distance-driven projection method targeted at single-pinhole single-photon emission computed tomography (SPECT) imaging, since it accounts for the finite size of the pinhole and the possible tilting of the detector surface, in addition to other collimator-specific factors such as geometric sensitivity. The accuracy and execution time of the proposed method are evaluated by comparison to a ray-driven approach in which the pinhole is sub-sampled with various sampling schemes. A point-source phantom whose projections were generated using OpenGATE was first used to compare the resolution of images reconstructed with each method using the full width at half maximum (FWHM). Furthermore, a high-activity Mini Deluxe Phantom (Data Spectrum Corp., Durham, NC, USA) SPECT resolution phantom was scanned using a Gamma Medica X-SPECT system, and the signal-to-noise ratio (SNR) and structural similarity of reconstructed images were compared at various projection counts. Based on the reconstructed point-source phantom, the proposed distance-driven approach yields a lower FWHM than the ray-driven approach even when using a smaller detector resolution. Furthermore, based on the Mini Deluxe Phantom, the distance-driven approach shows consistently higher SNR and structural similarity than the ray-driven approach as the counts in the measured projections deteriorate.
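The FWHM figure of merit used to compare the reconstructions can be estimated directly from a 1-D profile through the point-source peak, with linear interpolation at the half-maximum crossings. A minimal sketch (function name and sampling assumptions are ours):

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a 1-D point-source profile,
    linearly interpolating between samples at the half-max crossings."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    left, right = above[0], above[-1]
    # interpolate the left crossing between samples left-1 and left
    if left > 0:
        l = left - 1 + (half - y[left - 1]) / (y[left] - y[left - 1])
    else:
        l = float(left)
    # interpolate the right crossing between samples right and right+1
    if right < len(y) - 1:
        r = right + (y[right] - half) / (y[right] - y[right + 1])
    else:
        r = float(right)
    return (r - l) * dx
```

For a Gaussian profile the result should approach 2*sqrt(2*ln 2)*sigma, up to interpolation error.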
Decomposition-based transfer distance metric learning for image classification.
Luo, Yong; Liu, Tongliang; Tao, Dacheng; Xu, Chao
2014-09-01
Distance metric learning (DML) is a critical factor for image analysis and pattern recognition. To learn a robust distance metric for a target task, we need abundant side information (i.e., the similarity/dissimilarity pairwise constraints over the labeled data), which is usually unavailable in practice due to the high labeling cost. This paper considers the transfer learning setting by exploiting the large quantity of side information from certain related, but different source tasks to help with target metric learning (with only a little side information). The state-of-the-art metric learning algorithms usually fail in this setting because the data distributions of the source task and target task are often quite different. We address this problem by assuming that the target distance metric lies in the space spanned by the eigenvectors of the source metrics (or other randomly generated bases). The target metric is represented as a combination of the base metrics, which are computed using the decomposed components of the source metrics (or simply a set of random bases); we call the proposed method decomposition-based transfer DML (DTDML). In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics. The main advantage of the proposed method compared with existing transfer metric learning approaches is that we directly learn the base metric coefficients instead of the target metric. As a result, far fewer variables need to be learned. We therefore obtain more reliable solutions given the limited side information, and the optimization tends to be faster. Experiments on the popular handwritten image (digit, letter) classification and challenging natural image annotation tasks demonstrate the effectiveness of the proposed method.
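The core idea, representing the target metric as a sparse combination of base metrics built from the source metrics' eigenvectors, can be illustrated with a toy sketch. Here a plain ISTA loop on a regularized least-squares objective stands in for the paper's actual optimization; all names and the objective below are our simplification:

```python
import numpy as np

def rank_one_bases(source_metrics):
    """Base metrics u u^T from the eigenvectors of each source metric."""
    bases = []
    for M in source_metrics:
        _, V = np.linalg.eigh(M)
        for i in range(V.shape[1]):
            u = V[:, i]
            bases.append(np.outer(u, u))
    return bases

def learn_sparse_combination(bases, target, lam=1e-3, lr=0.05, steps=2000):
    """ISTA on ||sum_i beta_i B_i - target||_F^2 + lam * ||beta||_1."""
    B = np.stack([b.ravel() for b in bases])   # (n_bases, d*d)
    t = target.ravel()
    beta = np.zeros(len(bases))
    for _ in range(steps):
        grad = 2.0 * B @ (B.T @ beta - t)
        beta = beta - lr * grad
        # soft-thresholding step enforces sparsity of the coefficients
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam, 0.0)
    return beta
```

Learning only the few coefficients beta, rather than the full d*d metric, is exactly the variable-count advantage the abstract describes.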
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zackay, Barak; Ofek, Eran O.
Image coaddition is one of the most basic operations that astronomers perform. In Paper I, we presented the optimal ways to coadd images in order to detect faint sources and to perform flux measurements under the assumption that the noise is approximately Gaussian. Here, we build on these results and derive from first principles a coaddition technique that is optimal for any hypothesis testing and measurement (e.g., source detection, flux or shape measurements, and star/galaxy separation), in the background-noise-dominated case. This method has several important properties. The pixels of the resulting coadded image are uncorrelated. This image preserves all the information (from the original individual images) on all spatial frequencies. Any hypothesis testing or measurement that can be done on all the individual images simultaneously, can be done on the coadded image without any loss of information. The PSF of this image is typically as narrow, or narrower than the PSF of the best image in the ensemble. Moreover, this image is practically indistinguishable from a regular single image, meaning that any code that measures any property on a regular astronomical image can be applied to it unchanged. In particular, the optimal source detection statistic derived in Paper I is reproduced by matched filtering this image with its own PSF. This coaddition process, which we call proper coaddition, can be understood as the maximum signal-to-noise ratio measurement of the Fourier transform of the image, weighted in such a way that the noise in the entire Fourier domain is of equal variance. This method has important implications for multi-epoch seeing-limited deep surveys, weak lensing galaxy shape measurements, and diffraction-limited imaging via speckle observations. The last topic will be covered in depth in future papers. We provide an implementation of this algorithm in MATLAB.
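The Fourier-domain weighting at the heart of proper coaddition can be sketched compactly: each image is matched-filtered by its own PSF, the results are summed with inverse-variance weights, and the sum is whitened so the coadd noise has equal variance at all spatial frequencies. A minimal sketch under the background-noise-dominated assumption, with per-image flux zero-points taken as 1 and all names ours:

```python
import numpy as np

def proper_coadd(images, psfs, sigmas, eps=1e-12):
    """Proper coaddition in the Fourier domain:
    R_hat = sum_j conj(P_j) M_j / sigma_j^2 / sqrt(sum_j |P_j|^2 / sigma_j^2)."""
    num = 0.0
    den = 0.0
    for M, P, s in zip(images, psfs, sigmas):
        M_hat = np.fft.fft2(M)
        P_hat = np.fft.fft2(np.fft.ifftshift(P))  # PSF supplied centred in frame
        num = num + np.conj(P_hat) * M_hat / s**2
        den = den + np.abs(P_hat)**2 / s**2
    # eps guards frequencies where every PSF is (numerically) zero
    return np.real(np.fft.ifft2(num / np.sqrt(den + eps)))

def gaussian_psf(n, sigma):
    """Normalized symmetric Gaussian PSF centred in an n x n frame."""
    y, x = np.mgrid[:n, :n] - n // 2
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()
```

The whitening denominator is what makes the coadd's pixels uncorrelated and lets the Paper I detection statistic reappear as a matched filter with the coadd's own PSF.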
Measurement of wood/plant cell or composite material attributes with computer assisted tomography
West, Darrell C.; Paulus, Michael J.; Tuskan, Gerald A.; Wimmer, Rupert
2004-06-08
A method for obtaining wood-cell attributes from cellulose-containing samples includes the steps of irradiating a cellulose-containing sample with a beam of radiation. Radiation attenuation information is collected from radiation which passes through the sample. The source is rotated relative to the sample, and the irradiation and collecting steps are repeated. A projected image of the sample is formed from the collected radiation attenuation information, the projected image including resolvable features of the cellulose-containing sample. Cell wall thickness, cell diameter (length) and cell vacuole diameter can be determined. A system for obtaining physical measures from cellulose-containing samples includes a radiation source, a radiation detector, and structure for rotating the source relative to said sample. The system forms an image of the sample from the radiation attenuation information, the image including resolvable features of the sample.
Multiscale infrared and visible image fusion using gradient domain guided image filtering
NASA Astrophysics Data System (ADS)
Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia
2018-03-01
For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, a hybrid multiscale decomposition of the source images with guided image filtering and gradient domain guided image filtering is first applied, before the weight maps at each scale are obtained using a saliency detection technique and filtering, with three different fusion rules at different scales. The three fusion rules are for the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, while the detail information of the scene is fully displayed. Experimental comparisons with state-of-the-art fusion methods show that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Visual effects can therefore be improved by using the proposed HMSD-GDGF method.
IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java
Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut
2015-01-01
Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319
An iterative method for near-field Fresnel region polychromatic phase contrast imaging
NASA Astrophysics Data System (ADS)
Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.
2017-07-01
We present an iterative method for polychromatic phase contrast imaging that is suitable for broadband illumination and which allows for the quantitative determination of the thickness of an object given the refractive index of the sample material. Experimental and simulation results suggest the iterative method provides comparable image quality and quantitative object thickness determination when compared to the analytical polychromatic transport of intensity and contrast transfer function methods. The ability of the iterative method to work over a wider range of experimental conditions means the iterative method is a suitable candidate for use with polychromatic illumination and may deliver more utility for laboratory-based x-ray sources, which typically have a broad spectrum.
Neurient: An Algorithm for Automatic Tracing of Confluent Neuronal Images to Determine Alignment
Mitchel, J.A.; Martin, I.S.
2013-01-01
A goal of neural tissue engineering is the development and evaluation of materials that guide neuronal growth and alignment. However, the methods available to quantitatively evaluate the response of neurons to guidance materials are limited and/or expensive, and may require manual tracing to be performed by the researcher. We have developed an open source, automated Matlab-based algorithm, building on previously published methods, to trace and quantify alignment of fluorescent images of neurons in culture. The algorithm is divided into three phases, including computation of a lookup table which contains directional information for each image, location of a set of seed points which may lie along neurite centerlines, and tracing neurites starting with each seed point and indexing into the lookup table. This method was used to obtain quantitative alignment data for complex images of densely cultured neurons. Complete automation of tracing allows for unsupervised processing of large numbers of images. Following image processing with our algorithm, available metrics to quantify neurite alignment include angular histograms, percent of neurite segments in a given direction, and mean neurite angle. The alignment information obtained from traced images can be used to compare the response of neurons to a range of conditions. This tracing algorithm is freely available to the scientific community under the name Neurient, and its implementation in Matlab allows a wide range of researchers to use a standardized, open source method to quantitatively evaluate the alignment of dense neuronal cultures. PMID:23384629
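The post-tracing metrics listed above (angular histograms, percent of neurite segments in a given direction, mean neurite angle) are straightforward to compute once traced segments are available. A small sketch under our own conventions (segments as endpoint pairs, orientations folded into 0 to 180 degrees; names are illustrative, not Neurient's API):

```python
import numpy as np

def segment_orientations(segments):
    """Orientation in [0, 180) degrees of each segment ((x0, y0), (x1, y1))."""
    ang = [np.degrees(np.arctan2(y1 - y0, x1 - x0)) % 180.0
           for (x0, y0), (x1, y1) in segments]
    return np.array(ang)

def alignment_metrics(segments, ref_angle=0.0, tol=15.0, bins=18):
    """Angular histogram, mean angle, and fraction of segments within
    tol degrees of a reference direction."""
    ang = segment_orientations(segments)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0))
    # angular deviation, accounting for the 180-degree wrap-around
    dev = np.minimum(np.abs(ang - ref_angle), 180.0 - np.abs(ang - ref_angle))
    return hist, ang.mean(), float(np.mean(dev <= tol))
```

Comparing these metrics across culture conditions is how the traced images quantify the response of neurons to guidance materials.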
The use of photostimulable phosphor systems for periodic quality assurance in radiotherapy.
Conte, L; Bianchi, C; Cassani, E; Monciardini, M; Mordacchini, C; Novario, R; Strocchi, S; Stucchi, P; Tanzi, F
2008-03-01
The fusion of radiological and optical images can be achieved through charging a photostimulable phosphor plate (PSP) with an exposure to a field of X- or gamma-rays, followed by exposure to an optical image which discharges the plate in relation to the amount of incident light. According to this PSP characteristic, we developed a simple method for periodic quality assurance (QA) of light/radiation field coincidence, distance indicator, field size indicators, crosshair centering, coincidence of radiation and mechanical isocenter for linear accelerators. The geometrical accuracy of radiological units can be subjected to the same QA method. Further, the source position accuracy for an HDR remote afterloader can be checked by taking an autoradiography of the radioactive source and simultaneously an optical image of a reference geometrical system.
Astronomy with the Color Blind
ERIC Educational Resources Information Center
Smith, Donald A.; Melrose, Justyn
2014-01-01
The standard method to create dramatic color images in astrophotography is to record multiple black and white images, each with a different color filter in the optical path, and then tint each frame with a color appropriate to the corresponding filter. When combined, the resulting image conveys information about the sources of emission in the…
Development of a Carbon Nanotube-Based Micro-CT and its Applications in Preclinical Research
NASA Astrophysics Data System (ADS)
Burk, Laurel May
Due to the dependence of researchers on mouse models for the study of human disease, diagnostic tools available in the clinic must be modified for use on these much smaller subjects. In addition to high spatial resolution, cardiac and lung imaging of mice presents extreme temporal challenges, and physiological gating methods must be developed in order to image these organs without motion blur. Commercially available micro-CT imaging devices are equipped with conventional thermionic x-ray sources, have a limited temporal response, and are not ideal for in vivo small animal studies. Recent development of a field-emission x-ray source with a carbon nanotube (CNT) cathode in our lab presented the opportunity to create a micro-CT device well-suited for in vivo lung and cardiac imaging of murine models of human disease. The goal of this thesis work was to present such a device, to develop and refine protocols which allow high resolution in vivo imaging of free-breathing mice, and to demonstrate the use of this new imaging tool for the study of many different disease models. In Chapter 1, I provide background information about x-rays, CT imaging, and small animal micro-CT. In Chapter 2, CNT-based x-ray sources are explained, and details of a micro-focus x-ray tube specialized for micro-CT imaging are presented. In Chapter 3, the first and second generation CNT micro-CT devices are characterized, and successful respiratory- and cardiac-gated live animal imaging of normal, wild-type mice is achieved. In Chapter 4, respiratory-gated imaging of mouse disease models is demonstrated, limitations of the method are discussed, and a new contactless respiration sensor is presented which addresses many of these limitations. In Chapter 5, cardiac-gated imaging of disease models is demonstrated, including studies of aortic calcification, left ventricular hypertrophy, and myocardial infarction. In Chapter 6, several methods for image and system improvement are explored, and radiation therapy-related micro-CT imaging is presented. Finally, in Chapter 7 I discuss future directions for this research and for the CNT micro-CT.
PET-CT image fusion using random forest and à-trous wavelet transform.
Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo
2018-03-01
New image fusion rules for multimodal medical images are proposed in this work. The fusion rules are defined by a random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments were performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform was also implemented on these slices. A new image fusion performance measure, along with four existing measures, is presented, which helps to compare the performance of the two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality and that the new measure is meaningful. Copyright © 2017 John Wiley & Sons, Ltd.
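The à-trous decomposition used here is shift-invariant: each level smooths with a B3-spline kernel whose taps are spaced 2^j samples apart ("with holes"), and each detail plane is the difference between successive smoothings, so the detail planes plus the final approximation sum exactly back to the image. The sketch below substitutes a simple max-absolute coefficient rule for the paper's random-forest selection; function names and the rule choice are ours:

```python
import numpy as np

KERNEL_1D = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3-spline taps

def atrous_smooth(img, level):
    """Separable B3-spline smoothing with taps spaced 2**level apart."""
    step = 2 ** level
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for k, w in zip((-2, -1, 0, 1, 2), KERNEL_1D):
            acc += w * np.roll(out, k * step, axis=axis)  # periodic boundaries
        out = acc
    return out

def atrous_decompose(img, levels=3):
    """Detail planes (difference of smoothings) plus final approximation."""
    approx = img.astype(float)
    details = []
    for j in range(levels):
        smooth = atrous_smooth(approx, j)
        details.append(approx - smooth)
        approx = smooth
    return details, approx

def fuse(img_a, img_b, levels=3):
    """Average the approximations; take the larger-magnitude detail coefficient."""
    da, aa = atrous_decompose(img_a, levels)
    db, ab = atrous_decompose(img_b, levels)
    fused = (aa + ab) / 2.0
    for pa, pb in zip(da, db):
        fused += np.where(np.abs(pa) >= np.abs(pb), pa, pb)
    return fused
```

Because the transform is redundant (no downsampling), summing the chosen planes is all the "inverse AWT" requires, and fusing an image with itself reconstructs it exactly.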
Optical nulling apparatus and method for testing an optical surface
NASA Technical Reports Server (NTRS)
Olczak, Eugene (Inventor); Hannon, John J. (Inventor); Dey, Thomas W. (Inventor); Jensen, Arthur E. (Inventor)
2008-01-01
An optical nulling apparatus for testing an optical surface includes an aspheric mirror having a reflecting surface for imaging light near or onto the optical surface under test, where the aspheric mirror is configured to reduce spherical aberration of the optical surface under test. The apparatus includes a light source for emitting light toward the aspheric mirror, the light source longitudinally aligned with the aspheric mirror and the optical surface under test. The aspheric mirror is disposed between the light source and the optical surface under test, and the emitted light is reflected off the reflecting surface of the aspheric mirror and imaged near or onto the optical surface under test. An optical measuring device is disposed between the light source and the aspheric mirror, where light reflected from the optical surface under test enters the optical measuring device. An imaging mirror is disposed longitudinally between the light source and the aspheric mirror, and the imaging mirror is configured to again reflect light, which is first reflected from the reflecting surface of the aspheric mirror, onto the optical surface under test.
Wronkiewicz, Mark; Larson, Eric; Lee, Adrian Kc
2016-10-01
Brain-computer interface (BCI) technology allows users to generate actions based solely on their brain signals. However, current non-invasive BCIs generally classify brain activity recorded from surface electroencephalography (EEG) electrodes, which can hinder the application of findings from modern neuroscience research. In this study, we use source imaging, a neuroimaging technique that projects EEG signals onto the surface of the brain, in a BCI classification framework. This allowed us to incorporate prior research from functional neuroimaging to target activity from a cortical region involved in auditory attention. Classifiers trained to detect attention switches performed better with source imaging projections than with EEG sensor signals. Within source imaging, including subject-specific anatomical MRI information (instead of using a generic head model) further improved classification performance. This source-based strategy also reduced accuracy variability across three dimensionality reduction techniques, a major design choice in most BCIs. Our work shows that source imaging provides clear quantitative and qualitative advantages to BCIs and highlights the value of incorporating modern neuroscience knowledge and methods into BCI systems.
Near-IR and CP-OCT Imaging of Suspected Occlusal Caries Lesions
Simon, Jacob C.; Kang, Hobin; Staninec, Michal; Jang, Andrew T.; Chan, Kenneth H.; Darling, Cynthia L.; Lee, Robert C.; Fried, Daniel
2017-01-01
Introduction: Radiographic methods have poor sensitivity for occlusal lesions, and by the time the lesions are radiolucent they have typically progressed deep into the dentin. New, more sensitive imaging methods are needed to detect occlusal lesions. In this study, cross-polarization optical coherence tomography (CP-OCT) and near-IR imaging were used to image questionable occlusal lesions (QOCs) that were not visible on radiographs but had been scheduled for restoration in 30 test subjects. Methods: Near-IR reflectance and transillumination probes incorporating a high-definition InGaAs camera and near-IR broadband light sources were used to acquire images of the lesions before restoration. The reflectance probe utilized cross-polarization and operated at wavelengths from 1500–1700 nm, where increased water absorption yields higher contrast. The transillumination probe was operated at 1300 nm, where the transparency of enamel is highest. Tomographic images (6×6×7 mm3) of the lesions were acquired using a high-speed swept-source CP-OCT system operating at 1300 nm before and after removal of the suspected lesion. Results: Near-IR reflectance imaging at 1500–1700 nm yielded significantly higher contrast (p<0.05) of the demineralization in the occlusal grooves compared with visible reflectance imaging. Stains in the occlusal grooves greatly reduced the lesion contrast in the visible range, yielding negative values. Only half of the 26 lesions analyzed showed the characteristic surface demineralization and increased reflectivity below the dentinal-enamel junction (DEJ) in 3D OCT images indicative of penetration of the lesion into the dentin. Conclusion: This study demonstrates that near-IR imaging methods have great potential for improving the early diagnosis of occlusal lesions. PMID:28339115
Design of an Image Fusion Phantom for a Small Animal microPET/CT Scanner Prototype
NASA Astrophysics Data System (ADS)
Nava-García, Dante; Alva-Sánchez, Héctor; Murrieta-Rodríguez, Tirso; Martínez-Dávalos, Arnulfo; Rodríguez-Villafuerte, Mercedes
2010-12-01
Two separate microtomography systems recently developed at Instituto de Física, UNAM, produce anatomical (microCT) and physiological (microPET) images of small animals. In this work, the development and initial tests of an image fusion method based on fiducial markers for image registration between the two modalities are presented. A modular Helix/Line-Sources phantom was designed and constructed; this phantom contains fiducial markers that can be visualized in both imaging systems. The registration was carried out by solving the Procrustes rigid-body alignment problem to obtain the rotation and translation matrices required to align the two sets of images. The microCT/microPET image fusion of the Helix/Line-Sources phantom shows excellent visual coincidence between different structures, with a calculated target registration error of 0.32 mm.
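The Procrustes rigid-body registration step can be sketched with the Kabsch SVD solution; the fiducial coordinates, rotation angle, and translation below are invented for illustration and are not the phantom's actual marker positions.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid-body (Procrustes) alignment: rotation R and
    translation t minimising ||R @ p + t - q|| over fiducial pairs,
    solved with the Kabsch SVD construction."""
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t

# Hypothetical fiducial-marker coordinates (mm) seen in the microPET frame
pet = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [5, 5, 5]])
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([2.0, -1.0, 3.0])
ct_pts = pet @ R_true.T + t_true               # same markers in the microCT frame

R, t = rigid_align(pet, ct_pts)
tre = np.linalg.norm(pet @ R.T + t - ct_pts, axis=1).mean()
print(round(tre, 9))                           # target registration error, ≈ 0 here
```

With noise-free fiducials the recovered transform matches the true one to machine precision; a nonzero target registration error, as in the abstract's 0.32 mm, reflects marker localisation uncertainty in the real images.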
NASA Astrophysics Data System (ADS)
Luo, Chun-Ling; Zhuo, Ling-Qing
2017-01-01
Imaging through atmospheric turbulence is a topic with a long history, and grand challenges still exist in the remote sensing and astronomical observation fields. In this letter, we propose a simple scheme to improve the resolution of imaging through turbulence based on the computational ghost imaging (CGI) and computational ghost diffraction (CGD) setup via laser beam shaping techniques. A unified theory of CGI and CGD through turbulence with a multi-Gaussian shaped incoherent source is developed, and numerical examples are given to show clearly the effects of the system parameters on CGI and CGD. Our results show that the atmospheric effect on the CGI and CGD system is closely related to the propagation distance between the source and the object. In addition, by properly increasing the beam order of the multi-Gaussian source, we can improve the resolution of CGI and CGD through turbulence relative to the commonly used Gaussian source. Therefore our results may find applications in remote sensing and astronomical observation.
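The correlation principle behind computational ghost imaging (without the turbulence propagation or multi-Gaussian beam modelling of the letter) can be sketched as follows; the object, pattern statistics, and pattern count are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                        # reconstructed image is n x n
obj = np.zeros((n, n))
obj[5, 7] = obj[9, 4] = obj[10, 11] = 1.0     # toy transmissive object

N = 5000                                      # number of computed patterns
patterns = rng.random((N, n, n))              # known illumination patterns
bucket = (patterns * obj).sum(axis=(1, 2))    # single-pixel "bucket" signal

# Reconstruction: covariance of the bucket signal with each pattern pixel
recon = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)
top = np.argsort(recon.ravel())[-3:]          # brightest reconstructed pixels
print(sorted(top))
```

The three brightest pixels of the covariance image coincide with the object's transmissive pixels; in the letter, the resolution of this reconstruction is what the shaped multi-Gaussian source improves under turbulence.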
Tomographic gamma ray apparatus and method
Anger, Hal O.
1976-09-07
This invention provides a radiation detecting apparatus for imaging the distribution of radioactive substances in a three-dimensional subject such as a medical patient. Radiating substances introduced into the subject are viewed by a radiation image detector that provides an image of the distribution of radiating sources within its field of view. By viewing the area of interest from two or more positions, as by scanning the detector over the area, the radiating sources seen by the detector have relative positions that are a function of their depth in the subject. The images seen by the detector are transformed into first output signals which are combined in a readout device with second output signals that indicate the position of the detector relative to the subject. The readout device adjusts the signals and provides multiple radiation distribution readouts of the subject, each readout comprising a sharply resolved picture that shows the distribution and intensity of radiating sources lying in a selected plane in the subject, while sources lying on other planes are blurred in that particular readout.
SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, S; Kaye, W; Jaworski, J
2015-06-15
Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30 keV and 3 MeV with an energy resolution of <1% FWHM at 662 keV. Compton imaging is used to map out source distributions in 4-pi space; however, it is only effective above 300 keV, where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300 keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140 keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a “shadow” or count distribution that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. Knowledge of the 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122 keV) point sources, image reconstruction is performed in real-time, on an event-by-event basis, resulting in images with an angular resolution of ∼6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for various applications worldwide, including proton therapy imaging R&D.
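The shadow-matching and backprojection decoding described above can be illustrated with a toy 1-D coded aperture; the mask length, count rate, and shift model are invented, and a real system decodes 2-D shadows using 3-D interaction positions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
mask = rng.integers(0, 2, n).astype(float)    # open/closed aperture elements

def shadow(shift):
    """Count distribution on the detector for a point source whose
    direction shifts the mask pattern by `shift` pixels."""
    return np.roll(mask, shift)

true_shift = 17
counts = rng.poisson(200 * shadow(true_shift))   # Poisson-noisy detector counts

# Backprojection: correlate the counts with every candidate mask shift;
# the mean-subtracted mask suppresses the flat background response.
scores = np.array([np.dot(counts, np.roll(mask, s) - mask.mean())
                   for s in range(n)])
print(scores.argmax())
```

The correlation peaks at the true shift because only the correct mask alignment matches every open element at once; all other shifts overlap the pattern roughly half the time and score near zero.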
Apparatus and method for generating partially coherent illumination for photolithography
Sweatt, W.C.
1999-07-06
The present invention relates to an apparatus and method for creating a bright, uniform source of partially coherent radiation for illuminating a pattern, in order to replicate an image of said pattern with a high degree of acuity. The present invention introduces a novel scatter plate into the optical path of source light used for illuminating a replicated object. The scatter plate has been designed to interrupt a focused, incoming light beam by introducing between about 8 to 24 diffraction zones blazed onto the surface of the scatter plate which intercept the light and redirect it to a like number of different positions in the condenser entrance pupil, each of which is determined by the relative orientation and the spatial frequency of the diffraction grating in each of the several zones. Light falling onto the scatter plate, therefore, generates a plurality of unphased sources of illumination as seen by the back half of the optical system. The system includes a high brightness source, such as a laser, creating light which is taken up by a beam forming optic which focuses the incoming light into a condenser which, in turn, focuses light into a field lens creating a Köhler illumination image of the source in a camera entrance pupil. The light passing through the field lens illuminates a mask which interrupts the source light as either a positive or negative image of the object to be replicated. Light passing by the mask is focused into the entrance pupil of the lithographic camera creating an image of the mask onto a receptive media. 7 figs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Jia; Christner, Jodie A.; Duan Xinhui
2012-11-15
Purpose: To estimate attenuation using cross-sectional CT images and scanned projection radiograph (SPR) images in a series of thorax and abdomen phantoms. Methods: Attenuation was quantified in terms of a water cylinder with cross-sectional area A_w from both the CT and SPR images of abdomen and thorax phantoms, where A_w is the area of a water cylinder that would absorb the same dose as the specified phantom. SPR and axial CT images were acquired using a dual-source CT scanner operated at 120 kV in single-source mode. To use the SPR image for estimating A_w, the pixel values of a SPR image were calibrated to physical water attenuation using a series of water phantoms. A_w and the corresponding diameter D_w were calculated using the derived attenuation-based methods (from either the CT or SPR image). A_w was also calculated using only geometrical dimensions of the phantoms (anterior-posterior and lateral dimensions or cross-sectional area). Results: For abdomen phantoms, the geometry-based and attenuation-based methods gave similar results for D_w. Using only geometric parameters, an overestimation of D_w ranging from 4.3% to 21.5% was found for thorax phantoms. Results for D_w using the CT image and SPR based methods agreed with each other within 4% on average in both thorax and abdomen phantoms. Conclusions: Either the cross-sectional CT or SPR images can be used to estimate patient attenuation in CT. Both are more accurate than use of only geometrical information for the task of quantifying patient attenuation. The SPR based method requires calibration of SPR pixel values to physical water attenuation and this calibration would be best performed by the scanner manufacturer.
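A common way to compute a water-equivalent area A_w and diameter D_w from a CT image, consistent with the definition above though not necessarily the authors' exact calibration, maps each pixel's HU value to attenuation relative to water.

```python
import numpy as np

def water_equivalent_diameter(hu_image, pixel_area_mm2):
    """Water-equivalent area A_w and diameter D_w from an axial CT
    image: each pixel's attenuation relative to water is HU/1000 + 1,
    so A_w = sum(HU/1000 + 1) * pixel_area."""
    rel = hu_image / 1000.0 + 1.0
    a_w = rel.sum() * pixel_area_mm2           # mm^2
    d_w = 2.0 * np.sqrt(a_w / np.pi)           # mm
    return a_w, d_w

# Sanity check: a uniform water cylinder (HU = 0) of radius 100 mm in
# air (HU = -1000), sampled on a 1 mm x 1 mm grid
y, x = np.mgrid[-150:150, -150:150]
hu = np.where(x**2 + y**2 <= 100**2, 0.0, -1000.0)
a_w, d_w = water_equivalent_diameter(hu, 1.0)
print(round(d_w, 1))
```

Air pixels contribute nothing (relative attenuation 0), so the recovered D_w for the test cylinder equals its physical 200 mm diameter, which is why attenuation-based estimates outperform purely geometric ones for low-density thorax anatomy.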
Yu, Huanzhou; Shimakawa, Ann; Hines, Catherine D. G.; McKenzie, Charles A.; Hamilton, Gavin; Sirlin, Claude B.; Brittain, Jean H.; Reeder, Scott B.
2011-01-01
Multipoint water–fat separation techniques rely on different water–fat phase shifts generated at multiple echo times to decompose water and fat. Therefore, these methods require complex source images and allow unambiguous separation of water and fat signals. However, complex-based water–fat separation methods are sensitive to phase errors in the source images, which may lead to clinically important errors. An alternative approach to quantify fat is through “magnitude-based” methods that acquire multiecho magnitude images. Magnitude-based methods are insensitive to phase errors, but cannot estimate fat-fraction greater than 50%. In this work, we introduce a water–fat separation approach that combines the strengths of both complex and magnitude reconstruction algorithms. A magnitude-based reconstruction is applied after complex-based water–fat separation to remove the effect of phase errors. The results from the two reconstructions are then combined. We demonstrate that using this hybrid method, 0–100% fat-fraction can be estimated with improved accuracy at low fat-fractions. PMID:21695724
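The complex/magnitude trade-off can be illustrated with a toy two-point Dixon voxel; the signal values are invented, and the real hybrid reconstruction operates on multiecho images with confounder corrections rather than these idealised in-phase/opposed-phase pairs.

```python
def fat_fraction_complex(s_ip, s_op):
    """Two-point Dixon with the complex (signed) opposed-phase signal:
    water = (IP + OP)/2, fat = (IP - OP)/2."""
    water = (s_ip + s_op) / 2.0
    fat = (s_ip - s_op) / 2.0
    return fat / (water + fat)

def fat_fraction_magnitude(s_ip, s_op):
    """Magnitude-based estimate: |OP| loses the sign of water - fat, so
    the result folds onto 0-50%, i.e. min(FF, 1 - FF)."""
    return (s_ip - abs(s_op)) / (2.0 * s_ip)

def fat_fraction_hybrid(s_ip, s_op):
    """Hybrid idea: let the phase-sensitive complex estimate pick the
    branch, then report the phase-insensitive magnitude value on it."""
    ff_c = fat_fraction_complex(s_ip, s_op)
    ff_m = fat_fraction_magnitude(s_ip, s_op)
    return 1.0 - ff_m if ff_c > 0.5 else ff_m

# Fat-dominant voxel: water = 2, fat = 8 -> in-phase 10, opposed-phase -6
print(fat_fraction_complex(10.0, -6.0))    # fat-fraction 0.8
print(fat_fraction_magnitude(10.0, -6.0))  # folded to 0.2
print(fat_fraction_hybrid(10.0, -6.0))     # branch restored: 0.8
```

The magnitude estimate alone cannot distinguish an 80% from a 20% fat-fraction; the complex estimate resolves the ambiguity, which is the essence of the combination described in the abstract.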
Automated motion artifact removal for intravital microscopy, without a priori information.
Lee, Sungon; Vinegoni, Claudio; Sebas, Matthew; Weissleder, Ralph
2014-03-28
Intravital fluorescence microscopy, through extended penetration depth and imaging resolution, provides the ability to image at cellular and subcellular resolution in live animals, presenting an opportunity for new insights into in vivo biology. Unfortunately, physiologically induced motion due to respiration and cardiac activity is a major source of image artifacts and imposes severe limitations on the effective imaging resolution that can ultimately be achieved in vivo. Here we present a novel imaging methodology capable of automatically removing motion artifacts during intravital microscopy imaging of organs and orthotopic tumors. The method is universally applicable to different laser scanning modalities including confocal and multiphoton microscopy, and offers artifact-free reconstructions independent of the physiological motion source and imaged organ. The methodology, which is based on raw data acquisition followed by image processing, is demonstrated here for both cardiac and respiratory motion compensation in the mouse heart, kidney, liver, pancreas, and dorsal window chamber.
Tagged Neutron Source for API Inspection Systems with Greatly Enhanced Spatial Resolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-06-04
Recently developed induced-fission and transmission imaging methods using time- and directionally-tagged neutrons offer new capabilities for characterization of fissile material configurations and enhanced detection of special nuclear materials (SNM). An Advanced Associated Particle Imaging (API) generator with higher angular resolution and neutron yield than existing systems is needed to fully exploit these methods.
Vu, Cung; Nihei, Kurt T.; Schmitt, Denis P.; Skelt, Christopher; Johnson, Paul A.; Guyer, Robert; TenCate, James A.; Le Bas, Pierre-Yves
2013-01-01
In some aspects of the disclosure, a method for creating three-dimensional images of non-linear properties and the compressional to shear velocity ratio in a region remote from a borehole using a conveyed logging tool is disclosed. In some aspects, the method includes arranging a first source in the borehole and generating a steered beam of elastic energy at a first frequency; arranging a second source in the borehole and generating a steerable beam of elastic energy at a second frequency, such that the steerable beam at the first frequency and the steerable beam at the second frequency intercept at a location away from the borehole; receiving at the borehole by a sensor a third elastic wave, created by a three wave mixing process, with a frequency equal to a difference between the first and second frequencies and a direction of propagation towards the borehole; determining a location of a three wave mixing region based on the arrangement of the first and second sources and on properties of the third wave signal; and creating three-dimensional images of the non-linear properties using data recorded by repeating the generating, receiving and determining at a plurality of azimuths, inclinations and longitudinal locations within the borehole. The method is additionally used to generate three dimensional images of the ratio of compressional to shear acoustic velocity of the same volume surrounding the borehole.
Phase retrieval by coherent modulation imaging.
Zhang, Fucai; Chen, Bo; Morrison, Graeme R; Vila-Comamala, Joan; Guizar-Sicairos, Manuel; Robinson, Ian K
2016-11-18
Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single-diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit wave. This coherent modulation imaging method removes inherent ambiguities of coherent diffraction imaging and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence and relaxes dynamic range requirements on the detector. Coherent modulation imaging provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.
Beyond crystallography: diffractive imaging using coherent x-ray light sources.
Miao, Jianwei; Ishikawa, Tetsuya; Robinson, Ian K; Murnane, Margaret M
2015-05-01
X-ray crystallography has been central to the development of many fields of science over the past century. It has now matured to a point that as long as good-quality crystals are available, their atomic structure can be routinely determined in three dimensions. However, many samples in physics, chemistry, materials science, nanoscience, geology, and biology are noncrystalline, and thus their three-dimensional structures are not accessible by traditional x-ray crystallography. Overcoming this hurdle has required the development of new coherent imaging methods to harness new coherent x-ray light sources. Here we review the revolutionary advances that are transforming x-ray sources and imaging in the 21st century. Copyright © 2015, American Association for the Advancement of Science.
Spot size measurement of a flash-radiography source using the pinhole imaging method
NASA Astrophysics Data System (ADS)
Wang, Yi; Li, Qin; Chen, Nan; Cheng, Jin-Ming; Xie, Yu-Tong; Liu, Yun-Long; Long, Quan-Hong
2016-07-01
The spot size of the X-ray source is a key parameter of a flash-radiography facility, and is usually quoted as an evaluation of the resolving power. The pinhole imaging technique is applied to measure the spot size of the Dragon-I linear induction accelerator, by which a two-dimensional spatial distribution of the source spot is obtained. Experimental measurements are performed to measure the spot image when the transportation and focusing of the electron beam are tuned by adjusting the currents of solenoids in the downstream section. The spot size of full-width at half maximum and that defined from the spatial frequency at half peak value of the modulation transfer function are calculated and discussed.
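The full-width-at-half-maximum spot-size measure can be sketched as a half-maximum crossing search on a 1-D lineout; the Gaussian profile below is a stand-in for a measured pinhole-image profile, not Dragon-I data.

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum via linear interpolation of the two
    half-maximum crossings of a 1-D spot profile."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i, j = above[0], above[-1]
    # interpolate each crossing between neighbouring samples
    left = np.interp(half, [profile[i - 1], profile[i]], [x[i - 1], x[i]])
    right = np.interp(half, [profile[j + 1], profile[j]], [x[j + 1], x[j]])
    return right - left

x = np.linspace(-10, 10, 2001)
sigma = 1.5
spot = np.exp(-x**2 / (2 * sigma**2))    # synthetic spot lineout
print(round(fwhm(x, spot), 3))           # ≈ 2.355 * sigma for a Gaussian
```

For the alternative definition mentioned in the abstract, the same profile would instead be Fourier transformed and the spot size read from the spatial frequency where the modulation transfer function drops to half its peak.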
Holographic imaging with a Shack-Hartmann wavefront sensor.
Gong, Hai; Soloviev, Oleg; Wilding, Dean; Pozzi, Paolo; Verhaegen, Michel; Vdovin, Gleb
2016-06-27
A high-resolution Shack-Hartmann wavefront sensor has been used for coherent holographic imaging, by computer reconstruction and propagation of the complex field in a lensless imaging setup. The resolution of the images obtained with the experimental data is in good agreement with diffraction theory. Although a proper calibration with a reference beam improves the image quality, the method has potential for reference-less holographic imaging with spatially coherent monochromatic and narrowband polychromatic sources in microscopy and imaging through turbulence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, S; Meredith, R; Azure, M
Purpose: To support the phase I trial for toxicity, biodistribution and pharmacokinetics of intra-peritoneal (IP) 212Pb-TCMC-trastuzumab in patients with HER-2 expressing malignancy. A whole-body gamma camera imaging method was developed for estimating the amount of 212Pb-TCMC-trastuzumab left in the peritoneal cavity. Methods: 212Pb decays to 212Bi via beta emission. 212Bi emits an alpha particle at an average energy of 6.1 MeV. The 238.6 keV gamma ray, with a 43.6% yield, can be exploited for imaging. An initial phantom was made of saline bags with 212Pb. Images were collected for 238.6 keV with a medium-energy general-purpose collimator. There are other high-energy gamma emissions (e.g. 511 keV, 8%; 583 keV, 31%) that penetrate the septa of the collimator and contribute scatter into the 238.6 keV window. An upper scatter window was used for scatter correction for these high-energy gammas. Results: A small source containing 212Pb can be easily visualized. Scatter correction on images of a small 212Pb source resulted in a ∼50% reduction in the full width at tenth maximum (FWTM), while the change in full width at half maximum (FWHM) was <10%. For photopeak images, substantial scatter around the phantom source extended to >5 cm outside; scatter correction improved image contrast by removing this scatter around the sources. Patient imaging in the 1st cohort (n=3) showed little redistribution of 212Pb-TCMC-trastuzumab out of the peritoneal cavity. Compared to the early post-treatment images, the 18-hour post-injection images illustrated the shift to a more uniform anterior/posterior abdominal distribution and the loss of intensity due to radioactive decay. Conclusion: Use of a medium-energy collimator, a 15%-width 238.6 keV photopeak window, and a 7.5% upper scatter window is adequate for quantification of 212Pb radioactivity inside the peritoneal cavity for alpha radioimmunotherapy of ovarian cancer. Research Support: AREVA Med, NIH 1UL1RR025777-01.
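The dual-energy-window correction described above can be sketched as a scaled subtraction of the upper-window image from the photopeak image; the profiles, amplitudes, and a window-width ratio of 1 below are invented toy values, not the clinical calibration.

```python
import numpy as np

def scatter_corrected(photopeak, upper_window, width_ratio):
    """Dual-energy-window scatter correction: subtract the upper-window
    image, scaled by the ratio of window widths, from the photopeak
    image, clipping negative counts to zero."""
    return np.clip(photopeak - width_ratio * upper_window, 0.0, None)

# Hypothetical 1-D count profiles across a small 212Pb-like source
x = np.arange(-30, 31, dtype=float)
true_peak = 1000.0 * np.exp(-x**2 / (2 * 2.0**2))    # source term
septal = 80.0 * np.exp(-x**2 / (2 * 12.0**2))        # broad penetration tail
photopeak = true_peak + septal                       # measured photopeak window
corrected = scatter_corrected(photopeak, septal, 1.0)

tenth = lambda p: int((p >= p.max() / 10).sum())     # samples above 1/10 max
print(tenth(photopeak), tenth(corrected))            # FWTM narrows after correction
```

As in the abstract's phantom result, the correction mostly removes the broad tail, so the width at tenth maximum shrinks markedly while the narrow core (FWHM) is barely affected.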
NASA Astrophysics Data System (ADS)
Herbonnet, Ricardo; Buddendiek, Axel; Kuijken, Konrad
2017-03-01
Context. Current optical imaging surveys for cosmology cover large areas of sky. Exploiting the statistical power of these surveys for weak lensing measurements requires shape measurement methods with subpercent systematic errors. Aims: We introduce a new weak lensing shear measurement algorithm, shear nulling after PSF Gaussianisation (SNAPG), designed to avoid the noise biases that affect most other methods. Methods: SNAPG operates on images that have been convolved with a kernel that renders the point spread function (PSF) a circular Gaussian, and uses weighted second moments of the sources. The response of such second moments to a shear of the pre-seeing galaxy image can be predicted analytically, allowing us to construct a shear nulling scheme that finds the shear parameters for which the observed galaxies are consistent with an unsheared, isotropically oriented population of sources. The inverse of this nulling shear is then an estimate of the gravitational lensing shear. Results: We identify the uncertainty of the estimated centre of each galaxy as the source of noise bias, and incorporate an approximate estimate of the centroid covariance into the scheme. We test the method on extensive suites of simulated galaxies of increasing complexity, and find that it is capable of shear measurements with multiplicative bias below 0.5 percent.
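The second-moment machinery underlying this style of shape measurement can be sketched with unweighted moments on a noiseless elliptical Gaussian; SNAPG itself uses weighted moments after PSF Gaussianisation inside a shear-nulling loop, which is not reproduced here.

```python
import numpy as np

def ellipticity(img):
    """Ellipticity components (e1, e2) from second moments of the light
    distribution about its centroid."""
    y, x = np.indices(img.shape, dtype=float)
    w = img.sum()
    xc, yc = (x * img).sum() / w, (y * img).sum() / w
    qxx = ((x - xc) ** 2 * img).sum() / w
    qyy = ((y - yc) ** 2 * img).sum() / w
    qxy = ((x - xc) * (y - yc) * img).sum() / w
    return (qxx - qyy) / (qxx + qyy), 2 * qxy / (qxx + qyy)

# Elliptical Gaussian elongated along x: sigma_x = 6, sigma_y = 3
y, x = np.indices((65, 65), dtype=float)
g = np.exp(-((x - 32) ** 2 / (2 * 6.0 ** 2) + (y - 32) ** 2 / (2 * 3.0 ** 2)))
e1, e2 = ellipticity(g)
print(round(e1, 3))   # analytically (36 - 9) / (36 + 9) = 0.6 for these sigmas
```

A gravitational shear perturbs these moments in a way that can be predicted analytically for a Gaussian PSF, which is what lets SNAPG null the shear rather than estimate it per galaxy.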
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
In order to solve the issue that the fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with those of the iterative self-organizing data analysis algorithm, for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observed operator. It then designs the objective function as a weighted sum of evaluation indices and optimizes this objective function by employing GSDA so as to obtain a higher-resolution RS image. The bullet points of the text are summarized as follows.
•The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
•This article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
•This text proposes the model operator and the observed operator as the fusion scheme for RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
Deconvolution of Images from BLAST 2005: Insight into the K3-50 and IC 5146 Star-forming Regions
NASA Astrophysics Data System (ADS)
Roy, Arabindo; Ade, Peter A. R.; Bock, James J.; Brunt, Christopher M.; Chapin, Edward L.; Devlin, Mark J.; Dicker, Simon R.; France, Kevin; Gibb, Andrew G.; Griffin, Matthew; Gundersen, Joshua O.; Halpern, Mark; Hargrave, Peter C.; Hughes, David H.; Klein, Jeff; Marsden, Gaelen; Martin, Peter G.; Mauskopf, Philip; Netterfield, Calvin B.; Olmi, Luca; Patanchon, Guillaume; Rex, Marie; Scott, Douglas; Semisch, Christopher; Truch, Matthew D. P.; Tucker, Carole; Tucker, Gregory S.; Viero, Marco P.; Wiebe, Donald V.
2011-04-01
We present an implementation of the iterative flux-conserving Lucy-Richardson (L-R) deconvolution method of image restoration for maps produced by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Compared to the direct Fourier transform method of deconvolution, the L-R operation restores images with better-controlled background noise and increases source detectability. Intermediate iterated images are useful for studying extended diffuse structures, while the later iterations truly enhance point sources to near the designed diffraction limit of the telescope. The L-R method of deconvolution is efficient in resolving compact sources in crowded regions while simultaneously conserving their respective flux densities. We have analyzed its performance and convergence extensively through simulations and cross-correlations of the deconvolved images with available high-resolution maps. We present new science results from two BLAST surveys, in the Galactic regions K3-50 and IC 5146, further demonstrating the benefits of performing this deconvolution. We have resolved three clumps within a radius of 4.5 arcmin inside the star-forming molecular cloud containing K3-50. Combining the well-resolved dust emission map with available multi-wavelength data, we have constrained the spectral energy distributions (SEDs) of five clumps to obtain masses (M), bolometric luminosities (L), and dust temperatures (T). The L-M diagram has been used as a diagnostic tool to estimate the evolutionary stages of the clumps. There are close relationships between dust continuum emission and both 21 cm radio continuum and 12CO molecular line emission. The restored extended large-scale structures in the Northern Streamer of IC 5146 have a strong spatial correlation with both SCUBA and high-resolution extinction images. A dust temperature of 12 K has been obtained for the central filament. 
We report physical properties of ten compact sources, including six associated protostars, by fitting SEDs to multi-wavelength data. All of these compact sources are still quite cold (typical temperature below ~16 K) and are above the critical Bonnor-Ebert mass. They have associated low-power young stellar objects. Further evidence for starless clumps has also been found in the IC 5146 region.
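The flux-conserving Richardson-Lucy iteration can be sketched in 1-D; the beam width, grid, and source amplitudes below are invented, and the BLAST pipeline applies the same multiplicative update on 2-D maps.

```python
import numpy as np

def conv(a, b):
    """Circular convolution via FFT (periodic boundaries)."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=a.size)

def lucy_richardson(data, psf, n_iter=50):
    """Flux-conserving Richardson-Lucy deconvolution in 1-D."""
    psf = psf / psf.sum()
    psf_flip = np.roll(psf[::-1], 1)          # adjoint (time-reversed) beam
    est = np.full_like(data, data.mean())     # flat first guess
    for _ in range(n_iter):
        ratio = data / np.maximum(conv(est, psf), 1e-12)
        est = est * conv(ratio, psf_flip)     # multiplicative L-R update
    return est

n = 128
x = np.arange(n)
beam = np.exp(-0.5 * (((x + n // 2) % n - n // 2) / 3.0) ** 2)  # sigma-3 beam
truth = np.zeros(n)
truth[40], truth[70] = 5.0, 3.0               # two point sources
data = np.maximum(conv(truth, beam / beam.sum()), 0.0)

est = lucy_richardson(data, beam, n_iter=50)
print(est.argmax())                           # position of the brighter source
```

Each update preserves the total flux of the map exactly (the sum of the estimate equals the sum of the data after the first iteration), which is the property that lets intermediate iterations be read as partially restored maps.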
A line-source method for aligning on-board and other pinhole SPECT systems.
Yan, Susu; Bowsher, James; Yin, Fang-Fang
2013-12-01
In order to achieve functional and molecular imaging while patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system, both to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT), is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of other pinhole SPECT systems. An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect the angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of the number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. In computer-simulation studies, when there was no error in determining the angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. 
When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of the Radon transform, finite line-source width, Poisson noise, the number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained with better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Alignment parameters can thus be estimated using one pinhole projection of line sources. Alignment errors are largely associated with the limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
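The mapping at the heart of the alignment model, from 3D line sources to the (α, ρ) line parameters of their pinhole projections, can be sketched as below. The focal length, geometry, and helper names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def project_pinhole(points_3d, focal_length):
    """Project 3D points (x, y, z) through an ideal pinhole at the origin
    onto a detector plane at distance focal_length along z."""
    points_3d = np.asarray(points_3d, dtype=float)
    return focal_length * points_3d[:, :2] / points_3d[:, 2:3]

def line_angle_offset(p0, p1):
    """Return (alpha, rho) of the 2D line through p0 and p1 in the normal
    parameterization used by the Radon transform:
    u*cos(alpha) + v*sin(alpha) = rho."""
    du, dv = p1[0] - p0[0], p1[1] - p0[1]
    alpha = np.arctan2(du, -dv)        # normal is perpendicular to the line direction
    rho = p0[0] * np.cos(alpha) + p0[1] * np.sin(alpha)
    if rho < 0:                        # keep rho non-negative for uniqueness
        rho, alpha = -rho, alpha + np.pi
    return alpha % (2 * np.pi), rho

# A line source along x at height y=2, depth z=10, seen with focal length 5:
endpoints = np.array([[-1.0, 2.0, 10.0], [1.0, 2.0, 10.0]])
proj = project_pinhole(endpoints, focal_length=5.0)
alpha, rho = line_angle_offset(proj[0], proj[1])
```

A nonlinear least squares fit, as in the paper, would then adjust the pose parameters inside `project_pinhole` until predicted (α, ρ) pairs match the measured ones.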
Fresnel zone plate light field spectral imaging simulation
NASA Astrophysics Data System (ADS)
Hallada, Francis D.; Franz, Anthony L.; Hawks, Michael R.
2017-05-01
Through numerical simulation, we have demonstrated a novel snapshot spectral imaging concept using binary diffractive optics. Binary diffractive optics, such as Fresnel zone plates (FZPs) or photon sieves, can be used as the single optical element in a spectral imager, performing both imaging and dispersion. In previous demonstrations of spectral imaging with diffractive optics, the detector array was physically translated along the optic axis to measure different image formation planes. In this new concept the wavelength-dependent images are constructed synthetically, using integral photography concepts commonly applied to light field (plenoptic) cameras. Light field cameras use computational digital refocusing after exposure to form images at different object distances; our concept refocuses to form images at different wavelengths instead. The simulations in this study demonstrate the concept for an imager designed with an FZP. Monochromatic light from planar sources is propagated through the system to a measurement plane using wave optics in the Fresnel approximation. Simple images, placed at optical infinity and illuminated by monochromatic sources, are digitally refocused to show different spectral bins. We show the formation of distinct images from different objects illuminated by monochromatic sources in the VIS/NIR spectrum. Additionally, the concept could easily be applied to imaging in the MWIR and LWIR ranges. In conclusion, this new type of imager offers a rugged and simple optical design for snapshot spectral imaging and warrants further development.
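Simulations of this kind rest on standard scalar wave propagation. A minimal sketch of Fresnel-regime propagation via the angular-spectrum transfer function follows; the function names, grid, and parameters are assumptions for illustration, not the authors' simulation code:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field by distance z using the paraxial
    (Fresnel) transfer function in the Fourier domain."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function; |H| = 1, so propagation is unitary
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagate a small Gaussian beam, as one step of a refocusing stack
n, dx = 128, 10e-6
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
beam = np.exp(-(X**2 + Y**2) / (2 * (80e-6) ** 2)).astype(complex)
out = angular_spectrum_propagate(beam, wavelength=633e-9, dx=dx, z=0.01)
energy_in = np.sum(np.abs(beam) ** 2)
energy_out = np.sum(np.abs(out) ** 2)
```

Because the transfer function has unit modulus, the total intensity is conserved, which is a convenient sanity check for such simulations.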
Prestack reverse time migration for tilted transversely isotropic media
NASA Astrophysics Data System (ADS)
Jang, Seonghyung; Hien, Doan Huy
2013-04-01
Given the interest in unconventional resource plays, anisotropy is naturally an important consideration for improving seismic image quality. Although prestack depth migration of seismic reflection data is well known as one of the most powerful tools for imaging complex geological structures, it may introduce migration errors if anisotropy is neglected. Asymptotic analysis of wave propagation in transversely isotropic (TI) media yields a dispersion relation for the coupled P- and SV-wave modes that can be converted to a fourth-order scalar partial differential equation (PDE). By setting the shear-wave velocity to zero, this fourth-order PDE, called an acoustic wave equation for TI media, can be reduced to a system of coupled second-order PDEs, which we solve by the finite difference method (FDM). The resulting P-wavefield simulation is kinematically similar to an elastic, anisotropic wavefield simulation. We develop a prestack depth migration algorithm for tilted transversely isotropic (TTI) media using the reverse time migration (RTM) method. RTM images the subsurface using the inner product of the source wavefield extrapolated forward in time and the receiver wavefield extrapolated backward in time. We form the subsurface image in TTI media using the inner product of the partial derivative wavefields with respect to the physical parameters and the observed data. Since the partial derivative wavefields require extremely large computing time, we instead implemented the imaging condition as the zero-lag cross-correlation of a virtual source and the back-propagated wavefield. The virtual source is calculated directly by solving the anisotropic acoustic wave equation; the back-propagated wavefield is calculated by using the shot gather as the source function in the anisotropic acoustic wave equation. 
In a numerical test on a simple geological model containing a syncline and an anticline, prestack depth migration using TTI-RTM in weakly anisotropic media produced a subsurface image similar to the true geological model used to generate the shot gathers.
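The zero-lag cross-correlation imaging condition used above can be sketched in a few lines; the toy wavefields and array shapes are illustrative assumptions:

```python
import numpy as np

def zero_lag_image(source_wf, receiver_wf):
    """RTM imaging condition: zero-lag cross-correlation in time of the
    forward-propagated source wavefield and the back-propagated receiver
    wavefield at every image point.
    source_wf, receiver_wf: arrays of shape (nt, nz, nx)."""
    return np.sum(source_wf * receiver_wf, axis=0)

# Toy wavefields: a "reflector" exists where the two fields coincide in time
nt, nz, nx = 50, 8, 8
t = np.arange(nt)
wavelet = np.exp(-0.5 * ((t - 25) / 3.0) ** 2)
src = np.zeros((nt, nz, nx))
rec = np.zeros((nt, nz, nx))
src[:, 4, 4] = wavelet               # source field arrives at (4, 4) around t=25
rec[:, 4, 4] = wavelet               # back-propagated field coincides there
rec[:, 2, 2] = np.roll(wavelet, 15)  # elsewhere the fields are misaligned in time
img = zero_lag_image(src, rec)
```

The image is large only where the two wavefields overlap at the same time sample, which is the kinematic condition for a reflector.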
NASA Technical Reports Server (NTRS)
Kim, H.; Swain, P. H.
1991-01-01
A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information from the multiple data sources. The method is applied to the problem of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the very high-dimensional data source into smaller, more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the maximum likelihood (ML) classification method when the Hughes phenomenon is apparent.
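Dempster's rule of combination, the integration step named above, can be sketched as follows. This shows a plain combination of point-valued masses; the paper's interval-valued-probability variant is more involved, and the example sources and numbers are assumptions:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions by Dempster's rule. Masses are dicts
    mapping frozenset hypotheses to belief mass; mass assigned to
    conflicting (empty-intersection) pairs is renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict
    return {h: v / k for h, v in combined.items()}

# Two hypothetical sources of evidence about ground-cover classes
theta = frozenset({"forest", "water"})              # the full frame
m_spectral = {frozenset({"forest"}): 0.6, theta: 0.4}
m_terrain = {frozenset({"forest"}): 0.5, frozenset({"water"}): 0.3, theta: 0.2}
m = dempster_combine(m_spectral, m_terrain)
```

Mass left on the full frame theta represents ignorance, which is what lets each data source commit only to the evidence it actually carries.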
Quantitative Evaluation of Hard X-ray Damage to Biological Samples using EUV Ptychography
NASA Astrophysics Data System (ADS)
Baksh, Peter; Odstrcil, Michal; Parsons, Aaron; Bailey, Jo; Deinhardt, Katrin; Chad, John E.; Brocklesby, William S.; Frey, Jeremy G.
2017-06-01
Coherent diffractive imaging (CDI) has become a standard method on a variety of synchrotron beamlines. The high-brilliance, short-wavelength radiation from these sources can be used to reconstruct the attenuation and relative phase of a sample with nanometre resolution via CDI methods. However, the interaction between the sample and high-energy ionising radiation can degrade the sample structure. We demonstrate imaging of a sample of hippocampal neurons with the ptychography method, using a laboratory-based extreme ultraviolet (EUV) source driven by high harmonic generation (HHG). The significantly increased contrast of the sample in EUV light allows identification of damage induced by exposure to 7.3 keV photons, without the EUV imaging itself causing any damage to the sample.
Innovations in the Analysis of Chandra-ACIS Observations
NASA Astrophysics Data System (ADS)
Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.
2010-05-01
As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.
Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping
2012-01-01
Analysis of incomplete data is a big challenge when integrating large-scale brain imaging datasets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. In this paper, we address this problem by proposing an incomplete Multi-Source Feature (iMSF) learning method in which all samples with at least one available data source can be used. To illustrate the proposed approach, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 NC) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithm. Depending on the problem being solved, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. To build a practical and robust system, we construct a classifier ensemble by combining our method with four other methods for missing value estimation. Comprehensive experiments with various parameters show that our proposed iMSF method and the ensemble model yield stable and promising results. PMID:22498655
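The first step of the iMSF idea, partitioning samples by which data sources are available, can be sketched as below; the modality names and data layout are assumptions, not the authors' code:

```python
from collections import defaultdict

def partition_by_availability(availability):
    """Group sample indices by their pattern of available data sources,
    so one model can be learned per availability profile.
    availability: list of dicts {modality_name: bool}."""
    groups = defaultdict(list)
    for i, avail in enumerate(availability):
        profile = frozenset(m for m, ok in avail.items() if ok)
        if profile:                 # require at least one available source
            groups[profile].append(i)
    return dict(groups)

# Four hypothetical subjects with different subsets of MRI / PET / CSF
subjects = [
    {"MRI": True,  "PET": True,  "CSF": False},
    {"MRI": True,  "PET": False, "CSF": False},
    {"MRI": True,  "PET": True,  "CSF": False},
    {"MRI": False, "PET": False, "CSF": False},   # no data at all: excluded
]
groups = partition_by_availability(subjects)
```

iMSF then couples the per-group models by forcing them to select a shared feature set within each modality, rather than imputing the missing blocks.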
TE/TM decomposition of electromagnetic sources
NASA Technical Reports Server (NTRS)
Lindell, Ismo V.
1988-01-01
Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions of a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be obtained more straightforwardly through the present decomposition method.
Determining Object Orientation from a Single Image Using Multiple Information Sources.
1984-06-01
object surface. Location of the image ellipse is accomplished by exploiting knowledge about object boundaries and image intensity gradients. The orientation information from each of these three methods is combined using a "plausibility" function.
A novel method to detect shadows on multispectral images
NASA Astrophysics Data System (ADS)
Dağlayan Sevim, Hazan; Yardımcı Çetin, Yasemin; Özışık Başkurt, Didem
2016-10-01
Shadowing occurs when the direct light coming from a light source is obstructed by tall man-made structures, mountains, or clouds. Since shadow regions are illuminated only by scattered light, the true spectral properties of objects are not observed in such regions. Therefore, many object classification and change detection problems utilize shadow detection as a preprocessing step. Besides, shadows are useful for obtaining 3D information about objects, such as estimating the height of buildings. With the growing pervasiveness of remote sensing imagery, shadow detection is ever more important. This study aims to develop a shadow detection method for multispectral images based on a transformation of the C1C2C3 color space and the contribution of NIR bands. The proposed method is tested on WorldView-2 images covering Ankara, Turkey, acquired at different times. The new index is applied to these 8-band multispectral images, which include two NIR bands, and the method is compared with existing methods in the literature.
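A heavily simplified sketch of a C1C2C3-style shadow candidate test with an added NIR darkness term follows. The thresholds and the exact combination are illustrative assumptions, not the paper's proposed index:

```python
import numpy as np

def shadow_candidates(rgb, nir, c3_thresh=0.8, nir_thresh=0.2):
    """Simplified shadow mask in the spirit of C1C2C3-based methods:
    c3 = arctan(B / max(R, G)) is high in shadow (scattered skylight is
    blue-rich), and shadow pixels are also dark in the NIR band.
    rgb: (h, w, 3) float array in [0, 1]; nir: (h, w) float array."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    c3 = np.arctan2(b, np.maximum(r, g) + 1e-12)
    return (c3 > c3_thresh) & (nir < nir_thresh)

# A sunlit gray pixel vs. a dark bluish shadow pixel
rgb = np.array([[[0.70, 0.70, 0.60],     # sunlit: neutral, bright NIR
                 [0.05, 0.06, 0.12]]])   # shadow: dark, blue-dominated
nir = np.array([[0.60, 0.05]])
mask = shadow_candidates(rgb, nir)
```

In practice such an index is thresholded per scene and cleaned up morphologically before use as a preprocessing mask.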
NASA Astrophysics Data System (ADS)
Ofek, Eran O.; Zackay, Barak
2018-04-01
Detection of templates (e.g., sources) embedded in low-count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray, γ-ray, UV, and neutrino data, and searches for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal, in some cases by considerable factors. Using the Neyman–Pearson lemma, we derive the optimal statistic for template detection in the presence of Poisson noise. We demonstrate that, for a known template shape (e.g., point sources), this method provides higher completeness, at a fixed false-alarm probability, than filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image with a Mexican-hat wavelet (as used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred to a future publication.
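For Poisson counts, the Neyman–Pearson log-likelihood ratio for a known additive template on a flat background reduces (up to constants) to cross-correlating the counts image with ln(1 + flux·PSF/background). A minimal sketch of that form of the statistic; the parameters and layout are assumptions, not the authors' MATLAB implementation:

```python
import numpy as np

def poisson_detection_map(counts, psf, flux, background):
    """Score map for detecting a known template (PSF times a trial flux)
    on a flat Poisson background: each pixel contributes
    n * ln(1 + flux*psf/background) to the log-likelihood ratio, so the
    map is the image cross-correlated with ln(1 + flux*psf/background)."""
    kernel = np.log1p(flux * psf / background)
    k = kernel.shape[0] // 2
    padded = np.pad(counts, k, mode="constant")
    h, w = counts.shape
    score = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            score[i, j] = np.sum(padded[i:i + 2 * k + 1,
                                        j:j + 2 * k + 1] * kernel)
    return score

# Plant a faint source (expected counts, kept deterministic) on background
psf = np.array([[0.05, 0.10, 0.05],
                [0.10, 0.40, 0.10],
                [0.05, 0.10, 0.05]])
bg = 2.0
img = np.full((9, 9), bg)
img[4:7, 3:6] += 6.0 * psf          # source centered at row 5, column 4
score = poisson_detection_map(img, psf, flux=6.0, background=bg)
peak = np.unravel_index(np.argmax(score), score.shape)
```

In the low-count limit this log kernel differs markedly from the PSF itself, which is why plain PSF matched filtering is sub-optimal there.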
Power, J F
2009-06-01
Light profile microscopy (LPM) is a direct method for the spectral depth imaging of thin-film cross-sections on the micrometer scale. LPM uses a perpendicular viewing configuration that directly images a source beam propagated through a thin film. Images are formed in dark-field contrast, which is highly sensitive to subtle interfacial structures that are invisible to reference methods. The independent focusing of the illumination and imaging systems allows multiple registered optical sources to be hosted on a single platform. These features make LPM a powerful multi-contrast (MC) imaging technique, demonstrated in this work with six modes of imaging in a single instrument, based on (1) broad-band elastic scatter; (2) laser-excited wideband luminescence; (3) coherent elastic scatter; (4) Raman scatter (three channels with RGB illumination); (5) wavelength-resolved luminescence; and (6) spectral broadband scatter, resolved in immediate succession. MC-LPM integrates Raman images with a wider optical and morphological picture of the sample than prior-art microprobes. Currently, MC-LPM resolves images at an effective spectral resolution better than 9 cm^-1 and a spatial resolution approaching 1 μm, with optics that operate in air at half the maximum numerical aperture of prior-art microprobes.
Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images.
Lee, Yeongjun; Choi, Jinwoo; Ko, Nak Yong; Choi, Hyun-Taek
2017-08-24
This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of sonar images such as an unstable acoustic source, heavy speckle noise, low resolution, and a single channel. However, using consecutive sonar images, if the status of an object (i.e., its existence and identity, or name) is continuously evaluated by a stochastic method, the recognition result comes with an uncertainty estimate and is more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probabilistic methods, particle filtering and Bayesian feature estimation, are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark with increased detectability by an imaging sonar, designed around the characteristics of acoustic waves, such as instability and reflection that depends on the roughness of the reflector surface. The proposed method is verified by basin experiments, and the results are presented.
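The core idea of repeatedly predicting and updating an object's status over consecutive images can be illustrated with a bare recursive Bayesian update, a simplified stand-in for the paper's particle-filter and Bayesian-feature pipeline; the identities and likelihood values are assumptions:

```python
def bayes_update(prior, likelihoods):
    """One step of the recursive status update over consecutive sonar
    frames: posterior over candidate identities given this frame's
    detection likelihoods."""
    post = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Three candidate identities; frames repeatedly favor "landmark_A"
belief = {"landmark_A": 1 / 3, "landmark_B": 1 / 3, "none": 1 / 3}
frame_likelihood = {"landmark_A": 0.7, "landmark_B": 0.2, "none": 0.1}
for _ in range(4):          # consecutive images sharpen the belief
    belief = bayes_update(belief, frame_likelihood)
```

A single noisy frame leaves the decision ambiguous; accumulating evidence over frames is what makes the status estimate, and its uncertainty, usable downstream.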
Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.
Chen, Ming; Yu, Hengyong
2015-10-01
This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for large objects. The study is conducted using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units.
Target recognition and phase acquisition by using incoherent digital holographic imaging
NASA Astrophysics Data System (ADS)
Lee, Munseob; Lee, Byung-Tak
2017-05-01
In this study, we propose incoherent digital holographic imaging (IDHI) for recognition and phase retrieval of a dedicated target. Despite the recent development of a number of target recognition techniques such as LIDAR, these have had limited success in target discrimination, in part due to low resolution, low scanning speed, and limited computational power. The proposed system consists of an incoherent light source such as an LED, a Michelson interferometer, and a digital CCD for acquisition of four phase-shifted images. First, to compare relative coherence, we used both a laser and an LED as sources. Through numerical reconstruction using the four-step phase-shifting method and the Fresnel diffraction method, we recovered the intensity and phase images of a USAF resolution target at a distance of about 1.0 m. In this experiment, we show a 1.2-fold improvement in resolution compared to conventional imaging. Finally, to confirm recognition of targets camouflaged in the same color as the background, we tested holographic imaging in incoherent light. The results show the possibility of target detection and recognition using three-dimensional shape and size signatures and the numerical distance obtained from the phase of the holographic image.
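The four-step phase-shifting reconstruction mentioned above has a compact closed form; a sketch on synthetic data (the phase ramp and modulation values are assumptions, not the authors' implementation):

```python
import numpy as np

# Four-step phase shifting: with interferograms I_k recorded at phase
# shifts of k*pi/2 (k = 0..3), each I_k = a + b*cos(phi + k*pi/2), and the
# object phase is recovered as phi = atan2(I3 - I1, I0 - I2), since
# I3 - I1 = 2b*sin(phi) and I0 - I2 = 2b*cos(phi).
x = np.linspace(0, 1, 64)
phi_true = 2.0 * x                     # synthetic phase ramp in [0, 2] rad
a, b = 1.0, 0.5                        # bias and modulation depth
I = [a + b * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_rec = np.arctan2(I[3] - I[1], I[0] - I[2])
```

The arctangent removes both the bias a and the modulation b, which is why four shifted frames suffice; phases outside (-pi, pi] would additionally need unwrapping.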
Combination of surface and borehole seismic data for robust target-oriented imaging
NASA Astrophysics Data System (ADS)
Liu, Yi; van der Neut, Joost; Arntsen, Børge; Wapenaar, Kees
2016-05-01
A novel application of seismic interferometry (SI) and Marchenko imaging using both surface and borehole data is presented. A series of redatuming schemes is proposed to combine both data sets for robust deep local imaging in the presence of velocity uncertainties. The redatuming schemes create a virtual acquisition geometry where both sources and receivers lie at the horizontal borehole level, thus only a local velocity model near the borehole is needed for imaging, and erroneous velocities in the shallow area have no effect on imaging around the borehole level. By joining the advantages of SI and Marchenko imaging, a macrovelocity model is no longer required and the proposed schemes use only single-component data. Furthermore, the schemes result in a set of virtual data that have fewer spurious events and internal multiples than previous virtual source redatuming methods. Two numerical examples are shown to illustrate the workflow and to demonstrate the benefits of the method. One is a synthetic model and the other is a realistic model of a field in the North Sea. In both tests, improved local images near the boreholes are obtained using the redatumed data without accurate velocities, because the redatumed data are close to the target.
Computed tomographic images using tube source of x rays: interior properties of the material
NASA Astrophysics Data System (ADS)
Rao, Donepudi V.; Takeda, Tohoru; Itai, Yuji; Seltzer, S. M.; Hubbell, John H.; Zeniya, Tsutomu; Akatsuka, Takao; Cesareo, Roberto; Brunetti, Antonio; Gigante, Giovanni E.
2002-01-01
An image-intensifier-based computed tomography scanner and a tube source of x rays are used to obtain images of small objects, plastics, wood, and soft materials in order to determine the interior properties of the material. A new method is developed to estimate the degree of monochromacy, total solid angle, efficiency, and geometrical effects of the measuring system, and the way to produce monoenergetic radiation. The flux emitted by the x-ray tube is filtered using appropriate filters at the chosen optimum energy; reasonable monochromacy is achieved, and the images are acceptably distinct. Much attention has been focused on the imaging of small objects of weakly attenuating materials at the optimum energy, at which it is possible to calculate a three-dimensional representation of the inner and outer surfaces of the object. The image contrast between soft materials could be significantly enhanced by optimal selection of the x-ray energy using Monte Carlo methods. The imaging system is compact and reasonably economical, has good contrast resolution, simple operation, and routine availability, and explores the use of optimized tomography for various applications.
Forensic use of photo response non-uniformity of imaging sensors and a counter method.
Dirik, Ahmet Emir; Karaküçük, Ahmet
2014-01-13
Analogous to the use of bullet scratches in forensic science, the authenticity of a digital image can be verified through the noise characteristics of its imaging sensor. In particular, photo-response non-uniformity noise (PRNU) has been used in source camera identification (SCI). However, this technique can also be used maliciously to track or inculpate innocent people. To impede such tracking, PRNU noise should be suppressed significantly. Based on this motivation, we propose a counter-forensic method to deceive SCI. Experimental results show that it is possible to impede PRNU-based camera identification for various imaging sensors while preserving image quality.
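PRNU-based source camera identification rests on averaging noise residuals into a sensor fingerprint and correlating it against a test residual. A toy sketch, with a crude mean-filter denoiser standing in for the wavelet denoisers used in practice; all sizes and noise levels are assumptions:

```python
import numpy as np

def noise_residual(img):
    """Residual = image minus a 3x3 mean-filter denoised version
    (a stand-in for the wavelet denoisers used in PRNU work)."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    smooth = sum(padded[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    return img - smooth

def ncc(a, b):
    """Normalized cross-correlation between two arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

rng = np.random.default_rng(0)
h = w = 32
prnu = 0.02 * rng.standard_normal((h, w))   # sensor's multiplicative pattern
# Estimate the fingerprint by averaging residuals over many images
residuals = []
for _ in range(50):
    scene = 100.0 + 10.0 * rng.standard_normal((h, w))
    img = scene * (1 + prnu)                # PRNU modulates each capture
    residuals.append(noise_residual(img))
fingerprint = np.mean(residuals, axis=0)
match = ncc(fingerprint, prnu)              # same sensor: high correlation
other = ncc(fingerprint, 0.02 * rng.standard_normal((h, w)))  # wrong sensor
```

A counter-forensic attack of the kind the paper proposes aims to drive the "match" statistic down toward the "other" level without visibly altering the image.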
A method to generate soft shadows using a layered depth image and warping.
Im, Yeon-Ho; Han, Chang-Young; Kim, Lee-Sup
2005-01-01
We present an image-based method for propagating area light illumination through a Layered Depth Image (LDI) to generate soft shadows from opaque and nonrefractive transparent objects. In our approach, using the depth peeling technique, we render an LDI from a reference light sample on a planar light source. Light illumination of all pixels in an LDI is then determined for all the other sample points via warping, an image-based rendering technique, which approximates ray tracing in our method. We use an image-warping equation and McMillan's warp ordering algorithm to find the intersections between rays and polygons and to find the order of intersections. Experiments for opaque and nonrefractive transparent objects are presented. Results indicate our approach generates soft shadows fast and effectively. Advantages and disadvantages of the proposed method are also discussed.
Non-contact method of search and analysis of pulsating vessels
NASA Astrophysics Data System (ADS)
Avtomonov, Yuri N.; Tsoy, Maria O.; Postnov, Dmitry E.
2018-04-01
Despite the variety of existing methods for recording the human pulse and the solid history of their development, there is still considerable interest in this topic. The development of new non-contact methods based on advanced image processing has caused a new wave of interest in this issue. We present a simple but quite effective method for analyzing the mechanical pulsations of blood vessels lying close to the surface of the skin. Our technique is a modification of imaging (or remote) photoplethysmography (i-PPG). We supplemented this method with a laser light source, which made it possible to use other methods of searching for the proposed pulsation zone. During testing of the method, several series of experiments were carried out with both artificial oscillating objects and the target signal source (a human wrist). The obtained results show that our method allows correct interpretation of complex data. To summarize, we proposed and tested an alternative method for the search and analysis of pulsating vessels.
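The basic i-PPG signal path (average an ROI per frame, then find the dominant frequency of that trace) can be sketched on synthetic data; the frame rate and pulse frequency below are illustrative assumptions:

```python
import numpy as np

# Synthetic 30 fps sequence with a 1.2 Hz (72 bpm) pulse component:
# the per-frame ROI-average intensity carries a small periodic modulation.
fps, n_frames = 30.0, 300
t = np.arange(n_frames) / fps
roi_mean = 100.0 + 0.5 * np.sin(2 * np.pi * 1.2 * t)  # per-frame ROI average
trace = roi_mean - roi_mean.mean()                    # remove the DC level
spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
pulse_hz = freqs[np.argmax(spectrum)]                 # dominant frequency
bpm = pulse_hz * 60.0
```

Real recordings additionally need ROI selection (which the paper's laser illumination assists), detrending, and band-limiting to the physiological pulse range before the spectral peak is trustworthy.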
Speckle reduction of OCT images using an adaptive cluster-based filtering
NASA Astrophysics Data System (ADS)
Adabi, Saba; Rashedi, Elaheh; Conforto, Silvia; Mehregan, Darius; Xu, Qiuyun; Nasiriavanaki, Mohammadreza
2017-02-01
Optical coherence tomography (OCT) has become a favorable device in the dermatology discipline due to its moderate resolution and penetration depth. OCT images, however, contain a grainy pattern called speckle, due to the broadband source used in the OCT configuration. So far, a variety of filtering techniques have been introduced to reduce speckle in OCT images. Most of these methods are generic and can be applied to OCT images of different tissues. In this paper, we present a method for speckle reduction of OCT skin images. Considering the architectural structure of skin layers, a skin image can benefit from being segmented into differentiable clusters and filtered separately within each cluster, using a clustering method together with filters such as the Wiener filter. The proposed algorithm was tested on an optical solid phantom with predetermined optical properties, as well as on healthy skin images. The results show that the cluster-based filtering method can reduce speckle and increase the signal-to-noise ratio and contrast while preserving edges in the image.
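The cluster-then-filter idea can be sketched with 1D k-means on intensity and a mean filter restricted to same-cluster neighbors, a stand-in for the paper's segmentation and Wiener filtering; all parameters and the synthetic speckle model are assumptions:

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    """Plain 1D k-means on intensity (stand-in for the paper's clustering)."""
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers

def cluster_mean_filter(img, labels2d, size=5):
    """Average each pixel with same-cluster neighbors only, so the
    smoothing does not cross cluster (layer) boundaries."""
    h, w = img.shape
    r = size // 2
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            win = img[i0:i1, j0:j1]
            same = labels2d[i0:i1, j0:j1] == labels2d[i, j]
            out[i, j] = win[same].mean()
    return out

# Two-"layer" image with multiplicative speckle (gamma, unit mean)
rng = np.random.default_rng(1)
clean = np.where(np.arange(64)[None, :] < 32, 1.0, 3.0) * np.ones((32, 64))
speckled = clean * rng.gamma(16.0, 1.0 / 16.0, clean.shape)
labels, _ = kmeans_1d(speckled.ravel(), k=2)
filtered = cluster_mean_filter(speckled, labels.reshape(speckled.shape))
```

Restricting the averaging window to one cluster is what lets the filter reduce speckle variance inside each layer while leaving the layer boundary sharp.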
Bidirectional light-scattering image processing method for high-concentration jet sprays
NASA Astrophysics Data System (ADS)
Shimizu, I.; Emori, Y.; Yang, W.-J.; Shimoda, M.; Suzuki, T.
1985-01-01
In order to study the distributions of droplet size and volume density in high-concentration jet sprays, a new technique is developed which combines forward and backward light scattering with an image processing method. A pulsed ruby laser is used as the light source. Mie scattering theory is applied to the results obtained from image processing of the scattering photographs. The time history is obtained for the droplet size and volume density distributions, and the method is demonstrated on diesel fuel sprays under various injection conditions. The validity of the technique is verified by good agreement between the injected fuel volume distributions obtained by the present method and those from injection rate measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yidong, E-mail: yidongyang@med.miami.edu; Wang, Ken Kang-Hsin; Wong, John W.
2015-04-15
Purpose: The cone beam computed tomography (CBCT) guided small animal radiation research platform (SARRP) has been developed for focal tumor irradiation, allowing laboratory researchers to test basic biological hypotheses that can modify radiotherapy outcomes in ways that were not feasible previously. CBCT provides excellent bone to soft tissue contrast, but is incapable of differentiating tumors from surrounding soft tissue. Bioluminescence tomography (BLT), in contrast, allows direct visualization of even subpalpable tumors and quantitative evaluation of tumor response. Integration of BLT with CBCT offers complementary image information, with CBCT delineating anatomic structures and BLT differentiating luminescent tumors. This study is to develop a systematic method to calibrate an integrated CBCT and BLT imaging system which can be adopted onboard the SARRP to guide focal tumor irradiation. Methods: The integrated imaging system consists of CBCT, diffuse optical tomography (DOT), and BLT. The anatomy acquired from CBCT and optical properties acquired from DOT serve as a priori information for the subsequent BLT reconstruction. Phantoms were designed and procedures were developed to calibrate the CBCT, DOT/BLT, and the entire integrated system. Geometrical calibration was performed to calibrate the CBCT system. Flat field correction was performed to correct the nonuniform response of the optical imaging system. Absolute emittance calibration was performed to convert the camera readout to the emittance at the phantom or animal surface, which enabled the direct reconstruction of the bioluminescence source strength. Phantom and mouse imaging were performed to validate the calibration. Results: All calibration procedures were successfully performed. Both CBCT of a thin wire and a euthanized mouse revealed no spatial artifact, validating the accuracy of the CBCT calibration. 
The absolute emittance calibration was validated with a 650 nm laser source, resulting in a 3.0% difference between simulated and measured signal. The calibration of the entire system was confirmed through the CBCT and BLT reconstruction of a bioluminescence source placed inside a tissue-simulating optical phantom. Using a spatial region constraint, the source position was reconstructed with less than 1 mm error and the source strength reconstructed with less than 24% error. Conclusions: A practical and systematic method has been developed to calibrate an integrated x-ray and optical tomography imaging system, including the respective CBCT and optical tomography system calibration and the geometrical calibration of the entire system. The method can be modified and adopted to calibrate CBCT and optical tomography systems that are operated independently or hybrid x-ray and optical tomography imaging systems.
Yang, Yidong; Wang, Ken Kang-Hsin; Eslami, Sohrab; Iordachita, Iulian I.; Patterson, Michael S.; Wong, John W.
2015-01-01
Purpose: The cone beam computed tomography (CBCT) guided small animal radiation research platform (SARRP) has been developed for focal tumor irradiation, allowing laboratory researchers to test basic biological hypotheses that can modify radiotherapy outcomes in ways that were not feasible previously. CBCT provides excellent bone to soft tissue contrast, but is incapable of differentiating tumors from surrounding soft tissue. Bioluminescence tomography (BLT), in contrast, allows direct visualization of even subpalpable tumors and quantitative evaluation of tumor response. Integration of BLT with CBCT offers complementary image information, with CBCT delineating anatomic structures and BLT differentiating luminescent tumors. This study is to develop a systematic method to calibrate an integrated CBCT and BLT imaging system which can be adopted onboard the SARRP to guide focal tumor irradiation. Methods: The integrated imaging system consists of CBCT, diffuse optical tomography (DOT), and BLT. The anatomy acquired from CBCT and optical properties acquired from DOT serve as a priori information for the subsequent BLT reconstruction. Phantoms were designed and procedures were developed to calibrate the CBCT, DOT/BLT, and the entire integrated system. Geometrical calibration was performed to calibrate the CBCT system. Flat field correction was performed to correct the nonuniform response of the optical imaging system. Absolute emittance calibration was performed to convert the camera readout to the emittance at the phantom or animal surface, which enabled the direct reconstruction of the bioluminescence source strength. Phantom and mouse imaging were performed to validate the calibration. Results: All calibration procedures were successfully performed. Both CBCT of a thin wire and a euthanized mouse revealed no spatial artifact, validating the accuracy of the CBCT calibration. 
The absolute emittance calibration was validated with a 650 nm laser source, resulting in a 3.0% difference between simulated and measured signal. The calibration of the entire system was confirmed through the CBCT and BLT reconstruction of a bioluminescence source placed inside a tissue-simulating optical phantom. Using a spatial region constraint, the source position was reconstructed with less than 1 mm error and the source strength reconstructed with less than 24% error. Conclusions: A practical and systematic method has been developed to calibrate an integrated x-ray and optical tomography imaging system, including the respective CBCT and optical tomography system calibration and the geometrical calibration of the entire system. The method can be modified and adopted to calibrate CBCT and optical tomography systems that are operated independently or hybrid x-ray and optical tomography imaging systems. PMID:25832060
Zhang, Zeng-yan; Ji, Te; Zhu, Zhi-yong; Zhao, Hong-wei; Chen, Min; Xiao, Ti-qiao; Guo, Zhi
2015-01-01
Terahertz radiation is electromagnetic radiation in the range between millimeter waves and the far infrared. Due to its low energy and non-ionizing character, THz pulse imaging has emerged as a novel tool in many fields, such as materials science, chemistry, biological medicine, and food safety. Limited spatial resolution is a significant restricting factor of terahertz imaging technology. Near-field imaging methods have been proposed to improve the spatial resolution of terahertz systems: submillimeter-scale spatial resolution can be achieved if the source size is smaller than the wavelength of the incoming radiation and the source is very close to the sample. However, many changes are needed to the traditional terahertz time domain spectroscopy system, and it is very complex to analyze a sample's physical parameters through the terahertz signal. A method of inserting a pinhole upstream of the sample is first proposed in this article to improve the spatial resolution of the traditional terahertz time domain spectroscopy system. The spatial resolution of the system was measured by the knife edge method: the moving-stage distance between 10% and 90% of the maximum signal was defined as the spatial resolution of the system. The imaging spatial resolution of the traditional terahertz time domain spectroscopy system was improved dramatically after inserting a pinhole with diameter 0.5 mm or 2 mm upstream of the sample. Experimental results show that the spatial resolution was improved from 1.276 mm to 0.774 mm, an improvement of about 39%. Through this simple method, the spatial resolution of the traditional terahertz time domain spectroscopy system was increased from the millimeter scale to the submillimeter scale. A pinhole with diameter 1 mm on a polyethylene plate was taken as the sample for the terahertz imaging study.
The traditional terahertz time domain spectroscopy system and the pinhole-inserted system were applied in the imaging experiment respectively. Relative THz-power loss imaging of the samples was used in this article; this method generally delivers the best signal-to-noise ratio in loss images, and dispersion effects are cancelled. Terahertz imaging results show that the sample's boundary was more distinct after inserting the pinhole in front of the sample. The results also confirm that inserting a pinhole in front of the sample can improve the imaging spatial resolution effectively. A theoretical analysis of the method is given in this article; it indicates that the smaller the pinhole size, the longer the spatial coherence length of the system and the better its spatial resolution, although the terahertz signal is reduced accordingly. All the experimental results and theoretical analyses indicate that inserting a pinhole in front of the sample can effectively improve the spatial resolution of the traditional terahertz time domain spectroscopy system, and this will further expand the applications of terahertz imaging technology.
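The 10%-90% knife-edge definition of spatial resolution quoted above can be sketched numerically; the helper below and the synthetic edge profile are illustrative assumptions, not the authors' code.

```python
import numpy as np
from math import erf

def knife_edge_resolution(positions, signal, lo=0.10, hi=0.90):
    """Stage travel between the 10% and 90% levels of a knife-edge scan
    (the resolution definition quoted in the abstract); hypothetical helper."""
    s = (signal - signal.min()) / (signal.max() - signal.min())
    # np.interp needs the edge response to be monotonically increasing
    x_lo = np.interp(lo, s, positions)
    x_hi = np.interp(hi, s, positions)
    return abs(x_hi - x_lo)

# synthetic edge response shaped to have roughly a 0.774 mm 10-90% width
x = np.linspace(-1.5, 1.5, 301)                    # stage position, mm
edge = np.array([0.5 * (1.0 + erf(xi / 0.427)) for xi in x])
print(round(knife_edge_resolution(x, edge), 2))
```

The same helper applies unchanged to the pre-pinhole scan, so the 39% improvement can be computed from two scans of the same knife edge.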
In vivo bioluminescence tomography based on multi-view projection and 3D surface reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Wang, Kun; Leng, Chengcai; Deng, Kexin; Hu, Yifang; Tian, Jie
2015-03-01
Bioluminescence tomography (BLT) is a powerful optical molecular imaging modality, which enables non-invasive real-time in vivo imaging as well as 3D quantitative analysis in preclinical studies. In order to solve the inverse problem and reconstruct inner light sources accurately, prior structural information is commonly necessary and is obtained from computed tomography or magnetic resonance imaging. This strategy requires an expensive hybrid imaging system, a complicated operation protocol, and the possible involvement of ionizing radiation. The overall robustness highly depends on the fusion accuracy between the optical and structural information. In this study we present a pure optical bioluminescence tomographic system (POBTS) and a novel BLT method based on multi-view projection acquisition and 3D surface reconstruction. The POBTS acquired a sparse set of white light surface images and bioluminescent images of a mouse. The white light images were then applied to an approximate surface model to generate a high quality textured 3D surface reconstruction of the mouse. After that we integrated multi-view luminescent images based on the previous reconstruction, and applied an algorithm to calibrate and quantify the surface luminescent flux in 3D. Finally, the internal bioluminescence source reconstruction was achieved with this prior information. A BALB/c mouse model bearing a breast tumor of 4T1-fLuc cells was used to evaluate the performance of the new system and technique. Compared with the conventional hybrid optical-CT approach using the same inverse reconstruction method, the reconstruction accuracy of this technique was improved: the distance error between the actual and reconstructed internal source was decreased by 0.184 mm.
NASA Astrophysics Data System (ADS)
Benfenati, A.; La Camera, A.; Carbillet, M.
2016-02-01
Aims: High-dynamic range images of astrophysical objects present some difficulties in their restoration because of the presence of very bright point-wise sources surrounded by faint and smooth structures. We propose a method that enables the restoration of this kind of image by taking such sources into account and, at the same time, improving the contrast enhancement in the final image. Moreover, the proposed approach can help to detect the position of the bright sources. Methods: The classical variational scheme in the presence of Poisson noise aims to find the minimum of a functional composed of the generalized Kullback-Leibler function and a regularization functional: the latter is employed to preserve some characteristic in the restored image. The inexact Bregman procedure substitutes the regularization function with its inexact Bregman distance. This scheme allows us to keep under control the level of inexactness arising in the computed solution and permits us to employ an overestimation of the regularization parameter (which balances the trade-off between the Kullback-Leibler function and the Bregman distance). This aspect is fundamental, since the estimation of this kind of parameter is very difficult in the presence of Poisson noise. Results: The inexact Bregman procedure is tested on a bright unresolved binary star with a faint circumstellar environment. When the sources' position is exactly known, this scheme provides very satisfactory results. In case of inexact knowledge of the sources' position, it can in addition give some useful information on the true positions. Finally, the inexact Bregman scheme can also be used when information about the binary star's position concerns a connected region instead of isolated pixels.
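The variational problem described above can be written schematically as follows; the notation ($H$ the imaging operator, $y$ the data, $b$ a background term, $R$ the regularizer, $z$ a reference point encoding the known source positions) is assumed here, not taken from the paper.

```latex
\min_{x \ge 0} \;
\underbrace{\sum_i \left[\, y_i \ln \frac{y_i}{(Hx+b)_i} + (Hx+b)_i - y_i \,\right]}_{\text{generalized Kullback-Leibler}}
\; + \; \beta \, D_R^{\varepsilon}(x, z)
```

where $D_R^{\varepsilon}$ denotes the inexact Bregman distance of $R$ from $z$, and the inexactness level $\varepsilon$ is what makes an overestimated regularization parameter $\beta$ safe to use.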
Sampson, David D.; Kennedy, Brendan F.
2017-01-01
High-resolution tactile imaging, superior to the sense of touch, has potential for future biomedical applications such as robotic surgery. In this paper, we propose a tactile imaging method, termed computational optical palpation, based on measuring the change in thickness of a thin, compliant layer with optical coherence tomography and calculating tactile stress using finite-element analysis. We demonstrate our method on test targets and on freshly excised human breast fibroadenoma, achieving a resolution of up to 15–25 µm and a field of view of up to 7 mm. Our method is open source and readily adaptable to other imaging modalities, such as ultrasonography and confocal microscopy. PMID:28250098
Blind source separation of ex-vivo aorta tissue multispectral images
Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson
2015-01-01
Blind Source Separation (BSS) methods aim at the decomposition of a given signal into its main components or source signals. Those techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study. The analysis of skin images for the extraction of melanin and hemoglobin is an example of the use of BSS. This paper presents a proof of concept of the use of source separation on ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system and processed by means of two algorithms: Independent Component Analysis and Non-negative Matrix Factorization. In both cases, it is possible to obtain maps that quantify the concentration of the main chromophores present in aortic tissue. The algorithms also yield the spectral absorbance of the main tissue components. Those spectral signatures were compared against the theoretical ones by using correlation coefficients, which report values close to 0.9, a good indicator of the method's performance. The correlation coefficients also lead to the identification of the concentration maps according to the evaluated chromophore. The results suggest that multi/hyper-spectral systems together with image processing techniques are a potential tool for the analysis of cardiovascular tissue. PMID:26137366
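One of the two decompositions mentioned above, Non-negative Matrix Factorization, can be sketched with classic Lee-Seung multiplicative updates; the band/pixel sizes and the synthetic two-chromophore mixture below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=200, eps=1e-9):
    """Factor a nonnegative matrix V (bands x pixels) into spectra W
    (bands x k) and concentration maps H (k x pixels) using Lee-Seung
    multiplicative updates; a minimal sketch, not the paper's exact code."""
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update maps, keep nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update spectra
    return W, H

# synthetic two-chromophore mixture: 8 spectral bands, 100 pixels
W_true = rng.random((8, 2))
H_true = rng.random((2, 100))
V = W_true @ H_true
W, H = nmf(V, 2)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # small relative residual
```

In practice the recovered columns of W would be matched to theoretical chromophore spectra via correlation coefficients, as the abstract describes.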
NASA Astrophysics Data System (ADS)
Malone, Joseph D.; El-Haddad, Mohamed T.; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Tao, Yuankai K.
2016-03-01
Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) benefit clinical diagnostic imaging in ophthalmology by enabling in vivo noninvasive en face and volumetric visualization of retinal structures, respectively. Spectral encoding methods enable confocal imaging through fiber optics and reduce system complexity. Previous applications in ophthalmic imaging include spectrally encoded confocal scanning laser ophthalmoscopy (SECSLO) and a combined SECSLO-OCT system for image guidance, tracking, and registration. However, spectrally encoded imaging suffers from speckle noise because each spectrally encoded channel is effectively monochromatic. Here, we demonstrate in vivo human retinal imaging using a swept-source spectrally encoded scanning laser ophthalmoscope and OCT (SS-SESLO-OCT) at 1060 nm. SS-SESLO-OCT uses a shared 100 kHz Axsun swept source and shared scanner and imaging optics, and both signals are detected simultaneously on a shared dual-channel high-speed digitizer. SESLO illumination and detection were performed using the single mode core and multimode inner cladding of a double clad fiber coupler, respectively, to preserve lateral resolution while improving collection efficiency and reducing speckle contrast at the expense of confocality. Concurrent en face SESLO and cross-sectional OCT images were acquired with 1376 x 500 pixels at 200 frames-per-second. Our system design is compact and uses a shared light source, imaging optics, and digitizer, which reduces overall system complexity and ensures inherent co-registration between SESLO and OCT FOVs. En face SESLO images acquired concurrently with OCT cross-sections enable lateral motion tracking and three-dimensional volume registration with broad applications in multivolume OCT averaging, image mosaicking, and intraoperative instrument tracking.
Multiple Auto-Adapting Color Balancing for Large Number of Images
NASA Astrophysics Data System (ADS)
Zhou, X.
2015-04-01
This paper presents a powerful technology for color balance between images. It works not only for small numbers of images but also for arbitrarily large numbers of images. Multiple adaptive methods are used. To obtain a color-seamless mosaic dataset, local color is adjusted adaptively towards the target color. Local statistics of the source images are computed based on the so-called adaptive dodging window. The adaptive target colors are statistically computed according to multiple target models. A gamma function is derived from the adaptive target and the adaptive source local statistics, and is applied to the source images to obtain the color-balanced output images. Five target color surface models are proposed: color point (or single color), color grid, and 1st-, 2nd-, and 3rd-order 2D polynomials. Least-squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are automatically computed based on all source images or on an external target image. Some special objects such as water and snow are filtered by percentage cut or a given mask. The performance is extremely fast and supports on-the-fly color balancing for large numbers of images (possibly hundreds of thousands of images). Detailed algorithms and formulae are described. Rich examples including big mosaic datasets (e.g., one containing 36,006 images) are given, with excellent results and performance. The results show that this technology can be successfully used with various imagery to obtain color-seamless mosaics. This algorithm has been successfully used in Esri ArcGIS.
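The core gamma step described above (deriving a gamma from the adaptive target color and the source's local statistics) might be sketched as follows; the mean-matching rule and the 8-bit intensity range are assumptions for illustration, not the paper's exact formulae.

```python
import numpy as np

def gamma_to_target(src_mean, tgt_mean):
    """Solve (src_mean/255)**g = tgt_mean/255 for the gamma g that maps
    the local source mean onto the adaptive target color (a sketch)."""
    return np.log(tgt_mean / 255.0) / np.log(src_mean / 255.0)

def balance(tile, tgt_mean):
    """Apply the derived gamma to one local window of a source image."""
    g = gamma_to_target(tile.mean(), tgt_mean)
    return 255.0 * (tile / 255.0) ** g

tile = np.full((4, 4), 100.0)    # dark local patch (one dodging window)
out = balance(tile, 140.0)       # pull it toward the adaptive target color
print(round(out.mean(), 1))      # -> 140.0
```

In the full method, `tgt_mean` would come from one of the five target color surface models evaluated at the window's location rather than a constant.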
Magnetic quadrupoles lens for hot spot proton imaging in inertial confinement fusion
NASA Astrophysics Data System (ADS)
Teng, J.; Gu, Y. Q.; Chen, J.; Zhu, B.; Zhang, B.; Zhang, T. K.; Tan, F.; Hong, W.; Zhang, B. H.; Wang, X. Q.
2016-08-01
Imaging of DD-produced protons from an implosion hot spot region by a miniature permanent magnetic quadrupole (PMQ) lens is proposed. The corresponding object-image relation is deduced and an adjustment method for this imaging system is discussed. Ideal point-to-point imaging demands a monoenergetic proton source; nevertheless, we prove that the blurring of the image induced by proton energy spread is a second-order effect and therefore controllable. A proton imaging system based on miniature PMQ lenses is designed for 2.8 MeV DD-protons, and the adjustment method in case of a proton energy shift is proposed. The spatial resolution of this system is better than 10 μm when the proton yield is above 10⁹ and the spectral width is within 10%.
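The object-image relation underlying point-to-point imaging can be illustrated with thin-lens transfer matrices. Note that a single quadrupole focuses in one transverse plane only (practical PMQ imaging uses doublets or triplets), so this is a one-plane sketch with assumed distances, not the paper's design.

```python
import numpy as np

def drift(L):            # free flight over length L (meters)
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_lens(f):        # thin-lens matrix for the focusing plane of a quadrupole
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# hypothetical geometry: object distance a, focal length f
f, a = 0.05, 0.08
b = 1.0 / (1.0 / f - 1.0 / a)        # lens equation gives the image distance
M = drift(b) @ thin_lens(f) @ drift(a)
print(abs(M[0, 1]) < 1e-9)           # M12 = 0: rays from one object point refocus
print(round(M[0, 0], 3))             # magnification, equal to -b/a
```

The imaging condition M12 = 0 is exactly the point-to-point requirement; energy spread perturbs f and shifts the plane where M12 vanishes, which is what the adjustment method compensates.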
Bindu, G.; Semenov, S.
2013-01-01
This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system having the transceivers modelled using thin wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction with the extremity imaging being done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time domain Maxwell’s equations with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% noise inclusion and successful image reconstruction has been shown implying its robustness. PMID:24058889
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
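The benefit of Gray code noted above comes from the fact that adjacent quantization indices differ in a single bit, so the bit planes of the source stay close to those of its side information even when the indices differ by one. A minimal illustration:

```python
def gray(b: int) -> int:
    """Binary-reflected Gray code: adjacent integers differ in one bit."""
    return b ^ (b >> 1)

def hamming(a: int, b: int) -> int:
    """Number of differing bit planes between two indices."""
    return bin(a ^ b).count("1")

# neighbouring quantization indices (source vs. side information)
for v in (7, 8):
    print(f"{v}: bin={v:04b} gray={gray(v):04b}")
print(hamming(7, 8))              # 4 bit-plane mismatches in plain binary
print(hamming(gray(7), gray(8)))  # 1 mismatch after the Gray mapping
```

Fewer bit-plane mismatches mean higher per-plane correlation, which is what lets the Slepian-Wolf rate allocation described above be tightened.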
NASA Technical Reports Server (NTRS)
Whitaker, Ross (Inventor); Turner, D. Clark (Inventor)
2016-01-01
Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.
3D shape recovery of smooth surfaces: dropping the fixed-viewpoint assumption.
Moses, Yael; Shimshoni, Ilan
2009-07-01
We present a new method for recovering the 3D shape of a featureless smooth surface from three or more calibrated images illuminated by different light sources (three of them independent). This method is unique in its ability to handle images taken from unconstrained perspective viewpoints and unconstrained illumination directions. The correspondence between such images is hard to compute, and no other known method can handle this problem locally from a small number of images. Our method combines geometric and photometric information in order to recover dense correspondence between the images and accurately compute the 3D shape. Only a single pass starting at one point and local computation are used. This is in contrast to methods that use the occluding contours recovered from many images to initialize and constrain an optimization process. The output of our method can be used to initialize such processes. In the special case of a fixed viewpoint, the proposed method becomes a new perspective photometric stereo algorithm. Moreover, with the introduction of the multiview setup, self-occlusions and regions close to the occluding boundaries are better handled, and the method is more robust to noise than photometric stereo. Experimental results are presented for simulated and real images.
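For reference, the fixed-viewpoint photometric stereo special case mentioned above reduces, for a Lambertian pixel, to a small least-squares problem in the three light directions; the directions and albedo below are assumed for illustration, not taken from the paper.

```python
import numpy as np

# three calibrated light directions (rows normalized to unit vectors)
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

n_true, albedo_true = np.array([0.0, 0.0, 1.0]), 0.8
I = albedo_true * L @ n_true               # Lambertian intensities at one pixel

g, *_ = np.linalg.lstsq(L, I, rcond=None)  # solve L g = I, where g = albedo * n
albedo, n = np.linalg.norm(g), g / np.linalg.norm(g)
print(round(albedo, 3), np.round(n, 3))    # recovered albedo and surface normal
```

The multiview method in the abstract replaces this per-pixel solve with a joint geometric-photometric propagation, which is why it can also handle unconstrained viewpoints.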
NASA Astrophysics Data System (ADS)
Vainshtein, Sergey N.; Duan, Guoyong; Mikhnev, Valeri A.; Zemlyakov, Valery E.; Egorkin, Vladimir I.; Kalyuzhnyy, Nikolay A.; Maleev, Nikolai A.; Näpänkangas, Juha; Sequeiros, Roberto Blanco; Kostamovaara, Juha T.
2018-05-01
Progress in terahertz spectroscopy and imaging is mostly associated with femtosecond laser-driven systems, while solid-state sources, mainly sub-millimetre integrated circuits, are still in an early development phase. As simple and cost-efficient an emitter as a Gunn oscillator could cause a breakthrough in the field, provided its frequency limitations could be overcome. Proposed here is an application of the recently discovered collapsing field domains effect that permits sub-THz oscillations in sub-micron semiconductor layers thanks to nanometer-scale powerfully ionizing domains arising due to negative differential mobility in extreme fields. This shifts the frequency limit by an order of magnitude relative to the conventional Gunn effect. Our first miniature picosecond pulsed sources cover the 100-200 GHz band and promise milliwatts up to ˜500 GHz. Thanks to the method of interferometrically enhanced time-domain imaging proposed here and the low single-shot jitter of ˜1 ps, our simple imaging system provides sufficient time-domain imaging contrast for fresh-tissue terahertz histology.
Volumetric Two-photon Imaging of Neurons Using Stereoscopy (vTwINS)
Song, Alexander; Charles, Adam S.; Koay, Sue Ann; Gauthier, Jeff L.; Thiberge, Stephan Y.; Pillow, Jonathan W.; Tank, David W.
2017-01-01
Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large scale recording of neural activity in vivo. Here we introduce volumetric Two-photon Imaging of Neurons using Stereoscopy (vTwINS), a volumetric calcium imaging method that employs an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced “image pairs” in the resulting 2D image, and the separation distance between images is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a novel orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. We illustrate vTwINS by imaging neural population activity in mouse primary visual cortex and hippocampus. Our results demonstrate that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame-rate. PMID:28319111
Method and System for Temporal Filtering in Video Compression Systems
NASA Technical Reports Server (NTRS)
Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim
2011-01-01
Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from the first pixel position in the first image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of a fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side information and the Slepian-Wolf code bits.
The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
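The non-linear extrapolation idea above can be sketched with second differences of three tracked pixel positions; this constant-acceleration model is an illustrative assumption, not the patent's exact estimator.

```python
import numpy as np

def extrapolate(p1, p2, p3):
    """Predict the fourth pixel position from three tracked positions.
    Second differences give a simple non-linear (constant-acceleration)
    model; a sketch of the idea, not the patent's claimed method."""
    v = p3 - p2                   # latest motion vector
    a = (p3 - p2) - (p2 - p1)     # change in motion (acceleration term)
    return p3 + v + a

# an accelerating track across three frames (pixel coordinates)
p1, p2, p3 = np.array([0.0, 0.0]), np.array([2.0, 1.0]), np.array([5.0, 3.0])
print(extrapolate(p1, p2, p3))    # -> [9. 6.]
```

A purely linear model would predict [8. 5.]; the second-difference term is what captures non-linear motion between frames.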
NASA Astrophysics Data System (ADS)
Mondal, Indranil; Raj, Shipra; Roy, Poulomi; Poddar, Raju
2018-01-01
We present noninvasive three-dimensional depth-resolved imaging of animal tissue with a swept-source optical coherence tomography system at 1064 nm center wavelength and silver nanoparticles (AgNPs) as a potential contrast agent. A swept-source laser light source is used to enable an imaging rate of 100 kHz (100,000 A-scans s⁻¹). Swept-source optical coherence tomography is a new variant of the optical coherence tomography (OCT) technique, offering unique advantages in terms of sensitivity, reduction of motion artifacts, etc. To enhance the contrast of an OCT image, AgNPs are utilized as an exogenous contrast agent. AgNPs are synthesized using a modified Tollens method and characterization is done by UV-vis spectroscopy, dynamic light scattering, scanning electron microscopy and energy dispersive x-ray spectroscopy. In vitro imaging of chicken breast tissue, with and without the application of AgNPs, is performed. The effect of AgNPs is studied with different exposure times. A mathematical model is also built to calculate changes in the local scattering coefficient of tissue from OCT images. A quantitative estimation of scattering coefficient and contrast is performed for tissues with and without application of AgNPs. Significant improvement in contrast and increase in scattering coefficient with time is observed.
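Scattering-coefficient estimates of the kind described above are often obtained by fitting a single-scattering Beer-Lambert decay to the depth profile of an A-scan; the model and the round-trip factor of two below are common assumptions for illustration, not necessarily the authors' exact formulation.

```python
import numpy as np

def scattering_coeff(z_mm, intensity):
    """Estimate the local scattering coefficient mu_s (mm^-1) from an OCT
    A-scan by fitting ln I(z) = ln I0 - 2*mu_s*z (single-scattering
    Beer-Lambert model with round-trip attenuation; a sketch)."""
    slope, _ = np.polyfit(z_mm, np.log(intensity), 1)
    return -slope / 2.0            # round-trip path gives the factor of 2

z = np.linspace(0.0, 1.0, 50)                    # depth, mm
mu_s = 4.2                                       # mm^-1, synthetic tissue value
I = np.exp(-2.0 * mu_s * z)                      # noiseless synthetic A-scan
print(round(scattering_coeff(z, I), 2))          # -> 4.2
```

Comparing the fitted mu_s before and after AgNP application, window by window, is one way to quantify the contrast enhancement the abstract reports.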
Introduction to Remote Sensing Image Registration
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline
2017-01-01
For many applications, accurate and fast image registration of large amounts of multi-source data is the first necessary step before subsequent processing and integration. Image registration is defined by several steps, and each step can be approached by various methods which all present diverse advantages and drawbacks depending on the type of data, the type of application, the a priori information known about the data, and the type of accuracy that is required. This paper will first present a general overview of remote sensing image registration and then go over a few specific methods and their applications.
System and method for disrupting suspect objects
Gladwell, T. Scott; Garretson, Justin R; Hobart, Clinton G; Monda, Mark J
2013-07-09
A system and method for disrupting at least one component of a suspect object is provided. The system includes a source for passing radiation through the suspect object, a screen for receiving the radiation passing through the suspect object and generating at least one image therefrom, a weapon having a discharge deployable therefrom, and a targeting unit. The targeting unit displays the image(s) of the suspect object and aims the weapon at a disruption point on the displayed image such that the weapon may be positioned to deploy the discharge at the disruption point whereby the suspect object is disabled.
Estimating Slopes In Images Of Terrain By Use Of BRDF
NASA Technical Reports Server (NTRS)
Scholl, Marija S.
1995-01-01
Proposed method of estimating slopes of terrain features based on use of bidirectional reflectivity distribution function (BRDF) in analyzing aerial photographs, satellite video images, or other images produced by remote sensors. Estimated slopes integrated along horizontal coordinates to obtain estimated heights, generating three-dimensional terrain maps. Method does not require coregistration of terrain features in pairs of images acquired from slightly different perspectives, nor does it require Sun or other source of illumination to be low in sky over terrain of interest; on the contrary, method works best when Sun is high. Works at almost all combinations of illumination and viewing angles.
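The slope-integration step can be sketched as a cumulative trapezoidal sum along a horizontal coordinate; the synthetic slope profile below is an illustrative assumption.

```python
import numpy as np

def heights_from_slopes(slopes, dx):
    """Integrate estimated terrain slopes (dz/dx) along one horizontal
    coordinate to recover relative heights (trapezoidal cumulative sum)."""
    increments = (slopes[1:] + slopes[:-1]) / 2.0 * dx
    return np.concatenate(([0.0], np.cumsum(increments)))

x = np.linspace(0.0, 10.0, 101)       # horizontal coordinate
slopes = np.cos(x)                    # synthetic BRDF-derived slope profile
z = heights_from_slopes(slopes, x[1] - x[0])
print(round(z[-1], 3), round(np.sin(10.0), 3))   # trapezoid vs. exact integral
```

Heights recovered this way are relative to the starting point; an absolute datum would need at least one known elevation.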
Hard X-Ray Flare Source Sizes Measured with the Ramaty High Energy Solar Spectroscopic Imager
NASA Technical Reports Server (NTRS)
Dennis, Brian R.; Pernak, Rick L.
2009-01-01
Ramaty High Energy Solar Spectroscopic Imager (RHESSI) observations of 18 double hard X-ray sources seen at energies above 25 keV are analyzed to determine the spatial extent of the most compact structures evident in each case. The following four image reconstruction algorithms were used: Clean, Pixon, and two visibility-based routines, maximum entropy and forward fit (VFF). All have been adapted for this study to optimize their ability to provide reliable estimates of the sizes of the more compact sources. The source fluxes, sizes, and morphologies obtained with each method are cross-correlated and the similarities and disagreements are discussed. The full width at half-maximum (FWHM) of the major axes of the sources with assumed elliptical Gaussian shapes are generally well correlated between the four image reconstruction routines and vary between the RHESSI resolution limit of approximately 2" up to approximately 20" with most below 10". The FWHM of the minor axes are generally at or just above the RHESSI limit and hence should be considered as unresolved in most cases. The orientation angles of the elliptical sources are also well correlated. These results suggest that the elongated sources are generally aligned along a flare ribbon with the minor axis perpendicular to the ribbon. This is verified for the one flare in our list with coincident Transition Region and Coronal Explorer (TRACE) images. There is evidence for significant extra flux in many of the flares in addition to the two identified compact sources, thus rendering the VFF assumption of just two Gaussians inadequate. A more realistic approximation in many cases would be of two line sources with unresolved widths. Recommendations are given for optimizing the RHESSI imaging reconstruction process to ensure that the finest possible details of the source morphology become evident and that reliable estimates can be made of the source dimensions.
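For the elliptical Gaussian source models discussed above, the reported FWHM relates to the fitted standard deviation by a fixed factor; a one-line reminder (the 4.25 arcsec sigma is an arbitrary example, not a value from the paper):

```python
import numpy as np

def fwhm_from_sigma(sigma):
    """FWHM of a Gaussian profile: FWHM = 2*sqrt(2*ln 2) * sigma ~= 2.355*sigma."""
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

# e.g. a fitted major-axis sigma of 4.25 arcsec corresponds to ~10 arcsec FWHM
print(round(fwhm_from_sigma(4.25), 2))
```

The same conversion applies per axis, which is how major- and minor-axis FWHMs are compared against the ~2" RHESSI resolution limit.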
A dual-modal retinal imaging system with adaptive optics.
Meadway, Alexander; Girkin, Christopher A; Zhang, Yuhua
2013-12-02
An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated.
Phase sensitive optical coherence microscopy for photothermal imaging of gold nanorods
NASA Astrophysics Data System (ADS)
Hu, Yong; Podoleanu, Adrian G.; Dobre, George
2018-03-01
We describe a swept-source-based phase-sensitive optical coherence microscopy (OCM) system for photothermal imaging of gold nanorods (GNRs). The phase-sensitive OCM system employed in the study has a displacement sensitivity of 0.17 nm to vibrations at single frequencies below 250 Hz. We demonstrate the generation of phase maps and confocal phase images. By displaying the difference between successive confocal phase images, we perform confocal photothermal imaging of accumulated GNRs behind a glass coverslip and behind scattering media separately. Compared with the two-photon luminescence (TPL) detection techniques reported in the literature, the technique in this study has the advantage of a simplified experimental setup and provides a more efficient method for imaging the aggregation of GNRs. However, the repeatability of this technique suffers due to jitter noise from the swept laser source.
High energy X-ray phase and dark-field imaging using a random absorption mask.
Wang, Hongchang; Kashyap, Yogesh; Cai, Biao; Sawhney, Kawal
2016-07-28
High energy X-ray imaging has a unique advantage over conventional X-ray imaging, since it enables higher penetration into materials with significantly reduced radiation damage. However, the absorption contrast in the high energy region is considerably low due to the reduced X-ray absorption cross section of most materials. Even though X-ray phase and dark-field imaging techniques can provide substantially increased contrast and complementary information, fabricating dedicated optics for high energies still remains a challenge. To address this issue, we present an alternative X-ray imaging approach to produce transmission, phase, and scattering signals at high X-ray energies by using a random absorption mask. Importantly, in addition to a synchrotron radiation source, this approach has been demonstrated for practical imaging applications with a laboratory-based microfocus X-ray source. This new imaging method could be potentially useful for studying thick samples or heavy materials for advanced research in materials science.
Single-exposure quantitative phase imaging in color-coded LED microscopy.
Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin
2017-04-03
We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.
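The color-leakage correction can be viewed as a per-pixel linear unmixing. A minimal sketch of that idea, in which a hypothetical 3x3 leakage matrix (the numbers below are illustrative, not from the paper) stands in for a calibration where each LED subregion is lit alone:

```python
import numpy as np

# Hypothetical leakage matrix: M[i, j] is the response of sensor channel i
# to LED color j. A spectrally perfect system would give the identity matrix.
M = np.array([
    [0.92, 0.06, 0.02],   # red pixel response to the R, G, B LEDs
    [0.05, 0.90, 0.08],   # green pixel response
    [0.03, 0.07, 0.94],   # blue pixel response
])

def correct_color_leakage(rgb_image, leakage):
    """Unmix per-pixel measurements: solve leakage @ true = measured."""
    h, w, _ = rgb_image.shape
    measured = rgb_image.reshape(-1, 3).T        # shape (3, h*w)
    true = np.linalg.solve(leakage, measured)    # invert the mixing
    return true.T.reshape(h, w, 3)

# Round-trip check: mix an ideal image through M, then unmix it.
ideal = np.random.default_rng(0).random((4, 4, 3))
mixed = (ideal.reshape(-1, 3) @ M.T).reshape(4, 4, 3)
recovered = correct_color_leakage(mixed, M)
```

In practice the leakage matrix would be measured rather than assumed, and noise amplification by the inversion limits how strong the leakage can be before the correction degrades.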
Method for a detailed measurement of image intensity nonuniformity in magnetic resonance imaging.
Wang, Deming; Doddrell, David M
2005-04-01
In magnetic resonance imaging (MRI), the MR signal intensity can vary spatially, and this spatial variation is usually referred to as MR intensity nonuniformity. Although the main source of intensity nonuniformity arises from B1 inhomogeneity of the coil acting as a receiver and/or transmitter, geometric distortion also alters the MR signal intensity. It is useful on some occasions to have these two different sources separately measured and analyzed. In this paper, we present a practical method for a detailed measurement of the MR intensity nonuniformity. This method is based on the same three-dimensional geometric phantom that was recently developed for a complete measurement of the geometric distortion in MR systems. With this method, the contribution to the intensity nonuniformity from the geometric distortion can be estimated, thus providing a mechanism for estimating the intensity nonuniformity that reflects solely the spatial characteristics arising from B1. Additionally, a comprehensive scheme for characterization of the intensity nonuniformity based on the new measurement method is proposed. To demonstrate the method, the intensity nonuniformity in a 1.5 T Sonata MR system was measured and is used to illustrate the main features of the method.
Interferometric superlocalization of two incoherent optical point sources.
Nair, Ranjith; Tsang, Mankei
2016-02-22
A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.
Gallium nitride light sources for optical coherence tomography
NASA Astrophysics Data System (ADS)
Goldberg, Graham R.; Ivanov, Pavlo; Ozaki, Nobuhiko; Childs, David T. D.; Groom, Kristian M.; Kennedy, Kenneth L.; Hogg, Richard A.
2017-02-01
The advent of optical coherence tomography (OCT) has permitted high-resolution, non-invasive, in vivo imaging of the eye, skin and other biological tissue. The axial resolution is limited by source bandwidth and central wavelength. With the growing demand for short wavelength imaging, super-continuum sources and non-linear fibre-based light sources have been demonstrated in tissue imaging applications exploiting the near-UV and visible spectrum. Whilst the potential of using gallium nitride devices has been identified, owing to the relative maturity of the laser technology, there have been limited reports on using such low cost, robust devices in imaging systems. A GaN super-luminescent light emitting diode (SLED) was first reported in 2009, using tilted facets to suppress lasing, with the focus since then on high power, low speckle and relatively low bandwidth applications. In this paper we discuss a method of producing a GaN based broadband source, including a passive absorber to suppress lasing. The merits of this passive absorber are then discussed with regard to broad-bandwidth applications, rather than power applications. For the first time in GaN devices, the performance of the light sources developed is assessed through the point spread function (PSF), which describes an imaging system's response to a point source, calculated from the emission spectra. We show a sub-7μm resolution is possible without the use of special epitaxial techniques, ultimately outlining the suitability of these short wavelength, broadband GaN devices for use in OCT applications.
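For a Gaussian source spectrum, the dependence of axial resolution on central wavelength and bandwidth follows the standard coherence-length relation l = (2 ln 2 / π) λ₀² / Δλ. A quick sanity check with hypothetical GaN SLED numbers (440 nm centre, 13 nm FWHM bandwidth; these values are assumptions, not taken from the paper):

```python
import math

def oct_axial_resolution(center_wavelength_nm, bandwidth_nm):
    """Axial resolution (round-trip coherence length in air) for a
    Gaussian source spectrum: l = (2 ln 2 / pi) * lambda0^2 / dlambda."""
    return (2 * math.log(2) / math.pi) * center_wavelength_nm ** 2 / bandwidth_nm

# Hypothetical GaN SLED: 440 nm centre wavelength, 13 nm bandwidth.
res_nm = oct_axial_resolution(440.0, 13.0)
print(f"{res_nm / 1000:.1f} um")
```

The short centre wavelength is what makes a sub-7 μm resolution reachable with only a modest bandwidth; the same bandwidth at 1300 nm would give a resolution roughly nine times coarser.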
Feature selection from hyperspectral imaging for guava fruit defects detection
NASA Astrophysics Data System (ADS)
Mat Jafri, Mohd. Zubir; Tan, Sou Ching
2017-06-01
Advances in technology have made hyperspectral imaging commonly used for defect detection. In this research, a hyperspectral imaging system was set up in the lab to target guava fruit defect detection. Guava fruit was selected as the object because, to our knowledge, few attempts have been made at guava defect detection based on hyperspectral imaging. A common fluorescent light source was used to represent an uncontrolled lighting condition in the lab, and analysis was carried out in a specific wavelength range due to the inefficiency of this particular light source. Based on the data, the reflectance intensity of this specific setup could be categorized into two groups. Sequential feature selection with linear discriminant (LD) and quadratic discriminant (QD) functions was used to select features that could potentially be used in defect detection. Besides the ordinary training method, the training dataset for the discriminant was separated into two parts, corresponding to the brighter and dimmer areas, to cater for the uncontrolled lighting condition. Four configurations were evaluated: LD with the common training method, QD with the common training method, LD with the two-part training method, and QD with the two-part training method, each scored with the F1-score over a total of 48 defected areas. Experiments showed that the F1-score of the linear discriminant with the compensated method reached 0.8, the highest score among all.
Mariappan, Leo; Hu, Gang; He, Bin
2014-01-01
Purpose: Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on acoustic measurements of Lorentz force induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. Methods: In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is used to estimate the object conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. Results: The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼1.5 mm spatial resolution corresponding to the imaging system's 500 kHz ultrasound frequency. In addition, the experimental results suggest that MAT-MI under a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. Conclusions: The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction. PMID:24506649
Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.
Ganasala, Padma; Kumar, Vinod
2016-02-01
Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information from the source images required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast and more detailed information, suitable for clinical use. We propose a novel image fusion method utilizing feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides a satisfactory fusion outcome compared to other image fusion methods.
Methods to mitigate data truncation artifacts in multi-contrast tomosynthesis image reconstructions
NASA Astrophysics Data System (ADS)
Garrett, John; Ge, Yongshuai; Li, Ke; Chen, Guang-Hong
2015-03-01
Differential phase contrast imaging is a promising new imaging modality that utilizes the refraction rather than the absorption of x-rays to image an object. A Talbot-Lau interferometer may be used to permit differential phase contrast imaging with a conventional medical x-ray source and detector. However, the gratings currently fabricated for these interferometers are often relatively small. As a result, data truncation image artifacts are often observed in a tomographic acquisition and reconstruction. When data are truncated in x-ray absorption imaging, methods have been introduced to mitigate the truncation artifacts. However, the same strategies may not be appropriate for differential phase contrast or dark-field tomographic imaging. In this work, several new methods to mitigate data truncation artifacts in a multi-contrast imaging system have been proposed and evaluated for tomosynthesis data acquisitions. The proposed methods were validated using experimental data acquired from a bovine udder as well as several cadaver breast specimens using a benchtop system at our facility.
Astronomy In The Cloud: Using Mapreduce For Image Coaddition
NASA Astrophysics Data System (ADS)
Wiley, Keith; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-01-01
In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection, classification, and moving object tracking. Since such studies require the highest quality data, methods such as image coaddition, i.e., registration, stacking, and mosaicing, will be critical to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources, e.g., asteroids, or transient objects, e.g., supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, i.e., platforms where Hadoop is offered as a service. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
This work is funded by the NSF and by NASA.
Astronomy in the Cloud: Using MapReduce for Image Co-Addition
NASA Astrophysics Data System (ADS)
Wiley, K.; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.
2011-03-01
In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection and classification and moving-object tracking. Since such studies benefit from the highest-quality data, methods such as image co-addition, i.e., astrometric registration followed by per-pixel summation, will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this article we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data are partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources: i.e., platforms where Hadoop is offered as a service. We report on our experience of implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multiterabyte imaging data set provides a good testbed for algorithm development, since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image co-addition to the MapReduce framework.
Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
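The co-addition pattern described in these two records maps naturally onto MapReduce: the map phase registers each image and emits per-pixel contributions keyed by sky coordinate, and the reduce phase combines all contributions landing on each pixel. A minimal pure-Python sketch of that pattern (no Hadoop; the integer-offset "registration" and the mean combine are illustrative stand-ins for the real astrometric pipeline):

```python
from collections import defaultdict
import numpy as np

def map_phase(image_id, image, offset):
    """Map: register an exposure (a simple integer shift stands in for
    astrometric registration) and emit ((sky_y, sky_x), value) pairs."""
    dy, dx = offset
    for (y, x), val in np.ndenumerate(image):
        yield (y + dy, x + dx), val

def reduce_phase(pairs):
    """Reduce: combine every contribution to each sky pixel. A mean
    co-add is used here; a weighted sum is equally possible."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for key, val in pairs:
        sums[key] += val
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Two overlapping 2x2 "exposures" of the same field, one shifted by a pixel.
imgs = {"a": (np.ones((2, 2)), (0, 0)), "b": (3 * np.ones((2, 2)), (0, 1))}
pairs = [p for iid, (im, off) in imgs.items() for p in map_phase(iid, im, off)]
coadd = reduce_phase(pairs)
```

In an actual Hadoop job, the shuffle between the two phases is what routes all contributions for a given sky pixel to the same reducer.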
Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT followed by fusing low- and high-frequency components. The phase congruency that can provide a contrast and brightness-invariant representation is applied to fuse low-frequency coefficients, whereas the Log-Gabor energy that can efficiently determine the frequency coefficients from the clear and detail parts is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visually and quantitatively experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results of multimodal medical images than other algorithms. Further, the applicability of the proposed method has been testified by carrying out a clinical example on a woman affected with recurrent tumor images. PMID:25214889
New Techniques for High-contrast Imaging with ADI: The ACORNS-ADI SEEDS Data Reduction Pipeline
NASA Astrophysics Data System (ADS)
Brandt, Timothy D.; McElwain, Michael W.; Turner, Edwin L.; Abe, L.; Brandner, W.; Carson, J.; Egner, S.; Feldt, M.; Golota, T.; Goto, M.; Grady, C. A.; Guyon, O.; Hashimoto, J.; Hayano, Y.; Hayashi, M.; Hayashi, S.; Henning, T.; Hodapp, K. W.; Ishii, M.; Iye, M.; Janson, M.; Kandori, R.; Knapp, G. R.; Kudo, T.; Kusakabe, N.; Kuzuhara, M.; Kwon, J.; Matsuo, T.; Miyama, S.; Morino, J.-I.; Moro-Martín, A.; Nishimura, T.; Pyo, T.-S.; Serabyn, E.; Suto, H.; Suzuki, R.; Takami, M.; Takato, N.; Terada, H.; Thalmann, C.; Tomono, D.; Watanabe, M.; Wisniewski, J. P.; Yamada, T.; Takami, H.; Usuda, T.; Tamura, M.
2013-02-01
We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the SEEDS survey. We implement several new algorithms, including a method to register saturated images, a trimmed mean for combining an image sequence that reduces noise by up to ~20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere on the field of view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is written in Python. It is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI requires minimal modification to reduce data from instruments other than HiCIAO. It is freely available for download at www.github.com/t-brandt/acorns-adi under a Berkeley Software Distribution (BSD) license. Based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
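The trimmed mean used above for combining an image sequence can be sketched in a few lines. This is a generic per-pixel trimmed mean, not the exact ACORNS-ADI implementation:

```python
import numpy as np

def trimmed_mean_combine(stack, trim=1):
    """Combine a sequence of registered images with a trimmed mean:
    at each pixel, drop the `trim` lowest and `trim` highest values
    across the sequence, then average the rest. This rejects outliers
    (cosmic rays, hot pixels) that a plain mean would let through."""
    s = np.sort(stack, axis=0)                  # sort along the sequence axis
    return s[trim:stack.shape[0] - trim].mean(axis=0)

# Five registered single-pixel "frames", one with a cosmic-ray hit.
frames = np.array([[10.0], [11.0], [9.0], [10.0], [500.0]])
print(trimmed_mean_combine(frames, trim=1))     # the 500 outlier is dropped
```

Compared with a median, the trimmed mean averages over more frames and so keeps more of the noise reduction of a plain mean while still rejecting extremes.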
Region-based multifocus image fusion for the precise acquisition of Pap smear images.
Tello-Mijares, Santiago; Bescós, Jesús
2018-05-01
A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolaou smear (Pap smear) images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of the original pixel information while achieving negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged in a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimum modification of the value of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
NASA Astrophysics Data System (ADS)
Zang, Lixin; Zhao, Huimin; Zhang, Zhiguo; Cao, Wenwu
2017-02-01
Photodynamic therapy (PDT) is currently an advanced optical technology in medical applications. However, the application of PDT is limited by the detection of photosensitizers. This work focuses on the application of fluorescence spectroscopy and imaging in the detection of an effective photosensitizer, hematoporphyrin monomethyl ether (HMME). Optical properties of HMME were measured and analyzed based on its absorption and fluorescence spectra, and the production mechanism of its fluorescence emission was analyzed. A detection device for HMME based on fluorescence spectroscopy was designed. A ratiometric method was applied to eliminate the influence of intensity changes of the excitation sources, fluctuations of the excitation sources and photodetectors, and background emissions. The detection limit of this device is 6 μg/L, and it was successfully applied to the diagnosis of the metabolism of HMME in esophageal cancer cells. To overcome the limitation of point measurement using fluorescence spectroscopy, a two-dimensional (2D) fluorescence imaging system was established. The algorithm of the 2D fluorescence imaging system is deduced according to the fluorescence ratiometric method using bandpass filters. The method of multiple pixel point addition (MPPA) was used to eliminate signal fluctuations, improving the SNR by about 30 times. The detection limit of this imaging system is 1.9 μg/L. Our systems can be used in the detection of porphyrins to improve the PDT effect.
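The ratiometric principle relied on here is that a multiplicative fluctuation of the excitation source scales both emission bands equally and therefore cancels in their pixel-wise ratio. A minimal sketch with made-up band images (the band values and gain map are illustrative assumptions):

```python
import numpy as np

def ratiometric_image(band1, band2, eps=1e-12):
    """Pixel-wise ratio of two emission-band images. A multiplicative
    excitation fluctuation g(x, y) scales both bands equally, so it
    cancels in the ratio; eps guards against division by zero."""
    return band1 / (band2 + eps)

# Ideal band images and a spatially varying excitation fluctuation.
b1 = np.array([[2.0, 4.0], [6.0, 8.0]])
b2 = np.array([[1.0, 2.0], [3.0, 4.0]])
gain = np.array([[0.7, 1.3], [0.9, 1.1]])     # source intensity drift

ratio_clean = ratiometric_image(b1, b2)
ratio_noisy = ratiometric_image(gain * b1, gain * b2)
# ratio_noisy equals ratio_clean: the fluctuation is rejected
```

Additive background emission, by contrast, does not cancel in a plain ratio, which is why the paper subtracts backgrounds and uses bandpass filters before taking the ratio.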
Development of a high-performance noise-reduction filter for tomographic reconstruction
NASA Astrophysics Data System (ADS)
Kao, Chien-Min; Pan, Xiaochuan
2001-07-01
We propose a new noise-reduction method for tomographic reconstruction. The method incorporates a priori information on the source image, allowing derivation of the energy spectrum of its ideal sinogram. In combination with the energy spectrum of the Poisson noise in the measured sinogram, we are able to derive a Wiener-like filter for effective suppression of the sinogram noise. The filtered backprojection (FBP) algorithm, with a ramp filter, is then applied to the filtered sinogram to produce tomographic images. The resulting filter has a closed-form expression in frequency space and contains a single user-adjustable regularization parameter. The proposed method is hence simple to implement and easy to use. In contrast to the ad hoc apodizing windows, such as Hanning and Butterworth filters, that are commonly used in conventional FBP reconstruction, the proposed filter is theoretically more rigorous, as it is derived from an optimization criterion subject to a known class of source image intensity distributions.
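A minimal 1D illustration of the filter's structure, H(f) = S(f) / (S(f) + λ), where an assumed low-pass prior spectrum stands in for the spectrum derived from the a priori source-image information, and λ plays the role of the single user-adjustable regularization parameter (all specific shapes and values below are illustrative assumptions, not the paper's derivation):

```python
import numpy as np

def wiener_filter_rows(sinogram, signal_psd, noise_level):
    """Filter each sinogram row in frequency space with
    H(f) = S(f) / (S(f) + lam): S(f) is a prior signal power spectrum
    and lam a flat (Poisson-like) noise level / regularization knob."""
    F = np.fft.rfft(sinogram, axis=1)
    H = signal_psd / (signal_psd + noise_level)
    return np.fft.irfft(F * H, n=sinogram.shape[1], axis=1)

n = 64
freqs = np.fft.rfftfreq(n)
signal_psd = 1.0 / (1.0 + (freqs / 0.05) ** 2)   # hypothetical low-pass prior

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * np.arange(n) / n)[None, :]   # one smooth "row"
noisy = clean + 0.5 * rng.standard_normal((1, n))
denoised = wiener_filter_rows(noisy, signal_psd, noise_level=0.25)
```

In the paper's pipeline, the denoised sinogram would then be passed to ramp-filtered backprojection; the point of the closed-form H(f) is that this pre-filtering replaces the ad hoc apodizing window.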
NASA Astrophysics Data System (ADS)
Callahan, R. P.; Taylor, N. J.; Pasquet, S.; Dueker, K. G.; Riebe, C. S.; Holbrook, W. S.
2016-12-01
Geophysical imaging is rapidly becoming popular for quantifying subsurface critical zone (CZ) architecture. However, a diverse array of measurements and measurement techniques are available, raising the question of which are appropriate for specific study goals. Here we compare two techniques for measuring S-wave velocities (Vs) in the near surface. The first approach quantifies Vs in three dimensions using a passive source and an iterative residual least-squares tomographic inversion. The second approach uses a more traditional active-source seismic survey to quantify Vs in two dimensions via a Monte Carlo surface-wave dispersion inversion. Our analysis focuses on three 0.01 km2 study plots on weathered granitic bedrock in the Southern Sierra Critical Zone Observatory. Preliminary results indicate that depth-averaged velocities from the two methods agree over the scales of resolution of the techniques. While the passive- and active-source techniques both quantify Vs, each method has distinct advantages and disadvantages during data acquisition and analysis. The passive-source method has the advantage of generating a three dimensional distribution of subsurface Vs structure across a broad area. Because this method relies on the ambient seismic field as a source, which varies unpredictably across space and time, data quality and depth of investigation are outside the control of the user. Meanwhile, traditional active-source surveys can be designed around a desired depth of investigation. However, they only generate a two dimensional image of Vs structure. Whereas traditional active-source surveys can be inverted quickly on a personal computer in the field, passive source surveys require significantly more computations, and are best conducted in a high-performance computing environment. We use data from our study sites to compare these methods across different scales and to explore how these methods can be used to better understand subsurface CZ architecture.
An efficient method for the fusion of light field refocused images
NASA Astrophysics Data System (ADS)
Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei
2018-04-01
Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Most multi-focus image fusion algorithms are not designed for large numbers of source images, and the traditional DWT-based fusion approach has serious problems in dealing with many multi-focus images, causing color distortion and ringing artifacts. To solve this problem, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach on various occasions, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
Effective Multifocus Image Fusion Based on HVS and BP Neural Network
Yang, Yong
2014-01-01
The aim of multifocus image fusion is to fuse images taken of the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back propagation (BP) neural network is presented. First, three features that reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct an initial fused image. Next, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to those focused regions. Experimental results show that the proposed method provides better performance and outperforms several existing popular fusion methods in terms of both objective and subjective evaluations. PMID:24683327
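The core selection step in this family of methods, choosing per pixel the source whose neighborhood is clearer, can be sketched with a single clarity feature. Local variance stands in here for the paper's three features and BP network, so this is a simplification, not the published algorithm:

```python
import numpy as np

def local_variance(img, r=1):
    """Local variance in a (2r+1)^2 window as a simple clarity measure:
    sharper, in-focus regions have higher local variance."""
    pad = np.pad(img.astype(float), r, mode="reflect")
    h, w = img.shape
    win = np.stack([pad[dy:dy + h, dx:dx + w]
                    for dy in range(2 * r + 1) for dx in range(2 * r + 1)])
    return win.var(axis=0)

def fuse_multifocus(img_a, img_b, r=1):
    """Pick, per pixel, the source image whose neighborhood is sharper."""
    mask = local_variance(img_a, r) >= local_variance(img_b, r)
    return np.where(mask, img_a, img_b)

# Synthetic pair: "a" is sharp (striped) on the left, "b" on the right.
a = np.zeros((4, 8)); a[::2, :4] = 10.0
b = np.zeros((4, 8)); b[::2, 4:] = 10.0
fused = fuse_multifocus(a, b)
```

The post-processing the paper adds (similarity-based region detection plus morphological opening and closing) exists precisely to clean up the noisy per-pixel decisions such a raw mask produces near region boundaries.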
A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images
Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu
2017-01-01
The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning, and an effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point clouds for 3D whole-body PET and CT images. First, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess CT and PET images, respectively. Next, a new automated trunk-slice extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point (ICP) algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with lower negative normalized correlation (NC = −0.933) on feature images and smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
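To make the ICP step concrete, here is a minimal single-thread, rigid-body ICP sketch (brute-force nearest neighbours, Kabsch alignment). The paper's method is multithreaded and drives an affine transform on contour point clouds; the rigid variant below is only an illustrative stand-in, and the toy transform is invented for the example.

```python
import numpy as np

def best_rigid(src, dst):
    """Kabsch: least-squares rotation + translation mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Basic ICP: pair each source point with its closest target point,
    solve for the rigid transform, apply, repeat."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur

# toy clouds: target is perturbed by a small known rotation + translation
rng = np.random.default_rng(1)
target = rng.random((60, 3))
a = 0.05  # small rotation about z (radians), illustrative only
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.02, -0.01, 0.03])
init_err = np.linalg.norm(source - target, axis=1).mean()
aligned = icp(source, target)
err = np.linalg.norm(aligned - target, axis=1).mean()
```

Real contour clouds would additionally need the segmentation/denoising preprocessing described in the abstract before ICP is reliable.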
Breaking the acoustic diffraction barrier with localization optoacoustic tomography
NASA Astrophysics Data System (ADS)
Deán-Ben, X. Luís.; Razansky, Daniel
2018-02-01
Diffraction causes blurring of high-resolution features in images and has traditionally been associated with the resolution limit in light microscopy and other imaging modalities. The resolution of an imaging system can generally be assessed via its point spread function, corresponding to the image acquired from a point source. However, the precision in determining the position of an isolated source can greatly exceed the diffraction limit. By combining the estimated positions of multiple sources, localization-based imaging has resulted in groundbreaking methods such as super-resolution fluorescence optical microscopy and has also enabled ultrasound imaging of microvascular structures with unprecedented spatial resolution in deep tissues. Herein, we introduce localization optoacoustic tomography (LOT) and discuss the prospects of using localization imaging principles in optoacoustic imaging. LOT was experimentally implemented by real-time imaging of flowing particles in 3D with a recently developed volumetric optoacoustic tomography system. Provided the particles were separated by a distance larger than the diffraction-limited resolution, their individual locations could be accurately determined in each frame of the acquired image sequence, and the localization image was formed by superimposing a set of points corresponding to the localized positions of the absorbers. The presented results demonstrate that LOT can significantly enhance the well-established advantages of optoacoustic imaging by breaking the acoustic diffraction barrier in deep tissues and mitigating artifacts due to limited-view tomographic acquisitions.
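The core localization idea — that an isolated source's position can be estimated far more precisely than the diffraction-limited spot width — can be sketched in a few lines. This is a generic illustration, not the LOT reconstruction: a Gaussian blob stands in for the system point spread function, and simple centroiding stands in for whatever localizer the authors use.

```python
import numpy as np

def blurred_spot(shape, y, x, sigma=2.0):
    """Diffraction-limited image of a point absorber: a Gaussian blob is
    assumed here as a stand-in for the system point spread function."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))

def localize(frame):
    """Estimate the source position to sub-pixel precision by centroiding."""
    yy, xx = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    w = frame / frame.sum()
    return (w * yy).sum(), (w * xx).sum()

true_pos = (14.3, 21.7)              # deliberately off the pixel grid
frame = blurred_spot((32, 48), *true_pos)
est = localize(frame)
err = max(abs(est[0] - true_pos[0]), abs(est[1] - true_pos[1]))
```

Here the spot is 2 pixels wide (the "diffraction limit" of the toy model), yet the position estimate lands within a small fraction of a pixel; superimposing many such estimates over frames is what builds the super-resolved image.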
Study on super-resolution three-dimensional range-gated imaging technology
NASA Astrophysics Data System (ADS)
Guo, Huichao; Sun, Huayan; Wang, Shuai; Fan, Youchen; Li, Yuanmiao
2018-04-01
Range-gated three-dimensional imaging technology has been a research hotspot in recent years because of its high spatial resolution, high range accuracy, long range, and simultaneous capture of target reflectivity information. Based on a study of the principle of the intensity-related method, this paper carries out theoretical analysis and experimental research. The experimental system adopts a high-power pulsed semiconductor laser as the light source and a gated ICCD as the imaging device, and allows flexible adjustment of imaging depth and distance to realize different working modes. An imaging experiment with small imaging depth was carried out on a building 500 m away, and 26 groups of images were obtained with a distance step of 1.5 m. The calculation of the 3D point cloud based on the triangle method is analyzed, and a 15 m depth slice of the target 3D point cloud is obtained from two frames of images, with a distance precision better than 0.5 m. The influence of signal-to-noise ratio, illumination uniformity, and image brightness on distance accuracy is analyzed. Based on a comparison with the time-slicing method, a method for improving the linearity of the point cloud is proposed.
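The triangle method's ranging principle — two overlapping gate positions give each pixel two intensities whose ratio encodes range inside the slice, independent of reflectivity — can be sketched as follows. The gate responses below are idealized triangles and the slice parameters (start 500 m, depth 15 m, matching the experiment) are used only to build a toy example.

```python
import numpy as np

# Gate start z0 and gate-overlap depth dz, taken from the experiment above.
z0, dz = 500.0, 15.0

rng = np.random.default_rng(2)
true_range = z0 + dz * rng.random((16, 16))      # ground-truth range map (m)
reflectivity = 0.5 + 0.5 * rng.random((16, 16))  # unknown per-pixel albedo

# Idealized triangular gate responses across the slice:
# intensity I1 falls linearly with range, I2 rises linearly.
frac = (true_range - z0) / dz
img1 = reflectivity * (1.0 - frac)
img2 = reflectivity * frac

# Ratio recovers the position in the slice; reflectivity cancels out.
est_range = z0 + dz * img2 / (img1 + img2)
err = np.abs(est_range - true_range).max()
```

With real gated-ICCD frames, noise and gate-shape nonlinearity limit the precision (hence the paper's linearity-improvement method), but the reflectivity cancellation shown here is the reason two frames suffice for a full depth slice.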
A survey of infrared and visual image fusion methods
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Hai, Jinjin; He, Kangjian
2017-09-01
Infrared (IR) and visual (VI) image fusion is designed to fuse multiple source images into a comprehensive image that boosts imaging quality and reduces redundant information, and it is widely used in various imaging equipment to improve the visual ability of humans and robots. The accurate, reliable, and complementary descriptions of the scene in fused images have made these techniques widely used in various fields. In recent years, a large number of fusion methods for IR and VI images have been proposed, driven by ever-growing demands and progress in image representation methods; however, no integrated survey of this field has been published in the last several years. Therefore, we survey the algorithmic developments of IR and VI image fusion. In this paper, we first characterize the applications of IR and VI image fusion to give an overview of the research status. Second, we present a comprehensive survey of the state of the art. Third, the frequently used image fusion quality measures are introduced. Fourth, we perform experiments on typical methods and analyze the results. Finally, we summarize the tendencies and challenges in IR and VI image fusion. This survey concludes that although various IR and VI image fusion methods have been proposed, there remain further improvements and potential research directions in different applications of IR and VI image fusion.
NASA Astrophysics Data System (ADS)
Pueyo, Laurent
2016-01-01
A new class of high-contrast image analysis algorithms that empirically fit and subtract systematic noise has led to recent discoveries of faint exoplanet/substellar companions and scattered-light images of circumstellar disks. The consensus emerging in the community is that these methods are extremely efficient at enhancing the detectability of faint astrophysical signals, but they generally create systematic biases in the observed properties. This poster provides a solution to this outstanding problem. We present an analytical derivation of a linear expansion that captures the impact of astrophysical over-/self-subtraction in current image analysis techniques. We examine the general case for which the reference images of the astrophysical scene move azimuthally and/or radially across the field of view as a result of the observation strategy. Our new method is based on perturbing the covariance matrix underlying any least-squares speckle-subtraction problem and propagating this perturbation through the data analysis algorithm. This work is presented in the framework of Karhunen-Loeve Image Processing (KLIP), but it can be easily generalized to methods relying on linear combinations of images (instead of eigen-modes). Based on this linear expansion, obtained in the most general case, we then demonstrate practical applications of this new algorithm. We first consider the spectral extraction of faint point sources in IFS data and illustrate, using public Gemini Planet Imager commissioning data, that our perturbation-based Forward Modeling (which we named KLIP-FM) can indeed alleviate algorithmic biases. We then apply KLIP-FM to the detection of point sources and show how it decreases the rate of false negatives while keeping the rate of false positives unchanged when compared to classical KLIP. This can have important consequences for the design of follow-up strategies of ongoing direct imaging surveys.
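For context, classical KLIP (the baseline this abstract improves upon) projects the science frame onto the leading Karhunen-Loeve modes of a reference stack and subtracts that projection as the speckle model. The sketch below is that baseline only, with synthetic data standing in for real frames; it does not include the paper's perturbation-based forward modeling.

```python
import numpy as np

def klip_subtract(science, references, k=6):
    """Classical KLIP: subtract the projection of the science frame onto
    the first k Karhunen-Loeve modes of the reference stack."""
    R = references.reshape(references.shape[0], -1)
    R = R - R.mean(axis=1, keepdims=True)
    # KL modes = principal components of the reference image set
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    Z = Vt[:k]                            # k orthonormal modes, flattened
    s = science.ravel() - science.mean()
    model = Z.T @ (Z @ s)                 # least-squares speckle estimate
    return (s - model).reshape(science.shape)

# synthetic data: speckles live in a low-dimensional subspace of modes
rng = np.random.default_rng(3)
speckle_modes = rng.normal(size=(5, 24 * 24))
refs = (rng.normal(size=(30, 5)) @ speckle_modes).reshape(30, 24, 24)
science = (rng.normal(size=5) @ speckle_modes).reshape(24, 24).copy()
science[12, 12] += 50.0                   # injected point source ("planet")
residual = klip_subtract(science, refs, k=6)
```

Because the injected source is (slightly) correlated with the KL modes, part of its flux is also subtracted; that self-subtraction bias on the recovered photometry is exactly what the abstract's KLIP-FM linear expansion is designed to model and remove.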
Quantum Theory of Superresolution for Incoherent Optical Imaging
NASA Astrophysics Data System (ADS)
Tsang, Mankei
Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Recent progress in generalizing our theory for multiple sources and spectroscopy will also be discussed. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar; Mohammadi, Mohammad
2017-05-01
A combination of Finite Difference Time Domain (FDTD) and Monte Carlo (MC) methods is proposed for simulation and analysis of ZnO microscintillators grown in a polycarbonate membrane. A planar 10 keV X-ray source irradiating the detector is simulated by the MC method, which provides the amount of absorbed X-ray energy in the assembly. The transport and propagation of the generated UV scintillation light in the detector were studied by the FDTD method. Detector responses to different probable scintillation sites and under X-ray source energies from 10 to 25 keV are reported. Finally, a tapered geometry for the scintillators is proposed, which shows enhanced spatial resolution in comparison to the cylindrical geometry for imaging applications.
Semi-automated Image Processing for Preclinical Bioluminescent Imaging.
Slavine, Nikolai V; McColl, Roderick W
Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy behind automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. To optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used an MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; having determined a first-order approximation for the photon fluence, we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. The data obtained from light phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach to bioluminescent image processing. We suggest that the developed approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
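The abstract's "novel iterative deconvolution method" is not specified, but the standard Richardson-Lucy iteration illustrates what iterative deconvolution does in this setting: repeatedly re-blur the current estimate, compare with the measured flux, and apply the correction. The 1D toy below (two point sources under a Gaussian blur) is purely illustrative and is not the authors' scheme.

```python
import numpy as np

def convolve_same(x, k):
    return np.convolve(x, k, mode="same")

def richardson_lucy(blurred, psf, iters=100):
    """Standard Richardson-Lucy iterative deconvolution; a stand-in for
    the paper's (unspecified) iterative scheme."""
    est = np.full_like(blurred, blurred.mean())   # flat positive start
    psf_flip = psf[::-1]
    for _ in range(iters):
        relative = blurred / np.maximum(convolve_same(est, psf), 1e-12)
        est = est * convolve_same(relative, psf_flip)
    return est

# toy 1D "light flux" profile: two sources blurred by a broad PSF
x = np.zeros(64)
x[20], x[40] = 1.0, 0.6
psf = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
psf /= psf.sum()
blurred = convolve_same(x, psf)
restored = richardson_lucy(blurred, psf, iters=100)
```

The multiplicative update keeps the estimate nonnegative, which matters for photon-flux data; after enough iterations the restored profile re-concentrates the blurred flux back toward the original source positions.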
NASA Astrophysics Data System (ADS)
Giongo Fernandes, Alexandre; Benjamin, Robert A.; Babler, Brian
2018-01-01
Two sets of infrared images of the Galactic Center region (|L| < 1 degree and |B| < 0.75 degrees) taken by the Spitzer Space Telescope in the IRAC 3.6 micron and 4.5 micron bands are searched for high proper motion objects (> 100 mas/year). The two image sets come from GALCEN observations in 2005 and GLIMPSE proper observations in 2015 with matched observation modes. We use three different methods to search for these objects in extremely crowded fields: (1) comparing matched point source lists, (2) crowd sourcing by several college introductory astronomy classes in the state of Wisconsin (700 volunteers), and (3) convolutional neural networks trained using objects from the previous two methods. Before our search, six high proper motion objects were known, four of which were found by the VVV near-infrared Galactic plane survey. We compare and describe our methods for this search, and present a preliminary catalog of high proper motion objects.
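Search method (1), comparing matched point-source lists across the 10-year baseline, can be sketched as below. The catalogs, jitter level, and coordinates are invented toy values; real IRAC catalogs in this crowded field would need magnitude matching and confusion handling that are omitted here.

```python
import numpy as np

def match_and_flag(cat_epoch1, cat_epoch2, baseline_yr=10.0, thresh_mas=100.0):
    """Pair each epoch-1 source with its nearest epoch-2 source (brute
    force) and flag pairs whose implied proper motion exceeds threshold."""
    d = np.hypot(cat_epoch1[:, None, 0] - cat_epoch2[None, :, 0],
                 cat_epoch1[:, None, 1] - cat_epoch2[None, :, 1])
    j = d.argmin(axis=1)                        # nearest-neighbour index
    sep_arcsec = d[np.arange(len(cat_epoch1)), j]
    pm_mas_yr = sep_arcsec * 1000.0 / baseline_yr
    return j, pm_mas_yr, pm_mas_yr > thresh_mas

# toy catalogs: positions in arcsec over a ~1-degree field
rng = np.random.default_rng(4)
cat05 = rng.uniform(0, 3600, size=(50, 2))
cat15 = cat05 + rng.normal(scale=0.05, size=cat05.shape)  # astrometric jitter
cat15[7] += np.array([1.2, 0.9])    # one star moves 1.5" in 10 yr = 150 mas/yr
j, pm, flag = match_and_flag(cat05, cat15)
```

The threshold of 100 mas/yr matches the survey criterion quoted in the abstract; in the toy data only the deliberately displaced star exceeds it.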
NASA Astrophysics Data System (ADS)
Ofner, Johannes; Eitenberger, Elisabeth; Friedbacher, Gernot; Brenner, Florian; Hutter, Herbert; Schauer, Gerhard; Kistler, Magdalena; Greilinger, Marion; Lohninger, Hans; Lendl, Bernhard; Kasper-Giebl, Anne
2017-04-01
The aerosol composition of a city like Vienna is characterized by a complex interaction of local emissions and atmospheric input on a regional and continental scale. The identification of major aerosol constituents for basic source apportionment and air quality issues requires a high analytical effort. Exceptional episodic air pollution events strongly change the typical aerosol composition of a city like Vienna on a time scale of a few hours to several days. Analyzing the chemistry of particulate matter from these events is often hampered by the sampling time and the related sample amount necessary to apply the full range of bulk analytical methods needed for chemical characterization. Additionally, morphological and single-particle features are hardly accessible. Chemical imaging has evolved into a powerful tool for image-based chemical analysis of complex samples. As a complementary technique to bulk analytical methods, chemical imaging offers a new way to study air pollution events by obtaining major aerosol constituents with single-particle features at high temporal resolution and with small sample volumes. The analysis of chemical imaging datasets is assisted by multivariate statistics, with the benefit of image-based chemical structure determination for direct aerosol source apportionment. A novel approach in chemical imaging is combined chemical imaging, or so-called multisensor hyperspectral imaging, involving elemental imaging (electron microscopy-based energy-dispersive X-ray imaging), vibrational imaging (Raman micro-spectroscopy) and mass spectrometric imaging (Time-of-Flight Secondary Ion Mass Spectrometry) with subsequent combined multivariate analytics.
Combined chemical imaging of precipitated aerosol particles will be demonstrated with the following examples of air pollution events in Vienna: exceptional episodic events like the transformation of Saharan dust by the impact of the city of Vienna will be discussed and compared with samples obtained at a high alpine background site (Sonnblick Observatory, Saharan dust event from April 2016). Further, chemical imaging of biological aerosol constituents of an autumnal pollen outbreak in Vienna, with background samples from nearby locations from November 2016, will demonstrate the advantages of the chemical imaging approach. Additionally, the chemical fingerprint of an exceptional air pollution event from a local emission source, caused by the demolition of a building in Vienna, will illustrate the need for multisensor imaging, especially the combined approach. The obtained chemical images will be correlated with bulk analytical results. The benefits of the overall methodical approach, combining bulk analytics and combined chemical imaging of exceptional episodic air pollution events, will be discussed.
Surface Imaging Skin Friction Instrument and Method
NASA Technical Reports Server (NTRS)
Brown, James L. (Inventor); Naughton, Jonathan W. (Inventor)
1999-01-01
A surface imaging skin friction instrument that allows 2D resolution of a spatial image by a 2D Hilbert transform and a 2D inverse thin-oil-film solver, providing an innovation over prior-art single-point approaches. An incoherent, monochromatic light source can be used. The invention provides accurate, easy-to-use, economical measurement of large regions of surface shear stress in a single test.
ERIC Educational Resources Information Center
Westbrook, R. Niccole; Watkins, Sean
2012-01-01
As primary source materials in the library are digitized and made available online, the focus of related library services is shifting to include new and innovative methods of digital delivery via social media, digital storytelling, and community-based and consortial image repositories. Most images on the Web are not of sufficient quality for most…
NASA Astrophysics Data System (ADS)
Wang, Xiao; Gao, Feng; Dong, Junyu; Qi, Qiang
2018-04-01
Synthetic aperture radar (SAR) imagery is independent of atmospheric conditions, making it an ideal image source for change detection. Existing methods directly analyze all the regions in the speckle-noise-contaminated difference image, so their performance is easily affected by small noisy regions. In this paper, we propose a novel saliency-guided change detection framework based on pattern and intensity distinctiveness analysis. The saliency analysis step removes small noisy regions and therefore makes the proposed method more robust to speckle noise. In the proposed method, the log-ratio operator is first utilized to obtain a difference image (DI). Then, a saliency detection method based on pattern and intensity distinctiveness analysis is utilized to obtain the changed-region candidates. Finally, principal component analysis and k-means clustering are employed to analyze pixels in the changed-region candidates. Thus, the final change map is obtained by classifying these pixels into the changed or unchanged class. Experimental results on two real SAR image datasets demonstrate the effectiveness of the proposed method.
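The backbone of this pipeline — a log-ratio difference image followed by two-class k-means — can be sketched on synthetic speckled data. This sketch deliberately omits the paper's saliency and PCA steps and uses invented gamma-distributed speckle statistics, so it illustrates the base pipeline the paper improves upon rather than the proposed method itself.

```python
import numpy as np

def kmeans2(x, iters=30):
    """Tiny two-class k-means on a 1-D feature (changed vs. unchanged)."""
    c = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        lab = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = x[lab == k].mean()
    return lab

rng = np.random.default_rng(5)
before = rng.gamma(4.0, 25.0, size=(64, 64))              # speckled intensity
after = before * rng.gamma(4.0, 0.25, size=before.shape)  # multiplicative speckle
after[20:30, 20:30] *= 8.0                                # genuinely changed block

# log-ratio difference image suppresses multiplicative speckle
di = np.abs(np.log(after + 1.0) - np.log(before + 1.0))
labels = kmeans2(di.ravel()).reshape(di.shape)
if di[labels == 0].mean() > di[labels == 1].mean():
    labels = 1 - labels                                   # class 1 = changed
changed_frac_inside = labels[20:30, 20:30].mean()
```

Even without saliency guidance, the log-ratio turns multiplicative speckle into roughly additive noise, so k-means recovers most of the changed block; the residual isolated false alarms are exactly what the paper's saliency step is meant to remove.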
Toward knowledge-enhanced viewing using encyclopedias and model-based segmentation
NASA Astrophysics Data System (ADS)
Kneser, Reinhard; Lehmann, Helko; Geller, Dieter; Qian, Yue-Chen; Weese, Jürgen
2009-02-01
To make accurate decisions based on imaging data, radiologists must associate the viewed imaging data with the corresponding anatomical structures. Furthermore, given a disease hypothesis, they must consider which possible image findings would verify the hypothesis and where and how those findings are expressed in the viewed images. If rare anatomical variants, rare pathologies, unfamiliar protocols, or ambiguous findings are present, external knowledge sources such as medical encyclopedias are consulted. These sources are accessed using keywords typically describing anatomical structures, image findings, or pathologies. In this paper we present our vision of how a patient's imaging data can be automatically enhanced with anatomical knowledge as well as knowledge about image findings. On the one hand, we propose the automatic annotation of the images with labels from a standard anatomical ontology. These labels are used as keywords for a medical encyclopedia such as STATdx to access anatomical descriptions and information about pathologies and image findings. On the other hand, we envision encyclopedias containing links to region- and finding-specific image processing algorithms; a finding is then evaluated on an image by applying the respective algorithm in the associated anatomical region. Toward realizing our vision, we present our method and results for the automatic annotation of anatomical structures in 3D MRI brain images. To this end, we develop a complex surface mesh model incorporating major structures of the brain and a model-based segmentation method. We demonstrate the validity of the approach by analyzing the results of several training and segmentation experiments with clinical data, focusing particularly on the visual pathway.
Motion correction for passive radiation imaging of small vessels in ship-to-ship inspections
NASA Astrophysics Data System (ADS)
Ziock, K. P.; Boehnen, C. B.; Ernst, J. M.; Fabris, L.; Hayward, J. P.; Karnowski, T. P.; Paquit, V. C.; Patlolla, D. R.; Trombino, D. G.
2016-01-01
Passive radiation detection remains one of the most acceptable means of ascertaining the presence of illicit nuclear materials. In maritime applications it is most effective against small to moderately sized vessels, where attenuation in the target vessel is of less concern. Unfortunately, imaging methods that can remove source confusion, localize a source, and avoid other systematic detection issues cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing system sensitivity. This is particularly true for the smaller watercraft, where passive inspections are most valuable. We have developed a combined gamma-ray, stereo visible-light imaging system that addresses this problem. Data from the stereo imager are used to track the relative location and orientation of the target vessel in the field of view of a coded-aperture gamma-ray imager. Using this information, short-exposure gamma-ray images are projected onto the target vessel using simple tomographic back-projection techniques, revealing the location of any sources within the target. The complex autonomous tracking and image reconstruction system runs in real time on a 48-core workstation that deploys with the system.
Motion correction for passive radiation imaging of small vessels in ship-to-ship inspections
Ziock, Klaus -Peter; Boehnen, Chris Bensing; Ernst, Joseph M.; ...
2015-09-05
Passive radiation detection remains one of the most acceptable means of ascertaining the presence of illicit nuclear materials. In maritime applications it is most effective against small to moderately sized vessels, where attenuation in the target vessel is of less concern. Unfortunately, imaging methods that can remove source confusion, localize a source, and avoid other systematic detection issues cannot be easily applied in ship-to-ship inspections because relative motion of the vessels blurs the results over many pixels, significantly reducing system sensitivity. This is particularly true for the smaller watercraft, where passive inspections are most valuable. We have developed a combined gamma-ray, stereo visible-light imaging system that addresses this problem. Data from the stereo imager are used to track the relative location and orientation of the target vessel in the field of view of a coded-aperture gamma-ray imager. Using this information, short-exposure gamma-ray images are projected onto the target vessel using simple tomographic back-projection techniques, revealing the location of any sources within the target. Here, the complex autonomous tracking and image reconstruction system runs in real time on a 48-core workstation that deploys with the system.
Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.
Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban
2015-07-20
In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.
NASA Astrophysics Data System (ADS)
Wapenaar, C. P. A.; Van der Neut, J.; Thorbecke, J.; Broggini, F.; Slob, E. C.; Snieder, R.
2015-12-01
Imagine one could place seismic sources and receivers at any desired position inside the earth. Since the receivers would record the full wave field (direct waves, up- and downward reflections, multiples, etc.), this would give a wealth of information about the local structures, material properties and processes in the earth's interior. Although in reality one cannot place sources and receivers anywhere inside the earth, it appears to be possible to create virtual sources and receivers at any desired position, which accurately mimic the desired situation. The underlying method involves some major steps beyond standard seismic interferometry. With seismic interferometry, virtual sources can be created at the positions of physical receivers, assuming these receivers are illuminated isotropically. Our proposed method does not need physical receivers at the positions of the virtual sources; moreover, it does not require isotropic illumination. To create virtual sources and receivers anywhere inside the earth, it suffices to record the reflection response with physical sources and receivers at the earth's surface. We do not need detailed information about the medium parameters; it suffices to have an estimate of the direct waves between the virtual-source positions and the acquisition surface. With these prerequisites, our method can create virtual sources and receivers, anywhere inside the earth, which record the full wave field. The up- and downward reflections, multiples, etc. in the virtual responses are extracted directly from the reflection response at the surface. The retrieved virtual responses form an ideal starting point for accurate seismic imaging, characterization and monitoring.
Microseismic imaging using Geometric-mean Reverse-Time Migration in Hydraulic Fracturing Monitoring
NASA Astrophysics Data System (ADS)
Yin, J.; Ng, R.; Nakata, N.
2017-12-01
Unconventional oil and gas exploration techniques such as hydraulic fracturing are associated with microseismic events related to the generation and development of fractures. For example, hydraulic fracturing, which is common in southern Oklahoma, produces earthquakes greater than magnitude 2.0. Finding accurate locations and mechanisms of these events provides important information on local stress conditions, fracture distribution, hazard assessment, and economic impact. Accurate source location is also important for separating fracking-induced from wastewater-disposal-induced seismicity. Here, we implement a wavefield-based imaging method called Geometric-mean Reverse-Time Migration (GmRTM), which takes advantage of accurate microseismic location based on wavefield back projection. We apply GmRTM to microseismic data collected during hydraulic fracturing for imaging microseismic source locations and, potentially, fractures. Assuming an accurate velocity model, GmRTM can improve the spatial resolution of source locations compared to HypoDD or P/S travel-time-based methods. We will discuss the results from GmRTM and HypoDD using this field dataset and synthetic data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borm, B.; Gärtner, F.; Khaghani, D.
2016-09-15
We demonstrate that stacking several imaging plates (IPs) constitutes an easy method to increase hard x-ray detection efficiency. Used to record x-ray radiographic images produced by an intense-laser-driven hard x-ray backlighter source, the IP stacks resulted in a significant improvement of the radiograph density resolution. We attribute this to the higher quantum efficiency of the combined detectors, leading to reduced photon noise. Electron-photon transport simulations of the interaction processes in the detector reproduce the observed contrast improvement. Increasing the detection efficiency to enhance radiographic imaging capabilities is equally effective as increasing the x-ray source yield, e.g., by a larger drive laser energy.
Digital camera auto white balance based on color temperature estimation clustering
NASA Astrophysics Data System (ADS)
Zhang, Lei; Liu, Peng; Liu, Yuling; Yu, Feihong
2010-11-01
Auto white balance (AWB) is an important technique for digital cameras. The human visual system can recognize the original color of an object in a scene illuminated by a light source whose color temperature differs from D65, the standard sunlight. However, recorded images or video clips can only capture the information incident on the sensor, so the recordings will appear different from the real scene observed by a human. Auto white balance is a technique to solve this problem. Traditional methods such as the gray world assumption and white point estimation may fail for scenes with large color patches. In this paper, an AWB method based on color temperature estimation clustering is presented and discussed. First, the method defines a list of several lighting conditions common in daily life, represented by their color temperatures, with thresholds for each color temperature that determine whether a light source is that kind of illumination. Second, the image to be white balanced is divided into N blocks (N is determined empirically); for each block, the gray world assumption is used to calculate the color cast, from which the color temperature of that block is estimated. Third, each calculated color temperature is compared with the color temperatures in the given illumination list; if the color temperature of a block is not within any of the thresholds in the list, that block is discarded. Fourth, a majority vote is taken over the remaining blocks, and the color temperature with the most blocks is taken as the color temperature of the light source. Experimental results show that the proposed method works well for most commonly used light sources: the color casts are removed and the final images look natural.
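The block-wise voting procedure described in this abstract can be sketched as follows. The illuminant list, R/G and B/G cast signatures, and tolerance below are illustrative values invented for the example, not calibrated camera data, and the gray-world cast estimate stands in for a proper color-temperature fit.

```python
import numpy as np

# name: expected (R/G, B/G) cast under gray world -- illustrative values only
ILLUMINANTS = {
    "incandescent": (1.30, 0.70),
    "daylight":     (1.00, 1.00),
    "shade":        (0.85, 1.20),
}
TOL = 0.12                 # acceptance threshold per channel ratio (assumed)

def block_votes(img, n=4):
    """Split into n*n blocks, estimate each block's cast by gray world,
    and vote for the nearest listed illuminant (or discard the block)."""
    h, w = img.shape[0] // n, img.shape[1] // n
    votes = []
    for by in range(n):
        for bx in range(n):
            blk = img[by * h:(by + 1) * h, bx * w:(bx + 1) * w]
            r, g, b = blk[..., 0].mean(), blk[..., 1].mean(), blk[..., 2].mean()
            cast = (r / g, b / g)
            for name, (er, eb) in ILLUMINANTS.items():
                if abs(cast[0] - er) < TOL and abs(cast[1] - eb) < TOL:
                    votes.append(name)   # block votes for this illuminant
                    break                # blocks matching nothing abstain
    return votes

# toy scene with a simulated incandescent (warm) cast applied globally
rng = np.random.default_rng(6)
scene = rng.uniform(0.2, 0.8, size=(64, 64, 3))
warm = scene * np.array([1.30, 1.00, 0.70])
votes = block_votes(warm)
winner = max(set(votes), key=votes.count)
```

The discard-then-vote structure is what protects the method from the large-color-patch failure mode mentioned above: a block dominated by a single colored object produces a cast matching no listed illuminant and simply abstains.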
Reverse radiance: a fast accurate method for determining luminance
NASA Astrophysics Data System (ADS)
Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay
2012-10-01
Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy and thus the benefit of the method. This paper introduces an improved method of reverse ray tracing that we call Reverse Radiance, which avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near- and far-field luminous data. Incorporating these data into a fast reverse ray tracing integration method yields fast, accurate results for a wide variety of illumination problems.
NASA Astrophysics Data System (ADS)
Zhang, Yanjun; Jiang, Li; Wang, Chunru
2015-07-01
A porous Sn@C nanocomposite was prepared via a facile hydrothermal method combined with a simple post-calcination process, using stannous octoate as the Sn source and glucose as the C source. The as-prepared Sn@C nanocomposite exhibited excellent electrochemical behavior with a high reversible capacity, long cycle life and good rate capability when used as an anode material for lithium ion batteries. Electronic supplementary information (ESI) available: Detailed experimental procedure and additional characterization, including a Raman spectrum, TGA curve, N2 adsorption-desorption isotherm, TEM images and SEM images. See DOI: 10.1039/c5nr03093e
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, in areas where field measurement is difficult and high-accuracy reference data are scarce, it is hard to obtain such a set of testing points, and hence hard to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in horizontal accuracy has become a bottleneck for the application of spaceborne high-resolution remote sensing imagery and for expanding its scope of service. Therefore, this paper proposes a new method for testing the horizontal accuracy of orthophoto images that uses testing points of differing accuracy and reliability, sourced both from high-accuracy reference data and from field measurement. The new method solves the horizontal accuracy detection problem for orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.
Method and apparatus for acoustic imaging of objects in water
Deason, Vance A.; Telschow, Kenneth L.
2005-01-25
A method, system, and underwater camera for acoustic imaging of objects in water or other liquids include an acoustic source that generates an acoustic wavefront, which reflects from a target object as a reflected wavefront. The reflected acoustic wavefront deforms the acoustic side of a screen and correspondingly deforms the opposing optical side of the screen. An optical processing system, optically coupled to the optical side of the screen, converts the deformations on the optical side into an optical intensity image of the target object.
Radiometric analysis of photographic data by the effective exposure method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Constantine, B J
1972-04-01
The effective exposure method provides for radiometric analysis of photographic data. A three-dimensional model, where density is a function of energy and wavelength, is postulated to represent the film response function. Calibration exposures serve to eliminate the other factors which affect image density. The effective exposure causing an image can be determined by comparing the image density with that of a calibration exposure. If the relative spectral distribution of the source is known, irradiance and/or radiance can be unfolded from the effective exposure expression.
Method and apparatus for imaging a sample on a device
Trulson, Mark; Stern, David; Fiekowsky, Peter; Rava, Richard; Walton, Ian; Fodor, Stephen P. A.
2001-01-01
A method and apparatus for imaging a sample are provided. An electromagnetic radiation source generates excitation radiation, which is shaped by excitation optics into a line. The line is directed at a sample resting on a support and excites a plurality of regions on the sample. Collection optics collect response radiation reflected from the sample and image the reflected radiation. A detector senses the reflected radiation and is positioned to permit discrimination between radiation reflected from a certain focal plane in the sample and from certain other planes within the sample.
An object-oriented simulator for 3D digital breast tomosynthesis imaging system.
Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa
2013-01-01
Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections; iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were developed later, and compressed sensing based methods have recently been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of applying different iterative and compressed sensing based reconstruction methods to 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. The reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
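As a rough illustration of the iterative reconstruction family the simulator implements (written here in Python rather than the simulator's C++), a minimal row-action ART (Kaczmarz) update might look like the following; the relaxation parameter and fixed sweep count are simplifying assumptions:

```python
import numpy as np

def art(A, b, n_iter=20, lam=1.0):
    """Kaczmarz-style ART: sweep over the rows of the system matrix A,
    projecting the current estimate onto each measurement hyperplane
    a_i . x = b_i, with relaxation parameter lam."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += lam * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

In a tomosynthesis setting each row of A would encode one ray through the voxel grid; ART+TV would interleave these sweeps with a total-variation denoising step.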
On an image reconstruction method for ECT
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image obtained by eddy current testing (ECT) is a blurred rendition of the original flaw shape. To reconstruct a finer flaw image, a new image reconstruction method has been proposed. The method is based on the assumption that a simple relationship holds between the measured data and the source: the data can be described as a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method based on deconvolution, in which the point spread function (PSF) and line spread function (LSF) play the key role. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. To verify its validity, ECT data for a SUS316 plate (200 x 200 x 10 mm) with an artificial machined hole and a notch flaw were acquired with differential coil type sensors (produced by ZETEC Inc.). These data were analyzed by the proposed method, which restored a sharp image of discrete multiple holes from data in which the hole responses interfered, and markedly improved the estimated width of the line flaw compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws is supported by the many cases in which a much finer image than the original was reconstructed.
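The convolution assumption turns the inverse problem into a deconvolution. The paper's exact deconvolution procedure is not reproduced here; a standard regularized (Wiener-style) frequency-domain deconvolution illustrates the idea of recovering a flaw image once the PSF is known:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution. nsr is an assumed
    noise-to-signal power ratio acting as a regularization constant
    that prevents division by near-zero PSF frequencies."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```

With a well-conditioned PSF and low noise, the recovered image closely matches the original source distribution.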
The Chandra Source Catalog: X-ray Aperture Photometry
NASA Astrophysics Data System (ADS)
Kashyap, Vinay; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
The Chandra Source Catalog (CSC) represents a reanalysis of all ACIS and HRC imaging observations over the 9-year Chandra mission. We describe here the method by which fluxes are measured for detected sources. Source detection is carried out on a uniform basis, using the CIAO tool wavdetect. Source fluxes are estimated post facto using a Bayesian method that accounts for background, spatial resolution effects, and contamination from nearby sources. We use gamma-function prior distributions, which can be either non-informative or, in cases where previous observations of the same source exist, strongly informative; the current implementation is, however, limited to non-informative priors. The resulting posterior probability density functions allow us to report the flux and a robust credible range on it.
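As a sketch of this kind of Bayesian aperture photometry (not the CSC implementation itself, which also models spatial resolution and source contamination), the posterior for source counts under a Poisson likelihood with a gamma prior can be evaluated numerically; the grid and the assumption of a known expected background are illustrative simplifications:

```python
import numpy as np

def flux_posterior(n_counts, bkg_expected, alpha=1.0, beta=0.0,
                   s_grid=None):
    """Posterior over source counts s, given total aperture counts
    n ~ Poisson(s + b) with known expected background b, and a
    gamma(alpha, beta) prior on s (alpha=1, beta=0 ~ non-informative).
    Evaluated on a grid; returns the grid and a normalized density."""
    if s_grid is None:
        s_grid = np.linspace(0, n_counts + 10 * np.sqrt(n_counts + 1), 2000)
    lam = s_grid + bkg_expected
    log_p = (n_counts * np.log(lam + 1e-300) - lam
             + (alpha - 1) * np.log(s_grid + 1e-300) - beta * s_grid)
    p = np.exp(log_p - log_p.max())
    p /= p.sum() * (s_grid[1] - s_grid[0])
    return s_grid, p

def credible_interval(s_grid, p, level=0.68):
    """Equal-tail credible interval from the numeric posterior."""
    cdf = np.cumsum(p) * (s_grid[1] - s_grid[0])
    lo = s_grid[np.searchsorted(cdf, (1 - level) / 2)]
    hi = s_grid[np.searchsorted(cdf, 1 - (1 - level) / 2)]
    return lo, hi
```

For 50 counts over an expected background of 10, the posterior peaks near 40 source counts, with a credible range reflecting Poisson uncertainty.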
Time-efficient high-resolution whole-brain three-dimensional macromolecular proton fraction mapping
Yarnykh, Vasily L.
2015-01-01
Purpose: Macromolecular proton fraction (MPF) mapping is a quantitative MRI method that reconstructs parametric maps of the relative amount of macromolecular protons causing the magnetization transfer (MT) effect and provides a biomarker of myelination in neural tissues. This study aimed to develop a high-resolution whole-brain MPF mapping technique utilizing the minimum possible number of source images to reduce scan time. Methods: The described technique is based on replacing an actually acquired reference image without MT saturation with a synthetic one reconstructed from R1 and proton density maps, thus requiring only three source images. This approach enabled whole-brain three-dimensional MPF mapping with an isotropic 1.25x1.25x1.25 mm3 voxel size and a scan time of 20 minutes. The synthetic reference method was validated against standard MPF mapping with acquired reference images based on data from 8 healthy subjects. Results: Mean MPF values in segmented white and gray matter appeared in close agreement, with no significant bias and small within-subject coefficients of variation (<2%). High-resolution MPF maps demonstrated sharp white-gray matter contrast and clear visualization of anatomical details, including gray matter structures with high iron content. Conclusions: The synthetic reference method improves the resolution of MPF mapping and combines accurate MPF measurements with unique neuroanatomical contrast features. PMID:26102097
Multi-modal molecular diffuse optical tomography system for small animal imaging
Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid
2013-01-01
A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally, the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977
Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M
2008-12-09
The accuracy of multiple window spatial registration characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source, with gamma energies close to those of 67Ga, and a single-bore lead collimator were used to measure the multiple window spatial registration error. The positions of the point source in the images were calculated with the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a collimated liquid 67Ga source. Of the source-collimator configurations under investigation, an optimum collimator geometry was selected, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use, and a Bland-Altman analysis showed that the 133Ba and 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
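The Bland-Altman analysis used to establish interchangeability of the two methods computes the bias (mean difference) and 95% limits of agreement of paired measurements; a minimal sketch:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics for two measurement methods applied to
    the same items: returns the bias (mean of paired differences) and
    the 95% limits of agreement (bias +/- 1.96 * SD of differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Two methods are considered interchangeable when the limits of agreement are narrower than the clinically acceptable difference.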
NASA Astrophysics Data System (ADS)
Rau, U.; Bhatnagar, S.; Owen, F. N.
2016-11-01
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to calculate half-count images from already acquired data, White and Lawson published a method based on Poisson resampling. They verified it experimentally with measurements of a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling but also two direct redrawing methods were investigated, based on a Poisson and a Gaussian distribution respectively. Mean, standard deviation, skewness, and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100; only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images: it correctly simulates the statistical properties, even when the images are rounded off.
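The three simulation strategies compared in the comment can be reproduced in a few lines (a NumPy sketch, not the original Matlab code). Binomial thinning of each pixel implements Poisson resampling and preserves Poisson statistics of the half-count image, whereas redrawing from a distribution with half the measured mean adds the variance of the measured counts themselves:

```python
import numpy as np

rng = np.random.default_rng(0)

def half_count(counts, method="resample"):
    """Simulate a half-count image from a full-count image.
    'resample' thins each pixel binomially with p = 0.5 (Poisson
    resampling); 'poisson' and 'gauss' redraw each pixel from a
    distribution whose mean is half the measured count."""
    counts = np.asarray(counts)
    if method == "resample":
        return rng.binomial(counts, 0.5)
    if method == "poisson":
        return rng.poisson(counts / 2.0)
    if method == "gauss":
        return np.clip(np.round(rng.normal(counts / 2.0,
                                           np.sqrt(counts / 2.0))), 0, None)
    raise ValueError(method)
```

Running this on a Poisson flood image shows the half/full variance ratio staying at 0.5 for resampling but inflating toward 0.75 for the redrawing methods, consistent with the comment's conclusion.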
Fast method of cross-talk effect reduction in biomedical imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Nowakowski, Maciej; Kolenderska, Sylwia M.; Borycki, Dawid; Wojtkowski, Maciej
2016-03-01
Optical imaging of biological samples or living tissue structures requires light delivery to a region of interest and then collection of scattered or fluorescent light in order to reconstruct an image of the object. When coherent illumination light enters a bulky biological object, each scattering center (a single molecule, a group of molecules, or another sample feature) acts as a secondary light source. As a result, scattered spherical waves from these secondary sources interact with each other, generating cross-talk noise between optical channels (eigenmodes). The cross-talk effect has a serious impact on the performance of imaging systems; in particular, it reduces the ability of an optical system to transfer high spatial frequencies, thereby reducing its resolution. In this work we present a fast method to eliminate the unwanted wave combinations that overlap at the image plane and suppress the recovery of high spatial frequencies, using spatio-temporal optical coherence manipulation (STOC, [1]). In this method, a number of phase masks are introduced into the illuminating beam by a spatial light modulator within the time of a single image acquisition. We use a digital micromirror device (DMD) for rapid cross-talk noise reduction (up to 22 kHz modulation frequency) when imaging living biological cells in vivo with a full-field microscopy setup in a double-pass arrangement. This, to the best of our knowledge, has never been shown before. [1] D. Borycki, M. Nowakowski, and M. Wojtkowski, Opt. Lett. 38, 4817 (2013).
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Daly, Don S.; Willse, Alan R.
The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize the analysis of sets of microarray images. The tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of the software allows researchers to understand the algorithms used to produce intensity estimates and to modify them easily if desired.
Phase retrieval by coherent modulation imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fucai; Chen, Bo; Morrison, Graeme R.
Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.
Phase retrieval by coherent modulation imaging
Zhang, Fucai; Chen, Bo; Morrison, Graeme R.; ...
2016-11-18
Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.
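The CMI algorithm itself relies on a known modulator and a three-plane propagation loop, which is not reproduced here; the classic error-reduction loop below is only an illustration of the alternating-projection idea that single-pattern phase retrieval builds on, and of the support constraint that CMI is able to relax:

```python
import numpy as np

def error_reduction(intensity, support, n_iter=200, seed=0):
    """Classic error-reduction phase retrieval (Gerchberg-Saxton style):
    alternate between imposing the measured Fourier modulus and a
    real-space support / non-negativity constraint, starting from a
    random non-negative guess inside the support."""
    rng = np.random.default_rng(seed)
    modulus = np.sqrt(intensity)
    obj = rng.random(intensity.shape) * support
    for _ in range(n_iter):
        F = np.fft.fft2(obj)
        F = modulus * np.exp(1j * np.angle(F))    # impose measured modulus
        obj = np.real(np.fft.ifft2(F)) * support  # impose support, realness
        obj = np.clip(obj, 0, None)               # impose non-negativity
    return obj
```

For general samples this loop can stagnate or converge to an ambiguous solution, which is precisely the failure mode the known modulation in CMI is designed to remove.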
Full-field optical coherence tomography image restoration based on Hilbert transformation
NASA Astrophysics Data System (ADS)
Na, Jihoon; Choi, Woo June; Choi, Eun Seo; Ryu, Seon Young; Lee, Byeong Ha
2007-02-01
We propose an envelope detection method based on the Hilbert transform for image restoration in full-field optical coherence tomography (FF-OCT). The FF-OCT system, with a high axial resolution of 0.9 μm, was implemented with a Köhler illuminator in a Linnik interferometer configuration. A 250 W customized quartz tungsten halogen lamp was used as a broadband light source and a CCD camera was used as a two-dimensional detector array. The proposed image restoration method for FF-OCT requires only a single phase shift. By using both the original and the phase-shifted images, we could remove the offset and the background signals from the interference fringe images; the desired coherent envelope image was then obtained by applying the Hilbert transform. With the proposed image restoration method, we demonstrate the en-face imaging performance of the implemented FF-OCT system on a tilted mirror surface, an integrated circuit chip, and a piece of onion epithelium.
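The envelope-detection step can be illustrated in one dimension: remove the offset, form the analytic signal via an FFT-based Hilbert transform, and take its magnitude. This sketch substitutes simple mean subtraction for the phase-shifted image pair the FF-OCT method actually uses to remove background:

```python
import numpy as np

def envelope(fringe):
    """Coherence-envelope extraction along the modulation axis:
    subtract the DC term, build the analytic signal with an FFT-based
    Hilbert transform (zero negative frequencies, double positive
    ones), and take its magnitude."""
    x = fringe - fringe.mean(axis=-1, keepdims=True)  # remove offset
    n = x.shape[-1]
    X = np.fft.fft(x, axis=-1)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h, axis=-1))
```

Applied to a carrier fringe under a Gaussian coherence envelope, the magnitude of the analytic signal recovers the envelope without demodulation artifacts.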
Source counting in MEG neuroimaging
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.
2009-02-01
Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, including inverse-problem, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and independent component analysis (ICA) methods. A key problem with the inverse-problem, MUSIC, and ICA methods is that the number of sources must be known a priori. Although the BF method scans the source space on a point-by-point basis, the selection of peaks as sources is ultimately made by subjective thresholding; in practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach to source number detection in MEG neuroimaging. By sorting the eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into signal and noise subspaces. The partition is implemented using information-theoretic criteria, and the order of the signal subspace gives an estimate of the number of sources. The approach does not rely on any model or hypothesis and is therefore an entirely data-led operation, with a clear physical interpretation and an efficient computation procedure. The theoretical derivation of the method and results obtained using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
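A standard information-theoretic criterion for this eigenvalue partition is the Wax-Kailath MDL estimator; the abstract does not specify which criterion the paper uses, so MDL is an illustrative choice here:

```python
import numpy as np

def mdl_source_count(X):
    """Wax-Kailath MDL estimate of the number of sources from sensor
    snapshots X (p channels x N samples). For each candidate signal
    subspace order k, compare the geometric and arithmetic means of the
    trailing (noise) eigenvalues of the sample covariance, plus a
    complexity penalty; the minimizing k is the source-count estimate."""
    p, N = X.shape
    lam = np.sort(np.linalg.eigvalsh(X @ X.conj().T / N))[::-1]
    lam = np.clip(lam, 1e-12, None)
    mdl = []
    for k in range(p):
        tail = lam[k:]
        gm = np.exp(np.log(tail).mean())  # geometric mean of noise eigs
        am = tail.mean()                  # arithmetic mean of noise eigs
        mdl.append(-N * (p - k) * np.log(gm / am)
                   + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(mdl))
```

When the trailing eigenvalues are nearly equal (pure noise), the geometric/arithmetic mean ratio approaches one and the penalty dominates, so the estimate settles at the true signal-subspace order.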
A NEW RESULT ON THE ORIGIN OF THE EXTRAGALACTIC GAMMA-RAY BACKGROUND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Ming; Wang Jiancheng, E-mail: mzhou@ynao.ac.cn
2013-06-01
In this paper, we repeatedly use the method of image stacking to study the origin of the extragalactic gamma-ray background (EGB) at GeV bands, and find that the Faint Images of the Radio Sky at Twenty centimeters (FIRST) sources undetected by the Large Area Telescope on the Fermi Gamma-ray Space Telescope can contribute about (56 {+-} 6)% of the EGB. Because FIRST is a flux-limited sample of radio sources with incompleteness at the faint limit, we consider that point sources, including blazars, non-blazar active galactic nuclei, and starburst galaxies, could produce a much larger fraction of the EGB.
Spectrally optimal illuminations for diabetic retinopathy detection in retinal imaging
NASA Astrophysics Data System (ADS)
Bartczak, Piotr; Fält, Pauli; Penttinen, Niko; Ylitepsa, Pasi; Laaksonen, Lauri; Lensu, Lasse; Hauta-Kasari, Markku; Uusitalo, Hannu
2017-04-01
Retinal photography is a standard method for recording retinal diseases for subsequent analysis and diagnosis. However, the currently used white light or red-free retinal imaging does not necessarily provide the best possible visibility of different types of retinal lesions, which matters when developing diagnostic tools for handheld devices such as smartphones. With specifically designed illumination, the visibility and contrast of retinal lesions could be improved. In this study, spectrally optimal illuminations for diabetic retinopathy lesion visualization are implemented using a spectrally tunable light source based on a digital micromirror device. The applicability of the method was tested in vivo by taking retinal monochrome images of the eyes of five diabetic volunteers and two non-diabetic control subjects. For comparison with existing methods, we evaluated the contrast of retinal images taken with our method and with red-free illumination. The preliminary results show that the use of optimal illuminations improved the contrast of diabetic lesions in retinal images by 30-70% compared with traditional red-free illumination imaging.
Antonica, Filippo; Asabella, Artor Niccoli; Ferrari, Cristina; Rubini, Domenico; Notaristefano, Antonio; Nicoletti, Adriano; Altini, Corinna; Merenda, Nunzio; Mossa, Emilio; Guarini, Attilio; Rubini, Giuseppe
2014-01-01
In the last decade numerous attempts have been made to co-register and integrate different imaging data. As with PET/CT, the integration of PET with MR has attracted great interest, and PET/MR scanners have recently been tested on various regional and systemic pathologies. Unfortunately, PET/MR scanners are expensive and the diagnostic protocols are still under study and investigation. Nuclear medicine imaging highlights functional and biometabolic information but has poor anatomic detail. The aim of this study is to integrate MR and PET data to produce regional or whole-body fused images acquired on different scanners, even on different days. We propose an offline method to fuse PET with MR data using open-source software that is inexpensive, reproducible, and capable of exchanging data over the network. We also evaluate the global quality, alignment quality, and diagnostic confidence of the fused PET-MR images. We selected PET/CT studies performed in our nuclear medicine unit and MR studies provided by patients on DICOM CD media or received over the network. We used the open-source version of OsiriX 5.7. We aligned the CT slices with the first MR slice, pointed and marked them for co-registration using the MR T1 sequence and CT as references, and fused the result with PET to produce a PET-MR image. A total of 100 PET/CT studies were fused with the following MR studies: 20 head, 15 thorax, 24 abdomen, 31 pelvis, and 10 whole body. An interval of no more than 15 days between PET and MR was the inclusion criterion. The PET/CT, MR, and fused studies were evaluated by two experienced radiologists and two experienced nuclear medicine physicians, each of whom completed a five-point evaluation scheme covering image quality, image artifacts, segmentation errors, fusion misalignment, and diagnostic confidence.
Our fusion method showed the best results for the head, thorax, and pelvic districts in terms of global quality, alignment quality, and diagnostic confidence, while for the abdomen and pelvis the alignment quality and global quality were poor owing to variation in internal organ filling and the time interval between examinations. PET/CT images with time-of-flight reconstruction and true attenuation correction were combined with anatomically detailed MRI images. We used OsiriX, an open-source image processing application dedicated to DICOM images. No additional costs to buy and upgrade proprietary software are required for combining the data, and no very expensive PET/MR scanner, with its dedicated shielded room space and personnel to be employed or trained, is needed. Our method allows patient PET/MR fused data to be shared among different medical staff over dedicated networks. The proposed method may be applied to any MR sequence (MR-DWI, MR-STIR, magnet-enhanced sequences) to characterize soft tissue alterations and improve disease discrimination, and it can be applied not only to PET with MR but to virtually any DICOM study.
Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.
Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael
2015-08-01
In this paper, we present and evaluate an automatic unsupervised segmentation method, a hierarchical segmentation approach (HSA) with Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It combines an HSA with BAMS to segment the tissues from multi-modal magnetic resonance (MR) head images. The proposed method was evaluated both directly, in terms of segmentation accuracy, and indirectly, in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, brain extraction tool (BET) followed by FMRIB's automated segmentation tool (FAST), and to four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal MR data with 3% noise and synthetic EEG (generated for a prescribed source). Source localization accuracy was quantified in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and the HSA variants. They also show that it yields better source localization accuracy than the commonly used reference method and suggest that it has potential as a surrogate for expert manual segmentation in the EEG source localization problem.
The Direct Lighting Computation in Global Illumination Methods
NASA Astrophysics Data System (ADS)
Wang, Changyaw Allen
1994-01-01
Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for predicting the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
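The direct lighting term described above can be estimated by sampling points on an emitting source. The sketch below is a minimal Monte Carlo estimator for a diffuse surface lit by a rectangular area light; the uniform area sampling, the Lambertian BRDF, and the omission of an occlusion test are illustrative assumptions, not the dissertation's method:

```python
import math
import random

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def norm(a):
    l = math.sqrt(dot(a, a))
    return (a[0]/l, a[1]/l, a[2]/l)

def estimate_direct(p, n, light_corner, edge_u, edge_v, light_n,
                    Le, albedo, samples=256):
    """Monte Carlo direct lighting at point p (normal n) from a rectangular
    area light: L ~ (albedo/pi) * Le * A * mean_i[cos_p * cos_l / r^2],
    using uniform area sampling and ignoring occlusion."""
    area = math.sqrt(dot(edge_u, edge_u)) * math.sqrt(dot(edge_v, edge_v))
    total = 0.0
    for _ in range(samples):
        s, t = random.random(), random.random()
        x = tuple(light_corner[i] + s*edge_u[i] + t*edge_v[i] for i in range(3))
        w = sub(x, p)                # shading point -> light sample
        r2 = dot(w, w)
        wn = norm(w)
        cos_p = dot(n, wn)           # cosine at the receiving surface
        cos_l = -dot(light_n, wn)    # cosine at the light surface
        if cos_p > 0.0 and cos_l > 0.0:
            total += cos_p * cos_l / r2
    return (albedo / math.pi) * Le * area * total / samples
```

Better sample generation strategies, such as those studied in the dissertation, reduce the variance of exactly this kind of estimator.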
[Application of Fourier transform profilometry in 3D-surface reconstruction].
Shi, Bi'er; Lu, Kuan; Wang, Yingting; Li, Zhen'an; Bai, Jing
2011-08-01
With the improvement of system frames and reconstruction methods in fluorescence molecular tomography (FMT), the FMT technology has been widely used as an important experimental tool in biomedical research. It is necessary to obtain the 3D surface profile of the experimental object as a boundary constraint for FMT reconstruction algorithms. We propose a new 3D surface reconstruction method based on the Fourier transform profilometry (FTP) method under blue-purple light. The slice images were reconstructed using appropriate image processing, frequency spectrum analysis, and filtering. Experimental results showed that the method properly reconstructs the 3D surface of objects with millimeter-level accuracy. Compared to other methods, this one is simple and fast. Besides reconstructing the surface well, the proposed method can help monitor the behavior of the object during the experiment to ensure the correspondence of the imaging process. Furthermore, the method uses a blue-purple light source to avoid interference with fluorescence imaging.
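The core of FTP is recovering the phase modulation of a projected fringe pattern from its Fourier spectrum. The sketch below is a minimal 1-D version assuming a known carrier frequency and a simple band-pass around the fundamental; the actual system's filtering and height-calibration details are not specified in the abstract:

```python
import numpy as np

def ftp_phase(line, f0):
    """Recover the phase of one fringe line via Fourier transform profilometry.
    line: 1-D intensity profile I(x) = a + b*cos(2*pi*f0*x + phi(x));
    f0: carrier frequency in cycles per sample (assumed known)."""
    N = len(line)
    F = np.fft.fft(line)
    freqs = np.fft.fftfreq(N)
    # Band-pass: keep only the positive fundamental lobe around the carrier f0
    mask = (freqs > 0.5 * f0) & (freqs < 1.5 * f0)
    analytic = np.fft.ifft(F * mask)
    x = np.arange(N)
    # Remove the carrier to leave the height-induced phase modulation
    wrapped = np.angle(analytic * np.exp(-2j * np.pi * f0 * x))
    return np.unwrap(wrapped)
```

The unwrapped phase is then mapped to height through the calibrated geometry of the projector and camera.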
Finding False Positives Planet Candidates Due To Background Eclipsing Binaries in K2
NASA Astrophysics Data System (ADS)
Mullally, Fergal; Thompson, Susan E.; Coughlin, Jeffrey; DAVE Team
2016-06-01
We adapt the difference image centroid approach, used for finding background eclipsing binaries, to vet K2 planet candidates. Difference image centroids were used with great success to vet planet candidates in the original Kepler mission, where the source of a transit could be identified by subtracting images of out-of-transit cadences from in-transit cadences. To account for K2's roll pattern, we reconstruct out-of-transit images from cadences that are nearby in both time and spacecraft roll angle. We describe the method and discuss some K2 planet candidates which this method suggests are false positives.
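The difference-image centroid idea can be sketched as follows: the mean in-transit image is subtracted from the mean out-of-transit image, and the centroid of the difference is compared with the out-of-transit centroid. This is an illustrative simplification; the pipeline's PSF fitting and roll-angle matching of cadences are not reproduced here:

```python
import numpy as np

def flux_weighted_centroid(img):
    """Flux-weighted (row, column) centroid of a clipped, non-negative image."""
    img = np.clip(img, 0, None)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

def difference_centroid_offset(in_transit, out_of_transit):
    """Offset (dy, dx) between the difference-image centroid and the
    out-of-transit centroid; a large offset suggests the transit signal
    comes from a background source rather than the target star."""
    out_mean = np.mean(out_of_transit, axis=0)
    in_mean = np.mean(in_transit, axis=0)
    diff = out_mean - in_mean   # positive where flux is lost during transit
    cy_o, cx_o = flux_weighted_centroid(out_mean)
    cy_d, cx_d = flux_weighted_centroid(diff)
    return cy_d - cy_o, cx_d - cx_o
```

In practice the offset would be compared against its uncertainty before flagging a candidate as a likely background eclipsing binary.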
Image-based tracking of the suturing needle during laparoscopic interventions
NASA Astrophysics Data System (ADS)
Speidel, S.; Kroehnert, A.; Bodenstedt, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.
2015-03-01
One of the most complex and difficult tasks for surgeons during minimally invasive interventions is suturing. A prerequisite to assist the suturing process is the tracking of the needle. The endoscopic images provide a rich source of information which can be used for needle tracking. In this paper, we present an image-based method for markerless needle tracking. The method uses a color-based and geometry-based segmentation to detect the needle. Once an initial needle detection is obtained, a region of interest enclosing the extracted needle contour is passed on to a reduced segmentation. It is evaluated with in vivo images from da Vinci interventions.
Constraints as a destriping tool for Hires images
NASA Technical Reports Server (NTRS)
Cao, YU; Prince, Thomas A.
1994-01-01
Images produced by the Maximum Correlation Method (MCM) sometimes suffer from visible striping artifacts, especially in areas of extended sources. Possible causes are different baseline levels and calibration errors in the detectors. We incorporated these factors into the MCM algorithm and tested the effects of different constraints on the output image. The result shows significant visual improvement over the standard MCM method. In some areas the new images show intelligible structures that are otherwise corrupted by striping artifacts, and the removal of these artifacts could enhance the performance of object classification algorithms. The constraints were also tested on low surface brightness areas and were found to be effective in reducing the noise level.
Transmission imaging for integrated PET-MR systems.
Bowen, Spencer L; Fuin, Niccolò; Levine, Michael A; Catana, Ciprian
2016-08-07
Attenuation correction for PET-MR systems continues to be a challenging problem, particularly for body regions outside the head. The simultaneous acquisition of transmission-scan-based μ-maps and MR images on integrated PET-MR systems may significantly increase the performance of, and offer validation for, new MR-based μ-map algorithms. For the Biograph mMR (Siemens Healthcare), however, use of conventional transmission schemes is not practical, as the patient table and the relatively small diameter of the scanner bore significantly restrict radioactive source motion and limit source placement. We propose a method for emission-free coincidence transmission imaging on the Biograph mMR. The intended application is not routine subject imaging, but rather improving and validating MR-based μ-map algorithms, particularly for attenuation correction of patient implants and scanner hardware. In this study we optimized the source geometry and assessed the method's performance with Monte Carlo simulations and phantom scans. We utilized a Bayesian reconstruction algorithm, which directly generates μ-map estimates from multiple bed positions, combined with a robust scatter correction method. For simulations with a pelvis phantom, a single torus produced peak noise equivalent count rates (34.8 kcps) dramatically larger than a full axial length ring (11.32 kcps) and conventional rotating source configurations. Bias in reconstructed μ-maps for head and pelvis simulations was ⩽4% for soft tissue and ⩽11% for bone ROIs. An implementation of the single torus source was filled with 18F-fluorodeoxyglucose, and the proposed method was evaluated for several test cases, alone or in comparison with CT-derived μ-maps. A volume average of 0.095 cm^-1 was recorded for an experimental uniform cylinder phantom scan, while a bias of <2% was measured for the cortical bone equivalent insert of the multi-compartment phantom.
Single-torus μ-maps of a hip implant phantom showed significantly fewer artifacts and improved dynamic range, and differed greatly from CT results for highly attenuating materials in the case of the patient table. Use of a fixed torus geometry, in combination with translation of the patient table to perform complete tomographic sampling, generated highly quantitative measured μ-maps and is expected to produce images with significantly higher SNR than competing fixed geometries at matched total acquisition time.
Lagrange constraint neural networks for massive pixel parallel image demixing
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Hsu, Charles C.
2002-03-01
We have shown that sub-pixel decomposition in remote sensing optical imaging is a unique application of blind source separation (BSS): the mixing is truly linear for faraway weak signals, instantaneous at the speed of light without delay, and along the line of sight without multipath. In earlier papers, we presented a direct application of a statistical-mechanics de-mixing method called the Lagrange Constraint Neural Network (LCNN). While the BSAO algorithm (using a posteriori MaxEnt ANN and neighborhood pixel averages) is not acceptable for remote sensing, the mirror-symmetric LCNN approach is suitable, assuming a priori MaxEnt for the unknown sources averaged over the source statistics (not neighborhood pixel data) in a pixel-by-pixel independent fashion. LCNN reduces the computational complexity, saves a great deal of memory, and cuts the cost of implementation. The Landsat system is designed to measure radiation to deduce surface conditions and materials. For any given material, the amount of emitted and reflected radiation varies with wavelength. In practice, a single pixel of a Landsat image has seven channels receiving 0.1 to 12 microns of radiation from the ground within a 20x20 meter footprint containing a variety of radiating materials. The a priori LCNN algorithm captures the spatial-temporal variation of the mixture, which is hardly de-mixable by other a posteriori BSS or ICA methods. We have already compared the two approaches on Landsat remote sensing data at WCCI 2002 in Hawaii. Unfortunately, an absolute benchmark is not possible for lack of ground truth, so we arbitrarily mix two incoherent sampled images as the ground truth. However, since the constant total probability of co-located sources within the pixel footprint is necessary as the remote sensing constraint (on a clear day the total reflected energy is constant across neighboring receiving pixel sensors), we also had to normalize the two images pixel by pixel. The result is then indeed as expected.
Campana, R.; Bernieri, E.; Massaro, E.; ...
2013-05-22
We present the minimal spanning tree (MST) algorithm, a graph-theoretical cluster-finding method. We previously applied it to γ-ray bidimensional images, showing that it is quite sensitive in finding faint sources. Possible sources are associated with the regions where the photon arrival directions clusterize. MST selects clusters starting from a particular "tree" connecting all the points of the image and performing a cut based on the angular distance between photons, keeping clusters with a number of events higher than a given threshold. In this paper, we show how a further filtering, based on some parameters linked to the cluster properties, can be applied to reduce spurious detections. We find that the most efficient parameter for this secondary selection is the magnitude M of a cluster, defined as the product of its number of events and its clustering degree. We test the sensitivity of the method by means of simulated and real Fermi Large Area Telescope (LAT) fields. Our results show that √M is strongly correlated with other statistical significance parameters, derived from a wavelet-based algorithm and maximum likelihood (ML) analysis, and that it can be used as a good estimator of the statistical significance of MST detections. Finally, we apply the method to a 2-year LAT image at energies higher than 3 GeV, and we show the presence of new clusters, likely associated with BL Lac objects.
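The MST cluster selection can be sketched with standard scipy tools: build the tree, cut long edges, keep large components, and score each by M = n·g. The clustering degree g used below (mean edge length of the full MST divided by the cluster's mean internal edge length) is a plausible reading of the definition in the abstract, not necessarily the paper's exact formula:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_clusters(points, cut_length, min_events):
    """Find clusters among photon positions: build the MST, remove edges
    longer than cut_length, keep components with >= min_events points,
    and score each with the magnitude M = n * g."""
    mst = minimum_spanning_tree(squareform(pdist(points)))
    mean_edge = mst.data.mean()                 # mean edge length of full MST
    pruned = mst.copy()
    pruned.data[pruned.data > cut_length] = 0.0
    pruned.eliminate_zeros()
    n_comp, labels = connected_components(pruned, directed=False)
    clusters = []
    for c in range(n_comp):
        idx = np.where(labels == c)[0]
        if len(idx) < min_events:
            continue
        internal = pruned[idx][:, idx]          # MST edges inside the cluster
        g = mean_edge / internal.data.mean()    # clustering degree (assumed form)
        clusters.append((idx, len(idx) * g))    # (member indices, magnitude M)
    return clusters
```

Spurious detections from chance alignments of background photons are then suppressed by thresholding on M, as described above.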
Automatic Fontanel Extraction from Newborns' CT Images Using Variational Level Set
NASA Astrophysics Data System (ADS)
Kazemi, Kamran; Ghadimi, Sona; Lyaghat, Alireza; Tarighati, Alla; Golshaeyan, Narjes; Abrishami-Moghaddam, Hamid; Grebe, Reinhard; Gondary-Jouet, Catherine; Wallois, Fabrice
A realistic head model is needed for the source localization methods used for the study of epilepsy in neonates applying electroencephalographic (EEG) measurements from the scalp. The earliest models consider the head as a series of concentric spheres, each layer corresponding to a different tissue whose conductivity is assumed to be homogeneous. The results of the source reconstruction depend highly on the electric conductivities of the tissues forming the head. The most used model consists of three layers (scalp, skull, and intracranial). Most of the major bones of the neonate's skull are ossified at birth but can slightly move relative to each other. This is due to the sutures, fibrous membranes that at this stage of development connect the already ossified flat bones of the neurocranium. These weak parts of the neurocranium are called fontanels. It is therefore important to include the exact geometry of the fontanels and flat bones in a source reconstruction, because they show pronounced differences in conductivity. Computed tomography (CT) imaging provides an excellent tool for non-invasive investigation of the skull, which appears in high contrast to all other tissues, while the fontanels can only be identified as an absence of bone, i.e., gaps in the skull between the flat bones. Therefore, the aim of this paper is to extract the fontanels from CT images applying a variational level set method. We applied the proposed method to CT images of five different subjects. The automatically extracted fontanels show good agreement with the manually extracted ones.
Mollet, Pieter; Keereman, Vincent; Bini, Jason; Izquierdo-Garcia, David; Fayad, Zahi A; Vandenberghe, Stefaan
2014-02-01
Quantitative PET imaging relies on accurate attenuation correction. Recently, there has been growing interest in combining state-of-the-art PET systems with MR imaging in a sequential or fully integrated setup. As CT becomes unavailable for these systems, an alternative approach to the CT-based reconstruction of attenuation coefficients (μ values) at 511 keV must be found. Deriving μ values directly from MR images is difficult because MR signals are related to the proton density and relaxation properties of tissue. Therefore, most research groups focus on segmentation or atlas registration techniques. Although studies have shown that these methods provide viable solutions in particular applications, some major drawbacks limit their use in whole-body PET/MR. Previously, we used an annulus-shaped PET transmission source inside the field of view of a PET scanner to measure attenuation coefficients at 511 keV. In this work, we describe the use of this method in studies of patients with the sequential time-of-flight (TOF) PET/MR scanner installed at the Icahn School of Medicine at Mount Sinai, New York, NY. Five human PET/MR and CT datasets were acquired. The transmission-based attenuation correction method was compared with conventional CT-based attenuation correction and the 3-segment, MR-based attenuation correction available on the TOF PET/MR imaging scanner. The transmission-based method overcame most problems related to the MR-based technique, such as truncation artifacts of the arms, segmentation artifacts in the lungs, and imaging of cortical bone. Additionally, the TOF capabilities of the PET detectors allowed the simultaneous acquisition of transmission and emission data. Compared with the MR-based approach, the transmission-based method provided average improvements in PET quantification of 6.4%, 2.4%, and 18.7% in volumes of interest inside the lung, soft tissue, and bone tissue, respectively. 
In conclusion, a transmission-based technique with an annulus-shaped transmission source will be more accurate than a conventional MR-based technique for measuring attenuation coefficients at 511 keV in future whole-body PET/MR studies.
Multi-focus image fusion algorithm using NSCT and MPCNN
NASA Astrophysics Data System (ADS)
Liu, Kang; Wang, Lianli
2018-04-01
Based on the nonsubsampled contourlet transform (NSCT) and a modified pulse coupled neural network (MPCNN), this paper proposes an effective method of image fusion. First, the source image is decomposed into low-frequency and high-frequency components using the NSCT, and the low-frequency components are processed by regional statistical fusion rules. For the high-frequency components, the spatial frequency (SF) is calculated and input into the MPCNN model to obtain the relevant coefficients according to the fire-mapping image of the MPCNN. Finally, the fused image is reconstructed by the inverse transformation of the low-frequency and high-frequency components. Compared with the wavelet transform (WT) and the traditional NSCT algorithm, experimental results indicate that the proposed method achieves an improvement in both human visual perception and objective evaluation, showing that the method is effective and practical and performs well.
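The spatial frequency measure used to drive the MPCNN has a standard closed form: the root-mean-square of horizontal and vertical first differences of an image block. A minimal version (the normalization convention over the difference arrays is a common variant, assumed here):

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency (SF) of an image block: SF = sqrt(RF^2 + CF^2),
    where RF and CF are the RMS of horizontal and vertical first differences.
    Higher SF indicates a more sharply focused, more active block."""
    block = block.astype(float)
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row (horizontal) activity
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column (vertical) activity
    return np.sqrt(rf ** 2 + cf ** 2)
```

In multi-focus fusion, the coefficient (or block) whose source has the larger SF is typically favored, since defocus blurring suppresses exactly these first differences.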
Multi-Focus Image Fusion Based on NSCT and NSST
NASA Astrophysics Data System (ADS)
Moonon, Altan-Ulzii; Hu, Jianwen
2015-12-01
In this paper, a multi-focus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) is proposed. The source images are first decomposed by the NSCT and NSST into low frequency coefficients and high frequency coefficients. Then, the average method is used to fuse the low frequency coefficients of the NSCT. To obtain a more accurate salience measurement, the high frequency coefficients of the NSST and NSCT are combined to measure salience. The high frequency coefficients of the NSCT with larger salience are selected as the fused high frequency coefficients. Finally, the fused image is reconstructed by the inverse NSCT. We adopt three metrics (Q_AB/F, Q_e, and Q_w) to evaluate the quality of fused images. The experimental results demonstrate that the proposed method outperforms the other methods, retaining highly detailed edges and contours.
Visual Method for Detecting Contaminant on Dried Nutmeg Using Fluorescence Imaging
NASA Astrophysics Data System (ADS)
Dahlan, S. A.; Ahmad, U.; Subrata, I. D. M.
2018-05-01
Traditional sun-drying of nutmeg allows some fungi, such as Aspergillus flavus, to grow. One of the secondary metabolites of A. flavus, aflatoxin (AFs), is known to be carcinogenic, so dried nutmeg kernels must be aflatoxin-free for trade. Aflatoxin detection is time-consuming and costly, making it difficult to conduct at the farmer level. This study aims to develop a simple and low-cost method to detect aflatoxin at the farmer level. Fresh nutmeg seeds were dried in two ways, sun-dried every day (continuous) and sun-dried every two days (intermittent), both for around 18 days. The dried nutmeg seeds were then stored in a rice sack under normal conditions until fungi grew, after which the sacks were opened and images of the kernels were captured using a CCD camera under normal-light and UV-light sources. Visual observation of images captured under the normal light source was able to detect the presence of fungi on dried kernels, by 28.0% for continuous and 26.2% for intermittent sun-drying. Visual observation of images captured under the UV light source was able to detect the presence of aflatoxin on dried kernels, indicated by blue luminescence on the kernel, by 10.4% and 13.4% for continuous and intermittent sun-drying, respectively.
Unruh, Kathryn E.; Sasson, Noah J.; Shafer, Robin L.; Whitten, Allison; Miller, Stephanie J.; Turner-Brown, Lauren; Bodfish, James W.
2016-01-01
Background: Our experiences with the world play a critical role in neural and behavioral development. Children with autism spectrum disorder (ASD) spend a disproportionate amount of time seeking out, attending to, and engaging with aspects of their environment that are largely nonsocial in nature. In this study we adapted an established method for eliciting and quantifying aspects of visual choice behavior related to preference to test the hypothesis that preference for nonsocial sources of stimulation diminishes orientation and attention to social sources of stimulation in children with ASD. Method: Preferential viewing tasks can serve as objective measures of preference, with a greater proportion of viewing time to one item indicative of increased preference. The current task used gaze-tracking technology to examine patterns of visual orientation and attention to stimulus pairs that varied in social (faces) and nonsocial content (high autism interest or low autism interest). Participants included adolescents diagnosed with ASD and typically developing adolescents; the groups were matched on IQ and gender. Results: Repeated measures ANOVA revealed that individuals with ASD had a significantly greater latency to first fixation on social images when the image was paired with a high autism interest image, compared to a low autism interest image pairing. Participants with ASD showed greater total look time to objects, while typically developing participants preferred to look at faces. The groups also differed in the number and average duration of fixations to social and object images. In the ASD group only, a measure of nonsocial interest was associated with reduced preference for social images when paired with high autism interest images. Conclusions: In ASD, the presence of nonsocial sources of stimulation can significantly increase the latency of looking to social sources of information.
These results suggest that atypicalities in social motivation in ASD may be context-dependent, with a greater degree of plasticity than is assumed by existing social motivation accounts of ASD. PMID:28066169
NASA Astrophysics Data System (ADS)
Chtcheprov, Pavel; Inscoe, Christina; Burk, Laurel; Ger, Rachel; Yuan, Hong; Lu, Jianping; Chang, Sha; Zhou, Otto
2014-03-01
Microbeam radiation therapy (MRT) uses an array of high-dose, narrow (~100 μm) beams separated by a fraction of a millimeter to treat various radio-resistant, deep-seated tumors. MRT has been shown to spare normal tissue at up to 1000 Gy of entrance dose while still being highly tumoricidal. Current methods of tumor localization for our MRT treatments require MRI and X-ray imaging, with subject motion and image registration contributing to the measurement error. The purpose of this study is to develop a novel form of imaging to quickly and accurately assist high-resolution target positioning for MRT treatments using X-ray fluorescence (XRF). The key to this method is using the microbeam both to treat and to image. A high-Z contrast medium is injected into the phantom or the blood pool of the subject prior to imaging. Using a collimated spectrum analyzer, the region of interest is scanned through the MRT beam and the fluorescence signal is recorded for each slice. The signal can be processed to show vascular differences in the tissue and isolate tumor regions. Because the radiation therapy source is also the imaging source, repositioning and registration errors are eliminated. A phantom study showed that a spatial resolution of a fraction of a microbeam width can be achieved by precision translation of the mouse stage. Preliminary results from an animal study showed accurate iodine perfusion, confirmed by CT. The proposed image guidance method, using XRF to locate and ablate tumors, can be used as a fast and accurate MRT treatment planning system.
NASA Astrophysics Data System (ADS)
Jany, B. R.; Janas, A.; Krok, F.
2017-11-01
The quantitative composition of metal alloy nanowires on an InSb(001) semiconductor surface and of gold nanostructures on a germanium surface is determined by a blind source separation (BSS) machine learning (ML) method using non-negative matrix factorization (NMF) from energy dispersive X-ray spectroscopy (EDX) spectrum image maps measured in a scanning electron microscope (SEM). The BSS method blindly decomposes the collected EDX spectrum image into three source components, which correspond directly to the X-ray signals coming from the supported metal nanostructures, the bulk semiconductor, and the carbon background. The recovered quantitative composition is validated by detailed Monte Carlo simulations and confirmed by separate cross-sectional TEM EDX measurements of the nanostructures. This shows that SEM EDX measurements, combined with machine learning blind source separation processing, can be successfully used to determine the quantitative chemical composition of nanostructures.
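The factorization step can be illustrated with scikit-learn's NMF on a toy "spectrum image": each pixel's spectrum is modeled as a non-negative mixture X ≈ WH of a few source spectra. All data below are synthetic stand-ins, not EDX measurements, and the component count is fixed at three only to mirror the abstract:

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy spectrum image: every pixel's spectrum is a non-negative mixture of
# three stand-in source spectra (synthetic, hypothetical data).
rng = np.random.default_rng(0)
n_pixels, n_channels, n_sources = 400, 64, 3
pure = rng.random((n_sources, n_channels))       # stand-in endmember spectra
weights = rng.random((n_pixels, n_sources))      # per-pixel abundances
X = weights @ pure + 0.01 * rng.random((n_pixels, n_channels))

model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
W = model.fit_transform(X)   # per-pixel source abundances (spatial maps)
H = model.components_        # recovered source spectra
```

Reshaping the columns of W back onto the scan grid gives spatial maps of each separated signal, which is how the nanostructure, substrate, and background contributions are visualized.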
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Michaelides, M.; Brown, L. D.; Quiros, D. A.
2016-12-01
Back-projection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. Back-projection is scalable to earthquakes with a wide range of magnitudes, from very tiny to very large. Local dense arrays provide the opportunity to capture very tiny events for a range of applications, such as tectonic microseismicity, source scaling studies, wastewater injection-induced seismicity, hydraulic fracturing, CO2 injection monitoring, volcano studies, and mining safety. While back-projection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed to overcome imaging issues. We compare the performance of back-projection using four previously used data pre-processing methods: full waveform, envelope, short-term averaging / long-term averaging (STA/LTA), and kurtosis. The goal is to identify an optimized strategy for an entirely automated imaging process that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and location, has the best spatial resolution of the energy imaged at the source, preserves magnitude information, and considers computational cost. Real-data issues include aliased station spacing, low signal-to-noise ratio (to <1), large noise bursts, and spatially varying waveform polarity. For evaluation, the four imaging methods were applied to the aftershock sequence of the 2011 Virginia earthquake as recorded by the AIDA array with 200-400 m station spacing. These data include earthquake magnitudes from -2 to 3 with highly variable signal-to-noise ratio, spatially aliased noise, and large noise bursts: realistic issues in many environments. Each of the four back-projection methods has advantages and disadvantages, and a combined multi-pass method achieves the best of all criteria. Preliminary imaging results from the 2011 Virginia dataset will be presented.
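Of the pre-processing methods compared, STA/LTA has a compact definition: the ratio of short-term to long-term moving averages of signal energy. A minimal sketch (the energy-based characteristic function and the window alignment are common choices, assumed here):

```python
import numpy as np

def sta_lta(trace, n_sta, n_lta):
    """Classic STA/LTA characteristic function: ratio of a short-term to a
    long-term moving average of signal energy. Values well above 1 flag
    impulsive arrivals; back-projecting this function instead of the raw
    waveform removes polarity and reduces sensitivity to waveform complexity."""
    energy = trace.astype(float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta   # short window averages
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta   # long window averages
    # Align both windows to end at the same sample and trim to common length
    m = min(len(sta), len(lta))
    sta, lta = sta[-m:], lta[-m:]
    return sta / np.maximum(lta, 1e-12)
```

Element i of the output corresponds to windows ending at sample i + n_lta - 1 of the input trace.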
Fish-Eye Observing with Phased Array Radio Telescopes
NASA Astrophysics Data System (ADS)
Wijnholds, S. J.
The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field of view that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data, and source confusion.
NASA Astrophysics Data System (ADS)
Panin, V. Y.; Aykac, M.; Casey, M. E.
2013-06-01
The simultaneous PET data reconstruction of emission activity and attenuation coefficient distribution is presented, where the attenuation image is constrained by exploiting an external transmission source. Data are acquired in time-of-flight (TOF) mode, allowing in principle for separation of emission and transmission data. Nevertheless, here all data are reconstructed at once, eliminating the need to trace the position of the transmission source in sinogram space. Contamination of emission data by the transmission source and vice versa is naturally modeled. Attenuated emission activity data also provide additional information about object attenuation coefficient values. The algorithm alternates between attenuation and emission activity image updates. We also proposed a method of estimation of spatial scatter distribution from the transmission source by incorporating knowledge about the expected range of attenuation map values. The reconstruction of experimental data from the Siemens mCT scanner suggests that simultaneous reconstruction improves attenuation map image quality, as compared to when data are separated. In the presented example, the attenuation map image noise was reduced and non-uniformity artifacts that occurred due to scatter estimation were suppressed. On the other hand, the use of transmission data stabilizes attenuation coefficient distribution reconstruction from TOF emission data alone. The example of improving emission images by refining a CT-based patient attenuation map is presented, revealing potential benefits of simultaneous CT and PET data reconstruction.
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelet and Contourlet transforms, are usually used for image fusion. This work presents a new image fusion framework utilizing area-based standard deviation in the dual tree Contourlet transform domain. First, the pre-registered source images are decomposed with the dual tree Contourlet transform, yielding low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
NASA Astrophysics Data System (ADS)
Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao
2018-01-01
In a low-light scene, capturing color images without a visible flash requires a high-gain or long-exposure setting. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared (NIR) flash image. In one novel method, the luminance and chroma components of the improved color image are estimated from different image sources [1]: the luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component is challenging: this method requires generating learning data pairs, and its processing and algorithms are complex, making practical application difficult. To reduce the complexity of luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image using coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion quality as the earlier method in terms of color fidelity and texture, while the algorithm is simpler and more practical.
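A minimal sketch of such a weighting follows, assuming one plausible form: contrast-based weights derived from the standard deviations, with the color image's mean used to preserve brightness. The paper's exact coefficient formula may differ.

```python
import numpy as np

def fuse_luminance(nir, y_denoised):
    """Hypothetical luminance fusion: weight the NIR image and the
    denoised color image's luminance channel by their standard
    deviations (favoring the higher-contrast source), then match the
    mean brightness of the color image."""
    s1, s2 = nir.std(), y_denoised.std()
    m2 = y_denoised.mean()
    w = s1 / (s1 + s2 + 1e-12)          # weight from relative contrast
    fused = w * nir + (1.0 - w) * y_denoised
    # shift so the fused luminance keeps the color image's brightness
    return fused - fused.mean() + m2
```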
Orientation of airborne laser scanning point clouds with multi-view, multi-scale image blocks.
Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik
2009-01-01
Comprehensive 3D modeling of our environment requires integration of terrestrial and airborne data, preferably collected using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method performs registration by minimizing the distances between an airborne laser point cloud and a 3D model; the 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation: having completed registration in object space, the relative orientation between the images and the laser point cloud is known. The second method uses an interactive orientation procedure between a multi-scale image block and a laser point cloud, where the multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of the relative orientation increased as more images were included in the block. The orientations from the two methods were compared. The comparison showed that rotations were the most difficult to determine accurately with the interactive method. Because the interactive method forces the laser scanning data to fit the images, inaccurate rotations cause corresponding shifts in image positions. However, in a test case in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatably to within a couple of centimeters.
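The distance-minimizing registration of the first method rests on rigid alignment. Below is a minimal sketch of one standard building block, the least-squares rotation and translation fit between corresponding point sets (the Kabsch algorithm); the article's full pipeline, including correspondence search against the 3D model, is omitted.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) with R @ p + t ~ q for
    corresponding rows of P and Q (Kabsch algorithm)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Example: recover a known rotation and shift from exact correspondences
rng = np.random.default_rng(1)
P = rng.standard_normal((30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -1.0, 2.0])
R, t = rigid_fit(P, Q)
print(np.allclose(P @ R.T + t, Q))  # True
```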
Research on simulated infrared image utility evaluation using deep representation
NASA Astrophysics Data System (ADS)
Zhang, Ruiheng; Mu, Chengpo; Yang, Yu; Xu, Lixin
2018-01-01
Infrared (IR) image simulation is an important data source for various target recognition systems. However, whether simulated IR images can be used as training data for classifiers depends on their fidelity and authenticity. To evaluate IR image features, a deep-representation-based algorithm is proposed. Unlike conventional methods, which usually rely on a priori knowledge or manually designed features, the proposed method can extract essential features and quantitatively evaluate the utility of simulated IR images. First, for data preparation, we employ our IR image simulation system to generate large amounts of IR images. Then, we present the evaluation model for simulated IR images, for which an end-to-end IR feature extraction and target detection model based on a deep convolutional neural network is designed. Finally, experiments illustrate that the proposed method outperforms other verification algorithms in evaluating simulated IR images. Cross-validation, variable-proportion mixed-data validation, and simulation process contrast experiments are carried out to evaluate the utility and objectivity of the images generated by our simulation system. The optimum mixing ratio between simulated and real data is 0.2 ≤ γ ≤ 0.3, which makes simulation an effective data augmentation method for real IR images.
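Reading γ as the ratio of simulated to real images, the augmentation step can be sketched as follows; the function name and sampling scheme are assumptions, not the authors' code.

```python
import random

def mix_training_set(real_images, sim_images, gamma=0.25, seed=0):
    """Augment real IR training data with simulated images at a ratio
    gamma = (#simulated) / (#real); the reported optimum is 0.2-0.3."""
    rng = random.Random(seed)
    n_sim = int(round(gamma * len(real_images)))
    picked = rng.sample(sim_images, min(n_sim, len(sim_images)))
    mixed = list(real_images) + picked
    rng.shuffle(mixed)                   # interleave real and simulated
    return mixed

# Example: 100 real images plus 25 simulated ones (gamma = 0.25)
real = list(range(100))
sim = list(range(1000, 1100))
mixed = mix_training_set(real, sim, gamma=0.25)
print(len(mixed))  # 125
```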
Joint image registration and fusion method with a gradient strength regularization
NASA Astrophysics Data System (ADS)
Lidong, Huang; Wei, Zhao; Jun, Wang
2015-05-01
Image registration is an essential step before image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion, rather than treating them as two independent processes in the conventional way. To improve the visual quality of the fused image, a gradient strength (GS) regularization is introduced into the ML cost function. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS yields a sharper fused image, while a smaller target GS makes the fused image smoother and thus suppresses noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are corrupted by noise. We obtain the fused image and the registration parameters successively by minimizing the cost function with an iterative optimization method. Experimental results show that our method is effective for translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and for noise variances below 300. The results also demonstrate that our method yields a more visually pleasing fused image and higher registration accuracy than a state-of-the-art algorithm.
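A hedged sketch of the GS regularization term: here gradient strength is taken as the mean gradient magnitude, and the regularizer as a quadratic pull toward the target value. Both choices are plausible readings of the abstract, not the paper's exact definitions.

```python
import numpy as np

def gradient_strength(img):
    """Mean gradient magnitude of an image (one plausible GS measure)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy).mean()

def gs_penalty(fused, gs_target, weight=1.0):
    """Quadratic regularization pulling the fused image's GS toward the
    chosen target value, as added to the ML cost function. A larger
    gs_target rewards sharper images; a smaller one rewards smoothness."""
    return weight * (gradient_strength(fused) - gs_target) ** 2
```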
Airborne Infrared and Visible Image Fusion Combined with Region Segmentation
Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao
2017-01-01
This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and scene spectral features of fused images, and the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method segments the IR image by saliency, identifying the target region and the background region, and then fuses the low-frequency components in the DTCWT domain according to the segmentation result. For the high-frequency components, region weights are assigned according to the information richness of region details, fusion is conducted based on both the weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fused image. Experimental results show that the proposed method fully extracts complementary information from the source images to obtain a fused image with good target indication and rich scene detail. It also gives fusion results superior to popular existing fusion methods under both subjective and objective evaluation. With good stability and high fusion accuracy, this method can meet the requirements of IR-visible image fusion systems. PMID:28505137
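The noise-suppressing shrinkage function applied to high-frequency coefficients is commonly the soft-shrinkage operator; a minimal sketch, assuming that form (the paper's exact shrinkage function may differ):

```python
import numpy as np

def soft_shrink(coeffs, threshold):
    """Soft shrinkage of high-frequency coefficients: magnitudes below
    the threshold (likely noise) are zeroed, and larger coefficients
    are shrunk toward zero by the threshold amount."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

# Example: small coefficients vanish, large ones are attenuated
print(soft_shrink(np.array([-3.0, -0.5, 0.2, 2.0]), 1.0))
# [-2.  0.  0.  1.]
```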
Imaging of human finger nail-fold with MHz A-scan rate swept source optical coherence tomography
NASA Astrophysics Data System (ADS)
Poddar, Raju; Mondal, Indranil
2018-07-01
We present non-invasive three-dimensional depth-resolved micro-structure and micro-vasculature imaging of a human fingernail fold with a swept-source optical coherence tomography (ssOCT) system at a 1064 nm center wavelength. A phase-variance OCT angiography (OCTA) method was implemented for motion-contrast OCT imaging. A Fourier-domain mode-locked light source with an A-scan rate of 1.7 MHz (1,700,000 A-scans per second) was utilized for imaging. The experimental setup demonstrates OCT and OCTA imaging over an area of ~5 mm × 5 mm (within the Nyquist limit). Details of the ssOCTA system, such as system parameters, scanning protocols, acquisition time, challenges, and scanning density, are discussed. Selected features of the nail-fold structure and vascular networks are also discussed. The system has potential for real-time monitoring of transdermal drug delivery and for the management and diagnosis of diseases such as connective tissue diseases and Raynaud's phenomenon.
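Phase-variance OCTA derives motion contrast from repeated complex-valued B-scans at the same location. A minimal sketch of the standard computation follows; the paper's exact processing chain, including bulk-motion correction, is omitted.

```python
import numpy as np

def phase_variance(bscans):
    """Motion contrast from N repeated complex B-scans (stacked along
    axis 0) at the same location: variance of the phase differences
    between successive repeats. Static tissue gives ~0; moving blood
    decorrelates the phase and gives a large variance."""
    # phase change between consecutive repeats, via the complex product
    # to avoid 2*pi wrapping issues
    dphi = np.angle(bscans[1:] * np.conj(bscans[:-1]))
    return dphi.var(axis=0)

# Example: one static and one "flow" pixel over 8 repeats
rng = np.random.default_rng(2)
static = np.exp(1j * 0.7) * np.ones(8)
flow = np.exp(1j * rng.uniform(-np.pi, np.pi, 8))
pv = phase_variance(np.stack([static, flow], axis=1))
print(pv[0] < 1e-12, pv[1] > pv[0])  # True True
```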