Lee, Dong-Hoon; Lee, Do-Wan; Henry, David; Park, Hae-Jin; Han, Bong-Soo; Woo, Dong-Cheol
2018-04-12
To evaluate the effects of signal intensity differences between the b0 image and diffusion tensor imaging (DTI) data in the image registration process. To correct signal intensity differences between the b0 image and DTI data, a simple image intensity compensation (SIMIC) method, which is a b0 image re-calculation process from DTI data, was applied before image registration. The re-calculated b0 image (b0_ext) from each diffusion direction was registered to the b0 image acquired through MR scanning (b0_nd) with two types of cost functions, and their transformation matrices were acquired. These transformation matrices were then used to register the DTI data. For quantification, the dice similarity coefficient (DSC) values, the diffusion scalar matrix, and the quantified fibre numbers and lengths were calculated. The combined SIMIC method with two cost functions showed the highest DSC value (0.802 ± 0.007). Regarding diffusion scalar values and the numbers and lengths of fibres from the corpus callosum, superior longitudinal fasciculus, and cortico-spinal tract, using only normalised cross correlation (NCC) showed a specific tendency toward lower values in these brain regions. Image-based distortion correction with SIMIC for DTI data would help in image analysis by accounting for signal intensity differences as one additional option for DTI analysis. • We evaluated the effects of signal intensity differences at DTI registration. • A non-diffusion-weighted image re-calculation process from DTI data was applied. • SIMIC can minimise the signal intensity differences at DTI registration.
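For reference, a minimal sketch (not from the paper) of the Dice similarity coefficient used above, computed between two binary masks; the array names and example masks are illustrative.

```python
import numpy as np

def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical example: overlap between a registered mask and the reference b0 mask.
ref = np.zeros((64, 64), dtype=bool); ref[16:48, 16:48] = True
reg = np.zeros((64, 64), dtype=bool); reg[18:50, 16:48] = True
print(f"DSC = {dice_similarity(ref, reg):.3f}")
```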
Pisano, E D; Cole, E B; Major, S; Zong, S; Hemminger, B M; Muller, K E; Johnston, R E; Walsh, R; Conant, E; Fajardo, L L; Feig, S A; Nishikawa, R M; Yaffe, M J; Williams, M B; Aylward, S R
2000-09-01
To determine the preferences of radiologists among eight different image processing algorithms applied to digital mammograms obtained for screening and diagnostic imaging tasks. Twenty-eight images representing histologically proved masses or calcifications were obtained by using three clinically available digital mammographic units. Images were processed and printed on film by using manual intensity windowing, histogram-based intensity windowing, mixture model intensity windowing, peripheral equalization, multiscale image contrast amplification (MUSICA), contrast-limited adaptive histogram equalization, Trex processing, and unsharp masking. Twelve radiologists compared the processed digital images with screen-film mammograms obtained in the same patient for breast cancer screening and breast lesion diagnosis. For the screening task, screen-film mammograms were preferred to all digital presentations, but the acceptability of images processed with Trex and MUSICA algorithms were not significantly different. All printed digital images were preferred to screen-film radiographs in the diagnosis of masses; mammograms processed with unsharp masking were significantly preferred. For the diagnosis of calcifications, no processed digital mammogram was preferred to screen-film mammograms. When digital mammograms were preferred to screen-film mammograms, radiologists selected different digital processing algorithms for each of three mammographic reading tasks and for different lesion types. Soft-copy display will eventually allow radiologists to select among these options more easily.
High dynamic range algorithm based on HSI color space
NASA Astrophysics Data System (ADS)
Zhang, Jiancheng; Liu, Xiaohua; Dong, Liquan; Zhao, Yuejin; Liu, Ming
2014-10-01
This paper presents a high dynamic range algorithm based on the HSI color space. The first problem is to keep the hue and saturation of the original image and conform to the human visual response; this is addressed by converting the input image to the HSI color space, which includes an intensity dimension. The second problem is to raise the speed of the algorithm; an integral image is used to compute the average intensity of every pixel within a given scale, which serves as the local intensity component of the image, and the detail intensity component is computed from it. The third problem is to adjust the overall image intensity; an S-shaped curve is derived from the original image information and used to adjust the local intensity component. The fourth problem is to enhance detail; the detail intensity component is adjusted according to a curve designed in advance. The final intensity is the weighted sum of the adjusted local and detail intensity components. Converting the synthesized intensity together with the other two dimensions back to the output color space gives the final processed image.
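As a rough illustration of the local/detail decomposition step described above (a sketch under stated assumptions, not the authors' implementation), the following computes a box-filtered local intensity with an integral image and the corresponding detail component for an intensity channel:

```python
import numpy as np

def box_mean_via_integral(intensity: np.ndarray, radius: int) -> np.ndarray:
    """Local mean of each pixel over a (2*radius+1)^2 window, using an integral image."""
    h, w = intensity.shape
    # Integral image padded with a leading row/column of zeros.
    ii = np.zeros((h + 1, w + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(intensity, axis=0), axis=1)

    y, x = np.mgrid[0:h, 0:w]
    y0 = np.clip(y - radius, 0, h); y1 = np.clip(y + radius + 1, 0, h)
    x0 = np.clip(x - radius, 0, w); x1 = np.clip(x + radius + 1, 0, w)
    area = (y1 - y0) * (x1 - x0)
    window_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return window_sum / area

# Hypothetical intensity channel in [0, 1]; local + detail decomposition.
I = np.random.rand(256, 256)
local = box_mean_via_integral(I, radius=8)   # low-frequency, locally averaged intensity
detail = I - local                           # detail component to be boosted by a tone curve
```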
Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng
2015-07-28
Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters may be used to scan different subjects, or the same subject at different times, which can result in large intensity variations. This intensity variation greatly undermines the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we proposed a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality image, as the input image, is stretched to match the histogram of the reference image, so that the intensity range in the normalized image also lies between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with an existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template with normalization preprocessing is of higher quality than the template with no normalization processing. We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans acquired on different MRI units. We have validated that the method can greatly improve image analysis performance. Furthermore, it is demonstrated that with the help of our normalization method, we can create a higher-quality Chinese brain template.
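A compact sketch of the core histogram-matching idea (quantile mapping of the low-quality image onto the reference), which only approximates the IS+HN pipeline described above; the function and variable names are illustrative:

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map source intensities so their empirical CDF matches that of the reference image."""
    src_vals, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source intensity, find the reference intensity at the same CDF level.
    mapped_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return np.interp(source, src_vals, mapped_vals)

# low_quality and high_quality would be the two acquisitions of the same subject:
# normalized = match_histogram(low_quality, high_quality)
```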
Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi
2014-02-01
We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
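A minimal sketch of the squared-difference angiogram with a brute-force shift search standing in for the bulk-tissue-motion correction described above (25 candidate shifts for max_shift=2); this is a CPU/NumPy illustration, not the authors' GPU code:

```python
import numpy as np

def angiogram_with_btm_correction(frame_a, frame_b, max_shift=2):
    """Squared-difference angiogram of two structural frames, choosing the
    axial/lateral pixel shift of frame_b that minimizes the summed difference
    (a crude stand-in for bulk-tissue-motion correction)."""
    best, best_score = None, np.inf
    for dz in range(-max_shift, max_shift + 1):          # axial pixel shifts
        for dx in range(-max_shift, max_shift + 1):      # lateral pixel shifts
            shifted = np.roll(frame_b, shift=(dz, dx), axis=(0, 1))
            diff2 = (frame_a - shifted) ** 2
            score = diff2.sum()
            if score < best_score:
                best_score, best = score, diff2
    return best

# frame_a, frame_b: two sequential structural OCT B-scans (2-D float arrays).
```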
NASA Astrophysics Data System (ADS)
Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.
2009-10-01
Due to inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading, or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes: the background and the cells, where the intensity variation within each class is close to zero if there is no shading. We therefore make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm that minimizes the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for comparison of protein expression values.
A low-cost vector processor boosting compute-intensive image processing operations
NASA Technical Reports Server (NTRS)
Adorf, Hans-Martin
1992-01-01
Low-cost vector processing (VP) is within reach of everyone seriously engaged in scientific computing. The advent of affordable add-on VP boards for standard workstations, complemented by mathematical/statistical libraries, is beginning to impact compute-intensive tasks such as image processing. A case in point is the restoration of distorted images from the Hubble Space Telescope. A low-cost implementation of the standard Tarasko-Richardson-Lucy restoration algorithm is presented on an Intel i860-based VP board which is seamlessly interfaced to a commercial, interactive image processing system. First experience is reported (including some benchmarks for standalone FFTs) and some conclusions are drawn.
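For orientation, a plain NumPy/SciPy sketch of the classical Richardson-Lucy iteration referenced above, with no claim to match the i860 implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Classical Richardson-Lucy iteration:
    estimate <- estimate * [ (blurred / (estimate * psf)) conv psf_flipped ]."""
    estimate = np.full_like(blurred, blurred.mean(), dtype=np.float64)
    psf_mirror = psf[::-1, ::-1]
    eps = 1e-12  # guard against division by zero
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# blurred: observed 2-D image; psf: normalized point-spread function of the optics.
```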
Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging
Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.
2014-01-01
Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321
Ciulla, Carlo; Veljanovski, Dimitar; Rechkoska Shikoska, Ustijana; Risteski, Filip A
2015-11-01
This research presents signal-image post-processing techniques called Intensity-Curvature Measurement Approaches with application to the diagnosis of human brain tumors detected through Magnetic Resonance Imaging (MRI). Post-processing of the MRI of the human brain encompasses the following model functions: (i) bivariate cubic polynomial, (ii) bivariate cubic Lagrange polynomial, (iii) monovariate sinc, and (iv) bivariate linear. The following Intensity-Curvature Measurement Approaches were used: (i) classic-curvature, (ii) signal resilient to interpolation, (iii) intensity-curvature measure and (iv) intensity-curvature functional. The results revealed that the classic-curvature, the signal resilient to interpolation and the intensity-curvature functional are able to add additional information useful to the diagnosis carried out with MRI. The contribution to the MRI diagnosis of our study are: (i) the enhanced gray level scale of the tumor mass and the well-behaved representation of the tumor provided through the signal resilient to interpolation, and (ii) the visually perceptible third dimension perpendicular to the image plane provided through the classic-curvature and the intensity-curvature functional.
Methods in Astronomical Image Processing
NASA Astrophysics Data System (ADS)
Jörsäter, S.
A Brief Introductory Note; History of Astronomical Imaging; Astronomical Image Data; Images in Various Formats; Digitized Image Data; Digital Image Data; Philosophy of Astronomical Image Processing; Properties of Digital Astronomical Images; Human Image Processing; Astronomical vs. Computer Science Image Processing; Basic Tools of Astronomical Image Processing; Display Applications; Calibration of Intensity Scales; Calibration of Length Scales; Image Re-shaping; Feature Enhancement; Noise Suppression; Noise and Error Analysis; Image Processing Packages: Design of AIPS and MIDAS; AIPS; MIDAS; Reduction of CCD Data; Bias Subtraction; Clipping; Preflash Subtraction; Dark Subtraction; Flat Fielding; Sky Subtraction; Extinction Correction; Deconvolution Methods; Rebinning/Combining; Summary and Prospects for the Future
Intensity dependent spread theory
NASA Technical Reports Server (NTRS)
Holben, Richard
1990-01-01
The Intensity Dependent Spread (IDS) procedure is an image-processing technique based on a model of the processing which occurs in the human visual system. IDS processing is relevant to many aspects of machine vision and image processing. For quantum limited images, it produces an ideal trade-off between spatial resolution and noise averaging, performs edge enhancement thus requiring only mean-crossing detection for the subsequent extraction of scene edges, and yields edge responses whose amplitudes are independent of scene illumination, depending only upon the ratio of the reflectance on the two sides of the edge. These properties suggest that the IDS process may provide significant bandwidth reduction while losing only minimal scene information when used as a preprocessor at or near the image plane.
Zheng, Bei; Ge, Xiao-peng; Yu, Zhi-yong; Yuan, Sheng-guang; Zhang, Wen-jing; Sun, Jing-fang
2012-08-01
Atomic force microscope (AFM) fluid imaging was applied to the study of the micro-flocculation filtration process and the optimization of the micro-flocculation time and the agitation intensity (G value). It can be concluded that AFM fluid imaging is a promising tool for the observation and characterization of floc morphology and dynamic coagulation processes under aqueous environmental conditions. Through the use of the AFM fluid imaging technique, optimized conditions of a micro-flocculation time of 2 min and an agitation intensity (G value) of 100 s⁻¹ were obtained in the treatment of dye-printing industrial tailing wastewater by the micro-flocculation filtration process, with good performance.
Bhatia, Tripta
2018-07-01
Accurate quantitative analysis of image data requires that we distinguish, to the extent possible, between fluorescence intensity (the true signal) and the noise inherent in its measurement. We image multilamellar membrane tubes and beads that grow from defects in the fluid lamellar phase of the lipid 1,2-dioleoyl-sn-glycero-3-phosphocholine dissolved in water and water-glycerol mixtures using a fluorescence confocal polarizing microscope. We quantify the image noise and determine the noise statistics. Understanding the nature of image noise also helps in optimizing image processing to detect sub-optical features, which would otherwise remain hidden. We use an image-processing technique, "optimum smoothening", to improve the signal-to-noise ratio of features of interest without smearing their structural details. A high SNR provides the positional accuracy needed to resolve features of interest whose width is below the optical resolution. Using optimum smoothening, the smallest and largest core diameters detected are of width [Formula: see text] and [Formula: see text] nm, respectively, as discussed in this paper. The image-processing and analysis techniques and the noise modeling discussed in this paper can be used for detailed morphological analysis of features down to sub-optical length scales obtained by any kind of fluorescence intensity imaging in raster mode.
Normalized Temperature Contrast Processing in Flash Infrared Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2016-01-01
The paper presents further development of the normalized contrast processing method for flash infrared thermography given by the author in US 8,577,120 B1. Methods of computing the normalized image (pixel intensity) contrast and the normalized temperature contrast are provided, including converting one from the other. Methods of assessing the emissivity of the object, the afterglow heat flux, the reflection temperature change, and temperature video imaging during flash thermography are provided. Temperature imaging and normalized temperature contrast imaging provide certain advantages over pixel intensity normalized contrast processing by reducing the effect of reflected energy in images and measurements, providing better quantitative data. The subject matter for this paper mostly comes from US 9,066,028 B1 by the author. Examples of normalized image processing video images and normalized temperature processing video images are provided. Examples of surface temperature video images, surface temperature rise video images, and simple contrast video images are also provided. Temperature video imaging in flash infrared thermography allows better comparison with flash thermography simulation using commercial software, which provides temperature video as the output. Temperature imaging also allows easy comparison of the surface temperature change to the camera temperature sensitivity, or noise equivalent temperature difference (NETD), to assess the probability of detection (POD) of anomalies.
Application of digital image processing techniques to astronomical imagery, 1979
NASA Technical Reports Server (NTRS)
Lorre, J. J.
1979-01-01
Several areas of applications of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation; a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue saturation intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.
Initial Results from Fitting Resolved Modes using HMI Intensity Observations
NASA Astrophysics Data System (ADS)
Korzennik, Sylvain G.
2017-08-01
The HMI project recently started processing the continuum intensity images following global helioseismology procedures similar to those used to process the velocity images. The spatial decomposition of these images has produced time series of spherical harmonic coefficients for degrees up to l=300, using a different apodization than the one used for velocity observations. The first 360 days of observations were processed and made available. I present initial results from fitting these time series using my state-of-the-art fitting methodology and compare the derived mode characteristics to those estimated using co-eval velocity observations.
Computational method for multi-modal microscopy based on transport of intensity equation
NASA Astrophysics Data System (ADS)
Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao
2017-02-01
In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system which yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously based on the transport of intensity equation (TIE). We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable-lens-based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
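A minimal FFT-based TIE phase solver under the common uniform-intensity approximation is sketched below; the sign convention and regularization are assumptions, and the multi-modal post-processing of the paper is not reproduced:

```python
import numpy as np

def tie_phase_uniform(I_plus, I_minus, I0, dz, wavelength, pixel_size, reg=1e-3):
    """Recover phase from two defocused intensities via the TIE,
    assuming a uniform in-focus intensity I0 (simplified inverse-Laplacian solve).
    Solves I0 * laplacian(phi) = -k * dI/dz in Fourier space."""
    k = 2 * np.pi / wavelength
    dIdz = (I_plus - I_minus) / (2 * dz)           # axial intensity derivative
    ny, nx = dIdz.shape
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fx = np.fft.fftfreq(nx, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    freq2 = FX**2 + FY**2
    # Inverse Laplacian with a small regularization to avoid the DC singularity.
    inv_lap = 1.0 / (4 * np.pi**2 * (freq2 + reg * freq2.max()))
    phase = np.real(np.fft.ifft2(np.fft.fft2((k / I0) * dIdz) * inv_lap))
    return phase
```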
Fast Pixel Buffer For Processing With Lookup Tables
NASA Technical Reports Server (NTRS)
Fisher, Timothy E.
1992-01-01
Proposed scheme for buffering data on intensities of picture elements (pixels) of an image increases the rate of processing beyond that attainable when data are read, one pixel at a time, from the main image memory. Scheme applied in design of specialized image-processing circuitry. Intended to optimize performance of a processor in which the electronic equivalent of an address-lookup table is used to address those pixels in the main image memory required for processing.
NASA Astrophysics Data System (ADS)
Zhang, Xiaoman; Yu, Biying; Weng, Cuncheng; Li, Hui
2014-11-01
A low-intensity He-Ne laser at 632 nm was used to irradiate skin wounds on 15 mice. The dynamic changes and wound healing processes were observed with nonlinear spectral imaging technology. We observed that: (1) the wound healing process was accelerated by low-level laser therapy (LLLT); (2) the new tissues produced second harmonic generation (SHG) signals. Collagen content and microstructure differed dramatically at different time points along the wound healing. Our observations show that low-intensity He-Ne laser irradiation can accelerate the healing process of skin wounds in mice, and that SHG imaging can be used to observe the wound healing process, which is useful for quantitative characterization of wound status during healing.
Some uses of wavelets for imaging dynamic processes in live cochlear structures
NASA Astrophysics Data System (ADS)
Boutet de Monvel, J.
2007-09-01
A variety of image and signal processing algorithms based on wavelet filtering tools have been developed during the last few decades, that are well adapted to the experimental variability typically encountered in live biological microscopy. A number of processing tools are reviewed, that use wavelets for adaptive image restoration and for motion or brightness variation analysis by optical flow computation. The usefulness of these tools for biological imaging is illustrated in the context of the restoration of images of the inner ear and the analysis of cochlear motion patterns in two and three dimensions. I also report on recent work that aims at capturing fluorescence intensity changes associated with vesicle dynamics at synaptic zones of sensory hair cells. This latest application requires one to separate the intensity variations associated with the physiological process under study from the variations caused by motion of the observed structures. A wavelet optical flow algorithm for doing this is presented, and its effectiveness is demonstrated on artificial and experimental image sequences.
NASA Astrophysics Data System (ADS)
Riantana, R.; Arie, B.; Adam, M.; Aditya, R.; Nuryani; Yahya, I.
2017-02-01
One important indicator for detecting breast cancer is a change in breast temperature: abnormalities in breast tissue are marked by a rise in the temperature of the breast. In night-vision mode, a handycam picks up external infrared light, which penetrates the skin better and makes the infrared image clearer. The program converts images from the camcorder into night-vision thermal images by decomposing RGB into a grayscale matrix structure. The matrix is rearranged into a new matrix of double data type so that it can be processed into a colored contour chart that differentiates the distribution of body temperature. The program also includes a contrast-scale setting for the processed image so that the colors can be adjusted as desired, as well as an inverse contrast-scale adjustment that reverses the color scale so that the colors are changed to their opposites. An improfile function is used to retrieve the intensity values of pixels along a chosen line and to show the intensity distribution as a graph of the relationship between intensity and pixel coordinates.
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, S.; Yan, F.; Dorn, D.
2012-06-01
Photoluminescence (PL) imaging techniques can be applied to multicrystalline silicon wafers throughout the manufacturing process. Both band-to-band PL and defect-band emissions, which are longer-wavelength emissions from sub-bandgap transitions, are used to characterize wafer quality and defect content on starting multicrystalline silicon wafers and neighboring wafers processed at each step through completion of finished cells. Both PL imaging techniques spatially highlight defect regions that represent dislocations and defect clusters. The relative intensities of these imaged defect regions change with processing. Band-to-band PL on wafers in the later steps of processing shows good correlation to cell quality and performance. The defect band images show regions that change relative intensity through processing, and better correlation to cell efficiency and reverse-bias breakdown is more evident at the starting wafer stage as opposed to later process steps. We show that thermal processing in the 200-400 degrees C range causes impurities to diffuse to different defect regions, changing their relative defect band emissions.
Detection of Heating Processes in Coronal Loops by Soft X-ray Spectroscopy
NASA Astrophysics Data System (ADS)
Kawate, Tomoko; Narukage, Noriyuki; Ishikawa, Shin-nosuke; Imada, Shinsuke
2017-08-01
Imaging and spectroscopic observations in the soft X-ray band will open a new window on the heating/acceleration/transport processes in the solar corona. The soft X-ray spectrum between 0.5 and 10 keV consists of the electron thermal free-free continuum and hot coronal lines such as O VIII, Fe XVII, Mg XI, and Si XVII. The intensity of the free-free continuum emission is not affected by the population of ions, whereas line intensities, especially from highly ionized species, are sensitive to the timescale of ionization/recombination processes. Thus, spectroscopic observations of both continuum and line intensities provide a diagnostic of heating/cooling timescales. We perform a 1D hydrodynamic simulation coupled with time-dependent ionization, and calculate continuum and line intensities under different heat input conditions in a coronal loop. We also examine the differential emission measure of the coronal loop from the time-integrated soft X-ray spectra. As a result, line intensity shows a departure from ionization equilibrium and shows different responses depending on the frequency of the heat input. A solar soft X-ray spectroscopic imager will be mounted in the sounding rocket experiment of the Focusing Optics X-ray Solar Imager (FOXSI). This observation will deepen our understanding of heating processes and help to solve the “coronal heating problem”.
Automatic localization of cochlear implant electrodes in CTs with a limited intensity range
NASA Astrophysics Data System (ADS)
Zhao, Yiyuan; Dawant, Benoit M.; Noble, Jack H.
2017-02-01
Cochlear implants (CIs) are neural prosthetics for treating severe-to-profound hearing loss. Our group has developed an image-guided cochlear implant programming (IGCIP) system that uses image analysis techniques to recommend patient-specific CI processor settings to improve hearing outcomes. One crucial step in IGCIP is the localization of CI electrodes in post-implantation CTs. Manual localization of electrodes requires time and expertise. To automate this process, our group has proposed automatic techniques that have been validated on CTs acquired with scanners that produce images with an extended range of intensity values. However, many clinical CTs are acquired with a limited intensity range. This limitation complicates the electrode localization process. In this work, we present a pre-processing step for CTs with a limited intensity range and extend the methods we proposed for full intensity range CTs to localize CI electrodes in CTs with a limited intensity range. We evaluate our method on CTs of 20 subjects implanted with CI arrays produced by different manufacturers. Our method achieves a mean localization error of 0.21 mm. This indicates our method is robust for automatic localization of CI electrodes in different types of CTs, which represents a crucial step for translating IGCIP from the research laboratory to clinical use.
Amplitude image processing by diffractive optics.
Cagigal, Manuel P; Valle, Pedro J; Canales, V F
2016-02-22
In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, like low-pass or high-pass filtering, is carried out using diffractive optical elements (DOE), since this allows operating on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we analyze the performance of amplitude image processing. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.
Phase retrieval using regularization method in intensity correlation imaging
NASA Astrophysics Data System (ADS)
Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin
2014-11-01
The intensity correlation imaging (ICI) method can obtain high-resolution images with ground-based, low-precision mirrors; in the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image. But the algorithms now used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, while the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build a mathematical model of phase retrieval and simplify it into a constrained optimization problem of a multi-dimensional function. A new error function was designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and obtain better images, especially under low-SNR conditions.
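For context, a sketch of the basic error-reduction (Gerchberg-Saxton-type) iteration that such regularized methods aim to improve upon; the support and positivity constraints shown are illustrative assumptions:

```python
import numpy as np

def error_reduction(fourier_modulus, support, n_iter=200, seed=0):
    """Basic error-reduction phase retrieval: alternately enforce the measured
    Fourier modulus and a known object support (with positivity)."""
    rng = np.random.default_rng(seed)
    obj = rng.random(fourier_modulus.shape) * support
    for _ in range(n_iter):
        F = np.fft.fft2(obj)
        F = fourier_modulus * np.exp(1j * np.angle(F))        # Fourier-modulus constraint
        obj = np.real(np.fft.ifft2(F))
        obj = np.where((support > 0) & (obj > 0), obj, 0.0)   # support + positivity constraint
    return obj

# fourier_modulus: measured |F| from the intensity correlations; support: binary object mask.
```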
Infrared spectroscopic imaging for noninvasive detection of latent fingerprints.
Crane, Nicole J; Bartick, Edward G; Perlman, Rebecca Schwartz; Huffman, Scott
2007-01-01
The capability of Fourier transform infrared (FTIR) spectroscopic imaging to provide detailed images of unprocessed latent fingerprints while also preserving important trace evidence is demonstrated. Unprocessed fingerprints were developed on various porous and nonporous substrates. Data-processing methods used to extract the latent fingerprint ridge pattern from the background material included basic infrared spectroscopic band intensities, addition and subtraction of band intensity measurements, principal components analysis (PCA) and calculation of second derivative band intensities, as well as combinations of these various techniques. Additionally, trace evidence within the fingerprints was recovered and identified.
Robb, Paul D; Craven, Alan J
2008-12-01
An image processing technique is presented for atomic resolution high-angle annular dark-field (HAADF) images that have been acquired using scanning transmission electron microscopy (STEM). This technique is termed column ratio mapping and involves the automated measurement of atomic column intensity ratios in high-resolution HAADF images. It was developed to provide a fuller analysis of HAADF images than the usual method of drawing single intensity line profiles across a few areas of interest. For instance, column ratio mapping reveals the compositional distribution across the whole HAADF image and allows a statistical analysis and an estimation of errors. This has proven to be a very valuable technique, as it can provide a more detailed assessment of the sharpness of interfacial structures from HAADF images. The technique of column ratio mapping is described in terms of a [110]-oriented zinc-blende structured AlAs/GaAs superlattice using the 1 Å-scale resolution capability of the aberration-corrected SuperSTEM 1 instrument.
NASA Astrophysics Data System (ADS)
LIN, JYH-WOEI
2012-08-01
Principal Component Analysis (PCA) and image processing are used to determine Total Electron Content (TEC) anomalies in the F-layer of the ionosphere relating to Typhoon Nakri for 29 May, 2008 (UTC). PCA and image processing are applied to the global ionospheric map (GIM) with transforms conducted for the time period 12:00-14:00 UT on 29 May, 2008 when the wind was most intense. Results show that at a height of approximately 150-200 km the TEC anomaly is highly localized; however, it becomes more intense and widespread with height. Potential causes of these results are discussed with emphasis given to acoustic gravity waves caused by wind force.
Metric Aspects of Digital Images and Digital Image Processing.
1984-09-01
produced in a reconstructed digital image. Synthesized aerial photographs were formed by processing a combined elevation and orthophoto data base. These... brightness values h11 and h12, along with the borderline that separates the two intensity regions, and b) a line equation whose two parameters are calculated...
NASA Astrophysics Data System (ADS)
Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao
2015-12-01
The remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. Firstly, the low resolution multispectral (MS) image is upsampled and color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high resolution panchromatic (PAN) image and intensity component of MS image are decomposed by NSST to high and low frequency coefficients. The low frequency coefficients of PAN and the intensity component are fused by the SR with the learned dictionary. The high frequency coefficients of intensity component and PAN image are fused by local energy based fusion rule. Finally, the fused result is obtained by performing inverse NSST and inverse IHS transform. The experimental results on IKONOS and QuickBird satellites demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods both in visual effect and object evaluation.
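As a simplified illustration of the color-space and intensity-component handling that the method above builds on, the sketch below shows classic fast-IHS component substitution only; the NSST decomposition, sparse-representation fusion, and local-energy rules of the paper are deliberately omitted:

```python
import numpy as np

def fast_ihs_pansharpen(ms_rgb, pan):
    """Classic fast IHS component substitution: inject the difference between the
    panchromatic band and the multispectral intensity into every band.
    ms_rgb: (H, W, 3) float array in [0, 1], already upsampled to the PAN grid.
    pan:    (H, W) float array in [0, 1]."""
    intensity = ms_rgb.mean(axis=2)                 # simple intensity component
    delta = (pan - intensity)[..., None]            # spatial detail to inject
    return np.clip(ms_rgb + delta, 0.0, 1.0)
```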
NASA Astrophysics Data System (ADS)
Peer, Regina; Peer, Siegfried; Sander, Heike; Marsolek, Ingo; Koller, Wolfgang; Pappert, Dirk; Hierholzer, Johannes
2002-05-01
If new technology is introduced into medical practice, it must prove to make a difference. However, traditional approaches to outcome analysis have failed to show a direct benefit of PACS on patient care, and economic benefits are still in debate. A participatory process analysis was performed to compare workflow in a film-based hospital and a PACS environment. This included direct observation of work processes, interviews of involved staff, structural analysis, and discussion of observations with staff members. After definition of common structures, strong and weak workflow steps were evaluated. With a common workflow structure in both hospitals, benefits of PACS were revealed in workflow steps related to image reporting, with simultaneous image access for ICU physicians and radiologists, archiving of images, and image and report distribution. However, PACS alone is not able to cover the complete process of 'radiography for intensive care' from the ordering of an image to the provision of the final product (image plus report). Interference of the electronic workflow with analogue process steps, such as paper-based ordering, reduces the potential benefits of PACS. In this regard, workflow modeling proved to be very helpful for the evaluation of complex work processes linking radiology and the ICU.
Color enhancement and image defogging in HSI based on Retinex model
NASA Astrophysics Data System (ADS)
Gao, Han; Wei, Ping; Ke, Jun
2015-08-01
Retinex is a luminance perception algorithm based on color constancy. It has a good performance in color enhancement. But in some cases, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and will cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. In contrast to other Retinex algorithms, we implement the Retinex algorithms in the HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve the quality of the image. Moreover, the algorithms presented in this paper have a good performance in image defogging. In contrast to traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround image filter to obtain the light information, which should be removed from the intensity channel. After that, we subtract the light information from the intensity channel to obtain the reflection image, which only includes the attributes of the objects in the image. Using the reflection image and a parameter α, which is an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach works well compared with existing methods for color enhancement. Besides better performance on the color deviation problem and image defogging, a visible improvement in image quality for human contrast perception is also observed.
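A minimal sketch of single-scale Retinex applied to an intensity channel, with a scale factor standing in for the paper's α; the exact formulation and parameter roles in the paper may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr_intensity(intensity, sigma=80.0, alpha=1.0, eps=1e-6):
    """Single-scale Retinex on an HSI intensity channel:
    reflectance = alpha * (log(I) - log(Gaussian_sigma * I)),
    rescaled to [0, 1] for display."""
    illumination = gaussian_filter(intensity, sigma)            # center-surround estimate
    reflectance = np.log(intensity + eps) - np.log(illumination + eps)
    reflectance *= alpha
    lo, hi = reflectance.min(), reflectance.max()
    return (reflectance - lo) / (hi - lo + eps)

# intensity: (H, W) float array in [0, 1] taken from the HSI decomposition of the input image.
```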
Semi-automatic mapping for identifying complex geobodies in seismic images
NASA Astrophysics Data System (ADS)
Domínguez-C, Raymundo; Romero-Salcedo, Manuel; Velasquillo-Martínez, Luis G.; Shemeretov, Leonid
2017-03-01
Seismic images are composed of positive and negative seismic wave traces with different amplitudes (Robein 2010 Seismic Imaging: A Review of the Techniques, their Principles, Merits and Limitations (Houten: EAGE)). The association of these amplitudes together with a color palette forms complex visual patterns. The color intensity of such patterns is directly related to impedance contrasts: the higher the contrast, the higher the color intensity. Generally speaking, low impedance contrasts are depicted with low tone colors, creating zones with different patterns whose features are not evident for a 3D automated mapping option available on commercial software. In this work, a workflow for a semi-automatic mapping of seismic images focused on those areas with low-intensity colored zones that may be associated with geobodies of petroleum interest is proposed. The CIE L*A*B* color space was used to perform the seismic image processing, which helped find small but significant differences between pixel tones. This process generated binary masks that bound color regions to low-intensity colors. The three-dimensional-mask projection allowed the construction of 3D structures for such zones (geobodies). The proposed method was applied to a set of digital images from a seismic cube and tested on four representative study cases. The obtained results are encouraging because interesting geobodies are obtained with a minimum of information.
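A hedged sketch of the masking step: convert RGB to CIE L*a*b* with scikit-image and threshold the lightness channel to flag low-intensity zones; the threshold value is illustrative, and the 3D mask projection used to build geobodies is not shown:

```python
import numpy as np
from skimage import color

def low_intensity_mask(rgb_image, l_max=35.0):
    """Binary mask of pixels whose CIE L*a*b* lightness falls below l_max.
    rgb_image: (H, W, 3) float array in [0, 1] from a seismic section rendering."""
    lab = color.rgb2lab(rgb_image)
    return lab[..., 0] < l_max      # True where the amplitude rendering is dark (low tone)
```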
How Digital Image Processing Became Really Easy
NASA Astrophysics Data System (ADS)
Cannon, Michael
1988-02-01
In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic, and the rapid increase of commercial companies to market digital image processing software and hardware.
Data analysis for GOPEX image frames
NASA Technical Reports Server (NTRS)
Levine, B. M.; Shaik, K. S.; Yan, T.-Y.
1993-01-01
The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. Laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and better agreement would have been found, given more received laser pulses. Future experiments should consider methods to reliably measure low-intensity pulses, and through experimental planning to geometrically locate pulse positions with greater certainty.
Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU
NASA Astrophysics Data System (ADS)
Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee
2013-02-01
3D microscopy images contain an enormous amount of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To solve these problems, many people crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level, e.g., the selected ROI strongly depends on the user and there is a loss of original image information. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides efficient and varied automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images. Users can select the algorithm to be applied. Further, the image processing tool provides visualization of segmented volume data and can set the scale, translation, etc. using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to obtain information for biologists. To analyze 3D microscopic images, we need quantitative data from the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object. This information can be used as classification features. A user can select the object to be analyzed. Our tool allows the selected object to be displayed in a new window, so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specification and configuration.
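A CPU stand-in (SciPy/scikit-image) for the thresholding, labeling, and per-object measurement steps described above; the GPU implementation and interactive features are not reproduced:

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_and_measure(volume):
    """Otsu-threshold a 3-D microscopy stack, label connected components,
    and return per-object voxel counts and mean intensities."""
    mask = volume > threshold_otsu(volume)
    labels, n_objects = ndimage.label(mask)
    index = range(1, n_objects + 1)
    sizes = np.asarray(ndimage.sum(mask, labels, index=index))     # voxels per object
    means = np.asarray(ndimage.mean(volume, labels, index=index))  # mean intensity per object
    return labels, sizes, means

# volume: (Z, Y, X) float array, e.g. a confocal or wide-field z-stack.
```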
Developing image processing meta-algorithms with data mining of multiple metrics.
Leung, Kelvin; Cunha, Alexandre; Toga, A W; Parker, D Stott
2014-01-01
People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation.
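A toy sketch of the meta-algorithm idea: score several candidate image processing results with a battery of metrics and select the candidate with the best aggregate rank; the metrics and rank aggregation here are illustrative choices, not the paper's:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images (higher is better)."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + 1e-12))

def neg_mse(a, b):
    """Negative mean squared error (higher is better, to match NCC's sense)."""
    return float(-np.mean((a - b) ** 2))

def select_best(reference, candidates, metrics=(ncc, neg_mse)):
    """Rank each candidate under every metric and return the index of the
    candidate with the highest average rank (a simple mining/aggregation rule)."""
    scores = np.array([[m(reference, c) for m in metrics] for c in candidates])
    ranks = scores.argsort(axis=0).argsort(axis=0)      # higher score -> higher rank
    return int(ranks.mean(axis=1).argmax())

# candidates could be, e.g., registration results from different parameter settings.
```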
Applied learning-based color tone mapping for face recognition in video surveillance system
NASA Astrophysics Data System (ADS)
Yew, Chuu Tian; Suandi, Shahrel Azmin
2012-04-01
In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and to remap the color or intensity of the input image so that its color or intensity statistics match those in the training dataset. It is well known that differences in commercial surveillance camera models and the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using Multi-Class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
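A minimal grayscale version of the statistics-remapping idea, matching an input image's mean and standard deviation to statistics learned from a training set; the actual learning-based mapping in the paper is more elaborate than this sketch:

```python
import numpy as np

def remap_to_training_stats(image, train_mean, train_std, eps=1e-6):
    """Remap intensities so the image's mean and standard deviation match
    statistics estimated from the training dataset."""
    z = (image - image.mean()) / (image.std() + eps)
    return np.clip(z * train_std + train_mean, 0.0, 255.0)  # assumes an 8-bit display range

# train_mean, train_std would be estimated from the photorealistic training images
# of the candidates, e.g. train_mean = training_stack.mean(), train_std = training_stack.std().
```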
Martynova, O; Portnova, G; Orlov, I
2016-01-01
According to psychological research, among a variety of pleasant and unpleasant stimuli, erotic images are evaluated in the context of positive emotions as the most intense and the most associated with emotional arousal. However, it is difficult to separate areas of the brain that are related to general emotional processing from the activity of brain areas involved in neuronal representations of the reward system. The purpose of this study was to determine differences in brain activity, using functional magnetic resonance imaging (fMRI), in male subjects evaluating the intensity of pleasant images, including erotic ones, or of unpleasant and neutral pictures. When comparing the condition with evaluation of the pleasant erotic images to conditions containing neutral or unpleasant stimuli, significant activation was observed in the posterior cingulate cortex, the prefrontal cortex, and the right globus pallidus. Increased activity of the right anterior central gyrus was observed in the conditions related to evaluation of pleasant and neutral stimuli. Thus, in the process of evaluating the intensity of emotional images of an erotic nature, the active brain areas were related not only to neuronal representations of emotions, but also to motivation and the control system of emotional arousal, which should be taken into account when using erotic pictures as intense positive emotional stimuli.
Imaging of gaseous oxygen through DFB laser illumination
NASA Astrophysics Data System (ADS)
Cocola, L.; Fedel, M.; Tondello, G.; Poletto, L.
2016-05-01
A Tunable Diode Laser Absorption Spectroscopy setup with Wavelength Modulation has been used together with a synchronous sampling imaging sensor to obtain two-dimensional transmission-mode images of oxygen content. Modulated laser light from a 760nm DFB source has been used to illuminate a scene from the back while image frames were acquired with a high dynamic range camera. Thanks to synchronous timing between the imaging device and laser light modulation, the traditional lock-in approach used in Wavelength Modulation Spectroscopy was replaced by image processing techniques, and many scanning periods were averaged together to allow resolution of small intensity variation over the already weak absorption signals from oxygen absorption band. After proper binning and filtering, the time-domain waveform obtained from each pixel in a set of frames representing the wavelength scan was used as the single detector signal in a traditional TDLAS-WMS setup, and so processed through a software defined digital lock-in demodulation and a second harmonic signal fitting routine. In this way the WMS artifacts of a gas absorption feature were obtained from each pixel together with intensity normalization parameter, allowing a reconstruction of oxygen distribution in a two-dimensional scene regardless from broadband transmitted intensity. As a first demonstration of the effectiveness of this setup, oxygen absorption images of similar containers filled with either oxygen or nitrogen were acquired and processed.
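A per-pixel software lock-in demodulation at the second harmonic (2f) of the modulation frequency might be sketched as follows; the averaging used as a low-pass filter and the normalization are simplifying assumptions, not the authors' exact processing chain:

```python
import numpy as np

def demodulate_2f(waveform, t, f_mod):
    """Second-harmonic (2f) lock-in demodulation of a single pixel's waveform:
    multiply by quadrature references at 2*f_mod and low-pass by averaging."""
    ref_i = np.cos(2 * np.pi * 2 * f_mod * t)
    ref_q = np.sin(2 * np.pi * 2 * f_mod * t)
    x = np.mean(waveform * ref_i)          # in-phase component
    y = np.mean(waveform * ref_q)          # quadrature component
    return np.hypot(x, y)                  # 2f magnitude, proportional to the WMS signal

# Applied pixel-wise to the averaged frame stack, this yields a 2f image that,
# after normalization by the transmitted intensity, maps the oxygen absorption.
```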
Bayesian Analysis of HMI Images and Comparison to TSI Variations and MWO Image Observables
NASA Astrophysics Data System (ADS)
Parker, D. G.; Ulrich, R. K.; Beck, J.; Tran, T. V.
2015-12-01
We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high resolution images capture four observables (magnetic field, continuum intensity, line depth, and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from June 2010 to December 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations, and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Ulrich, R. K., Parker, D., Bertello, L., and Boyden, J. 2010, Solar Phys., 261, 11.
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H
2011-04-01
A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
Hsu, Shu-Hui; Cao, Yue; Lawrence, Theodore S.; Tsien, Christina; Feng, Mary; Grodzki, David M.; Balter, James M.
2015-01-01
Accurate separation of air and bone is critical for creating synthetic CT from MRI to support Radiation Oncology workflow. This study compares two different ultrashort echo-time sequences in the separation of air from bone, and evaluates post-processing methods that correct intensity nonuniformity of images and account for intensity gradients at tissue boundaries to improve this discriminatory power. CT and MRI scans were acquired on 12 patients under an institution review board-approved prospective protocol. The two MRI sequences tested were ultra-short TE imaging using 3D radial acquisition (UTE), and using pointwise encoding time reduction with radial acquisition (PETRA). Gradient nonlinearity correction was applied to both MR image volumes after acquisition. MRI intensity nonuniformity was corrected by vendor-provided normalization methods, and then further corrected using the N4itk algorithm. To overcome the intensity-gradient at air-tissue boundaries, spatial dilations, from 0 to 4 mm, were applied to threshold-defined air regions from MR images. Receiver operating characteristic (ROC) analyses, by comparing predicted (defined by MR images) versus “true” regions of air and bone (defined by CT images), were performed with and without residual bias field correction and local spatial expansion. The post-processing corrections increased the areas under the ROC curves (AUC) from 0.944 ± 0.012 to 0.976 ± 0.003 for UTE images, and from 0.850 ± 0.022 to 0.887 ± 0.012 for PETRA images, compared to without corrections. When expanding the threshold-defined air volumes, as expected, sensitivity of air identification decreased with an increase in specificity of bone discrimination, but in a non-linear fashion. A 1-mm air mask expansion yielded AUC increases of 1% and 4% for UTE and PETRA images, respectively. UTE images had significantly greater discriminatory power in separating air from bone than PETRA images. Post-processing strategies improved the discriminatory power of air from bone for both UTE and PETRA images, and reduced the difference between the two imaging sequences. Both postprocessed UTE and PETRA images demonstrated sufficient power to discriminate air from bone to support synthetic CT generation from MRI data. PMID:25776205
2012 MULTIPHOTON PROCESSES GRC, JUNE 3-8, 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Barry
2012-03-08
The sessions will focus on: Attosecond science; Strong-field processes in molecules and solids; Generation of harmonics and attosecond pulses; Free-electron laser experiments and theory; Ultrafast imaging; Applications of very high intensity lasers; Propagation of intense laser fields.
Automated inspection of hot steel slabs
Martin, R.J.
1985-12-24
The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes. 5 figs.
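A minimal sketch of the cross-validation idea in this patent, assuming a single grayscale frame, simple Sobel edges and a global intensity threshold (both thresholds are illustrative placeholders, not values from the disclosure): only pixels flagged by both processes survive in the output segmentation.

```python
import numpy as np
from scipy import ndimage

def validated_defect_map(image, edge_thresh=0.2, intensity_thresh=0.6):
    """Run edge detection and intensity thresholding on the same frame and
    keep only pixels flagged by both processes (mutual validation)."""
    img = image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)      # normalise to [0, 1]
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    edge_map = grad > edge_thresh * grad.max()           # edge-based candidates
    intensity_map = img > intensity_thresh               # bright-spot candidates
    return edge_map & intensity_map                      # segmentation confirmed by both
```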
Automated inspection of hot steel slabs
Martin, Ronald J.
1985-01-01
The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes.
Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.
Kärkkäinen, Salme; Lantuéjoul, Christian
2007-10-01
We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further evaluated on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.
Developing Image Processing Meta-Algorithms with Data Mining of Multiple Metrics
Cunha, Alexandre; Toga, A. W.; Parker, D. Stott
2014-01-01
People often use multiple metrics in image processing, but here we take a novel approach of mining the values of batteries of metrics on image processing results. We present a case for extending image processing methods to incorporate automated mining of multiple image metric values. Here by a metric we mean any image similarity or distance measure, and in this paper we consider intensity-based and statistical image measures and focus on registration as an image processing problem. We show how it is possible to develop meta-algorithms that evaluate different image processing results with a number of different metrics and mine the results in an automated fashion so as to select the best results. We show that the mining of multiple metrics offers a variety of potential benefits for many image processing problems, including improved robustness and validation. PMID:24653748
NASA Astrophysics Data System (ADS)
Fischer, Peter; Schuegraf, Philipp; Merkle, Nina; Storch, Tobias
2018-04-01
This paper presents a hybrid evolutionary algorithm for fast intensity-based matching between satellite imagery from SAR and very high-resolution (VHR) optical sensor systems. The precise and accurate co-registration of image time series and images of different sensors is a key task in multi-sensor image processing scenarios. The necessary preprocessing step of image matching and tie-point detection is divided into a search problem and a similarity measurement. Within this paper we evaluate the use of an evolutionary search strategy for establishing the spatial correspondence between satellite imagery of optical and radar sensors. The aim of the proposed algorithm is to decrease the computational costs during the search process by formulating the search as an optimization problem. Based upon the canonical evolutionary algorithm, the proposed algorithm is adapted for SAR/optical imagery intensity-based matching. Extensions are drawn using techniques like hybridization (e.g. local search) and others to lower the number of objective function calls and refine the result. The algorithm significantly decreases the computational costs whilst finding the optimal solution in a reliable way.
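The search-versus-similarity split described above can be sketched with a toy evolutionary optimizer. The code below is a sketch under assumed inputs (two co-registered numpy arrays and a purely translational search space): it evolves an integer offset that maximises normalised cross-correlation. Population size, mutation scale and generation count are illustrative, and the hybrid local-search extension mentioned in the abstract is omitted.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized patches."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def evolve_offset(reference, template, pop=20, gens=40, sigma=8, seed=0):
    """Toy (mu + lambda) evolutionary search for the translation maximising NCC."""
    rng = np.random.default_rng(seed)
    h, w = template.shape
    H, W = reference.shape

    def fitness(dy, dx):
        if 0 <= dy <= H - h and 0 <= dx <= W - w:
            return ncc(reference[dy:dy + h, dx:dx + w], template)
        return -1.0                                   # penalise out-of-bounds offsets

    parents = rng.integers([0, 0], [H - h + 1, W - w + 1], size=(pop, 2))
    for _ in range(gens):
        children = parents + rng.normal(0, sigma, parents.shape).astype(int)
        population = np.vstack([parents, children])
        scores = np.array([fitness(dy, dx) for dy, dx in population])
        parents = population[np.argsort(scores)[-pop:]]   # keep the best individuals
    best = parents[-1]
    return (int(best[0]), int(best[1])), fitness(*best)
```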
Localization of the transverse processes in ultrasound for spinal curvature measurement
NASA Astrophysics Data System (ADS)
Kamali, Shahrokh; Ungi, Tamas; Lasso, Andras; Yan, Christina; Lougheed, Matthew; Fichtinger, Gabor
2017-03-01
PURPOSE: In scoliosis monitoring, tracked ultrasound has been explored as a safer imaging alternative to traditional radiography. The use of ultrasound in spinal curvature measurement requires identification of vertebral landmarks such as transverse processes, but as bones have reduced visibility in ultrasound imaging, skeletal landmarks are typically segmented manually, which is an exceedingly laborious and long process. We propose an automatic algorithm to segment and localize the surface of bony areas in the transverse process for scoliosis in ultrasound. METHODS: The algorithm uses a cascade of filters to remove low intensity pixels, smooth the image and detect bony edges. By applying first differentiation, candidate bony areas are classified. The average intensity under each area has a correlation with the possibility of a shadow, and areas with strong shadow are kept for bone segmentation. The segmented images are used to reconstruct a 3-D volume to represent the whole spinal structure around the transverse processes. RESULTS: A comparison between the manual ground truth segmentation and the automatic algorithm in 50 images showed 0.17 mm average difference. The time to process all 1,938 images was about 37 s (0.0191 s per image), including reading the original sequence file. CONCLUSION: Initial experiments showed the algorithm to be sufficiently accurate and fast for segmenting transverse processes in ultrasound for spinal curvature measurement. An extensive evaluation of the method is currently underway on images from a larger patient cohort and using multiple observers in producing ground truth segmentation.
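A rough sketch of the filter cascade described in the METHODS paragraph, assuming a single B-mode frame with depth along the rows; the thresholds and the shadow test are illustrative stand-ins for the paper's tuned filters.

```python
import numpy as np
from scipy import ndimage

def bone_surface_candidates(us_image, low_cut=0.2, smooth_sigma=2.0,
                            edge_frac=0.5, shadow_frac=0.3):
    """Suppress low-intensity pixels, smooth, take the strongest downward
    intensity drop per column as a bone edge, and keep columns whose region
    below the edge is dark (acoustic shadow)."""
    img = us_image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)
    img[img < low_cut] = 0.0                             # remove low-intensity pixels
    img = ndimage.gaussian_filter(img, smooth_sigma)     # smooth speckle
    d_depth = np.diff(img, axis=0)                       # first derivative along depth
    edge_rows = np.argmin(d_depth, axis=0)               # strongest drop per column
    mask = np.zeros_like(img, dtype=bool)
    for col, row in enumerate(edge_rows):
        drop = -d_depth[row, col]
        shadow = img[row + 1:, col].mean() if row + 1 < img.shape[0] else 1.0
        if drop > edge_frac * (-d_depth).max() and shadow < shadow_frac:
            mask[row, col] = True                        # candidate bone surface pixel
    return mask
```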
Dendritic tree extraction from noisy maximum intensity projection images in C. elegans.
Greenblum, Ayala; Sznitman, Raphael; Fua, Pascal; Arratia, Paulo E; Oren, Meital; Podbilewicz, Benjamin; Sznitman, Josué
2014-06-12
Maximum Intensity Projections (MIP) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. Extracting dendritic trees from noisy images remains, however, a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentations of dendritic trees following a statistical learning framework. Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated from the imaging process and from the aggregation of information in the MIP images. These noise models are then used within a probabilistic, or Bayesian, framework to provide a coarse 2D dendritic tree segmentation. Finally, some post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. Following a Leave-One-Out Cross Validation (LOOCV) method for an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentations over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from Receiver Operator Characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples including the extraction of skeletonized structures, and compare our method to state-of-the-art dendritic tree tracing software. Overall, our DTE method allows for robust dendritic tree segmentations in noisy MIPs, outperforming traditional intensity-based methods. Such an approach provides a usable segmentation framework, ultimately delivering a speed-up for dendritic tree identification on the user end and a reliable first step towards further morphological characterizations of tree arborization.
Information granules in image histogram analysis.
Wieclawek, Wojciech
2018-04-01
A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this term in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially to medical images acquired by Computed Tomography (CT). As the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymous clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.
Image informative maps for component-wise estimating parameters of signal-dependent noise
NASA Astrophysics Data System (ADS)
Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem
2013-01-01
We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.
Is the perception of 3D shape from shading based on assumed reflectance and illumination?
Todd, James T; Egan, Eric J L; Phillips, Flip
2014-01-01
The research described in the present article was designed to compare three types of image shading: one generated with a Lambertian BRDF and homogeneous illumination such that image intensity was determined entirely by local surface orientation irrespective of position; one that was textured with a linear intensity gradient, such that image intensity was determined entirely by local surface position irrespective of orientation; and another that was generated with a Lambertian BRDF and inhomogeneous illumination such that image intensity was influenced by both position and orientation. A gauge figure adjustment task was used to measure observers' perceptions of local surface orientation on the depicted surfaces, and the probe points included 60 pairs of regions that both had the same orientation. The results show clearly that observers' perceptions of these three types of stimuli were remarkably similar, and that probe regions with similar apparent orientations could have large differences in image intensity. This latter finding is incompatible with any process for computing shape from shading that assumes any plausible reflectance function combined with any possible homogeneous illumination.
Is the perception of 3D shape from shading based on assumed reflectance and illumination?
Todd, James T.; Egan, Eric J. L.; Phillips, Flip
2014-01-01
The research described in the present article was designed to compare three types of image shading: one generated with a Lambertian BRDF and homogeneous illumination such that image intensity was determined entirely by local surface orientation irrespective of position; one that was textured with a linear intensity gradient, such that image intensity was determined entirely by local surface position irrespective of orientation; and another that was generated with a Lambertian BRDF and inhomogeneous illumination such that image intensity was influenced by both position and orientation. A gauge figure adjustment task was used to measure observers' perceptions of local surface orientation on the depicted surfaces, and the probe points included 60 pairs of regions that both had the same orientation. The results show clearly that observers' perceptions of these three types of stimuli were remarkably similar, and that probe regions with similar apparent orientations could have large differences in image intensity. This latter finding is incompatible with any process for computing shape from shading that assumes any plausible reflectance function combined with any possible homogeneous illumination. PMID:26034561
Fluorescence intensity positivity classification of Hep-2 cells images using fuzzy logic
NASA Astrophysics Data System (ADS)
Sazali, Dayang Farzana Abang; Janier, Josefina Barnachea; May, Zazilah Bt.
2014-10-01
Indirect Immunofluorescence (IIF) is a standard method used for the antinuclear autoantibody (ANA) test using Hep-2 cells to determine specific diseases. Different classifier algorithms have been proposed in previous works; however, there is still no validated standard for classifying the fluorescence intensity. This paper presents the use of fuzzy logic to classify the fluorescence intensity and to determine the positivity of Hep-2 cell serum samples. The fuzzy algorithm involves pre-processing the image by filtering out noise and smoothing, converting the images from the red, green and blue (RGB) color space to the lightness and chromaticity ("a" and "b") LAB color space, extracting the mean values of the lightness and chromaticity "a" layers, and classifying them with a fuzzy logic algorithm based on the standard score ranges of antinuclear autoantibody (ANA) fluorescence intensity. Using 100 data sets of positive and intermediate fluorescence intensity for performance testing, the fuzzy logic classifier achieved accuracies of 85% and 87% for the intermediate and positive classes, respectively.
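The RGB-to-LAB feature extraction and fuzzy classification can be sketched as follows. This is a toy version only: the triangular membership functions and their breakpoints are invented for illustration and are not the score ranges used in the paper.

```python
import numpy as np
from skimage.color import rgb2lab

def triangular(x, a, b, c):
    """Triangular fuzzy membership on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_intensity(rgb_image, cuts=(20.0, 45.0, 70.0)):
    """Toy fuzzy classification of fluorescence positivity from mean
    lightness; the breakpoints in `cuts` are illustrative placeholders."""
    lab = rgb2lab(rgb_image)
    mean_l = float(lab[..., 0].mean())        # mean lightness feature
    mean_a = float(lab[..., 1].mean())        # mean chromaticity 'a' feature
    negative = triangular(mean_l, -1.0, 0.0, cuts[0])
    intermediate = triangular(mean_l, 0.5 * cuts[0], cuts[1], cuts[2])
    positive = triangular(mean_l, cuts[1], cuts[2], 101.0)
    label = max((negative, "negative"), (intermediate, "intermediate"),
                (positive, "positive"))[1]
    return label, {"negative": negative, "intermediate": intermediate,
                   "positive": positive}, (mean_l, mean_a)
```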
Processing, mosaicking and management of the Monterey Bay digital sidescan-sonar images
Chavez, P.S.; Isbrecht, J.; Galanis, P.; Gabel, G.L.; Sides, S.C.; Soltesz, D.L.; Ross, Stephanie L.; Velasco, M.G.
2002-01-01
Sidescan-sonar imaging systems with digital capabilities have now been available for approximately 20 years. In this paper we present several of the various digital image processing techniques developed by the U.S. Geological Survey (USGS) and used to apply intensity/radiometric and geometric corrections to, as well as enhance and digitally mosaic, sidescan-sonar images of the Monterey Bay region. New software run by a WWW server was designed and implemented to allow very large image data sets, such as the digital mosaic, to be easily viewed interactively, including the ability to roam throughout the digital mosaic at the web site in either compressed or full 1-m resolution. The processing is separated into two different stages: preprocessing and information extraction. In the preprocessing stage, sensor-specific algorithms are applied to correct for both geometric and intensity/radiometric distortions introduced by the sensor. This is followed by digital mosaicking of the track-line strips into quadrangle format, which can be used as input to either visual or digital image analysis and interpretation. An automatic seam removal procedure was used in combination with an interactive digital feathering/stenciling procedure to help minimize tone or seam matching problems between image strips from adjacent track-lines. The sidescan-sonar image processing package is part of the USGS Mini Image Processing System (MIPS) and has been designed to process data collected by any 'generic' digital sidescan-sonar imaging system. The USGS MIPS software, developed over the last 20 years as a public domain package, is available on the WWW at: http://terraweb.wr.usgs.gov/trs/software.html.
Digital image processing for information extraction.
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1973-01-01
The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.
CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction
Zhen, Xin; Gu, Xuejun; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.
2012-01-01
Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensuring the intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient data. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons. PMID:23032638
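The patch-wise moment matching used by DISC can be illustrated with a short sketch: for every voxel, the mean and standard deviation of a local patch of the CBCT are matched to those of the corresponding CT patch. The patch size and the use of uniform (box) filters are assumptions for illustration; the actual DISC implementation runs on the GPU inside the demons loop.

```python
import numpy as np
from scipy import ndimage

def local_moment_match(cbct, ct, patch=7, eps=1e-6):
    """Adjust each CBCT voxel so that the mean and standard deviation of the
    patch around it match those of the corresponding CT patch."""
    size = (patch,) * cbct.ndim
    mu_c = ndimage.uniform_filter(cbct.astype(float), size)
    mu_t = ndimage.uniform_filter(ct.astype(float), size)
    var_c = ndimage.uniform_filter(cbct.astype(float) ** 2, size) - mu_c ** 2
    var_t = ndimage.uniform_filter(ct.astype(float) ** 2, size) - mu_t ** 2
    sd_c = np.sqrt(np.clip(var_c, 0, None)) + eps
    sd_t = np.sqrt(np.clip(var_t, 0, None))
    return (cbct - mu_c) / sd_c * sd_t + mu_t   # first and second moments matched
```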
Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images
NASA Astrophysics Data System (ADS)
Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk
2007-02-01
The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allow for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, D; Gach, H; Li, H
Purpose: The daily treatment MRIs acquired on MR-IGRT systems, like diagnostic MRIs, suffer from intensity inhomogeneity associated with B1 and B0 inhomogeneities. An improved homomorphic unsharp mask (HUM) filtering method, automatic and robust body segmentation, and imaging field-of-view (FOV) detection methods were developed to compute the multiplicative slowly varying correction field and correct the intensity inhomogeneity. The goal is to improve and normalize the voxel intensity so that the images could be processed more accurately by quantitative methods (e.g., segmentation and registration) that require consistent image voxel intensity values. Methods: HUM methods have been widely used for years. A body mask is required; otherwise, the body surface in the corrected image would be incorrectly bright due to the sudden intensity transition at the body surface. In this study, we developed an improved HUM-based correction method that includes three main components: 1) Robust body segmentation on the normalized image gradient map, 2) Robust FOV detection (needed for body segmentation) using region growing and morphologic filters, and 3) An effective implementation of HUM using repeated Gaussian convolution. Results: The proposed method was successfully tested on patient images of common anatomical sites (H/N, lung, abdomen and pelvis). Initial qualitative comparisons showed that this improved HUM method outperformed three recently published algorithms (FCM, LEMS, MICO) in both computation speed (by 50+ times) and robustness (in intermediate to severe inhomogeneity situations). Currently implemented in MATLAB, it takes 20 to 25 seconds to process a 3D MRI volume. Conclusion: Compared to more sophisticated MRI inhomogeneity correction algorithms, the improved HUM method is simple and effective. The inhomogeneity correction, body mask, and FOV detection methods developed in this study would be useful as preprocessing tools for many MRI-related research and clinical applications in radiotherapy. Authors have received research grants from ViewRay and Varian.
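A minimal sketch of an HUM-style correction restricted to a body mask, assuming the mask is already available; the normalised (mask-aware) repeated Gaussian smoothing is one simple way to avoid the bright body-surface artefact mentioned above, and the sigma and number of passes are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def hum_correct(volume, body_mask, sigma=25.0, passes=3, eps=1e-6):
    """Homomorphic unsharp mask: estimate a slowly varying bias field by
    repeated Gaussian smoothing restricted to the body mask, then divide it
    out inside the body while leaving the background untouched."""
    img = volume.astype(float) * body_mask
    field = img.copy()
    norm = body_mask.astype(float)
    for _ in range(passes):                       # repeated Gaussian convolution
        field = ndimage.gaussian_filter(field, sigma)
        norm = ndimage.gaussian_filter(norm, sigma)
    bias = field / (norm + eps)                   # mask-normalised low-pass estimate
    mean_in_body = img[body_mask > 0].mean()
    return np.where(body_mask > 0, img / (bias + eps) * mean_in_body, volume)
```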
A natural-color mapping for single-band night-time image based on FPGA
NASA Astrophysics Data System (ADS)
Wang, Yilun; Qian, Yunsheng
2018-01-01
A natural-color mapping method for single-band night-time images based on FPGA can transfer the color of a reference image to the single-band night-time image, which is consistent with human visual habits and can help observers identify targets. This paper introduces the processing of the natural-color mapping algorithm based on FPGA. Firstly, the image is transformed based on histogram equalization, and the intensity features and standard deviation features of the reference image are stored in SRAM. Then, the intensity features and standard deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between images using the features in the luminance channel.
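The feature-matching colour transfer described above can be sketched in software terms (ignoring the FPGA pipeline). The snippet assumes a grayscale night-time frame and an RGB reference image, uses local mean and standard deviation of luminance as the matching features, and is only an illustrative approximation of the mapping algorithm.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def colorize(night_image, ref_rgb, win=5, n_samples=5000, seed=0):
    """Toy colour transfer: each night-time pixel takes the colour of the
    sampled reference pixel with the closest luminance mean/std features,
    then the original night-time intensity is restored as the luminance."""
    rng = np.random.default_rng(seed)
    ref_lum = ref_rgb.astype(float).mean(axis=2)

    def features(img):
        mu = ndimage.uniform_filter(img.astype(float), win)
        sq = ndimage.uniform_filter(img.astype(float) ** 2, win)
        return mu, np.sqrt(np.clip(sq - mu ** 2, 0, None))

    mu_r, sd_r = features(ref_lum)
    mu_n, sd_n = features(night_image)
    idx = rng.integers(0, ref_lum.size, n_samples)                 # sampled reference pixels
    samples = np.column_stack([mu_r.ravel()[idx], sd_r.ravel()[idx]])
    colours = ref_rgb.reshape(-1, 3)[idx].astype(float)
    _, match = cKDTree(samples).query(np.column_stack([mu_n.ravel(), sd_n.ravel()]))
    out = colours[match].reshape(night_image.shape + (3,))
    scale = night_image.astype(float) / (out.mean(axis=2) + 1e-6)  # keep night-time luminance
    return np.clip(out * scale[..., None], 0, 255)
```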
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali
2011-04-15
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 ± 2.8) mm compared to (3.5 ± 3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
Interactive brain shift compensation using GPU based programming
NASA Astrophysics Data System (ADS)
van der Steen, Sander; Noordmans, Herke Jan; Verdaasdonk, Rudolf
2009-02-01
Processing large image files or real-time video streams requires intense computational power. Driven by the gaming industry, the processing power of graphics processing units (GPUs) has increased significantly. With pixel shader model 4.0, the GPU can be used for image processing 10x faster than the CPU. Dedicated software was developed to deform 3D MR and CT image sets for real-time brain shift correction during navigated neurosurgery using landmarks or cortical surface traces defined by the navigation pointer. Feedback was given using orthogonal slices and an interactively raytraced 3D brain image. GPU-based programming enables real-time processing of high definition image datasets, and various applications can be developed in medicine, optics and image sciences.
The correlation study of parallel feature extractor and noise reduction approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dewi, Deshinta Arrova; Sundararajan, Elankovan; Prabuwono, Anton Satria
2015-05-15
This paper presents a literature review of the variety of techniques used to develop a parallel feature extractor and of its correlation with noise reduction approaches for low-light-intensity images. Low-light-intensity images normally appear darker and have low contrast. Without proper handling techniques, such images often lead to misperception of objects and textures and to an inability to segment them. The resulting visual ambiguities frequently cause disorientation, user fatigue, and poor detection and classification performance for both humans and computer algorithms. Noise reduction (NR) is therefore an essential step preceding other image processing steps such as edge detection, image segmentation, image compression, etc. A Parallel Feature Extractor (PFE), meant to capture the visual content of images, involves partitioning images into segments, detecting image overlaps if any, and controlling distributed and redistributed segments to extract the features. Working on low-light-intensity images makes the PFE face challenges and depend closely on the quality of its pre-processing steps. Several papers have suggested well-established NR and PFE strategies; however, only a few have examined the correlation between them. This paper reviews the best NR and PFE approaches with a detailed explanation of the suggested correlation. This finding may suggest relevant strategies for PFE development. With the help of knowledge-based reasoning, computational approaches and algorithms, we present a correlation study between NR and PFE that can be useful for the development and enhancement of existing PFEs.
Hiding Information Using different lighting Color images
NASA Astrophysics Data System (ADS)
Majead, Ahlam; Awad, Rash; Salman, Salema S.
2018-05-01
The host medium for the secret message is one of the important design principles in steganography. In this study, colour images captured under different lighting conditions were evaluated as carriers for a secret image. The steganography approach is based on the Lifting Wavelet Transform (LWT) and Least Significant Bit (LSB) substitution. The proposed method offers lossless and unnoticeable changes in the contrast of the carrier colour image that are imperceptible to the human visual system (HVS), especially for host images captured in dark lighting conditions. The aim of the study was to examine the process of hiding data in colour images of different light intensities. The effect of the hiding process was examined on images classified by a minimum-distance criterion and on the amount of noise and distortion introduced into the image, using the histogram and statistical characteristics of the cover image. The results showed that images taken at different light intensities can be used efficiently to hide data with the least-significant-bit substitution method, and that textual data can be concealed without noticeably distorting the original (low-light) image through the concealment process. A digital image segmentation technique was used to distinguish small regions affected by the hiding. Smooth, homogeneous areas were found to be less affected by hiding than brightly lit areas. Dark colour images can therefore be used to exchange secret messages between two parties for the purpose of covert communication with good security.
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
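A small one-dimensional sketch of Gaussian-process interpolation with an explicit posterior variance, the quantity the abstract folds into the similarity measure; the squared-exponential kernel and its hyper-parameters are assumptions for illustration, not the model used in the paper.

```python
import numpy as np

def gp_interpolate(xg, yg, xq, length=1.5, sig=1.0, noise=1e-3):
    """GP interpolation of intensities sampled at grid positions `xg`,
    returning posterior mean and variance at query points `xq`; the variance
    grows for points far from the grid (interpolation uncertainty)."""
    def k(a, b):
        return sig ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(xg, xg) + noise * np.eye(len(xg))
    Ks, Kss = k(xq, xg), k(xq, xq)
    mean = Ks @ np.linalg.solve(K, yg)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Example: intensities on integer pixel positions, queried between grid points.
xg = np.arange(0, 10, dtype=float)
mean, var = gp_interpolate(xg, np.sin(xg), np.array([2.5, 7.25, 9.9]))
```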
A CT and MRI scan to MCNP input conversion program.
Van Riper, Kenneth A
2005-01-01
We describe a new program to read a sequence of tomographic scans and prepare the geometry and material sections of an MCNP input file. Image processing techniques include contrast controls and mapping of grey scales to colour. The user interface provides several tools with which the user can associate a range of image intensities to an MCNP material. Materials are loaded from a library. A separate material assignment can be made to a pixel intensity or range of intensities when that intensity dominates the image boundaries; this material is assigned to all pixels with that intensity contiguous with the boundary. Material fractions are computed in a user-specified voxel grid overlaying the scans. New materials are defined by mixing the library materials using the fractions. The geometry can be written as an MCNP lattice or as individual cells. A combination algorithm can be used to join neighbouring cells with the same material.
NASA Astrophysics Data System (ADS)
Parker, D. G.; Ulrich, R. K.; Beck, J.
2014-12-01
We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from May, 2010 to June, 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Ulrich, R.K., Parker, D., Bertello, L. and Boyden, J. 2010, Solar Phys., 261, 11.
Intensity-hue-saturation-based image fusion using iterative linear regression
NASA Astrophysics Data System (ADS)
Cetin, Mufit; Tepecik, Abdulkadir
2016-10-01
The image fusion process basically produces a high-resolution image by combining the superior features of a low-resolution spatial image and a high-resolution panchromatic image. Despite its common usage due to its fast computing capability and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause some color distortions, especially when a large number of gray value differences exist among the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique to avoid these distortions by automatically adjusting the exact spatial information to be injected into the multispectral image during the fusion process. The SA-IHS method essentially suppresses the effects of those pixels that cause the spectral distortions by assigning weaker weights to them and avoiding a large number of redundancies on the fused image. The experimental database consists of IKONOS images, and the experimental results both visually and statistically prove the enhancement of the proposed algorithm when compared with the several other IHS-like methods such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
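A simplified sketch of spatially adaptive IHS fusion, assuming a co-registered multispectral cube and panchromatic band; the exponential weighting of the injected detail is an illustrative choice, not the exact SA-IHS formula of the paper.

```python
import numpy as np

def adaptive_ihs_fusion(ms, pan, k=2.0):
    """Generalised IHS fusion with a per-pixel injection weight that shrinks
    where the pan/intensity mismatch is large, limiting spectral distortion."""
    ms = ms.astype(float)                      # shape (H, W, bands), co-registered with pan
    pan = pan.astype(float)
    intensity = ms.mean(axis=2)                # generalised IHS intensity component
    detail = pan - intensity
    w = np.exp(-k * np.abs(detail) / (np.abs(detail).max() + 1e-12))  # adaptive weight in (0, 1]
    fused = ms + (w * detail)[..., None]       # inject weighted detail into every band
    return np.clip(fused, 0, None)
```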
A simple 2D composite image analysis technique for the crystal growth study of L-ascorbic acid.
Kumar, Krishan; Kumar, Virender; Lal, Jatin; Kaur, Harmeet; Singh, Jasbir
2017-06-01
This work was intended for 2D crystal growth studies of L-ascorbic acid using the composite image analysis technique. Growth experiments on the L-ascorbic acid crystals were carried out by standard (optical) microscopy, laser diffraction analysis, and composite image analysis. For image analysis, the growth of L-ascorbic acid crystals was captured as digital 2D RGB images, which were then processed to composite images. After processing, the crystal boundaries emerged as white lines against the black (cancelled) background. The crystal boundaries were well differentiated by peaks in the intensity graphs generated for the composite images. The lengths of crystal boundaries measured from the intensity graphs of composite images were in good agreement (correlation coefficient "r" = 0.99) with the lengths measured by standard microscopy. In contrast, the lengths measured by laser diffraction were poorly correlated with both techniques. Therefore, the composite image analysis can replace the standard microscopy technique for the crystal growth studies of L-ascorbic acid. © 2017 Wiley Periodicals, Inc.
Targeting Cell Surface Proteins in Molecular Photoacoustic Imaging to Detect Ovarian Cancer Early
2013-07-01
biology, nanotechnology, and imaging technology, molecular imaging utilizes specific probes as contrast agents to visualize cellular processes at the...This reagent was covalently coupled to the oligosaccharides attached to polypeptide side-chains of extracellular membrane proteins on living cells...website. The normal tissue gene expression profile dataset was modified and processed as described by Fang (8) and mean intensities and standard
NASA Astrophysics Data System (ADS)
Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan
2017-08-01
The paper deals with the problem of the insufficient productivity of existing computing resources for large-image processing, which do not meet the modern requirements posed by the resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. Development of a theory of parallel-hierarchical transformation made it possible to produce models for high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation based on a GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images shows that they allow real-time processing of dynamic images of various sizes.
Parallel asynchronous systems and image processing algorithms
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
An enhanced fast scanning algorithm for image segmentation
NASA Astrophysics Data System (ADS)
Ismael, Ahmed Naser; Yusof, Yuhanis binti
2015-12-01
Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging a pixel with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray values of the image's pixels and their variance; pixel levels above the threshold are converted into intensity values between 0 and 1, while the remaining values are set to zero. The proposed enhanced Fast Scanning algorithm is applied to images of public and private transportation in Iraq. Evaluation is then made by comparing the images produced by the proposed algorithm and the standard Fast Scanning algorithm. The results showed that the proposed algorithm is faster than the standard Fast Scanning algorithm.
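The clustering step of the Fast Scanning algorithm, with a simple adaptive threshold, might look like the sketch below; the threshold rule (a base value plus a term proportional to the global standard deviation) is an assumption for illustration, not the exact function proposed in the paper.

```python
import numpy as np

def fast_scanning(image, base_thresh=10.0, k=0.5):
    """Minimal fast-scanning segmentation: each pixel joins the upper/left
    cluster whose running mean lies within an adaptive threshold."""
    img = image.astype(float)
    thresh = base_thresh + k * img.std()          # illustrative adaptive threshold
    labels = np.zeros(img.shape, dtype=int)
    parent, total, count = [], [], []             # union-find with cluster sums

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
            total[ri] += total[rj]
            count[ri] += count[rj]

    def mean(i):
        r = find(i)
        return total[r] / count[r]

    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            v = img[y, x]
            up = labels[y - 1, x] if y > 0 else -1
            left = labels[y, x - 1] if x > 0 else -1
            near_up = up >= 0 and abs(mean(up) - v) <= thresh
            near_left = left >= 0 and abs(mean(left) - v) <= thresh
            if near_up and near_left:
                union(up, left)                    # merge the two clusters
                lab = find(up)
            elif near_up:
                lab = find(up)
            elif near_left:
                lab = find(left)
            else:                                  # start a new cluster
                lab = len(parent)
                parent.append(lab); total.append(0.0); count.append(0)
            labels[y, x] = lab
            r = find(lab)
            total[r] += v
            count[r] += 1
    return np.vectorize(find)(labels)              # flatten labels to cluster roots
```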
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2011-01-01
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and∕or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913
Combining Image Processing with Signal Processing to Improve Transmitter Geolocation Estimation
2014-03-27
transmitter by searching a grid of possible transmitter locations within the image region. At each evaluated grid point, theoretical TDOA values are computed... requires converting the image to a grayscale intensity image. This allows efficient manipulation of data and ease of comparison among pixel values. The... cluster of redundant y values along the top edge of an ideal rectangle. The same is true for the bottom edge, as well as for the x values along the
An independent software system for the analysis of dynamic MR images.
Torheim, G; Lombardi, M; Rinck, P A
1997-01-01
A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis that was independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or other diagnostic imaging equipment in techniques such as CT, ultrasonography, or nuclear medicine can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis, and generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions like the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.
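The curve parameters listed above are straightforward to compute once a region-of-interest time-intensity curve has been extracted; a minimal sketch, assuming time points and mean ROI signal are available as plain arrays (the baseline length and slope definitions are illustrative choices):

```python
import numpy as np

def curve_parameters(t, s, baseline_pts=3):
    """Common dynamic-MRI descriptors from a time-intensity curve: AUC,
    peak enhancement, time to peak and simple wash-in/wash-out slopes."""
    t = np.asarray(t, dtype=float)
    s = np.asarray(s, dtype=float)
    base = s[:baseline_pts].mean()                    # pre-contrast baseline signal
    enh = s - base                                    # enhancement above baseline
    i_peak = int(np.argmax(enh))
    auc = float(((enh[1:] + enh[:-1]) * np.diff(t)).sum() / 2.0)   # trapezoidal AUC
    wash_in = enh[i_peak] / (t[i_peak] - t[0] + 1e-12)
    wash_out = ((enh[-1] - enh[i_peak]) / (t[-1] - t[i_peak] + 1e-12)
                if i_peak < len(t) - 1 else 0.0)
    return {"peak": float(enh[i_peak]), "time_to_peak": float(t[i_peak]),
            "auc": auc, "wash_in_slope": float(wash_in),
            "wash_out_slope": float(wash_out),
            "relative_enhancement": float(enh[i_peak] / (base + 1e-12))}
```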
Uterus segmentation in dynamic MRI using LBP texture descriptors
NASA Astrophysics Data System (ADS)
Namias, R.; Bellemare, M.-E.; Rahim, M.; Pirró, N.
2014-03-01
Pelvic floor disorders cover pathologies whose physiopathology is not well understood; however, they become more prevalent with an ageing population. Within the context of a project aiming at modelling the dynamics of pelvic organs, we have developed an efficient segmentation process. It aims to relieve the radiologist of tedious image-by-image analysis. From a first contour delineating the uterus-vagina set, the organ border is tracked along a dynamic MRI sequence. The process combines movement prediction, local intensity and texture analysis, and active contour geometry control. Movement prediction allows a contour initialization for the next image in the sequence. Intensity analysis provides image-based local contour detection enhanced by local binary pattern (LBP) texture descriptors. Geometry control prohibits self-intersections and smooths the contour. Results show the efficiency of the method on images produced in clinical routine.
Automated image analysis for quantification of reactive oxygen species in plant leaves.
Sekulska-Nalewajko, Joanna; Gocławski, Jarosław; Chojak-Koźniewska, Joanna; Kuźniak, Elżbieta
2016-10-15
The paper presents an image processing method for the quantitative assessment of ROS accumulation areas in leaves stained with DAB or NBT for H2O2 and O2- detection, respectively. Three types of images determined by the combination of staining method and background color are considered. The method is based on the principle of supervised machine learning with manually labeled image patterns used for training. The method's algorithm is developed as a JavaScript macro in the public domain Fiji (ImageJ) environment. It allows the stained regions of ROS-mediated histochemical reactions to be selected and subsequently fractionated according to weak, medium and intense staining intensity, and thus ROS accumulation. It also evaluates total leaf blade area. The precision of ROS accumulation area detection is validated by the Dice Similarity Coefficient against manual patterns. The proposed framework reduces the computational complexity, requires (once prepared) less image processing expertise than competing methods, and represents a routine quantitative imaging assay for general histochemical image classification. Copyright © 2016 Elsevier Inc. All rights reserved.
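A rough, threshold-based sketch of the area fractionation described above, assuming an RGB leaf photograph and a precomputed leaf mask; the stain-strength proxy and the three cut-offs are invented placeholders for the supervised classification actually used in the paper.

```python
import numpy as np

def ros_area_fractions(rgb, leaf_mask, weak=0.15, medium=0.35, intense=0.55):
    """Fraction of the leaf area falling into weak/medium/intense staining
    classes, using darkness of the green channel as a crude stain proxy."""
    g = rgb[..., 1].astype(float) / 255.0
    stain = np.clip(1.0 - g, 0, 1) * leaf_mask       # darker green => stronger stain
    leaf_area = float(leaf_mask.sum())
    classes = {
        "weak":    ((stain >= weak) & (stain < medium)).sum() / leaf_area,
        "medium":  ((stain >= medium) & (stain < intense)).sum() / leaf_area,
        "intense": (stain >= intense).sum() / leaf_area,
    }
    classes["total_stained"] = (stain >= weak).sum() / leaf_area
    return classes, leaf_area
```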
Influence of signal intensity non-uniformity on brain volumetry using an atlas-based method.
Goto, Masami; Abe, Osamu; Miyati, Tosiaki; Kabasawa, Hiroyuki; Takao, Hidemasa; Hayashi, Naoto; Kurosu, Tomomi; Iwatsubo, Takeshi; Yamashita, Fumio; Matsuda, Hiroshi; Mori, Harushi; Kunimatsu, Akira; Aoki, Shigeki; Ino, Kenji; Yano, Keiichi; Ohtomo, Kuni
2012-01-01
Many studies have reported pre-processing effects for brain volumetry; however, no study has investigated whether non-parametric non-uniform intensity normalization (N3) correction processing results in reduced system dependency when using an atlas-based method. To address this shortcoming, the present study assessed whether N3 correction processing provides reduced system dependency in atlas-based volumetry. Contiguous sagittal T1-weighted images of the brain were obtained from 21 healthy participants, by using five magnetic resonance protocols. After image preprocessing using the Statistical Parametric Mapping 5 software, we measured the structural volume of the segmented images with the WFU-PickAtlas software. We applied six different bias-correction levels (Regularization 10, Regularization 0.0001, Regularization 0, Regularization 10 with N3, Regularization 0.0001 with N3, and Regularization 0 with N3) to each set of images. The structural volume change ratio (%) was defined as the change ratio (%) = (100 × [measured volume - mean volume of five magnetic resonance protocols] / mean volume of five magnetic resonance protocols) for each bias-correction level. A low change ratio was synonymous with lower system dependency. The results showed that the images with the N3 correction had a lower change ratio compared with those without the N3 correction. The present study is the first atlas-based volumetry study to show that the precision of atlas-based volumetry improves when using N3-corrected images. Therefore, correction for signal intensity non-uniformity is strongly advised for multi-scanner or multi-site imaging trials.
Quantitative analysis of brain magnetic resonance imaging for hepatic encephalopathy
NASA Astrophysics Data System (ADS)
Syh, Hon-Wei; Chu, Wei-Kom; Ong, Chin-Sing
1992-06-01
High-intensity lesions around the ventricles have recently been observed in T1-weighted brain magnetic resonance images of patients suffering from hepatic encephalopathy. The exact etiology that causes magnetic resonance imaging (MRI) gray-scale changes has not been fully understood. The objective of our study was to investigate, through quantitative means, (1) the amount of change to brain white matter due to the disease process, and (2) the extent and distribution of these high-intensity lesions, since it is believed that the abnormality may not be entirely limited to the white matter. Eleven patients with proven hepatic encephalopathy and three normal persons without any evidence of liver abnormality constituted our current database. Trans-axial, sagittal, and coronal brain MRI were obtained on a 1.5 Tesla scanner. All processing was carried out on a microcomputer-based image analysis system in an off-line manner. Histograms were decomposed into regular brain tissues and lesions. Gray-scale ranges coded as lesion were then mapped back to the original images to identify the distribution of abnormality. Our results indicated that the disease process involved the pallidus, mesencephalon, and subthalamic regions.
Real-Time On-Board Processing Validation of MSPI Ground Camera Images
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.
2010-01-01
The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA, which includes PowerPC440 processors, we have implemented a least-squares fitting algorithm that extracts intensity and polarimetric parameters in real time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
Multiscale hidden Markov models for photon-limited imaging
NASA Astrophysics Data System (ADS)
Nowak, Robert D.
1999-06-01
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
Incoherent Diffractive Imaging via Intensity Correlations of Hard X Rays
NASA Astrophysics Data System (ADS)
Classen, Anton; Ayyer, Kartik; Chapman, Henry N.; Röhlsberger, Ralf; von Zanthier, Joachim
2017-08-01
Established x-ray diffraction methods allow for high-resolution structure determination of crystals, crystallized protein structures, or even single molecules. While these techniques rely on coherent scattering, incoherent processes like fluorescence emission—often the predominant scattering mechanism—are generally considered detrimental for imaging applications. Here, we show that intensity correlations of incoherently scattered x-ray radiation can be used to image the full 3D arrangement of the scattering atoms with significantly higher resolution compared to conventional coherent diffraction imaging and crystallography, including additional three-dimensional information in Fourier space for a single sample orientation. We present a number of properties of incoherent diffractive imaging that are conceptually superior to those of coherent methods.
NASA Astrophysics Data System (ADS)
Shao, Rongjun; Qiu, Lirong; Yang, Jiamiao; Zhao, Weiqian; Zhang, Xin
2013-12-01
We have proposed a component-parameter measuring method based on the differential confocal focusing theory. In order to improve the positioning precision of the laser differential confocal component parameters measurement system (LDDCPMS), this paper provides a data processing method based on tracking the light spot. To reduce the error caused by movement of the light spot during collection of the axial intensity signal, an image centroiding algorithm is used to find and track the center of the Airy disk in the images collected by the laser differential confocal system. To weaken the influence of higher-harmonic noise during the measurement, a Gaussian filter is used to process the axial intensity signal. Finally, the zero point corresponding to the focus of the objective in the differential confocal system is obtained by linear fitting of the differential confocal axial intensity data. Preliminary experiments indicate that the method based on tracking the light spot can accurately collect the axial intensity response signal of the virtual pinhole and improve the anti-interference ability of the system, thereby improving the system's positioning accuracy.
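A minimal sketch of the three processing steps described (intensity centroid of the spot, Gaussian smoothing of the axial signal, and a local linear fit to locate the zero crossing), assuming NumPy and SciPy; the function names and fit window are illustrative, not the authors' implementation.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def spot_centroid(image):
    """Intensity-weighted centroid (row, col) of a focal-spot image."""
    img = np.asarray(image, dtype=float)
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return (rows * img).sum() / total, (cols * img).sum() / total

def axial_zero_point(z, signal, sigma=2.0, fit_halfwidth=5):
    """Zero crossing of the differential confocal axial response by local linear fit."""
    s = gaussian_filter1d(np.asarray(signal, dtype=float), sigma)  # suppress harmonic noise
    i = int(np.argmin(np.abs(s)))                                  # sample nearest the zero point
    lo, hi = max(i - fit_halfwidth, 0), min(i + fit_halfwidth + 1, len(z))
    slope, intercept = np.polyfit(np.asarray(z[lo:hi], dtype=float), s[lo:hi], 1)
    return -intercept / slope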
Ares, Manuel
2014-02-01
Here we describe some practical concerns surrounding the scanning of microarray slides that have been hybridized with fluorescent dyes. We use a laser scanner that has two lasers, each set to excite a different fluor, and separate detectors to capture emission from each fluor. The laser passes over an address (position on the scanned surface) and the detectors capture photons emitted from each address. Two superimposed image files are written that carry intensities for each channel for each pixel of the image scan. These are the raw data. Image analysis software is used to identify and summarize the intensities of the pixels that make up each spot. After comparison to background pixels, the processed intensity levels representing the gene expression measurements are associated with the identity of each spot.
Optical image encryption based on real-valued coding and subtracting with the help of QR code
NASA Astrophysics Data System (ADS)
Deng, Xiaopeng
2015-08-01
A novel optical image encryption method based on real-valued coding and subtraction is proposed with the help of a quick response (QR) code. In the encryption process, the original image to be encoded is first transformed into the corresponding QR code, and then this QR code is encoded into two phase-only masks (POMs) by using basic vector operations. Finally, the absolute values of the real or imaginary parts of the two POMs are chosen as the ciphertexts. In the decryption process, the QR code can be approximately restored by recording the intensity of the subtraction between the ciphertexts, and hence the original image can be retrieved without any quality loss by scanning the restored QR code with a smartphone. Simulation results and actual smartphone-collected results show that the method is feasible and has strong tolerance to noise, phase difference and the ratio between the intensities of the two decryption light beams.
New methods of MR image intensity standardization via generalized scale
NASA Astrophysics Data System (ADS)
Madabhushi, Anant; Udupa, Jayaram K.
2005-04-01
Image intensity standardization is a post-acquisition processing operation designed for correcting acquisition-to-acquisition signal intensity variations (non-standardness) inherent in Magnetic Resonance (MR) images. While existing standardization methods based on histogram landmarks have been shown to produce a significant gain in the similarity of resulting image intensities, their weakness is that, in some instances the same histogram-based landmark may represent one tissue, while in other cases it may represent different tissues. This is often true for diseased or abnormal patient studies in which significant changes in the image intensity characteristics may occur. In an attempt to overcome this problem, in this paper, we present two new intensity standardization methods based on the concept of generalized scale. In reference 1 we introduced the concept of generalized scale (g-scale) to overcome the shape, topological, and anisotropic constraints imposed by other local morphometric scale models. Roughly speaking, the g-scale of a voxel in a scene was defined as the largest set of voxels connected to the voxel that satisfy some homogeneity criterion. We subsequently formulated a variant of the generalized scale notion, referred to as generalized ball scale (gB-scale), which, in addition to having the advantages of g-scale, also has superior noise resistance properties. These scale concepts are utilized in this paper to accurately determine principal tissue regions within MR images, and landmarks derived from these regions are used to perform intensity standardization. The new methods were qualitatively and quantitatively evaluated on a total of 67 clinical 3D MR images corresponding to four different protocols and to normal, Multiple Sclerosis (MS), and brain tumor patient studies. The generalized scale-based methods were found to be better than the existing methods, with a significant improvement observed for severely diseased and abnormal patient studies.
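For context, here is a minimal sketch of the landmark-based standardization idea that these methods build on: percentile landmarks of each image's histogram are mapped onto a fixed standard scale by piecewise-linear interpolation. The percentile and scale values below are illustrative; the g-scale and gB-scale methods of the paper derive their landmarks from homogeneous tissue regions rather than from raw percentiles.

import numpy as np

def standardize_intensities(image, landmark_percentiles=(1, 25, 50, 75, 99),
                            standard_scale=(0, 250, 500, 750, 1000)):
    """Piecewise-linear mapping of histogram landmarks onto a standard intensity scale."""
    img = np.asarray(image, dtype=float)
    landmarks = np.percentile(img, landmark_percentiles)
    return np.interp(img, landmarks, standard_scale)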
NASA Astrophysics Data System (ADS)
Nicchio, Matheus A.; Nogueira, Francisco C. C.; Balsamo, Fabrizio; Souza, Jorge A. B.; Carvalho, Bruno R. B. M.; Bezerra, Francisco H. R.
2018-02-01
In this work we describe the deformation mechanisms and processes that occurred during the evolution of cataclastic deformation bands developed in the feldspar-rich conglomerates of the Rio do Peixe Basin, NE Brazil. We studied bands with different deformation intensities, ranging from single cm-thick tabular bands to more evolved clustering zones. The chemical identification of cataclastic material within deformation bands was performed using compositional mapping in SEM images, EDX and XRD analyses. Deformation processes were identified by microstructural analysis and by the quantification of comminution intensity, performed using digital image processing. The deformation bands are internally non-homogeneous and developed through five evolutionary stages: (1) moderate grain-size reduction, grain rotation and grain-border comminution; (2) intense grain-size reduction with preferential feldspar fragmentation; (3) formation of subparallel C-type slip zones; (4) formation of S-type structures, generating an S-C-like fabric; and (5) formation of C′-type slip zones, generating a well-developed foliation that resembles S-C-C′-type structures in a ductile environment. Such deformation fabric is mostly imparted by the preferential alignment of intensely comminuted feldspar fragments along thin slip zones developed within deformation bands. These processes were purely mechanical (i.e., grain crushing and reorientation). No clays or fluids were involved in such processes.
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
Swarm Intelligence for Optimizing Hybridized Smoothing Filter in Image Edge Enhancement
NASA Astrophysics Data System (ADS)
Rao, B. Tirumala; Dehuri, S.; Dileep, M.; Vindhya, A.
In this modern era, image transmission and processing play a major role. It would be impossible to retrieve information from satellite and medical images without the help of image processing techniques. Edge enhancement is an image processing step that enhances the edge contrast of an image or video in an attempt to improve its acutance. Edges are the representations of the discontinuities of image intensity functions. For processing these discontinuities in an image, a good edge enhancement technique is essential. The proposed work uses a new idea for edge enhancement using hybridized smoothing filters, and we introduce a promising technique for obtaining the best hybrid filter using swarm algorithms (Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO)) to search for an optimal sequence of filters from among a set of rather simple, representative image processing filters. This paper deals with the analysis of the swarm intelligence techniques through the combination of hybrid filters generated by these algorithms for image edge enhancement.
Imaging articular cartilage using second harmonic generation microscopy
NASA Astrophysics Data System (ADS)
Mansfield, Jessica C.; Winlove, C. Peter; Knapp, Karen; Matcher, Stephen J.
2006-02-01
Subcellular-resolution images of equine articular cartilage have been obtained using both second harmonic generation microscopy (SHGM) and two-photon fluorescence microscopy (TPFM). The SHGM images clearly map the distribution of the collagen II fibers within the extracellular matrix, while the TPFM images show the distribution of endogenous two-photon fluorophores in both the cells and the extracellular matrix, highlighting especially the pericellular matrix and bright 2-3 μm diameter features within the cells. To investigate the source of TPF in the extracellular matrix, experiments were carried out to see whether it may originate from the proteoglycans. Pure solutions of the proteoglycans hyaluronan, chondroitin sulfate and aggrecan were imaged; only the aggrecan produced any TPF, and its intensity was not great enough to account for the TPF in the extracellular matrix. Cartilage samples were also subjected to a process to remove proteoglycans and cellular components. After this process, the TPF from the samples had decreased by a factor of two with respect to the SHG intensity.
Apparatus for monitoring crystal growth
Sachs, Emanual M.
1981-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
Method of monitoring crystal growth
Sachs, Emanual M.
1982-01-01
A system and method are disclosed for monitoring the growth of a crystalline body from a liquid meniscus in a furnace. The system provides an improved human/machine interface so as to reduce operator stress, strain and fatigue while improving the conditions for observation and control of the growing process. The system comprises suitable optics for forming an image of the meniscus and body wherein the image is anamorphic so that the entire meniscus can be viewed with good resolution in both the width and height dimensions. The system also comprises a video display for displaying the anamorphic image. The video display includes means for enhancing the contrast between any two contrasting points in the image. The video display also comprises a signal averager for averaging the intensity of at least one preselected portion of the image. The value of the average intensity can in turn be utilized to control the growth of the body. The system and method are also capable of observing and monitoring multiple processes.
Quantum imaging with incoherently scattered light from a free-electron laser
NASA Astrophysics Data System (ADS)
Schneider, Raimund; Mehringer, Thomas; Mercurio, Giuseppe; Wenthaus, Lukas; Classen, Anton; Brenner, Günter; Gorobtsov, Oleg; Benz, Adrian; Bhatti, Daniel; Bocklage, Lars; Fischer, Birgit; Lazarev, Sergey; Obukhov, Yuri; Schlage, Kai; Skopintsev, Petr; Wagner, Jochen; Waldmann, Felix; Willing, Svenja; Zaluzhnyy, Ivan; Wurth, Wilfried; Vartanyants, Ivan A.; Röhlsberger, Ralf; von Zanthier, Joachim
2018-02-01
The advent of accelerator-driven free-electron lasers (FEL) has opened new avenues for high-resolution structure determination via diffraction methods that go far beyond conventional X-ray crystallography methods. These techniques rely on coherent scattering processes that require the maintenance of first-order coherence of the radiation field throughout the imaging procedure. Here we show that higher-order degrees of coherence, displayed in the intensity correlations of incoherently scattered X-rays from an FEL, can be used to image two-dimensional objects with a spatial resolution close to or even below the Abbe limit. This constitutes a new approach towards structure determination based on incoherent processes, including fluorescence emission or wavefront distortions, generally considered detrimental for imaging applications. Our method is an extension of the landmark intensity correlation measurements of Hanbury Brown and Twiss to higher than second order, paving the way towards determination of structure and dynamics of matter in regimes where coherent imaging methods have intrinsic limitations.
The importance of ray pathlengths when measuring objects in maximum intensity projection images.
Schreiner, S; Dawant, B M; Paschal, C B; Galloway, R L
1996-01-01
It is important to understand any process that affects medical data. Once the data have changed from the original form, one must consider the possibility that the information contained in the data has also changed. In general, false negative and false positive diagnoses caused by this post-processing must be minimized. Medical imaging is one area in which post-processing is commonly performed, but there is often little or no discussion of how these algorithms affect the data. This study uncovers some interesting properties of maximum intensity projection (MIP) algorithms, which are commonly used in the post-processing of magnetic resonance (MR) and computed tomography (CT) angiographic data. The appearance of the width of vessels and the extent of malformations such as aneurysms is of interest to clinicians. This study shows how MIP algorithms interact with the shape of the object being projected. MIPs can make objects appear thinner in the projection than in the original data set and can also alter the shape of the profile of the object seen in the original data. These effects have consequences for width-measuring algorithms, which are discussed. Each projected intensity is dependent upon the pathlength of the ray from which the projected pixel arises. The morphology (shape and intensity profile) of an object changes the pathlength that each ray experiences. This is termed the pathlength effect. In order to demonstrate the pathlength effect, simple computer models of an imaged vessel were created. Additionally, a static MR phantom verified that the derived equation for the projection-plane probability density function (pdf) predicts the projection-plane intensities well (R² = 0.96). Finally, examples of projections through in vivo MR angiography and CT angiography data are presented.
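A minimal sketch of the projection operation itself, assuming the angiographic volume is available as a NumPy array: each projected pixel keeps only the maximum intensity along its ray, which is why the object's shape along the ray (the pathlength) shapes the projected profile.

import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Project a 3-D volume by keeping the maximum intensity along the chosen axis."""
    return np.max(np.asarray(volume), axis=axis)

# Example: project an illustrative random volume along the through-plane axis.
mip_image = maximum_intensity_projection(np.random.rand(32, 64, 64), axis=0)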
Single-pixel imaging by Hadamard transform and its application for hyperspectral imaging
NASA Astrophysics Data System (ADS)
Mizutani, Yasuhiro; Shibuya, Kyuki; Taguchi, Hiroki; Iwata, Tetsuo; Takaya, Yasuhiro; Yasui, Takeshi
2016-10-01
In this paper, we report comparisons of single-pixel imaging using the Hadamard transform (HT) and ghost imaging (GI) from the viewpoint of visibility under weak-light conditions. To compare the two methods, we discuss image quality based on experimental results and numerical analysis. To detect images with the HT method, we illuminate Hadamard-pattern masks and reconstruct the image by an orthogonal transform. The GI method, on the other hand, detects images by illuminating random patterns and performing a correlation measurement. To compare the two methods under weak light intensity, we controlled the illumination intensity of a DMD projector to a signal-to-noise ratio of about 0.1. Although the processing speed of the HT method was faster than that of GI, the GI method has an advantage for detection under weak-light conditions. An essential difference between the HT and GI methods is discussed with respect to the reconstruction process. Finally, we also show a typical application of single-pixel imaging, namely hyperspectral imaging using dual optical frequency combs. The optical setup consists of two fiber lasers, a spatial light modulator for generating pattern illumination, and a single-pixel detector. We successfully detected hyperspectral images in a range from 1545 to 1555 nm at 0.01 nm resolution.
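A minimal sketch of the Hadamard-transform reconstruction idea for single-pixel imaging, assuming NumPy and SciPy: each ±1 Hadamard pattern yields one bucket measurement, and the scene is recovered by the inverse (orthogonal) transform. The 0/1 physical masks, measurement noise, and the ghost-imaging correlation step are deliberately omitted; the scene here is random and illustrative.

import numpy as np
from scipy.linalg import hadamard

n = 64                                     # number of pixels (a power of two)
H = hadamard(n)                            # one +/-1 pattern per row
scene = np.random.rand(n)                  # unknown flattened scene (illustrative)
measurements = H @ scene                   # one single-pixel reading per displayed pattern
reconstruction = (H.T @ measurements) / n  # H @ H.T = n * I, so this inverts the transform
assert np.allclose(reconstruction, scene)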
Han, Lianghao; Dong, Hua; McClelland, Jamie R; Han, Liangxiu; Hawkes, David J; Barratt, Dean C
2017-07-01
This paper presents a new hybrid biomechanical model-based non-rigid image registration method for lung motion estimation. In the proposed method, a patient-specific biomechanical modelling process captures major physically realistic deformations with explicit physical modelling of sliding motion, whilst a subsequent non-rigid image registration process compensates for small residuals. The proposed algorithm was evaluated with 10 4D CT datasets of lung cancer patients. The target registration error (TRE), defined as the Euclidean distance of landmark pairs, was significantly lower with the proposed method (TRE = 1.37 mm) than with biomechanical modelling (TRE = 3.81 mm) and intensity-based image registration without specific considerations for sliding motion (TRE = 4.57 mm). The proposed method achieved a comparable accuracy as several recently developed intensity-based registration algorithms with sliding handling on the same datasets. A detailed comparison on the distributions of TREs with three non-rigid intensity-based algorithms showed that the proposed method performed especially well on estimating the displacement field of lung surface regions (mean TRE = 1.33 mm, maximum TRE = 5.3 mm). The effects of biomechanical model parameters (such as Poisson's ratio, friction and tissue heterogeneity) on displacement estimation were investigated. The potential of the algorithm in optimising biomechanical models of lungs through analysing the pattern of displacement compensation from the image registration process has also been demonstrated. Copyright © 2017 Elsevier B.V. All rights reserved.
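A minimal sketch of the target registration error (TRE) used for evaluation, computed as the Euclidean distance between corresponding landmark pairs after the estimated motion has been applied; the landmark coordinates below are illustrative and in millimetres.

import numpy as np

def target_registration_error(warped_landmarks, reference_landmarks):
    """One TRE value per landmark pair: Euclidean distance between corresponding points."""
    diff = np.asarray(warped_landmarks, dtype=float) - np.asarray(reference_landmarks, dtype=float)
    return np.linalg.norm(diff, axis=1)

# Example: mean TRE over three illustrative landmark pairs.
tre = target_registration_error([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
                                [[1.5, 2, 3], [4, 5.5, 6], [7, 8, 10]])
print(tre.mean())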
Barua, Animesh; Yellapa, Aparna; Bahr, Janice M; Machado, Sergio A; Bitterman, Pincas; Basu, Sanjib; Sharma, Sameer; Abramowicz, Jacques S
2015-07-01
Tumor-associated neoangiogenesis (TAN) is an early event in ovarian cancer (OVCA) development. Increased expression of vascular endothelial growth factor receptor 2 (VEGFR2) by TAN vessels presents a potential target for early detection by ultrasound imaging. The goal of this study was to examine the suitability of VEGFR2-targeted ultrasound contrast agents in detecting spontaneous OVCA in laying hens. Effects of VEGFR2-targeted contrast agents in enhancing the intensity of ultrasound imaging from spontaneous ovarian tumors in hens were examined in a cross-sectional study. Enhancement in the intensity of ultrasound imaging was determined before and after injection of VEGFR2-targeted contrast agents. All ultrasound images were digitally stored and analyzed off-line. Following scanning, ovarian tissues were collected and processed for histology and detection of VEGFR2-expressing microvessels. Enhancement in visualization of ovarian morphology was detected by gray-scale imaging following injection of VEGFR2-targeted contrast agents. Compared with pre-contrast, contrast imaging enhanced the intensities of ultrasound imaging significantly (p < 0.0001) irrespective of the pathological status of ovaries. In contrast to normal hens, the intensity of ultrasound imaging was significantly (p < 0.0001) higher in hens with early stage OVCA and increased further in hens with late stage OVCA. Higher intensities of ultrasound imaging in hens with OVCA were positively correlated with increased (p < 0.0001) frequencies of VEGFR2-expressing microvessels. The results of this study suggest that VEGFR2-targeted contrast agents enhance the visualization of spontaneous ovarian tumors in hens at early and late stages of OVCA. The laying hen may be a suitable model to test new imaging agents and develop targeted therapeutics. © The Author(s) 2014.
Malghem, Jacques; Lecouvet, Frédéric E; François, Robert; Vande Berg, Bruno C; Duprez, Thierry; Cosnard, Guy; Maldague, Baudouin E
2005-02-01
To explain a cause of high signal intensity on T1-weighted MR images in calcified intervertebral disks associated with spinal fusion. Magnetic resonance and radiological examinations were reviewed for 13 patients presenting one or several intervertebral disks with high signal intensity on T1-weighted MR images, associated both with the presence of calcifications in the disks and with peripheral fusion of the corresponding spinal segments. Fusion was due to ligament ossifications (n=8), ankylosing spondylitis (n=4), or posterior arthrodesis (n=1). Imaging files included X-rays and T1-weighted MR images in all cases, T2-weighted MR images in 12 cases, MR images with fat signal suppression in 7 cases, and a CT scan in 1 case. A calcified disk from an anatomical specimen of a lumbar spine ankylosed by ankylosing spondylitis was examined histologically. The signal intensity of the disks was similar to that of the bone marrow or of perivertebral fat on T1-weighted MR images and on all other sequences, including those with fat signal suppression. In one of these disks, a strongly negative absorption coefficient was focally measured by CT scan, suggesting a fatty content. The histological examination of the ankylosed calcified disk revealed the presence of well-differentiated bone tissue and fatty marrow within the disk. The high signal intensity of some calcified intervertebral disks on T1-weighted MR images can result from the presence of fatty marrow, probably related to a disk ossification process in ankylosed spines.
Light field creating and imaging with different order intensity derivatives
NASA Astrophysics Data System (ADS)
Wang, Yu; Jiang, Huan
2014-10-01
Microscopic image restoration and reconstruction is a challenging topic in image processing and computer vision, with wide applications in the life sciences, biology and medicine. A microscopic light-field creation and three-dimensional (3D) reconstruction method is proposed for transparent or partially transparent microscopic samples, based on the Taylor expansion theorem and polynomial fitting. First, the image stack of the specimen is divided into several groups, in an overlapping or non-overlapping way, along the optical axis, and the first image of every group is regarded as the reference image. Then intensity derivatives of different orders are calculated using all the images of each group and a polynomial fitting method, based on the assumption that the specimen structure contained in the image stack varies smoothly and approximately linearly over a small range along the optical axis. Subsequently, new images located at any position a distance Δz along the optical axis from the reference image can be generated by means of the Taylor expansion theorem and the calculated intensity derivatives. Finally, the microscopic specimen can be reconstructed in 3D using deconvolution technology and all the images, including both the observed and the generated images. The experimental results show the effectiveness and feasibility of our method.
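A minimal sketch of the derivative-fitting and Taylor-expansion steps under the stated smoothness assumption, using NumPy; the per-pixel polynomial order, the function names, and the stack layout (z, rows, columns) are illustrative choices rather than the authors' implementation.

import numpy as np
from math import factorial

def z_derivatives(image_stack, z_positions, order=2):
    """Per-pixel polynomial fit along z; returns d^k I/dz^k at the reference plane z_positions[0]."""
    stack = np.asarray(image_stack, dtype=float)            # shape (n_z, H, W)
    z = np.asarray(z_positions, dtype=float) - z_positions[0]
    coeffs = np.polynomial.polynomial.polyfit(z, stack.reshape(len(z), -1), order)
    return [factorial(k) * coeffs[k].reshape(stack.shape[1:]) for k in range(order + 1)]

def image_at_offset(dz, derivatives):
    """Taylor expansion: I(z0 + dz) ~ sum_k (d^k I/dz^k) * dz^k / k!"""
    return sum(d * dz**k / factorial(k) for k, d in enumerate(derivatives))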
Visualization of Middle Ear Ossicles in Elder Subjects with Ultra-short Echo Time MR Imaging.
Naganawa, Shinji; Nakane, Toshiki; Kawai, Hisashi; Taoka, Toshiaki; Suzuki, Kojiro; Iwano, Shingo; Satake, Hiroko; Grodzki, David
2017-04-10
To evaluate the visualization of middle ear ossicles by ultra-short echo time magnetic resonance (MR) imaging at 3T in subjects over 50 years old. Sixty ears from 30 elderly patients who underwent surgical or interventional treatment for neurovascular diseases were included (ages: 50-82, median age: 65; 10 men, 20 women). Patients received follow-up MR imaging including routine T1- and T2-weighted images, time-of-flight MR angiography, and ultra-short echo time imaging (PETRA, pointwise encoding time reduction with radial acquisition). All patients underwent computed tomography (CT) angiography before treatment. Thin-section source CT images were correlated with PETRA images. Scan parameters for PETRA were: TR 3.13, TE 0.07, flip angle 6 degrees, 0.83 × 0.83 × 0.83 mm resolution, 3 min 43 s scan time. Two radiologists retrospectively evaluated the visibility of each ossicular structure as positive or negative using PETRA images. The structures evaluated included the head of the malleus, manubrium of the malleus, body of the incus, long process of the incus, and the stapes. Signal intensity of the ossicles was classified as: between labyrinthine fluid and air, similar to labyrinthine fluid, between labyrinthine fluid and cerebellar parenchyma, or higher than cerebellar parenchyma. In all ears, the body of the incus was visible. The head of the malleus was visualized in 36/60 ears. The manubrium of the malleus and the long process of the incus were visualized in 1/60 and 4/60 ears, respectively. The stapes were not visualized in any ear. Signal intensity of the visible structures was between labyrinthine fluid and air in all ears. The body of the incus was consistently visualized with intensity between air and labyrinthine fluid on PETRA images in aged subjects. Poor visualization of the manubrium of the malleus, long process of the incus, and the stapes limits the clinical significance of middle ear imaging with current PETRA methods.
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1990-01-01
Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
Extraction and labeling high-resolution images from PDF documents
NASA Astrophysics Data System (ADS)
Chachra, Suchet K.; Xue, Zhiyun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.
2013-12-01
Accuracy of content-based image retrieval is affected by image resolution, among other factors. Higher-resolution images enable extraction of image features that more accurately represent the image content. In order to improve the relevance of search results for our biomedical image search engine, Open-I, we have developed techniques to extract and label high-resolution versions of figures from biomedical articles supplied in the PDF format. Open-I uses the open-access subset of biomedical articles from the PubMed Central repository hosted by the National Library of Medicine. Articles are available in XML and in publisher-supplied PDF formats. As these PDF documents contain little or no metadata to identify the embedded images, the task includes labeling images according to their figure number in the article after they have been successfully extracted. For this purpose we use the labeled small-size images provided with the XML web version of the article. This paper describes the image extraction process and two alternative approaches to image labeling: one measures the similarity between two images based upon their intensity projections onto the coordinate axes, and the other upon the normalized cross-correlation between the intensities of the two images. Using image identification based on intensity projection, we were able to achieve a precision of 92.84% and a recall of 82.18% in labeling the extracted images.
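A minimal sketch of the two similarity measures as they might be computed, assuming the extracted figure and the labeled thumbnail have already been converted to grayscale and resized to the same shape; function names are illustrative.

import numpy as np

def projection_similarity(a, b):
    """Correlation of the row/column intensity projections of two same-shape images."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pa = np.concatenate([a.sum(axis=0), a.sum(axis=1)])
    pb = np.concatenate([b.sum(axis=0), b.sum(axis=1)])
    pa, pb = pa - pa.mean(), pb - pb.mean()
    return float(pa @ pb / (np.linalg.norm(pa) * np.linalg.norm(pb)))

def normalized_cross_correlation(a, b):
    """Zero-mean normalized cross-correlation of two same-shape images."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))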
Automatic analysis and quantification of fluorescently labeled synapses in microscope images
NASA Astrophysics Data System (ADS)
Yona, Shai; Katsman, Alex; Orenbuch, Ayelet; Gitler, Daniel; Yitzhaky, Yitzhak
2011-09-01
The purpose of this work is to classify and quantify synapses and their properties in mouse hippocampal cultures, from images acquired by a fluorescence microscope. Quantification features include the number of synapses, their intensity and their size characteristics. The images obtained by the microscope contain hundreds to several thousands of synapses with various elliptic-like shape features and intensities. These images also include other features, such as glia cells and other biological objects beyond the focal plane; those features reduce the visibility of the synapses and disrupt the segmentation process. The proposed method comprises several steps, including background subtraction, identification of suspected synapse centers as local maxima of small neighborhoods, evaluation of the tendency of objects to be synapses according to intensity properties in their larger neighborhoods, classification of detected synapses into categories such as bulks or single synapses and, finally, delimiting the borders of each synapse.
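A minimal sketch of the first two steps (background subtraction and local-maximum candidate detection), assuming NumPy and SciPy; the filter sizes and intensity threshold are illustrative and would need tuning to the actual microscope data.

import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter

def candidate_synapse_centers(image, background_size=51, maxima_size=5, min_intensity=10.0):
    """Return (row, col) candidates: local maxima of the background-subtracted image."""
    img = np.asarray(image, dtype=float)
    background = uniform_filter(img, size=background_size)      # coarse background estimate
    corrected = np.clip(img - background, 0, None)
    is_local_max = maximum_filter(corrected, size=maxima_size) == corrected
    rows, cols = np.nonzero(is_local_max & (corrected > min_intensity))
    return list(zip(rows, cols))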
NASA Astrophysics Data System (ADS)
Cho, Hoonkyung; Chun, Joohwan; Song, Sungchan
2016-09-01
Tracking of dim moving targets in infrared image sequences, in the presence of high clutter and noise, has recently been under intensive investigation. The track-before-detect (TBD) algorithm, which processes the image sequence over a number of frames before deciding on the target track and existence, is known to be especially attractive in very low SNR environments (≤ 3 dB). In this paper, we briefly present a three-dimensional (3-D) TBD with dynamic programming (TBD-DP) algorithm using multiple IR image sensors. Since the traditional two-dimensional TBD algorithm cannot track and detect targets along the viewing direction, we use 3-D TBD with multiple sensors and strictly analyze the detection performance (false-alarm and detection probabilities) based on the Fisher-Tippett-Gnedenko theorem. The 3-D TBD-DP algorithm, which does not require a separate image registration step, uses the pixel intensity values jointly read off from multiple image frames to compute the merit function required in the DP process. Therefore, we also establish the relationship between the pixel coordinates of the image frames and the reference coordinates.
LED-based endoscopic light source for spectral imaging
NASA Astrophysics Data System (ADS)
Browning, Craig M.; Mayes, Samuel; Favreau, Peter; Rich, Thomas C.; Leavesley, Silas J.
2016-03-01
Colorectal cancer is the third leading cause of cancer death in the United States [1]. The current screening for colorectal cancer is an endoscopic procedure using white light endoscopy (WLE). Multiple new methods are being tested to replace WLE, for example narrow-band imaging and autofluorescence imaging [2]. However, these methods do not meet the need for higher specificity or sensitivity. The goal of this project is to modify the presently used endoscope light source to house 16 narrow-wavelength LEDs for spectral imaging in real time while increasing sensitivity and specificity. The process was to take an Olympus CLK-4 light source and replace the lamp and electronics with 16 LEDs and new circuitry. This allows control of the power and intensity of the LEDs. It required a larger enclosure to house a bracket system for the solid light guide (lightpipe), three new circuit boards, a power source and National Instruments hardware/software for computer control. The result was a successfully designed retrofit with all the new features. LED testing demonstrated the ability to control each wavelength's intensity. The measured intensity over the voltage range will provide the information needed to couple the camera for imaging. Overall the project was successful; the modifications to the light source added the controllable LEDs. This brings the research one step closer to the main goal of spectral imaging for early detection of colorectal cancer. Future goals are to connect the camera and test the imaging process.
Geng, Xiaonan; Li, Qiang; Tsui, Pohsiang; Wang, Chiaoyin; Liu, Haoli
2013-09-01
To evaluate the reliability of diagnostic ultrasound-based temperature and elasticity imaging during radiofrequency ablation (RFA) through ex vivo experiments. Porcine liver samples (n=7) were employed for RFA experiments with exposures of different power intensities (10 and 50 W). The RFA process was monitored by a diagnostic ultrasound imager, and the data were captured postoperatively for further temperature and elasticity image analysis. Infrared thermometry was concurrently applied to provide temperature-change calibration during the RFA process. Results from this study demonstrated that temperature imaging was valid under 10 W RF exposure (r=0.95), but the ablation zone was no longer consistent with the reference infrared temperature distribution under high RF exposures. The elasticity change could well reflect the ablation zone under a 50 W exposure, whereas under low exposures the thermal lesion could not be well detected due to the limited range of temperature elevation and incomplete tissue necrosis. Diagnostic ultrasound-based temperature and elasticity imaging is valid for monitoring the RFA process. Temperature estimation can well reflect mild-power RF ablation dynamics, whereas the elastic-change estimation can well predict tissue necrosis. This study provides advances toward using diagnostic ultrasound to monitor RFA or other thermal-based interventions.
Reducing Speckle In One-Look SAR Images
NASA Technical Reports Server (NTRS)
Nathan, K. S.; Curlander, J. C.
1990-01-01
Local-adaptive-filter algorithm incorporated into digital processing of synthetic-aperture-radar (SAR) echo data to reduce speckle in resulting imagery. Involves use of image statistics in vicinity of each picture element, in conjunction with original intensity of element, to estimate brightness more nearly proportional to true radar reflectance of corresponding target. Increases ratio of signal to speckle noise without substantial degradation of resolution common to multilook SAR images. Adapts to local variations of statistics within scene, preserving subtle details. Computationally simple. Lends itself to parallel processing of different segments of image, making possible increased throughput.
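As an illustration of the kind of local adaptive filter described, here is a minimal sketch of a Lee-type speckle filter (named here as a representative technique, not necessarily the exact algorithm used): each pixel's output blends its original intensity with the local mean, weighted by the local statistics relative to an assumed speckle-noise variance. The window size and noise estimate are illustrative.

import numpy as np
from scipy.ndimage import uniform_filter

def lee_type_filter(image, size=7, noise_var=None):
    """Local adaptive speckle suppression using local mean/variance statistics."""
    img = np.asarray(image, dtype=float)
    local_mean = uniform_filter(img, size)
    local_var = uniform_filter(img * img, size) - local_mean**2
    if noise_var is None:
        noise_var = float(np.mean(local_var))            # crude global noise estimate
    weight = np.clip((local_var - noise_var) / np.maximum(local_var, 1e-12), 0.0, 1.0)
    return local_mean + weight * (img - local_mean)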
An evaluation of the effectiveness of adaptive histogram equalization for contrast enhancement.
Zimmerman, J B; Pizer, S M; Staab, E V; Perry, J R; McCartney, W; Brenton, B C
1988-01-01
Adaptive histogram equalization (AHE) and intensity windowing have been compared using psychophysical observer studies. Experienced radiologists were shown clinical CT (computerized tomographic) images of the chest. Into some of the images, appropriate artificial lesions were introduced; the physicians were then shown the images processed with both AHE and intensity windowing. They were asked to assess the probability that a given image contained the artificial lesion, and their accuracy was measured. The results of these experiments show that for this particular diagnostic task, there was no significant difference in the ability of the two methods to depict luminance contrast; thus, further evaluation of AHE using controlled clinical trials is indicated.
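For reference, a minimal sketch of the intensity-windowing operation used as the comparison method: CT values inside a chosen window are mapped linearly onto the display range and everything outside is clipped. The window centre and width below are illustrative, not the settings used in the study.

import numpy as np

def intensity_window(image, center=40.0, width=400.0, display_max=255.0):
    """Map [center - width/2, center + width/2] linearly to [0, display_max], with clipping."""
    low = center - width / 2.0
    scaled = (np.asarray(image, dtype=float) - low) / width * display_max
    return np.clip(scaled, 0.0, display_max)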
Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for the fusion of a low-resolution multi-spectral (MS) image and a high-resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS transform. Then, the spectrum diagrams of the intensity component of the multi-spectral image and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency information of the spectrum diagrams. The SSIM index is used to evaluate the high-frequency information of the spectrum diagrams in order to assign the weights in the fusion process adaptively. After the new spectrum diagram is obtained according to the fusion rule, the final fused image can be obtained by the inverse 2D-PWVD and the inverse GIHS transform. Experimental results show that the proposed method can obtain high-quality fusion images.
Phase retrieval by coherent modulation imaging.
Zhang, Fucai; Chen, Bo; Morrison, Graeme R; Vila-Comamala, Joan; Guizar-Sicairos, Manuel; Robinson, Ian K
2016-11-18
Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single-diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit wave. This coherent modulation imaging method removes inherent ambiguities of coherent diffraction imaging and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence and relaxes dynamic range requirements on the detector. Coherent modulation imaging provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free-electron lasers.
2007-12-12
Like Earth, Saturn has an invisible ring of energetic ions trapped in its magnetic field. This feature is known as a "ring current." This ring current has been imaged with a special camera on Cassini sensitive to energetic neutral atoms. This is a false-color map of the intensity of the energetic neutral atoms emitted from the ring current through a process called charge exchange. In this process a trapped energetic ion steals an electron from a cold gas atom, becomes neutral, and escapes the magnetic field. The Cassini Magnetospheric Imaging Instrument's ion and neutral camera records the intensity of the escaping particles, which provides a map of the ring current. In this image, the colors represent the intensity of the neutral emission, which is a reflection of the trapped ions. This "ring" is much farther from Saturn (roughly five times farther) than Saturn's famous icy rings. Red in the image represents the higher intensity of the particles, while blue is less intense. Saturn's ring current had not been mapped before on a global scale; only "snippets" or areas were mapped previously, but not in this detail. This instrument allows scientists to produce movies (see PIA10083) that show how this ring changes over time. These movies reveal a dynamic system, which is usually not as uniform as depicted in this image. The ring current is doughnut shaped, but in some instances it appears as if someone took a bite out of it. This image was obtained on March 19, 2007, at a latitude of about 54.5 degrees and a radial distance of 1.5 million kilometres (920,000 miles). Saturn is at the center, and the dotted circles represent the orbits of the moons Rhea and Titan. The Z axis points parallel to Saturn's spin axis, the X axis points roughly sunward in the sun-spin axis plane, and the Y axis completes the system, pointing roughly toward dusk. The ion and neutral camera's field of view is marked by the white line and accounts for the cut-off of the image on the left. The image is an average of the activity over a roughly 3-hour period. http://photojournal.jpl.nasa.gov/catalog/PIA10094
A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading
NASA Astrophysics Data System (ADS)
Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.
2018-05-01
Image processing methods have been used in non-destructive tests of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image-capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time has exceeded its lifetime, it is expected that this will affect tomato classification. The objective of this study was to determine the minimum light levels that affect classification accuracy. The study was conducted by varying the light level from minimum to maximum on tomatoes in the image-capturing box and then investigating its effect on image characteristics. The results showed that light intensity affects two variables that are important for classification, namely the area and color of the captured image. The image processing program was able to correctly determine the weight and classification of the tomatoes when the light level was between 30 lx and 140 lx.
Ristivojević, Petar; Trifković, Jelena; Vovk, Irena; Milojković-Opsenica, Dušanka
2017-01-01
Considering the introduction of phytochemical fingerprint analysis as a method of screening complex natural products for the presence of the most bioactive compounds, the use of chemometric classification methods, the application of powerful scanning, image capturing and processing devices and algorithms, and advances in the development of novel stationary phases as well as various separation modalities, high-performance thin-layer chromatography (HPTLC) fingerprinting is becoming an attractive and fruitful field of separation science. Multivariate image analysis is crucial for proper data acquisition. In the current study, different image processing procedures were studied and compared in detail on the example of HPTLC chromatograms of plant resins. Variables such as the gray intensities of pixels along the solvent front, peak areas and mean peak values were used as input data and compared to obtain the best classification models. Important steps in image analysis (baseline removal, denoising, target peak alignment and normalization) were pointed out. A numerical data set based on the mean values of selected bands and the intensities of pixels along the solvent front proved to be the most convenient for planar-chromatographic profiling, although it required at least basic knowledge of image processing methodology, and could be proposed for further investigation in HPTLC fingerprinting. Copyright © 2016 Elsevier B.V. All rights reserved.
Ang, Dan B; Angelopoulos, Christos; Katz, Jerald O
2006-11-01
The goals of this in vitro study were to determine the effect of signal fading of DenOptix photo-stimulable storage phosphor imaging plates scanned with a delay and to determine the effect on the diagnostic quality of the image. In addition, we sought to correlate signal fading with image spatial resolution and average pixel intensity values. Forty-eight images were obtained of a test specimen apparatus and scanned at 6 delayed time intervals: immediately scanned, 1 hour, 8 hours, 24 hours, 72 hours, and 168 hours. Six general dentists using Vixwin2000 software performed a measuring task to determine the location of an endodontic file tip and root apex. One-way ANOVA with repeated measures was used to determine the effect of signal fading (delayed scan time) on diagnostic image quality and average pixel intensity value. There was no statistically significant difference in diagnostic image quality resulting from signal fading. No difference was observed in spatial resolution of the images. There was a statistically significant difference in the pixel intensity analysis of an 8-step aluminum wedge between immediate scanning and 24-hour delayed scan time. There was an effect of delayed scanning on the average pixel intensity value. However, there was no effect on image quality and raters' ability to perform a clinical identification task. Proprietary software of the DenOptix digital imaging system demonstrates an excellent ability to process a delayed scan time signal and create an image of diagnostic quality.
Optical image encryption method based on incoherent imaging and polarized light encoding
NASA Astrophysics Data System (ADS)
Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C.
2018-05-01
We propose an incoherent encoding system for image encryption based on a polarized encoding method combined with incoherent imaging. Incoherent imaging is the core component of this proposal, in which the incoherent point-spread function (PSF) of the imaging system serves as the main key to encode the input intensity distribution through a convolution operation. An array of retarders and polarizers is placed on the input plane of the imaging structure to encrypt the polarization state of light based on Mueller polarization calculus. The proposal makes full use of the randomness of the polarization parameters and the incoherent PSF, so that a multidimensional key space is generated to deal with illegal attacks. Mueller polarization calculus and the incoherent illumination of the imaging structure ensure that only intensity information is manipulated. Another key advantage is that complicated processing and recording related to a complex-valued signal are avoided. The encoded information is just an intensity distribution, which is advantageous for data storage and transmission because the information expansion accompanying conventional encryption methods is also avoided. The decryption procedure can be performed digitally or using optoelectronic devices. Numerical simulation tests demonstrate the validity of the proposed scheme.
Accelerated Adaptive MGS Phase Retrieval
NASA Technical Reports Server (NTRS)
Lam, Raymond K.; Ohara, Catherine M.; Green, Joseph J.; Bikkannavar, Siddarayappa A.; Basinger, Scott A.; Redding, David C.; Shi, Fang
2011-01-01
The Modified Gerchberg-Saxton (MGS) algorithm is an image-based wavefront-sensing method that can turn any science instrument focal plane into a wavefront sensor. MGS characterizes optical systems by estimating the wavefront errors in the exit pupil using only intensity images of a star or other point source of light. This innovative implementation of MGS significantly accelerates the MGS phase retrieval algorithm by using stream-processing hardware on conventional graphics cards. Stream processing is a relatively new, yet powerful, paradigm that allows parallel processing of applications which apply single instructions to multiple data (SIMD). These stream processors are designed specifically to support large-scale parallel computing on a single graphics chip. Computationally intensive algorithms, such as the Fast Fourier Transform (FFT), are particularly well suited for this computing environment. This high-speed version of MGS exploits commercially available hardware to accomplish the same objective in a fraction of the original time. The approach involves performing matrix calculations on nVidia graphics cards. The graphics processing unit (GPU) is hardware that is specialized for computationally intensive, highly parallel computation. From the software perspective, a parallel programming model called CUDA is used to transparently scale multicore parallelism in hardware. This technology gives computationally intensive applications access to the processing power of the nVidia GPUs through a C/C++ programming interface. The AAMGS (Accelerated Adaptive MGS) software takes advantage of these advanced technologies to accelerate optical phase error characterization. With a single PC that contains four nVidia GTX-280 graphics cards, the new implementation can process four images simultaneously to produce a JWST (James Webb Space Telescope) wavefront measurement 60 times faster than the previous code.
Imaging System With Confocally Self-Detecting Laser.
Webb, Robert H.; Rogomentich, Fran J.
1996-10-08
The invention relates to a confocal laser imaging system and method. The system includes a laser source, a beam splitter, focusing elements, and a photosensitive detector. The laser source projects a laser beam along a first optical path at an object to be imaged, and modulates the intensity of the projected laser beam in response to light reflected from the object. A beam splitter directs a portion of the projected laser beam onto a photodetector. The photodetector monitors the intensity of laser output. The laser source can be an electrically scannable array, with a lens or objective assembly for focusing light generated by the array onto the object of interest. As the array is energized, its laser beams scan over the object, and light reflected at each point is returned by the lens to the element of the array from which it originated. A single photosensitive detector element can generate an intensity-representative signal for all lasers of the array. The intensity-representative signal from the photosensitive detector can be processed to provide an image of the object of interest.
Multimodal computational microscopy based on transport of intensity equation
NASA Astrophysics Data System (ADS)
Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao
2016-12-01
The transport of intensity equation (TIE) is a powerful tool for phase retrieval and quantitative phase imaging, requiring intensity measurements only at axially closely spaced planes, without a separate reference beam. It does not require coherent illumination and works well on conventional bright-field microscopes. The quantitative phase reconstructed by TIE gives valuable information that has been encoded in the complex wave field by passage through a sample of interest. Such information may provide tremendous flexibility to emulate various microscopy modalities computationally without requiring specialized hardware components. We develop the requisite theory to describe such a hybrid computational multimodal imaging system, which yields quantitative phase, Zernike phase contrast, differential interference contrast, and light field moment imaging simultaneously. This makes a variety of observation modes readily accessible for biomedical samples. We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable lens-based TIE system, combined with the appropriate postprocessing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
Detection technology of polarization target based on curvelet transform in turbid liquid
NASA Astrophysics Data System (ADS)
Zhang, Su; Duan, Jin; Fu, Qiang; Zhan, Juntong; Ma, Wanzhuo
2015-08-01
To suppress interference when detecting targets in a turbid medium, a polarization detection technique based on the Curvelet transform was applied. The method first adjusts the angle of a polarizing film to acquire intensity images at 0°, 60° and 120°, then derives the Stokes vector, degree of polarization (DOP) and polarization angle (PA) images according to the Mueller matrix. Finally, the DOP and intensity images are decomposed by the Curvelet transform and their high- and low-frequency coefficients are fused; after the processed coefficients are reconstructed, a target that is easier to detect is obtained. To validate the method, many targets in turbid media were detected with the polarization approach and their DOP and intensity images were fused with the Curvelet transform algorithm. As an example, screws in moderate- and high-concentration liquids are presented, showing that unpolarized targets become less visible as the concentration increases. When the DOP and intensity images are fused by the Curvelet transform, the targets emerge clearly from the turbid medium, and the quality evaluation parameters for clarity, degree of contrast and spatial frequency are markedly enhanced compared with the unpolarized images, demonstrating the feasibility of the method.
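The Stokes step described above follows from the standard intensity relation for an ideal linear polarizer, I(θ) = (S0 + S1 cos 2θ + S2 sin 2θ)/2. A minimal sketch of that step, assuming three co-registered intensity images taken at 0°, 60° and 120° (the Curvelet fusion stage is not reproduced here), might look like this:

```python
import numpy as np

def stokes_from_three_angles(i0, i60, i120):
    """Linear Stokes components, DOP and polarization angle from images taken
    through a linear polarizer at 0, 60 and 120 degrees."""
    s0 = 2.0 / 3.0 * (i0 + i60 + i120)
    s1 = 2.0 / 3.0 * (2.0 * i0 - i60 - i120)
    s2 = 2.0 / np.sqrt(3.0) * (i60 - i120)
    dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)   # degree of (linear) polarization
    pa = 0.5 * np.arctan2(s2, s1)                          # polarization angle, radians
    return s0, s1, s2, dop, pa
```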
MRI-based quantification of Duchenne muscular dystrophy in a canine model
NASA Astrophysics Data System (ADS)
Wang, Jiahui; Fan, Zheng; Kornegay, Joe N.; Styner, Martin A.
2011-03-01
Duchenne muscular dystrophy (DMD) is a progressive and fatal X-linked disease caused by mutations in the DMD gene. Magnetic resonance imaging (MRI) has shown potential to provide non-invasive and objective biomarkers for monitoring disease progression and therapeutic effect in DMD. In this paper, we propose a semi-automated scheme to quantify MRI features of golden retriever muscular dystrophy (GRMD), a canine model of DMD. Our method was applied to a natural history data set and a hydrodynamic limb perfusion data set. The scheme is composed of three modules: pre-processing, muscle segmentation, and feature analysis. The pre-processing module includes: calculation of T2 maps, spatial registration of T2 weighted (T2WI) images, T2 weighted fat suppressed (T2FS) images, and T2 maps, and intensity calibration of T2WI and T2FS images. We then manually segment six pelvic limb muscles. For each of the segmented muscles, we finally automatically measure volume and intensity statistics of the T2FS images and T2 maps. For the natural history study, our results showed that four of six muscles in affected dogs had smaller volumes and all had higher mean intensities in T2 maps as compared to normal dogs. For the perfusion study, the muscle volumes and mean intensities in T2FS were increased in the post-perfusion MRI scans as compared to pre-perfusion MRI scans, as predicted. We conclude that our scheme successfully performs quantitative analysis of muscle MRI features of GRMD.
NASA Astrophysics Data System (ADS)
Eckert, Hann-Jörg; Petrášek, Zdeněk; Kemnitz, Klaus
2006-10-01
Picosecond fluorescence lifetime imaging microscopy (FLIM) provides a most valuable tool to analyze the primary processes of photosynthesis in individual cells and chloroplasts of living cells. In order to obtain correct lifetimes of the excited states, the peak intensity of the exciting laser pulses as well as the average intensity has to be sufficiently low to avoid distortions of the kinetics by processes such as singlet-singlet annihilation, closing of the reaction centers or photoinhibition. In the present study this requirement is achieved by non-scanning wide-field FLIM based on time- and space-correlated single-photon counting (TSCSPC) using a novel microchannel plate photomultiplier with quadrant anode (QA-MCP) that allows parallel acquisition of time-resolved images under minimally invasive low-excitation conditions. The potential of the wide-field TCSPC method is demonstrated by presenting results obtained from measurements of the fluorescence dynamics in individual chloroplasts of moss leaves and living cells of the chlorophyll d-containing cyanobacterium Acaryochloris marina.
Texture Feature Analysis for Different Resolution Level of Kidney Ultrasound Images
NASA Astrophysics Data System (ADS)
Kairuddin, Wan Nur Hafsha Wan; Mahmud, Wan Mahani Hafizah Wan
2017-08-01
Image feature extraction is a technique to identify the characteristics of an image. The objective of this work is to discover the texture features that best describe the tissue characteristics of a healthy kidney from ultrasound (US) images. Three ultrasound machines with different specifications are used in order to obtain images of different quality (different resolution). Initially, the acquired images are pre-processed to de-noise the speckle so that the image preserves the pixels in a region of interest (ROI) for further extraction; a Gaussian low-pass filter is chosen as the filtering method in this work. 150 enhanced images are then segmented by creating a foreground and background of the image, where a mask is created to eliminate unwanted intensity values. Statistically based texture feature methods are used, namely the Intensity Histogram (IH), Gray-Level Co-Occurrence Matrix (GLCM) and Gray-Level Run-Length Matrix (GLRLM); these methods depend on the spatial distribution of intensity values or gray levels in the kidney region. Using one-way ANOVA in SPSS, the results indicated that three features from the GLCM (Contrast, Difference Variance and Inverse Difference Moment Normalized) were not statistically significantly different across the machines, suggesting that these three features describe healthy kidney characteristics regardless of the ultrasound image quality.
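As an illustration of the GLCM step only, the sketch below computes a co-occurrence matrix and a few Haralick-style properties for a kidney ROI with scikit-image (version 0.19 or later); the specific features named above (Difference Variance, Inverse Difference Moment Normalized) are not built into graycoprops, so contrast, homogeneity and energy stand in purely as examples.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_uint8):
    """Compute a normalized GLCM over a kidney ROI (uint8 image) and
    return a small dictionary of texture descriptors."""
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {
        "contrast": graycoprops(glcm, "contrast").mean(),
        "homogeneity": graycoprops(glcm, "homogeneity").mean(),
        "energy": graycoprops(glcm, "energy").mean(),
    }
```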
Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing
NASA Technical Reports Server (NTRS)
Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.
1995-01-01
Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no-leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded with a visible-wavelength video camera. These data were processed frame by frame over the time interval of interest using an image-processor implementation of the leak detection algorithm. In addition, a 20-second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full-frame mean value versus time verify the effectiveness of the system.
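A minimal digital sketch of the per-pixel temporal chain described above (highpass filter, moving-average filter, rectification, threshold, plus the full-frame mean indicator), assuming a video stack of shape (time, rows, cols); the filter length and threshold below are illustrative placeholders, not the values used in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def detect_leak(frames, avg_window=5, threshold=10.0):
    """frames: float array (T, H, W). Returns a per-pixel leak mask sequence
    and the full-frame mean response used as a scalar leak indicator."""
    highpass = np.diff(frames, axis=0)                              # simple temporal highpass (frame differencing)
    smoothed = uniform_filter1d(highpass, size=avg_window, axis=0)  # moving average in time
    response = np.abs(smoothed)                                     # rectify
    leak_mask = response > threshold                                # binary leak/no-leak per pixel
    frame_mean = response.mean(axis=(1, 2))                         # time-varying full-frame mean estimate
    return leak_mask, frame_mean
```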
Multiplexed 3D FRET imaging in deep tissue of live embryos
Zhao, Ming; Wan, Xiaoyang; Li, Yu; Zhou, Weibin; Peng, Leilei
2015-01-01
Current deep tissue microscopy techniques are mostly restricted to intensity mapping of fluorophores, which significantly limits their applications in investigating biochemical processes in vivo. We present a deep tissue multiplexed functional imaging method that probes multiple Förster resonant energy transfer (FRET) sensors in live embryos with high spatial resolution. The method simultaneously images fluorescence lifetimes in 3D with multiple excitation lasers. Through quantitative analysis of triple-channel intensity and lifetime images, we demonstrated that Ca2+ and cAMP levels of live embryos expressing dual FRET sensors can be monitored simultaneously at microscopic resolution. The method is compatible with a broad range of FRET sensors currently available for probing various cellular biochemical functions. It opens the door to imaging complex cellular circuitries in whole live organisms. PMID:26387920
Image Processing for Bioluminescence Resonance Energy Transfer Measurement-BRET-Analyzer.
Chastagnier, Yan; Moutin, Enora; Hemonnot, Anne-Laure; Perroy, Julie
2017-01-01
A growing number of tools now allow live recordings of various signaling pathways and protein-protein interaction dynamics in time and space by ratiometric measurements, such as Bioluminescence Resonance Energy Transfer (BRET) imaging. Accurate and reproducible analysis of ratiometric measurements has thus become mandatory for interpreting quantitative imaging. To fulfill this need, we have developed an open-source toolset for Fiji, BRET-Analyzer, allowing a systematic analysis from image processing to ratio quantification. We share this open-source solution and a step-by-step tutorial at https://github.com/ychastagnier/BRET-Analyzer. This toolset provides (1) image background subtraction, (2) image alignment over time, (3) a composite thresholding method applied to the image used as the denominator of the ratio to refine the precise limits of the sample, (4) pixel-by-pixel division of the images and efficient distribution of the ratio intensity on a pseudocolor scale, and (5) quantification of the ratio mean intensity and standard deviation among pixels in chosen areas. In addition to systematizing the analysis process, we show that BRET-Analyzer allows proper reconstitution and quantification of the ratiometric image in time and space, even from heterogeneous subcellular volumes. Indeed, analyzing the same images twice, we demonstrate that, compared to standard analysis, BRET-Analyzer precisely defines the limits of the luminescent specimen, revealing its strengths for both small and large ensembles over time. For example, we followed and quantified, in live cells, scaffold protein interaction dynamics in neuronal subcellular compartments, including dendritic spines, for half an hour. In conclusion, BRET-Analyzer provides a complete, versatile and efficient toolset for automated, reproducible and meaningful image ratio analysis.
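The core ratio computation can be sketched in a few lines; this is not the Fiji plugin itself, just an illustration of steps (1), (3), (4) and (5), with a plain Otsu threshold standing in for the composite thresholding described above and the alignment step (2) omitted. Names and parameters are ours.

```python
import numpy as np
from skimage.filters import threshold_otsu

def bret_ratio(acceptor, donor, background=0.0):
    """Pixel-by-pixel BRET-style ratio restricted to pixels where the donor
    (denominator) image is above threshold; returns ratio map, mean and SD."""
    num = acceptor.astype(float) - background      # (1) background subtraction
    den = donor.astype(float) - background
    mask = den > threshold_otsu(den)               # (3) threshold the denominator image
    ratio = np.full(den.shape, np.nan)
    ratio[mask] = num[mask] / den[mask]            # (4) pixel-by-pixel division
    return ratio, np.nanmean(ratio), np.nanstd(ratio)  # (5) mean and spread in the masked area
```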
Optical diagnostics of mercury jet for an intense proton target.
Park, H; Tsang, T; Kirk, H G; Ladeinde, F; Graves, V B; Spampinato, P T; Carroll, A J; Titus, P H; McDonald, K T
2008-04-01
An optical diagnostic system is designed and constructed for imaging a free mercury jet interacting with a high intensity proton beam in a pulsed high-field solenoid magnet. The optical imaging system employs a back-illuminated laser shadow photography technique. Object illumination and image capture are transmitted through radiation-hard multimode optical fibers and flexible coherent imaging fibers. A retroreflected illumination design allows the entire passive imaging system to fit inside the bore of the solenoid magnet. A sequence of synchronized short laser light pulses is used to freeze the transient events, and the images are recorded by several high-speed charge-coupled devices. Quantitative and qualitative data analysis using image processing based on a probabilistic approach is described. The characteristics of the free mercury jet as a high-power target for beam-jet interaction at various levels of the magnetic induction field are reported in this paper.
NASA Astrophysics Data System (ADS)
Gong, Li-Hua; He, Xiang-Tao; Tan, Ru-Chao; Zhou, Zhi-Hong
2018-01-01
In order to obtain high-quality color images, it is important to keep the hue component unchanged while emphasizing the intensity or saturation component. As a common color model, the Hue-Saturation-Intensity (HSI) model is widely used in image processing. A new single-channel quantum color image encryption algorithm based on the HSI model and the quantum Fourier transform (QFT) is investigated, in which the color components of the original color image are converted to HSI and the logistic map is employed to diffuse the relationships between pixels in the color components. Subsequently, the quantum Fourier transform is exploited to complete the encryption. The cipher-text is a combination of a gray image and a phase matrix. Simulations and theoretical analyses demonstrate that the proposed single-channel quantum color image encryption scheme based on the HSI model and quantum Fourier transform is secure and effective.
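Only the classical part of the scheme is easy to illustrate in code. The sketch below shows how a logistic-map keystream could diffuse one HSI component by XOR; the quantum Fourier transform stage is not modeled, and the key values and byte mapping are arbitrary assumptions.

```python
import numpy as np

def logistic_keystream(n, x0=0.3751, r=3.9999, burn_in=1000):
    """Generate n pseudo-random bytes from the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1.0 - x)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def diffuse_component(channel_uint8, x0=0.3751, r=3.9999):
    """XOR-diffuse a single 8-bit image component (e.g. the intensity plane);
    applying the same function again restores the original data."""
    flat = channel_uint8.ravel()
    key = logistic_keystream(flat.size, x0, r)
    return np.bitwise_xor(flat, key).reshape(channel_uint8.shape)
```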
Chang, C F; Williams, R C; Grano, D A; Downing, K H; Glaeser, R M
1983-01-01
This study investigates the causes of the apparent differences between the optical diffraction pattern of a micrograph of a Tobacco Mosaic Virus (TMV) particle, the optical diffraction pattern of a ten-fold photographically averaged image, and the computed diffraction pattern of the original micrograph. Peak intensities along the layer lines in the transform of the averaged image appear to be quite unlike those in the diffraction pattern of the original micrograph, and the diffraction intensities for the averaged image extend to unexpectedly high resolution. A carefully controlled, quantitative comparison reveals, however, that the optical diffraction pattern of the original micrograph and that of the ten-fold averaged image are essentially equivalent. Using computer-based image processing, we discovered that the peak intensities on the 6th layer line have values very similar in magnitude to the neighboring noise, in contrast to what was expected from the optical diffraction pattern of the original micrograph. This discrepancy was resolved by recording a series of optical diffraction patterns when the original micrograph was immersed in oil. These patterns revealed the presence of a substantial phase grating effect, which exaggerated the peak intensities on the 6th layer line, causing an erroneous impression that the high resolution features possessed a good signal-to-noise ratio. This study thus reveals some pitfalls and misleading results that can be encountered when using optical diffraction patterns to evaluate image quality.
High-contrast multilayer imaging of biological organisms through dark-field digital refocusing.
Faridian, Ahmad; Pedrini, Giancarlo; Osten, Wolfgang
2013-08-01
We have developed an imaging system to extract high contrast images from different layers of biological organisms. Utilizing a digital holographic approach, the system works without scanning through layers of the specimen. In dark-field illumination, scattered light has the main contribution in image formation, but in the case of coherent illumination, this creates a strong speckle noise that reduces the image quality. To remove this restriction, the specimen has been illuminated with various speckle-fields and a hologram has been recorded for each speckle-field. Each hologram has been analyzed separately and the corresponding intensity image has been reconstructed. The final image has been derived by averaging over the reconstructed images. A correlation approach has been utilized to determine the number of speckle-fields required to achieve a desired contrast and image quality. The reconstructed intensity images in different object layers are shown for different sea urchin larvae. Two multimedia files are attached to illustrate the process of digital focusing.
Statistical normalization techniques for magnetic resonance imaging.
Shinohara, Russell T; Sweeney, Elizabeth M; Goldsmith, Jeff; Shiee, Navid; Mateen, Farrah J; Calabresi, Peter A; Jarso, Samson; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M
2014-01-01
While computed tomography and other imaging techniques are measured in absolute units with physical meaning, magnetic resonance images are expressed in arbitrary units that are difficult to interpret and differ between study visits and subjects. Much work in the image processing literature on intensity normalization has focused on histogram matching and other histogram mapping techniques, with little emphasis on normalizing images to have biologically interpretable units. Furthermore, there are no formalized principles or goals for the crucial comparability of image intensities within and across subjects. To address this, we propose a set of criteria necessary for the normalization of images. We further propose simple and robust biologically motivated normalization techniques for multisequence brain imaging that have the same interpretation across acquisitions and satisfy the proposed criteria. We compare the performance of different normalization methods in thousands of images of patients with Alzheimer's disease, hundreds of patients with multiple sclerosis, and hundreds of healthy subjects obtained in several different studies at dozens of imaging centers.
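The specific biologically motivated techniques proposed in the paper are not reproduced here; as a generic illustration of the kind of normalization being discussed, a z-score normalization over a brain mask maps arbitrary MR intensities onto interpretable "standard deviations from mean brain intensity" units.

```python
import numpy as np

def zscore_normalize(volume, brain_mask):
    """Normalize an MR volume so that voxels inside the brain mask have zero
    mean and unit variance. volume: float array; brain_mask: boolean array
    of the same shape."""
    vals = volume[brain_mask]
    mu, sigma = vals.mean(), vals.std()
    return (volume - mu) / sigma
```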
Fluorescence Imaging Reveals Surface Contamination
NASA Technical Reports Server (NTRS)
Schirato, Richard; Polichar, Raulf
1992-01-01
In technique to detect surface contamination, object inspected illuminated by ultraviolet light to make contaminants fluoresce; low-light-level video camera views fluorescence. Image-processing techniques quantify distribution of contaminants. If fluorescence of material expected to contaminate surface is not intense, tagged with low concentration of dye.
Nonlinear Optical Image Processing with Bacteriorhodopsin Films
NASA Technical Reports Server (NTRS)
Downie, John D.; Deiss, Ron (Technical Monitor)
1994-01-01
The transmission properties of some bacteriorhodopsin film spatial light modulators are uniquely suited to allow nonlinear optical image processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude transmission feature of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. The bacteriorhodopsin film displays the logarithmic amplitude response for write beam intensities spanning a dynamic range greater than 2.0 orders of magnitude. We present experimental results demonstrating the principle and capability for several different image and noise situations, including deterministic noise and speckle. Using the bacteriorhodopsin film, we successfully filter out image noise from the transformed image that cannot be removed from the original image.
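The optical principle has a familiar digital analogue (homomorphic filtering): take a logarithm so that multiplicative speckle becomes additive, filter linearly in the Fourier domain, then exponentiate. The sketch below is that digital analogue only, not the optical bacteriorhodopsin processor, and the Gaussian low-pass used as the Fourier-plane filter is an arbitrary choice.

```python
import numpy as np

def homomorphic_despeckle(image, cutoff=0.1):
    """Digital analogue of the film's logarithmic response: log -> Fourier
    low-pass -> exp. cutoff is the filter radius in normalized frequency."""
    log_img = np.log1p(image.astype(float))            # multiplicative noise -> additive
    spec = np.fft.fftshift(np.fft.fft2(log_img))
    ny, nx = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    fr2 = fx[None, :]**2 + fy[:, None]**2
    lowpass = np.exp(-fr2 / (2.0 * cutoff**2))         # Gaussian filter in the Fourier plane
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spec * lowpass)))
    return np.expm1(filtered)                          # back to the intensity domain
```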
a Novel Ihs-Ga Fusion Method Based on Enhancement Vegetated Area
NASA Astrophysics Data System (ADS)
Niazi, S.; Mokhtarzade, M.; Saeedzadeh, F.
2015-12-01
Pan-sharpening methods aim to produce a more informative image containing the positive aspects of both source images. However, the pan-sharpening process usually introduces spectral and spatial distortions in the resulting fused image. The amount of these distortions varies greatly depending on the pan-sharpening technique as well as the type of data. Among the existing pan-sharpening methods, the Intensity-Hue-Saturation (IHS) technique is the most widely used for its efficiency and high spatial resolution. When the IHS method is used on IKONOS or QuickBird imagery, there is significant color distortion, mainly due to the wavelength range of the panchromatic image. Because panchromatic gray values in green vegetated regions are much larger than the gray values of the intensity image, a novel method is proposed that spatially adjusts the intensity image in vegetated areas. To do so, the normalized difference vegetation index (NDVI) is used to identify vegetation areas, in which the green band is enhanced according to the red and NIR bands. In this way an intensity image is obtained whose gray values are comparable to the panchromatic image. In addition, a genetic optimization algorithm is used to find the optimum weight parameters in order to obtain the best intensity image. Visual and statistical analysis proved the efficiency of the proposed method, as it significantly improved the fusion quality in comparison to the conventional IHS technique. The accuracy of the proposed pan-sharpening technique was also evaluated in terms of different spatial and spectral metrics. In this study, 7 metrics (Correlation Coefficient, ERGAS, RASE, RMSE, SAM, SID and Spatial Coefficient) were used to determine the quality of the pan-sharpened images. Experiments were conducted on two different data sets obtained by two different imaging sensors, IKONOS and QuickBird. The results showed that the evaluation metrics are more favourable for the proposed fused image than for other pan-sharpening methods.
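A rough sketch of the idea, assuming co-registered, resampled multispectral bands and a panchromatic band; the NDVI threshold and vegetation weights here are fixed placeholders, whereas the paper tunes its weights with a genetic algorithm, and the exact adjustment formula below is ours, not the authors'.

```python
import numpy as np

def ihs_fusion_vegetation(red, green, blue, nir, pan,
                          ndvi_threshold=0.3, w_green=0.4, w_nir=0.3):
    """Fast IHS-style fusion in which the intensity image is boosted over
    vegetated pixels (NDVI above threshold) before substitution with PAN."""
    intensity = (red + green + blue) / 3.0
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
    veg = ndvi > ndvi_threshold
    adjusted = intensity.copy()
    # raise the intensity in vegetated areas so it is closer to the PAN values
    adjusted[veg] = intensity[veg] + w_green * green[veg] + w_nir * nir[veg]
    delta = pan - adjusted                      # detail to inject (fast IHS formulation)
    return np.stack([red + delta, green + delta, blue + delta], axis=-1)
```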
NASA Astrophysics Data System (ADS)
Downie, John D.
1995-08-01
The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.
NASA Technical Reports Server (NTRS)
Downie, John D.
1995-01-01
The transmission properties of some bacteriorhodopsin-film spatial light modulators are uniquely suited to allow nonlinear optical image-processing operations to be applied to images with multiplicative noise characteristics. A logarithmic amplitude-transmission characteristic of the film permits the conversion of multiplicative noise to additive noise, which may then be linearly filtered out in the Fourier plane of the transformed image. I present experimental results demonstrating the principle and the capability for several different image and noise situations, including deterministic noise and speckle. The bacteriorhodopsin film studied here displays the logarithmic transmission response for write intensities spanning a dynamic range greater than 2 orders of magnitude.
Extraction of line properties based on direction fields.
Kutka, R; Stier, S
1996-01-01
The authors present a new set of algorithms for segmenting lines, mainly blood vessels in X-ray images, and extracting properties such as their intensities, diameters, and center lines. The authors developed a tracking algorithm that checks rules taking the properties of vessels into account. The tools even detect veins, arteries, or catheters of two pixels in diameter and with poor contrast. Compared with other algorithms, such as the Canny line detector or anisotropic diffusion, the authors extract a smoother and connected vessel tree without artifacts in the image background. As the tools depend on common intermediate results, they are very fast when used together. The authors' results will support the 3-D reconstruction of the vessel tree from stereoscopic projections. Moreover, the authors make use of their line intensity measure for enhancing and improving the visibility of vessels in 3-D X-ray images. The processed images are intended to support radiologists in diagnosis, radiation therapy planning, and surgical planning. Radiologists verified the improved quality of the processed images and the enhanced visibility of relevant details, particularly fine blood vessels.
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we proposed a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance. Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.
Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.
Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar
2017-11-03
Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre/post surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used to provide information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration has been developed. It employs a modified Mutual Information (MI) measure as a similarity metric together with the Particle Swarm Optimization (PSO) method. Computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the image registration process. Maximization of the modified mutual information is performed using Particle Swarm Optimization, which is easy to implement and requires few parameters to tune. The developed approach has been tested and verified successfully on a number of medical image data sets that include images with missing parts, noise contamination, and/or different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image information in the developed approach.
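The intensity-based half of such a similarity metric can be illustrated with a plain joint-histogram mutual information estimate; the gradient-vector-flow weighting and the PSO search described above are not reproduced here, and the bin count is an arbitrary choice.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Estimate MI between two images of the same size from their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)      # marginal of image B
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```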
NASA Astrophysics Data System (ADS)
Kemp, Z. D. C.
2018-04-01
Determining the phase of a wave from intensity measurements has many applications in fields such as electron microscopy, visible light optics, and medical imaging. Propagation based phase retrieval, where the phase is obtained from defocused images, has shown significant promise. There are, however, limitations in the accuracy of the retrieved phase arising from such methods. Sources of error include shot noise, image misalignment, and diffraction artifacts. We explore the use of artificial neural networks (ANNs) to improve the accuracy of propagation based phase retrieval algorithms applied to simulated intensity measurements. We employ a phase retrieval algorithm based on the transport-of-intensity equation to obtain the phase from simulated micrographs of procedurally generated specimens. We then train an ANN with pairs of retrieved and exact phases, and use the trained ANN to process a test set of retrieved phase maps. The total error in the phase is significantly reduced using this method. We also discuss a variety of potential extensions to this work.
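For reference, the propagation-based step (before any neural-network correction) is often implemented as an FFT Poisson solve of the transport-of-intensity equation under a uniform-intensity assumption. A minimal sketch of that baseline, with a small regularizer to stabilize the inverse Laplacian; it is not the specific algorithm used in the paper.

```python
import numpy as np

def tie_phase(i_plus, i_minus, dz, i0, wavelength, pixel_size, reg=1e-8):
    """Recover phase from two symmetrically defocused intensities via the TIE,
    assuming a roughly uniform in-focus intensity i0."""
    k = 2.0 * np.pi / wavelength
    didz = (i_plus - i_minus) / (2.0 * dz)            # axial intensity derivative
    ny, nx = didz.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    lap = -(kx[None, :]**2 + ky[:, None]**2)          # Fourier symbol of the Laplacian
    rhs = -(k / i0) * didz                            # laplacian(phi) = -(k/I0) dI/dz
    phi_hat = np.fft.fft2(rhs) / (lap - reg)          # regularised inverse Laplacian
    phi_hat[0, 0] = 0.0                               # the mean phase is undetermined
    return np.real(np.fft.ifft2(phi_hat))
```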
Use of discrete chromatic space to tune the image tone in a color image mosaic
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li
2003-09-01
Color image processing is a very important problem. However, the prevailing approach is to transform the RGB color space into another color space, such as HSI (Hue, Saturation and Intensity), YIQ, LUV and so on. In practice, it may not be valid to process a color airborne image in only one color space, because the electromagnetic wave is physically altered in every wave band, while the color image is perceived through psychological vision. Therefore, it is necessary to propose an approach that accords with both the physical transformation and psychological perception. An analysis of how to use the relative color spaces to process color airborne photographs is then discussed, and an application to tuning the image tone in a color airborne image mosaic is introduced. As a practical demonstration, a complete approach to performing the mosaic on color airborne images by taking full advantage of the relative color spaces is discussed.
NASA Astrophysics Data System (ADS)
Turola, Massimo; Meah, Chris J.; Marshall, Richard J.; Styles, Iain B.; Gruppetta, Stephen
2015-06-01
A plenoptic imaging system simultaneously records the intensity and the direction of the rays of light. This additional information enables many post-processing features such as 3D imaging, synthetic refocusing and, potentially, evaluation of wavefront aberrations. In this paper the effects of low-order aberrations on a simple plenoptic imaging system have been investigated using a wave-optics simulation approach.
NASA Astrophysics Data System (ADS)
Song, Changyong
2017-05-01
Interest in high-resolution structure investigation has been keen, especially with the advent of X-ray free electron lasers (XFELs). The intense, ultra-short X-ray laser pulses (10 GW) pave new routes to explore the structures and dynamics of single macromolecules, functional nanomaterials and complex electronic materials. In the last several years, we have developed XFEL single-shot diffraction imaging that probes ultrafast phase changes directly. Pump-probe single-shot imaging was realized by synchronizing a femtosecond (<10 fs FWHM) X-ray laser (probe) with a femtosecond (50 fs) IR laser (pump) at better than 1 ps resolution. Nanoparticles under intense fs-laser pulses were investigated with fs XFEL pulses to provide insight into the irreversible particle damage processes at nanoscale resolution. The research effort introduced here aims to extend the current spatio-temporal resolution beyond the present limit. We expect this single-shot dynamic imaging to open new science opportunities with XFELs.
Johnston-Peck, Aaron C; Winterstein, Jonathan P; Roberts, Alan D; DuChene, Joseph S; Qian, Kun; Sweeny, Brendan C; Wei, Wei David; Sharma, Renu; Stach, Eric A; Herzing, Andrew A
2016-03-01
Low-angle annular dark field (LAADF) scanning transmission electron microscopy (STEM) imaging is presented as a method that is sensitive to the oxidation state of cerium ions in CeO2 nanoparticles. This relationship was validated through electron energy loss spectroscopy (EELS), in situ measurements, as well as multislice image simulations. Static displacements caused by the increased ionic radius of Ce(3+) influence the electron channeling process and increase electron scattering to low angles while reducing scatter to high angles. This process manifests itself by reducing the high-angle annular dark field (HAADF) signal intensity while increasing the LAADF signal intensity in close proximity to Ce(3+) ions. This technique can supplement STEM-EELS and in so doing, relax the experimental challenges associated with acquiring oxidation state information at high spatial resolutions. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Watson, Andrw B. (Inventor)
2010-01-01
The present invention relates to devices and methods for the measurement and/or for the specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference, JND value, indicative of a Spatial Standard Observer, SSO. Some embodiments include masking functions, window functions, special treatment for images lying on or near borders, and pre-processing of test images.
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
2012-01-01
The present invention relates to devices and methods for the measurement and/or for the specification of the perceptual intensity of a visual image, or the perceptual distance between a pair of images. Grayscale test and reference images are processed to produce test and reference luminance images. A luminance filter function is convolved with the reference luminance image to produce a local mean luminance reference image. Test and reference contrast images are produced from the local mean luminance reference image and the test and reference luminance images respectively, followed by application of a contrast sensitivity filter. The resulting images are combined according to mathematical prescriptions to produce a Just Noticeable Difference, JND value, indicative of a Spatial Standard Observer, SSO. Some embodiments include masking functions, window functions, special treatment for images lying on or near borders and pre-processing of test images.
In vivo assessment of wound re-epithelialization by UV fluorescence excitation imaging
NASA Astrophysics Data System (ADS)
Wang, Ying; Ortega-Martinez, Antonio; Padilla-Martinez, Juan Pablo; Williams, Maura; Farinelli, William; Anderson, R. R.; Franco, Walfre
2017-02-01
Background and Objectives: We have previously demonstrated the efficacy of a non-invasive, non-contact, fast and simple but robust fluorescence imaging (u-FEI) method to monitor the healing of skin wounds in vitro. This system can image highly-proliferating cellular processes (295/340 nm excitation/emission wavelengths) to study epithelialization in a cultured wound model. The objective of the current work is to evaluate the suitability of u-FEI for monitoring wound re-epithelialization in vivo. Study Design: Full-thickness wounds were created in the tail of rats and imaged weekly using u-FEI at 295/340nm excitation/emission wavelengths. Histology was used to investigate the correlation between the spatial distribution and intensity of fluorescence and the extent of wound epithelialization. In addition, the expression of the nuclear protein Ki67 was used to confirm the association between the proliferation of keratinocyte cells and the intensity of fluorescence. Results: Keratinocytes forming neo-epidermis exhibited higher fluorescence intensity than the keratinocytes not involved in re-epithelialization. In full-thickness wounds the fluorescence first appeared at the wound edge where keratinocytes initiated the epithelialization process. Fluorescence intensity increased towards the center as the keratinocytes partially covered the wound. As the wound healed, fluorescence decreased at the edges and was present only at the center as the keratinocytes completely covered the wound at day 21. Histology demonstrated that changes in fluorescence intensity from the 295/340nm band corresponded to newly formed epidermis. Conclusions: u-FEI at 295/340nm allows visualization of proliferating keratinocyte cells during re-epithelialization of wounds in vivo, potentially providing a quantitative, objective and simple method for evaluating wound closure in the clinic.
a Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images
NASA Astrophysics Data System (ADS)
Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.
2015-07-01
Various sensors on airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other uses. However, it is challenging to efficiently store, query and process such big data because of its data- and computing-intensive nature. In this paper, a Hadoop-based framework is proposed to manage and process big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large-image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox and MapReduce, these remote sensing images can be processed directly in parallel in a scalable computing environment. The experimental results show that the proposed framework can efficiently manage and process such big remote sensing data.
Phase retrieval by coherent modulation imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fucai; Chen, Bo; Morrison, Graeme R.
Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free electron lasers.
Phase retrieval by coherent modulation imaging
Zhang, Fucai; Chen, Bo; Morrison, Graeme R.; ...
2016-11-18
Phase retrieval is a long-standing problem in imaging when only the intensity of the wavefield can be recorded. Coherent diffraction imaging (CDI) is a lensless technique that uses iterative algorithms to recover amplitude and phase contrast images from diffraction intensity data. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we report a method of phase retrieval that uses a known modulation of the sample exit-wave. This coherent modulation imaging (CMI) method removes inherent ambiguities of CDI and uses a reliable, rapidly converging iterative algorithm involving three planes. It works for extended samples, does not require tight support for convergence, and relaxes dynamic range requirements on the detector. CMI provides a robust method for imaging in materials and biological science, while its single-shot capability will benefit the investigation of dynamical processes with pulsed sources, such as X-ray free electron lasers.
Two improved coherent optical feedback systems for optical information processing
NASA Technical Reports Server (NTRS)
Lee, S. H.; Bartholomew, B.; Cederquist, J.
1976-01-01
Coherent optical feedback systems are Fabry-Perot interferometers modified to perform optical information processing. Two new systems based on plane parallel and confocal Fabry-Perot interferometers are introduced. The plane parallel system can be used for contrast control, intensity level selection, and image thresholding. The confocal system can be used for image restoration and solving partial differential equations. These devices are simpler and less expensive than previous systems. Experimental results are presented to demonstrate their potential for optical information processing.
NASA Astrophysics Data System (ADS)
Leijenaar, Ralph T. H.; Nalbantov, Georgi; Carvalho, Sara; van Elmpt, Wouter J. C.; Troost, Esther G. C.; Boellaard, Ronald; Aerts, Hugo J. W. L.; Gillies, Robert J.; Lambin, Philippe
2015-08-01
FDG-PET-derived textural features describing intra-tumor heterogeneity are increasingly investigated as imaging biomarkers. As part of the process of quantifying heterogeneity, image intensities (SUVs) are typically resampled into a reduced number of discrete bins. We focused on the implications of the manner in which this discretization is implemented. Two methods were evaluated: (1) RD, dividing the SUV range into D equally spaced bins, where the intensity resolution (i.e. bin size) varies per image; and (2) RB, maintaining a constant intensity resolution B. Clinical feasibility was assessed on 35 lung cancer patients, imaged before and in the second week of radiotherapy. Forty-four textural features were determined for different D and B for both imaging time points. Feature values depended on the intensity resolution and, of the two assessed methods, RB was shown to allow a meaningful inter- and intra-patient comparison of feature values. Overall, patients ranked differently according to feature values (used here as a surrogate for textural feature interpretation) between the two discretization methods. Our study shows that the manner of SUV discretization has a crucial effect on the resulting textural features and the interpretation thereof, emphasizing the importance of standardized methodology in tumor texture analysis.
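The two discretization schemes compared above can be written down directly; the bin count and bin width below are placeholders (the study evaluates several values of D and B), and the exact index convention is one common radiomics formulation rather than necessarily the authors'.

```python
import numpy as np

def discretize_relative(suv, n_bins=64):
    """R_D: divide each image's own SUV range into n_bins equal bins,
    so the bin size varies from image to image."""
    lo, hi = float(suv.min()), float(suv.max())
    binned = np.floor(n_bins * (suv - lo) / max(hi - lo, 1e-12)) + 1
    return np.clip(binned, 1, n_bins).astype(int)

def discretize_fixed(suv, bin_size=0.5):
    """R_B: use a constant bin size in SUV units, so the intensity
    resolution is the same for every image and time point."""
    return (np.floor(suv / bin_size) + 1).astype(int)
```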
Normalized Temperature Contrast Processing in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2016-01-01
The paper presents further development of the normalized contrast processing used in the flash infrared thermography method. Methods of computing the normalized image (pixel intensity) contrast and the normalized temperature contrast are provided, along with methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of the normalized temperature contrast involves a flash thermography data acquisition set-up with a high-reflectivity foil and high-emissivity tape such that the foil, tape and test object are imaged simultaneously. Methods of assessing other quantitative parameters, such as the emissivity of the object, afterglow heat flux, reflection temperature change and surface temperature during flash thermography, are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.
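As an illustration only (the report defines its own normalization using the reference foil and tape in the field of view, which is not reproduced here), one common way to form a normalized pixel-intensity contrast against a sound reference region in a flash-thermography image stack is:

```python
import numpy as np

def normalized_contrast(frames, defect_px, ref_region, t_pre):
    """frames: (T, H, W) flash-thermography stack. Contrast of one pixel
    relative to a sound reference region, normalized by the reference rise
    above its pre-flash level. defect_px = (row, col); ref_region = (row_slice, col_slice)."""
    i_pix = frames[:, defect_px[0], defect_px[1]].astype(float)
    i_ref = frames[:, ref_region[0], ref_region[1]].astype(float).mean(axis=(1, 2))
    pix_pre = i_pix[:t_pre].mean()                 # pre-flash (cold) levels
    ref_pre = i_ref[:t_pre].mean()
    return (i_pix - pix_pre - (i_ref - ref_pre)) / np.maximum(i_ref - ref_pre, 1e-9)
```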
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mengel, S.K.; Morrison, D.B.
1985-01-01
Consideration is given to global biogeochemical issues, image processing, remote sensing of tropical environments, global processes, geology, landcover hydrology, and ecosystems modeling. Topics discussed include multisensor remote sensing strategies, geographic information systems, radars, and agricultural remote sensing. Papers are presented on fast feature extraction; a computational approach for adjusting TM imagery terrain distortions; the segmentation of a textured image by a maximum likelihood classifier; analysis of MSS Landsat data; sun angle and background effects on spectral response of simulated forest canopies; an integrated approach for vegetation/landcover mapping with digital Landsat images; geological and geomorphological studies using an image processing technique; and wavelength intensity indices in relation to tree conditions and leaf-nutrient content.
Anti-nuclear antibody screening using HEp-2 cells.
Buchner, Carol; Bryant, Cassandra; Eslami, Anna; Lakos, Gabriella
2014-06-23
The American College of Rheumatology position statement on ANA testing stipulates the use of IIF as the gold standard method for ANA screening(1). Although IIF is an excellent screening test in expert hands, the technical difficulties of processing and reading IIF slides, such as the labor-intensive slide processing, manual reading, the need for experienced, trained technologists and the use of a darkroom, make the IIF method difficult to fit into the workflow of modern, automated laboratories. The first and crucial step towards high quality ANA screening is careful slide processing. This procedure is labor-intensive and requires full understanding of the process, as well as attention to detail and experience. Slide reading is performed by fluorescence microscopy in darkrooms, and is done by trained technologists who are familiar with the various patterns, in the context of the cell cycle and the morphology of interphase and dividing cells. Given that IIF is the first-line screening tool for SARD, understanding the steps to correctly perform this technique is critical. Recently, digital imaging systems have been developed for the automated reading of IIF slides. These systems, such as the NOVA View Automated Fluorescent Microscope, are designed to streamline the routine IIF workflow. NOVA View acquires and stores high-resolution digital images of the wells, thereby separating image acquisition from interpretation; images are viewed and interpreted on high-resolution computer monitors. It stores images for future reference and supports the operator's interpretation by providing fluorescent light intensity data on the images. It also preliminarily categorizes results as positive or negative, and provides pattern recognition for positive samples. In summary, it eliminates the need for a darkroom, and automates and streamlines the IIF reading/interpretation workflow. Most importantly, it increases consistency between readers and readings. Moreover, with the use of barcoded slides, transcription errors are eliminated by providing sample traceability and positive patient identification. This results in increased patient data integrity and safety. The overall goal of this video is to demonstrate the IIF procedure, including slide processing, identification of common IIF patterns, and the introduction of new advancements to simplify and harmonize this technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Daly, Don S.; Willse, Alan R.
The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize the analysis of sets of microarray images. The tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to provide intensity estimates and to modify them easily if desired.
End-to-end learning for digital hologram reconstruction
NASA Astrophysics Data System (ADS)
Xu, Zhimin; Zuo, Si; Lam, Edmund Y.
2018-02-01
Digital holography is a well-known method to perform three-dimensional imaging by recording the light wavefront information originating from the object. Not only the intensity, but also the phase distribution of the wavefront can then be computed from the recorded hologram in the numerical reconstruction process. However, reconstructions via traditional methods suffer from various artifacts caused by the twin image, the zero-order term, and noise from image sensors. Here we demonstrate that an end-to-end deep neural network (DNN) can learn to perform both intensity and phase recovery directly from an intensity-only hologram. We experimentally show that the artifacts can be effectively suppressed. Meanwhile, our network does not need any preprocessing for initialization and is comparably fast to train and test relative to a recently published learning-based method. In addition, we validate that a further performance improvement can be achieved by introducing a sparsity prior.
Semiautomatic Segmentation of Glioma on Mobile Devices.
Wu, Ya-Ping; Lin, Yu-Song; Wu, Wei-Guo; Yang, Cong; Gu, Jian-Qin; Bai, Yan; Wang, Mei-Yun
2017-01-01
Brain tumor segmentation is the first and most critical step in clinical applications of radiomics. However, segmentation of brain images by radiologists is labor-intensive and prone to inter- and intraobserver variability. Stable and reproducible brain image segmentation algorithms are thus important for successful tumor detection in radiomics. In this paper, we propose a supervised brain image segmentation method, especially for magnetic resonance (MR) brain images with glioma. Hard-edge multiplicative intrinsic component optimization is used to preprocess the glioma images on the server side; doctors can then supervise the segmentation process on mobile devices at a convenient time. Since the preprocessed images have the same brightness for the same tissue voxels, they have a small data size (typically 1/10 of the original image size) and a simple structure of four intensity values. This allows the follow-up steps to be processed on mobile devices with low bandwidth and limited computing performance. Experiments conducted on 1935 brain slices from 129 patients show that more than 30% of the samples reach 90% similarity, over 60% of the samples reach 85% similarity, and more than 80% of the samples reach 75% similarity. Comparisons with other segmentation methods also demonstrate both the efficiency and the stability of the proposed approach.
Sevrain, David; Dubreuil, Matthieu; Dolman, Grace Elizabeth; Zaitoun, Abed; Irving, William; Guha, Indra Neil; Odin, Christophe; Le Grand, Yann
2015-01-01
In this paper we analyze a fibrosis scoring method based on measurement of the fibrillar collagen area from second harmonic generation (SHG) microscopy images of unstained histological slices from human liver biopsies. The study is conducted on a cohort of one hundred chronic hepatitis C patients with intermediate to strong Metavir and Ishak stages of liver fibrosis. We highlight a key parameter of our scoring method to discriminate between high and low fibrosis stages. Moreover, according to the intensity histograms of the SHG images and simple mathematical arguments, we show that our area-based method is equivalent to an intensity-based method, despite saturation of the images. Finally we propose an improvement of our scoring method using very simple image processing tools. PMID:25909005
Sevrain, David; Dubreuil, Matthieu; Dolman, Grace Elizabeth; Zaitoun, Abed; Irving, William; Guha, Indra Neil; Odin, Christophe; Le Grand, Yann
2015-04-01
In this paper we analyze a fibrosis scoring method based on measurement of the fibrillar collagen area from second harmonic generation (SHG) microscopy images of unstained histological slices from human liver biopsies. The study is conducted on a cohort of one hundred chronic hepatitis C patients with intermediate to strong Metavir and Ishak stages of liver fibrosis. We highlight a key parameter of our scoring method to discriminate between high and low fibrosis stages. Moreover, according to the intensity histograms of the SHG images and simple mathematical arguments, we show that our area-based method is equivalent to an intensity-based method, despite saturation of the images. Finally we propose an improvement of our scoring method using very simple image processing tools.
Processing Near-Infrared Imagery of the Orion Heatshield During EFT-1 Hypersonic Reentry
NASA Technical Reports Server (NTRS)
Spisz, Thomas S.; Taylor, Jeff C.; Gibson, David M.; Kennerly, Steve; Osei-Wusu, Kwame; Horvath, Thomas J.; Schwartz, Richard J.; Tack, Steven; Bush, Brett C.; Oliver, A. Brandon
2016-01-01
The Scientifically Calibrated In-Flight Imagery (SCIFLI) team captured high-resolution, calibrated, near-infrared imagery of the Orion capsule during atmospheric reentry of the EFT-1 mission. A US Navy NP-3D aircraft equipped with a multi-band optical sensor package, referred to as Cast Glance, acquired imagery of the Orion capsule's heatshield during a period when Orion was slowing from approximately Mach 10 to Mach 7. The line-of-sight distance ranged from approximately 65 to 40 nmi. Global surface temperatures of the capsule's thermal heatshield derived from the near-infrared intensity measurements complemented the in-depth (embedded) thermocouple measurements. Moreover, these derived surface temperatures are essential to the assessment of the thermocouples' reliance on inverse heat transfer methods and material response codes to infer the surface temperature from the in-depth measurements. The paper describes the image processing challenges associated with a manually-tracked, high-angular rate air-to-air observation. Issues included management of significant frame-to-frame motions due to both tracking jerk and jitter as well as distortions due to atmospheric effects. Corrections for changing sky backgrounds (including some cirrus clouds), atmospheric attenuation, and target orientations and ranges also had to be made. The image processing goal is to reduce the detrimental effects due to motion (both sensor and capsule), vibration (jitter), and atmospherics for image quality improvement, without compromising the quantitative integrity of the data, especially local intensity (temperature) variations. The paper will detail the approach of selecting and utilizing only the highest quality images, registering several co-temporal image frames to a single image frame to the extent frame-to-frame distortions would allow, and then co-adding the registered frames to improve image quality and reduce noise. Using preflight calibration data, the registered and averaged infrared intensity images were converted to surface temperatures on the Orion capsule's heatshield. Temperature uncertainties will be discussed relative to uncertainties of surface emissivity and atmospheric transmission loss. Comparison of limited onboard surface thermocouple data to the image derived surface temperature will be presented.
Complex Spiral Structure in the HD 100546 Transitional Disk as Revealed by GPI and MagAO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follette, Katherine B.; Macintosh, Bruce; Mullen, Wyatt
We present optical and near-infrared high-contrast images of the transitional disk HD 100546 taken with the Magellan Adaptive Optics system (MagAO) and the Gemini Planet Imager (GPI). GPI data include both polarized intensity and total intensity imagery, and MagAO data are taken in Simultaneous Differential Imaging mode at Hα. The new GPI H-band total intensity data represent a significant enhancement in sensitivity and field rotation compared to previous data sets and enable a detailed exploration of substructure in the disk. The data are processed with a variety of differential imaging techniques (polarized, angular, reference, and simultaneous differential imaging) in an attempt to identify the disk structures that are most consistent across wavelengths, processing techniques, and algorithmic parameters. The inner disk cavity at 15 au is clearly resolved in multiple data sets, as are a variety of spiral features. While the cavity and spiral structures are identified at levels significantly distinct from the neighboring regions of the disk under several algorithms and with a range of algorithmic parameters, emission at the location of HD 100546 “c” varies from point-like under aggressive algorithmic parameters to a smooth continuous structure with conservative parameters, and is consistent with disk emission. Features identified in the HD 100546 disk bear qualitative similarity to computational models of a moderately inclined two-armed spiral disk, where projection effects and wrapping of the spiral arms around the star result in a number of truncated spiral features in forward-modeled images.
In vivo multiphoton tomography and fluorescence lifetime imaging of human brain tumor tissue.
Kantelhardt, Sven R; Kalasauskas, Darius; König, Karsten; Kim, Ella; Weinigel, Martin; Uchugonova, Aisada; Giese, Alf
2016-05-01
High resolution multiphoton tomography and fluorescence lifetime imaging differentiates glioma from adjacent brain in native tissue samples ex vivo. Presently, multiphoton tomography is applied in clinical dermatology and experimentally. We here present the first application of multiphoton and fluorescence lifetime imaging for in vivo imaging on humans during a neurosurgical procedure. We used a MPTflex™ Multiphoton Laser Tomograph (JenLab, Germany). We examined cultured glioma cells in an orthotopic mouse tumor model and native human tissue samples. Finally the multiphoton tomograph was applied to provide optical biopsies during resection of a clinical case of glioblastoma. All tissues imaged by multiphoton tomography were sampled and processed for conventional histopathology. The multiphoton tomograph allowed fluorescence intensity- and fluorescence lifetime imaging with submicron spatial resolution and 200 picosecond temporal resolution. Morphological fluorescence intensity imaging and fluorescence lifetime imaging of tumor-bearing mouse brains and native human tissue samples clearly differentiated tumor and adjacent brain tissue. Intraoperative imaging was found to be technically feasible. Intraoperative image quality was comparable to ex vivo examinations. To our knowledge we here present the first intraoperative application of high resolution multiphoton tomography and fluorescence lifetime imaging of human brain tumors in situ. It allowed in vivo identification and determination of cell density of tumor tissue on a cellular and subcellular level within seconds. The technology shows the potential of rapid intraoperative identification of native glioma tissue without need for tissue processing or staining.
Chen, Yunjie; Zhao, Bo; Zhang, Jianwei; Zheng, Yuhui
2014-09-01
Accurate segmentation of magnetic resonance (MR) images remains challenging mainly due to the intensity inhomogeneity, which is also commonly known as bias field. Recently active contour models with geometric information constraints have been applied; however, most of them deal with the bias field by using a necessary pre-processing step before segmentation of the MR data. This paper presents a novel automatic variational method that segments brain MR images while simultaneously correcting the bias field, even for images with high intensity inhomogeneities. We first define a function for clustering the image pixels in a smaller neighborhood. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. In order to reduce the effect of the noise, the local intensity variations are described by Gaussian distributions with different means and variances. Then, the objective functions are integrated over the entire domain. In order to obtain the global optimum and make the results independent of the initialization of the algorithm, we reformulated the energy function to be convex and minimized it using the Split Bregman method. A salient advantage of our method is that its result is independent of initialization, which allows robust and fully automated application. Our method is able to estimate the bias of quite general profiles, even in 7T MR images. Moreover, our model can also distinguish regions with similar intensity distributions but different variances. The proposed method has been rigorously validated with images acquired on a variety of imaging modalities with promising results. Copyright © 2014 Elsevier Inc. All rights reserved.
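As a hedged illustration of the kind of local-clustering energy the abstract describes (a kernel-weighted Gaussian fit with a multiplicative bias factor on the cluster centers), one common formulation from the literature is sketched below; the symbols K (local kernel), b (bias field), c_i and σ_i (cluster means and variances), and u_i (membership functions) are my notation, and this is not necessarily the authors' exact functional.

```latex
E\bigl(u, b, \{c_i,\sigma_i\}\bigr) \;=\; \sum_{i=1}^{N} \int_{\Omega}\!\!\int_{\Omega}
K(x-y)\left[\frac{\bigl(I(y)-b(x)\,c_i\bigr)^{2}}{2\sigma_i^{2}} + \log \sigma_i\right]
u_i(y)\,\mathrm{d}y\,\mathrm{d}x
```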
Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D
2012-07-01
Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
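For reference, the interframe speckle-variance computation itself is simple; the sketch below assumes a stack of N registered structural frames and is not the GPU implementation reported above.

```python
import numpy as np

def speckle_variance(frames):
    """Interframe speckle variance for microvascular contrast.

    frames: array of shape (N, rows, cols) holding N co-located, already
    registered structural OCT intensity frames. Returns the per-pixel variance
    across the N frames; flowing blood decorrelates between frames and appears bright.
    """
    frames = np.asarray(frames, dtype=float)
    # SV_ij = (1/N) * sum_k (I_ijk - mean_k I_ijk)^2
    return frames.var(axis=0)
```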
Advances in medical image computing.
Tolxdorff, T; Deserno, T M; Handels, H; Meinzer, H-P
2009-01-01
Medical image computing has become a key technology in high-tech applications in medicine and a ubiquitous part of modern imaging systems and the related processes of clinical diagnosis and intervention. Over the past years, significant progress has been made in the field, at both the methodological and the application level. Despite this progress, major challenges remain before image processing can be established routinely in health care. In this issue, selected contributions of the German Conference on Medical Image Processing (BVM) are assembled to present the latest advances in the field of medical image computing. The winners of scientific awards of the German Conference on Medical Image Processing (BVM) 2008 were invited to submit a manuscript on their latest developments and results for possible publication in Methods of Information in Medicine. Finally, seven excellent papers were selected to describe important aspects of recent advances in the field of medical image processing. The selected papers give an impression of the breadth and heterogeneity of new developments. New methods for improved image segmentation, non-linear image registration and modeling of organs are presented together with applications of image analysis methods in different medical disciplines. Furthermore, state-of-the-art tools and techniques to support the development and evaluation of medical image processing systems in practice are described. The selected articles describe different aspects of the intense development in medical image computing. The image processing methods presented enable new insights into the patient's image data and have the future potential to improve medical diagnostics and patient treatment.
Double-image storage optimized by cross-phase modulation in a cold atomic system
NASA Astrophysics Data System (ADS)
Qiu, Tianhui; Xie, Min
2017-09-01
A tripod-type cold atomic system driven by double-probe fields and a coupling field is explored to store double images based on electromagnetically induced transparency (EIT). During the storage time, an intensity-dependent signal field is applied to extend the system to a fifth level, and cross-phase modulation is then introduced to coherently manipulate the stored images. Both analytical analysis and numerical simulation clearly demonstrate that a tunable phase shift with low nonlinear absorption can be imprinted on the stored images, which can effectively improve the visibility of the reconstructed images. The phase shift and the energy retrieving rate of the probe fields are immune to the coupling intensity and the atomic optical density. The proposed scheme can easily be extended to the simultaneous storage of multiple images. This work may be exploited toward the end of EIT-based multiple-image storage devices for all-optical classical and quantum information processing.
Processing the image gradient field using a topographic primal sketch approach.
Gambaruto, A M
2015-03-01
The spatial derivatives of the image intensity provide topographic information that may be used to identify and segment objects. The accurate computation of the derivatives is often hampered in medical images by the presence of noise and a limited resolution. This paper focuses on accurate computation of spatial derivatives and their subsequent use to process an image gradient field directly, from which an image with improved characteristics can be reconstructed. The improvements include noise reduction, contrast enhancement, thinning object contours and the preservation of edges. Processing the gradient field directly instead of the image is shown to have numerous benefits. The approach is developed such that the steps are modular, allowing the overall method to be improved and possibly tailored to different applications. As presented, the approach relies on a topographic representation and primal sketch of an image. Comparisons with existing image processing methods on a synthetic image and different medical images show improved results and accuracy in segmentation. Here, the focus is on objects with low spatial resolution, which is often the case in medical images. The methods developed show the importance of improved accuracy in derivative calculation and the potential in processing the image gradient field directly. Copyright © 2015 John Wiley & Sons, Ltd.
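To make the "process the gradient field, then reconstruct" idea concrete, here is a hedged Python sketch: Gaussian-derivative gradients are computed, the weakest gradients are suppressed, and an image is reconstructed from the modified field by a Fourier-domain Poisson solve. The suppression rule and the periodic-boundary solver are my assumptions, not the paper's primal-sketch machinery.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruct_from_gradient(gx, gy):
    """Least-squares reconstruction of an image from a gradient field
    (periodic boundaries, solved in the Fourier domain)."""
    rows, cols = gx.shape
    # Divergence of the gradient field using backward differences (periodic).
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    fx = np.fft.fftfreq(cols).reshape(1, cols)
    fy = np.fft.fftfreq(rows).reshape(rows, 1)
    denom = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
    denom[0, 0] = 1.0                      # leave the (unconstrained) mean alone
    img = np.real(np.fft.ifft2(np.fft.fft2(div) / denom))
    return img - img.min()

def denoise_via_gradient_field(image, sigma=1.5, keep_fraction=0.7):
    """Compute smoothed gradients, suppress the weakest ones, then reconstruct."""
    gx = gaussian_filter(image.astype(float), sigma, order=(0, 1))   # d/dx
    gy = gaussian_filter(image.astype(float), sigma, order=(1, 0))   # d/dy
    mag = np.hypot(gx, gy)
    weak = mag < np.quantile(mag, 1.0 - keep_fraction)               # weakest 30% by default
    gx[weak] = 0.0
    gy[weak] = 0.0
    return reconstruct_from_gradient(gx, gy)
```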
An algorithm for pavement crack detection based on multiscale space
NASA Astrophysics Data System (ADS)
Liu, Xiang-long; Li, Qing-quan
2006-10-01
Conventional human-visual and manual field pavement crack detection methods are costly, time-consuming, dangerous, labor-intensive, and subjective. They suffer from various drawbacks, such as a high degree of variability in the measurement results, an inability to provide meaningful quantitative information, inconsistencies in crack details over space and across evaluations, and long measurement cycles. With the development of public transportation and the growth of material flow systems, conventional methods can no longer meet these demands, so automatic pavement-condition data gathering and analysis systems have become a focus of attention in the field. Developments in computer technology, digital image acquisition, image processing, and multi-sensor technology have made such systems possible, but the complexity of the image processing often makes data processing and analysis the bottleneck of the whole system. Accordingly, a robust and highly efficient parallel pavement crack detection algorithm based on multi-scale space is proposed in this paper. The proposed method is based on two observations: (1) crack pixels in pavement images are darker than their surroundings and form continuous structures; (2) suitable threshold values for gray-level pavement images are strongly related to the mean and standard deviation of the pixel-grey intensities. The multi-scale space method is used to improve the data processing speed and minimize the effect of image noise. Experimental results demonstrate that the advantages are remarkable: (1) the algorithm can correctly detect tiny cracks, even in very noisy pavement images; (2) its efficiency and accuracy are superior; (3) its application-dependent nature simplifies the design of the entire system.
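As a sketch of the intensity statistic mentioned above (thresholds tied to the mean and standard deviation of the grey levels, evaluated at several smoothing scales), the following is illustrative only; the constant k and the scale set are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def crack_candidates(gray, scales=(1.0, 2.0, 4.0), k=2.0):
    """Flag pixels darker than (mean - k*std) at any smoothing scale.

    Crack pixels are darker than their surroundings, so thresholding below the
    image mean isolates them; combining several scales suppresses isolated noise.
    """
    gray = gray.astype(float)
    mask = np.zeros(gray.shape, dtype=bool)
    for sigma in scales:
        smoothed = gaussian_filter(gray, sigma)
        threshold = smoothed.mean() - k * smoothed.std()
        mask |= smoothed < threshold
    return mask
```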
Practical implementation of tetrahedral mesh reconstruction in emission tomography
Boutchko, R.; Sitek, A.; Gullberg, G. T.
2014-01-01
This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise. PMID:23588373
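The node intensities are obtained with standard iterative reconstruction; a generic ML-EM update with a precomputed sparse system matrix is sketched below. The tetrahedral system-matrix construction itself is not shown, and all names are assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix

def mlem(system_matrix: csr_matrix, projections, n_iter=50, eps=1e-12):
    """ML-EM reconstruction of node intensities x from projections y, with y ~ Poisson(A x)."""
    A = system_matrix
    sensitivity = np.asarray(A.sum(axis=0)).ravel() + eps   # A^T 1, per-node sensitivity
    x = np.ones(A.shape[1])                                  # uniform initial estimate
    for _ in range(n_iter):
        forward = A @ x + eps                                # forward projection
        ratio = projections / forward                        # measured / estimated
        x *= (A.T @ ratio) / sensitivity                     # multiplicative EM update
    return x
```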
Measurement of glucose concentration by image processing of thin film slides
NASA Astrophysics Data System (ADS)
Piramanayagam, Sankaranaryanan; Saber, Eli; Heavner, David
2012-02-01
Measurement of glucose concentration is important for diagnosis and treatment of diabetes mellitus and other medical conditions. This paper describes a novel image-processing-based approach for measuring glucose concentration. A fluid drop (patient sample) is placed on a thin film slide. Glucose, present in the sample, reacts with reagents on the slide to produce a color dye. The color intensity of the dye formed varies with glucose at different concentration levels. Current methods use spectrophotometry to determine the glucose level of the sample. Our proposed algorithm uses an image of the slide, captured at a specific wavelength, to automatically determine glucose concentration. The algorithm consists of two phases: training and testing. Training datasets consist of images at different concentration levels. The dye-occupied image region is first segmented using a Hough-based technique and then an intensity-based feature is calculated from the segmented region. Subsequently, a mathematical model that describes the relationship between the generated feature values and the given concentrations is obtained. During testing, the dye region of a test slide image is segmented followed by feature extraction. These two initial steps are similar to those done in training. However, in the final step, the algorithm uses the model (feature vs. concentration) obtained from training and the feature generated from the test image to predict the unknown concentration. The performance of the image-based analysis was compared with that of a standard glucose analyzer.
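The training/testing calibration reduces to fitting and evaluating a feature-versus-concentration model; a minimal sketch with the mean dye intensity as the feature and a low-order polynomial as the model is given below (both choices are assumptions, and the Hough-based segmentation step is abstracted away).

```python
import numpy as np

def fit_calibration(features, concentrations, degree=2):
    """Fit a polynomial mapping mean dye intensity -> glucose concentration."""
    return np.polyfit(features, concentrations, degree)

def predict_concentration(model, feature):
    """Evaluate the fitted feature-vs-concentration model for a test slide."""
    return np.polyval(model, feature)

# Training: features = [mean intensity of the segmented dye region per slide]
#           model = fit_calibration(features, known_concentrations)
# Testing:  glucose = predict_concentration(model, mean_intensity_of_test_slide)
```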
Ultrasonic power measurement system based on acousto-optic interaction.
He, Liping; Zhu, Fulong; Chen, Yanming; Duan, Ke; Lin, Xinxin; Pan, Yongjun; Tao, Jiaquan
2016-05-01
Ultrasonic waves are widely used, with applications including the medical, military, and chemical fields. However, there are currently no effective methods for ultrasonic power measurement. Previously, ultrasonic power measurement has been reliant on mechanical methods such as hydrophones and radiation force balances. This paper deals with ultrasonic power measurement based on an unconventional method: acousto-optic interaction. Compared with mechanical methods, the optical method has a greater ability to resist interference and also has reduced environmental requirements. Therefore, this paper begins with an experimental determination of the acoustic power in water contained in a glass tank using a set of optical devices. Because the light intensity of the diffraction image generated by acousto-optic interaction contains the required ultrasonic power information, specific software was written to extract the light intensity information from the image through a combination of filtering, binarization, contour extraction, and other image processing operations. The power value can then be obtained rapidly by processing the diffraction image using a computer. The results of this work show that the optical method offers advantages that include accuracy, speed, and a noncontact measurement method.
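A minimal sketch of the described processing chain (filtering, binarization, contour extraction, intensity summation) using OpenCV is given below; it assumes OpenCV 4, and the threshold choice, the minimum blob area, and the final mapping from summed intensity to acoustic power (an instrument-specific calibration) are not from the paper.

```python
import cv2
import numpy as np

def diffraction_intensity(image_path, min_area=20):
    """Sum the light intensity inside the diffraction spots of an acousto-optic image
    (assumes OpenCV 4, where findContours returns (contours, hierarchy))."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                     # noise filtering
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)         # contour extraction
    mask = np.zeros_like(binary)
    for contour in contours:
        if cv2.contourArea(contour) >= min_area:                    # drop small noise blobs
            cv2.drawContours(mask, [contour], -1, 255, thickness=-1)
    return float(gray[mask > 0].astype(np.float64).sum())           # proportional to diffracted light power
```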
NASA Astrophysics Data System (ADS)
Tsagaan, Baigalmaa; Abe, Keiichi; Goto, Masahiro; Yamamoto, Seiji; Terakawa, Susumu
2006-03-01
This paper presents a segmentation method for brain tissues from MR images, developed for our image-guided neurosurgery system, which is under development. Our goal is to segment brain tissues for creating a biomechanical model. The proposed segmentation method is based on 3-D region growing and outperforms conventional approaches through stepwise use of intensity similarities between voxels in conjunction with edge information. Since the intensity and the edge information are complementary to each other in region-based segmentation, we use them twice by performing a coarse-to-fine extraction. First, the edge information in an appropriate neighborhood of the voxel being considered is examined to constrain the region growing. The expanded region of the first extraction result is then used as the domain for the next processing. The intensity and the edge information of the current voxel only are utilized in the final extraction. Before segmentation, the intensity parameters of the brain tissues as well as the partial volume effect are estimated using an expectation-maximization (EM) algorithm in order to provide an accurate data interpretation for the extraction. We tested the proposed method on T1-weighted MR images of the brain and evaluated the segmentation effectiveness by comparing the results with ground truths. Meshes generated from the segmented brain volume using mesh-generation software are also shown in this paper.
Whole brain myelin mapping using T1- and T2-weighted MR imaging data
Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante
2014-01-01
Despite recent advancements in MR imaging, non-invasive mapping of myelin in the brain still remains an open issue. Here we attempted to provide a potential solution. Specifically, we developed a processing workflow based on T1-w and T2-w MR data to generate an optimized myelin enhanced contrast image. The workflow allows whole brain mapping using the T1-w/T2-w technique, which was originally introduced as a non-invasive method for assessing cortical myelin content. The hallmark of our approach is a retrospective calibration algorithm, applied to bias-corrected T1-w and T2-w images, that relies on image intensities outside the brain. This permits standardizing the intensity histogram of the ratio image, thereby allowing for across-subject statistical analyses. Quantitative comparisons of image histograms within and across different datasets confirmed the effectiveness of our normalization procedure. Not only did the calibrated T1-w/T2-w images exhibit a comparable intensity range, but also the shape of the intensity histograms was largely corresponding. We also assessed the reliability and specificity of the ratio image compared to other MR-based techniques, such as magnetization transfer ratio (MTR), fractional anisotropy (FA), and fluid-attenuated inversion recovery (FLAIR). With respect to these other techniques, T1-w/T2-w had consistently high values, as well as low inter-subject variability, in brain structures where myelin is most abundant. Overall, our results suggested that the T1-w/T2-w technique may be a valid tool supporting the non-invasive mapping of myelin in the brain. Therefore, it might find important applications in the study of brain development, aging and disease. PMID:25228871
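A minimal sketch of forming a calibrated T1-w/T2-w ratio image is shown below; the crude linear rescaling by the median intensity of non-brain reference voxels stands in for the retrospective calibration algorithm described above, and the mask and function names are assumptions.

```python
import numpy as np
import nibabel as nib

def t1w_t2w_ratio(t1_path, t2_path, reference_mask, eps=1e-6):
    """Compute a T1-w/T2-w ratio image after a crude linear calibration.

    reference_mask: boolean array marking voxels outside the brain (e.g., scalp,
    eyes) used as reference tissue to rescale each modality before the ratio.
    Bias-field correction is assumed to have been applied beforehand.
    """
    t1 = nib.load(t1_path).get_fdata()
    t2 = nib.load(t2_path).get_fdata()
    # Rescale each image so the reference (non-brain) intensities share a common scale.
    t1_cal = t1 / (np.median(t1[reference_mask]) + eps)
    t2_cal = t2 / (np.median(t2[reference_mask]) + eps)
    return t1_cal / (t2_cal + eps)
```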
Analysis of x-ray tomography data of an extruded low density styrenic foam: an image analysis study
NASA Astrophysics Data System (ADS)
Lin, Jui-Ching; Heeschen, William
2016-10-01
Extruded styrenic foams are low density foams that are widely used for thermal insulation. It is difficult to precisely characterize the structure of the cells in low density foams by traditional cross-section viewing due to the frailty of the cell walls. X-ray computed tomography (CT) is a non-destructive, three dimensional structure characterization technique that has great potential for structure characterization of styrenic foams. Unfortunately, the intrinsic artifacts of the data and the artifacts generated during image reconstruction are often comparable in size and shape to the thin walls of the foam, making robust and reliable analysis of cell sizes challenging. We explored three different image processing methods to clean up artifacts in the reconstructed images, thus allowing quantitative three dimensional determination of cell size in a low density styrenic foam. Three image processing approaches - an intensity-based approach, an intensity-variance-based approach, and a machine-learning-based approach - are explored in this study, and the machine-learning image feature classification method was shown to be the best. Individual cells are segmented within the images after the images are cleaned up using the three different methods, and the cell sizes are measured and compared. Although the collected data, together with the image analysis methods, did not yield enough measurements for good statistics on cell size, the problem can be resolved by measuring multiple samples or increasing the imaging field of view.
Mulkern, Robert V; Balasubramanian, Mukund; Orbach, Darren B; Mitsouras, Dimitrios; Haker, Steven J
2013-04-01
Among the multiple sequences available for functional magnetic resonance imaging (fMRI), the Steady State Free Precession (SSFP) sequence offers the highest signal-to-noise ratio (SNR) per unit time as well as distortion-free images not feasible with the more commonly employed single-shot echo planar imaging (EPI) approaches. Signal changes occurring with activation in SSFP sequences reflect underlying changes in both irreversible and reversible transverse relaxation processes. The latter are characterized by changes in the central frequencies and widths of the inherent frequency distribution present within a voxel. In this work, the well-known frequency response of the SSFP signal intensity is generalized to include the effects of the widths and central frequencies of some common frequency distributions on SSFP signal intensities. The approach, using a previously unnoted series expansion, allows for a separation of reversible from irreversible transverse relaxation effects on SSFP signal intensity changes. The formalism described here should prove useful for identifying and modeling mechanisms associated with SSFP signal changes accompanying neural activation. Copyright © 2013 Elsevier Inc. All rights reserved.
An Example-Based Brain MRI Simulation Framework.
He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L
2015-02-21
The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of training data. Results show that the example based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.
Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image
Wen, Wei; Khatibi, Siamak
2017-01-01
Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, in which the fill factor is assumed to be known. However, this value is kept as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images of two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from a single arbitrary image, as indicated by the low standard deviation of the estimated fill factors across the images of each camera. PMID:28335459
A simple and robust method for artifacts correction on X-ray microtomography images
NASA Astrophysics Data System (ADS)
Timofey, Sizonenko; Marina, Karsanina; Dina, Gilyazetdinova; Irina, Bayuk; Kirill, Gerke
2017-04-01
X-ray microtomography images of rock material often exhibit several kinds of distortion arising from causes such as X-ray attenuation, beam hardening, and irregular distribution of liquid/solid phases. Further distortions can arise from subsequent image processing and from stitching images acquired in different measurements. Beam hardening is a well-known and well-studied distortion that is relatively easy to describe, fit, and correct using a number of equations. However, this is not the case for other grey-scale intensity distortions: shading caused by an irregular distribution of liquid phases, incorrect choice of scanner operating parameters, and numerous artefacts from mathematical reconstruction from projections, including stitching of separate scans, cannot be described by a single mathematical model. To correct grey-scale intensities in large 3D images we developed a software package in which the traditional method for removing beam hardening [1] has been modified in order to find the centre of distortion. The main contribution of this work is the development of a method for arbitrary image correction. This method is based on fitting the distortion with Bezier curves using the image histogram. The distortion along the image is represented by a number of Bezier curves and one baseline that characterizes the natural distribution of grey values along the image; all of these curves are set manually by the operator. We have tested our approach on different X-ray microtomography images of porous media, and the arbitrary correction removes all principal distortions. After correction, the images were binarized and a pore network was subsequently extracted. An even distribution of pore-network elements along the image was the criterion used to verify the proposed grey-scale intensity correction. [1] Iassonov, P. and Tuller, M., 2010. Application of segmentation for correction of intensity bias in X-ray computed tomography images. Vadose Zone Journal, 9(1), pp.187-191.
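To make the correction step concrete, here is a minimal Python sketch that divides a volume by a smooth, operator-defined drift curve, using a quadratic Bezier (parameterised only by its ordinate values) as the curve model; the control-point values, the quadratic order, and the slice-wise application are illustrative assumptions, not the package described above.

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, n):
    """Evaluate a quadratic Bezier curve ordinate for n samples of t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def correct_axial_drift(volume, p0, p1, p2):
    """Divide each z-slice of a 3D volume by a Bezier-modelled intensity drift profile."""
    profile = quadratic_bezier(p0, p1, p2, volume.shape[0])
    baseline = profile.mean()                               # preserve the overall grey level
    return volume.astype(float) * (baseline / profile)[:, None, None]

# Example (hypothetical control points read off a slice-mean intensity plot):
# corrected = correct_axial_drift(ct_volume, p0=1.00, p1=0.85, p2=0.95)
```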
1989-03-01
Automated Photointerpretation Testbed. Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure, together with improved object detection.
Guo, Kun; Soornack, Yoshi; Settle, Rebecca
2018-03-05
Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features. Copyright © 2018 Elsevier Ltd. All rights reserved.
PACS in an intensive care unit: results from a randomized controlled trial
NASA Astrophysics Data System (ADS)
Bryan, Stirling; Weatherburn, Gwyneth C.; Watkins, Jessamy; Walker, Samantha; Wright, Carl; Waters, Brian; Evans, Jeff; Buxton, Martin J.
1998-07-01
The objective of this research was to assess the costs and benefits associated with the introduction of a small PACS system into an intensive care unit (ICU) at a district general hospital in north Wales. The research design adopted for this study was a single center randomized controlled trial (RCT). Patients were randomly allocated either to a trial arm where their x-ray imaging was solely film-based or to a trial arm where their x-ray imaging was solely PACS based. Benefit measures included examination-based process measures, such as image turn-round time, radiation dose and image unavailability; and patient-related process measures, which included adverse events and length of stay. The measurement of costs focused on additional 'radiological' costs and the costs of patient management. The study recruited 600 patients. The key findings from this study were that the installation of PACS was associated with important benefits in terms of image availability, and important costs in both monetary and radiation dose terms. PACS-related improvements in terms of more timely 'clinical actions' were not found. However, the qualitative aspect of the research found that clinicians were advocates of the technology and believed that an important benefit of PACS related to improved image availability.
Interferometric synthetic aperture radar: Building tomorrow's tools today
Lu, Zhong
2006-01-01
A synthetic aperture radar (SAR) system transmits electromagnetic (EM) waves at a wavelength that can range from a few millimeters to tens of centimeters. The radar wave propagates through the atmosphere and interacts with the Earth’s surface. Part of the energy is reflected back to the SAR system and recorded. Using a sophisticated image processing technique, called SAR processing (Curlander and McDonough, 1991), both the intensity and phase of the reflected (or backscattered) signal of each ground resolution element (a few meters to tens of meters) can be calculated in the form of a complex-valued SAR image representing the reflectivity of the ground surface. The amplitude or intensity of the SAR image is determined primarily by terrain slope, surface roughness, and dielectric constants, whereas the phase of the SAR image is determined primarily by the distance between the satellite antenna and the ground targets, slowing of the signal by the atmosphere, and the interaction of EM waves with ground surface. Interferometric SAR (InSAR) imaging, a recently developed remote sensing technique, utilizes the interaction of EM waves, referred to as interference, to measure precise distances. Very simply, InSAR involves the use of two or more SAR images of the same area to extract landscape topography and its deformation patterns.
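As a minimal illustration of the interferometric step, the sketch below forms an interferogram from two co-registered single-look complex SAR images by multiplying one with the conjugate of the other; co-registration and all further processing (flattening, unwrapping, geocoding) are assumed to happen elsewhere.

```python
import numpy as np

def form_interferogram(slc1, slc2):
    """Interferogram from two co-registered single-look complex (SLC) SAR images.

    Returns the interferometric phase (radians, wrapped to [-pi, pi]) and the
    amplitude of the complex product; topography, deformation, and atmospheric
    delay all contribute to the phase difference.
    """
    product = slc1 * np.conj(slc2)
    phase = np.angle(product)      # interferometric phase
    amplitude = np.abs(product)    # product amplitude
    return phase, amplitude
```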
A Resolved Near-Infrared Image of the Inner Cavity in the GM Aur Transitional Disk
NASA Technical Reports Server (NTRS)
Oh, Daehyeon; Hashimoto, Jun; Carson, Joseph C.; Janson, Markus; Kwon, Jungmi; Nakagawa, Takao; Mayama, Satoshi; Uyama, Taichi; Grady, Carol A.; McElwain, Michael W.
2016-01-01
We present high-contrast H-band polarized intensity (PI) images of the transitional disk around the young solar-like star GM Aur. The near-infrared direct imaging of the disk was derived by polarimetric differential imaging using the Subaru 8.2 m Telescope and HiCIAO. An angular resolution of 0.07 arcsec and an inner working angle of approximately 0.05 arcsec in radius were obtained. We clearly resolved a large inner cavity, with a measured radius of 18 ± 2 au, which is smaller than that of a submillimeter interferometric image (28 au). This discrepancy in the cavity radii at near-infrared and submillimeter wavelengths may be caused by a 34 M_Jup planet about 20 au away from the star, near the edge of the cavity. The presence of a near-infrared inner cavity is a strong constraint on hypotheses for inner cavity formation in a transitional disk. A dust filtration mechanism has been proposed to explain the large cavity in the submillimeter image, but our results suggest that this mechanism must be combined with an additional process. We found that the PI slope of the outer disk is significantly different from the intensity slope obtained from HST/NICMOS, and this difference may indicate grain growth processes in the disk.
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2002-01-01
Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
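A minimal sketch of shift-invariant wavelet fusion in the spirit of the description above, using PyWavelets' stationary wavelet transform with a pixel-based maximum-absolute-coefficient selection rule; the local-correlation refinement is omitted, and the wavelet, decomposition level, and function names are assumptions rather than the reported implementation.

```python
import numpy as np
import pywt

def swt_fuse(ms_band, pan, wavelet="haar", level=2):
    """Fuse an upsampled multispectral band with a co-registered panchromatic image.

    Approximation coefficients are taken from the multispectral band to preserve
    its radiometry; detail coefficients are chosen pixel-wise by maximum absolute
    value. Image dimensions must be divisible by 2**level for the stationary transform.
    """
    ms_coeffs = pywt.swt2(ms_band.astype(float), wavelet, level=level)
    pan_coeffs = pywt.swt2(pan.astype(float), wavelet, level=level)
    fused = []
    for (ca_ms, details_ms), (_, details_pan) in zip(ms_coeffs, pan_coeffs):
        fused_details = tuple(
            np.where(np.abs(d_ms) >= np.abs(d_pan), d_ms, d_pan)
            for d_ms, d_pan in zip(details_ms, details_pan)
        )
        fused.append((ca_ms, fused_details))
    return pywt.iswt2(fused, wavelet)
```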
NASA Astrophysics Data System (ADS)
Jagadale, Basavaraj N.; Udupa, Jayaram K.; Tong, Yubing; Wu, Caiyun; McDonough, Joseph; Torigian, Drew A.; Campbell, Robert M.
2018-02-01
General surgeons, orthopedists, and pulmonologists individually treat patients with thoracic insufficiency syndrome (TIS). The benefits of growth-sparing procedures such as Vertical Expandable Prosthetic Titanium Rib (VEPTR) insertion for treating patients with TIS have been demonstrated. However, at present there is no objective assessment metric to examine different thoracic structural components individually as to their roles in the syndrome, in contributing to dynamics and function, and in influencing treatment outcome. Using thoracic dynamic MRI (dMRI), we have been developing a methodology to overcome this problem. In this paper, we extend this methodology from our previous structural analysis approaches to examining lung tissue properties. We process the T2-weighted dMRI images through a series of steps involving 4D image construction of the acquired dMRI images, intensity non-uniformity correction and standardization of the 4D image, lung segmentation, and estimation of the parameters describing lung tissue intensity distributions in the 4D image. Based on pre- and post-operative dMRI data sets from 25 TIS patients (predominantly neuromuscular and congenital conditions), we demonstrate how lung tissue can be characterized by the estimated distribution parameters. Our results show that standardized T2-weighted image intensity values decrease from the pre- to post-operative condition, likely reflecting improved lung aeration post-operatively. In both pre- and post-operative conditions, the intensity values decrease also from end-expiration to end-inspiration, supporting the basic premise of our results.
Automation of Cassini Support Imaging Uplink Command Development
NASA Technical Reports Server (NTRS)
Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert
2010-01-01
"Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.
CMOS image sensor with contour enhancement
NASA Astrophysics Data System (ADS)
Meng, Liya; Lai, Xiaofeng; Chen, Kun; Yuan, Xianghui
2010-10-01
Imitating the signal acquisition and processing of vertebrate retina, a CMOS image sensor with bionic pre-processing circuit is designed. Integration of signal-process circuit on-chip can reduce the requirement of bandwidth and precision of the subsequent interface circuit, and simplify the design of the computer-vision system. This signal pre-processing circuit consists of adaptive photoreceptor, spatial filtering resistive network and Op-Amp calculation circuit. The adaptive photoreceptor unit with a dynamic range of approximately 100 dB has a good self-adaptability for the transient changes in light intensity instead of intensity level itself. Spatial low-pass filtering resistive network used to mimic the function of horizontal cell, is composed of the horizontal resistor (HRES) circuit and OTA (Operational Transconductance Amplifier) circuit. HRES circuit, imitating dendrite of the neuron cell, comprises of two series MOS transistors operated in weak inversion region. Appending two diode-connected n-channel transistors to a simple transconductance amplifier forms the OTA Op-Amp circuit, which provides stable bias voltage for the gate of MOS transistors in HRES circuit, while serves as an OTA voltage follower to provide input voltage for the network nodes. The Op-Amp calculation circuit with a simple two-stage Op-Amp achieves the image contour enhancing. By adjusting the bias voltage of the resistive network, the smoothing effect can be tuned to change the effect of image's contour enhancement. Simulations of cell circuit and 16×16 2D circuit array are implemented using CSMC 0.5μm DPTM CMOS process.
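As a software analogue of the circuit's behaviour, the sketch below subtracts a spatially low-pass ("horizontal cell") version of the image from the ("photoreceptor") input and adds the difference back with a gain, i.e., a centre-surround enhancement; the Gaussian surround, sigma, and gain are modelling assumptions and this is a behavioural model only, not a simulation of the CMOS circuit itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def contour_enhance(image, surround_sigma=3.0, gain=1.5):
    """Centre-surround contour enhancement: photoreceptor signal minus the
    resistive-network (horizontal-cell) smoothed signal, added back with a gain."""
    image = image.astype(float)
    surround = gaussian_filter(image, surround_sigma)   # spatial low-pass "network"
    edges = image - surround                            # bipolar-like difference signal
    return np.clip(image + gain * edges, 0, 255)        # gain plays the role of the bias tuning
```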
Muhogora, Wilbroad E; Msaki, Peter; Padovani, Renato
2015-03-08
The objective of this study was to improve the visibility of anatomical details by applying off-line postimage processing in chest computed radiography (CR). Four spatial domain-based external image processing techniques were developed by using MATLAB software version 7.0.0.19920 (R14) and image processing tools. The developed techniques were applied to sample images, and their visual appearance was confirmed by two consultant radiologists to be clinically adequate. The techniques were then applied to 200 chest clinical images and randomized with another 100 images previously processed online. These 300 images were presented to three experienced radiologists for image quality assessment using standard quality criteria. The mean and ranges of the average scores for the three radiologists were characterized for each of the developed techniques and imaging systems. The Mann-Whitney U-test was used to test the difference in detail visibility between the images processed using each of the developed techniques and the corresponding images processed using default algorithms. The results show that the visibility of anatomical features improved significantly (0.005 ≤ p ≤ 0.02) with combinations of intensity value adjustment and/or spatial linear filtering techniques for images acquired using 60 ≤ kVp ≤ 70. However, there was no improvement for images acquired using 102 ≤ kVp ≤ 107 (0.127 ≤ p ≤ 0.48). In conclusion, the use of external image processing for optimization can be effective in chest CR, but should be implemented in consultation with the radiologists.
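For illustration, a minimal sketch of one such external post-processing combination (intensity value adjustment followed by spatial linear filtering), written here in Python rather than the MATLAB environment used in the study; the percentile window, kernel size, and gain are assumptions, not the study's parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def window_and_sharpen(image, low_pct=2, high_pct=98, kernel=9, gain=0.7):
    """Contrast-stretch a chest CR image between intensity percentiles, then apply
    an unsharp-mask style spatial linear filter to enhance anatomical detail."""
    img = image.astype(float)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    windowed = np.clip((img - lo) / (hi - lo + 1e-9), 0, 1)   # intensity value adjustment
    blurred = uniform_filter(windowed, size=kernel)           # spatial linear filter
    return np.clip(windowed + gain * (windowed - blurred), 0, 1)
```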
Quantitative light-induced fluorescence technology for quantitative evaluation of tooth wear
NASA Astrophysics Data System (ADS)
Kim, Sang-Kyeom; Lee, Hyung-Suk; Park, Seok-Woo; Lee, Eun-Song; de Josselin de Jong, Elbert; Jung, Hoi-In; Kim, Baek-Il
2017-12-01
Various technologies used to objectively determine enamel thickness or dentin exposure have been suggested. However, most methods have clinical limitations. This study was conducted to confirm the potential of quantitative light-induced fluorescence (QLF), using the autofluorescence intensity of the occlusal surfaces of worn teeth according to enamel grinding depth in vitro. Sixteen permanent premolars were used. Each tooth was gradationally ground down at the occlusal surface in the apical direction. QLF-digital and swept-source optical coherence tomography images were acquired at each grinding depth (in steps of 100 μm). All QLF images were converted to 8-bit grayscale images to calculate the fluorescence intensity. The maximum brightness (MB) values of the same sound regions in the grayscale images were calculated before the grinding process and at each grinding step afterwards. Finally, 13 samples were evaluated. The MB values increased over the grinding depth range with a strong correlation (r=0.994, P<0.001). In conclusion, the fluorescence intensity of the teeth and the grinding depth were strongly correlated in the QLF images. Therefore, QLF technology may be a useful noninvasive tool to monitor the progression of tooth wear and to conveniently estimate enamel thickness.
Hielscher, Andreas H; Bartel, Sebastian
2004-02-01
Optical tomography (OT) is a fast developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computational-intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of iterative image reconstruction schemes that are commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performances are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.
Okada, Toshiyuki; Linguraru, Marius George; Hori, Masatoshi; Summers, Ronald M; Tomiyama, Noriyuki; Sato, Yoshinobu
2015-12-01
This paper addresses the automated segmentation of multiple organs in upper abdominal computed tomography (CT) data. The aim of our study is to develop methods to effectively construct the conditional priors and use their prediction power for more accurate segmentation as well as easy adaptation to various imaging conditions in CT images, as observed in clinical practice. We propose a general framework of multi-organ segmentation which effectively incorporates interrelations among multiple organs and easily adapts to various imaging conditions without the need for supervised intensity information. The features of the framework are as follows: (1) A method for modeling conditional shape and location (shape-location) priors, which we call prediction-based priors, is developed to derive accurate priors specific to each subject, which enables the estimation of intensity priors without the need for supervised intensity information. (2) Organ correlation graph is introduced, which defines how the conditional priors are constructed and segmentation processes of multiple organs are executed. In our framework, predictor organs, whose segmentation is sufficiently accurate by using conventional single-organ segmentation methods, are pre-segmented, and the remaining organs are hierarchically segmented using conditional shape-location priors. The proposed framework was evaluated through the segmentation of eight abdominal organs (liver, spleen, left and right kidneys, pancreas, gallbladder, aorta, and inferior vena cava) from 134 CT data from 86 patients obtained under six imaging conditions at two hospitals. The experimental results show the effectiveness of the proposed prediction-based priors and the applicability to various imaging conditions without the need for supervised intensity information. Average Dice coefficients for the liver, spleen, and kidneys were more than 92%, and were around 73% and 67% for the pancreas and gallbladder, respectively. Copyright © 2015 Elsevier B.V. All rights reserved.
Semi-automated Image Processing for Preclinical Bioluminescent Imaging.
Slavine, Nikolai V; McColl, Roderick W
Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test automated methods for bioluminescence image processing, from data acquisition to obtaining 3D images. To optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify the location and strength of a bioluminescent source, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources, we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium to determine an initial approximation of the photon fluence; we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction. We find that the reconstruction techniques successfully combined the depth-dependent light transport approach with semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. The data obtained from light phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and of the semi-automated bioluminescent image processing procedure. We suggest that the developed approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
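The abstract does not specify its "novel iterative deconvolution method", so the sketch below uses standard Richardson-Lucy deconvolution as a generic stand-in for iteratively refining a blurred light-flux image given a known point spread function; the iteration count, flat initialization, and PSF normalization are assumptions of this example.

```python
# Generic Richardson-Lucy deconvolution (stand-in, not the paper's method).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    """Iteratively deconvolve a non-negative 2D image `blurred` with kernel `psf`."""
    blurred = blurred.astype(np.float64)
    psf = psf / psf.sum()                      # normalize the PSF
    estimate = np.full(blurred.shape, blurred.mean())
    psf_flip = psf[::-1, ::-1]                 # mirrored PSF for the correction step
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```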
Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data
NASA Astrophysics Data System (ADS)
Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.
2018-04-01
With the development of intelligent transportation, high-precision road information has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. First, the road surface is segmented through edge detection on the scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted using adaptive threshold segmentation based on an integral image, without intensity calibration; noise is reduced by removing small patches of spurious pixels from the binary image. Finally, the point cloud mapped back from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms including template matching and feature attribute filtering is used to classify linear markings, arrow markings, and guidelines. Processing the point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
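A minimal sketch of integral-image-based adaptive thresholding of a road-surface intensity image, in the spirit of the extraction step described above. The Bradley-Roth-style comparison against a local window mean is an assumption of this example, and the window size and offset are illustrative.

```python
# Adaptive thresholding with an integral image: a pixel is kept when it is
# brighter than (1 + t) times the mean of its local window (markings are
# brighter than the surrounding asphalt). Window size and t are illustrative.
import numpy as np

def extract_markings(intensity, win=31, t=0.15):
    img = intensity.astype(np.float64)
    h, w = img.shape
    k = win // 2
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)                      # integral image
    r0 = np.clip(np.arange(h) - k, 0, h); r1 = np.clip(np.arange(h) + k + 1, 0, h)
    c0 = np.clip(np.arange(w) - k, 0, w); c1 = np.clip(np.arange(w) + k + 1, 0, w)
    # Each window sum costs four integral-image lookups per pixel.
    S = (ii[np.ix_(r1, c1)] - ii[np.ix_(r0, c1)]
         - ii[np.ix_(r1, c0)] + ii[np.ix_(r0, c0)])
    count = (r1 - r0)[:, None] * (c1 - c0)[None, :]
    return img > (S / count) * (1.0 + t)                      # binary marking mask
```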
Advanced processing for high-bandwidth sensor systems
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.
2000-11-01
Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.
Viddeleer, Alain R; Sijens, Paul E; van Ooijen, Peter M A; Kuypers, Paul D L; Hovius, Steven E R; Oudkerk, Matthijs
2009-08-01
Nerve regeneration could be monitored by comparing MRI image intensities in time, as denervated muscles display increased signal intensity in STIR sequences. In this study, the long-term reproducibility of STIR image intensity was assessed under clinical conditions, and the required image intensity nonuniformity correction was improved by using phantom scans obtained at multiple positions. Three-dimensional image intensity nonuniformity was investigated in phantom scans. Next, over a three-year period, 190 clinical STIR hand scans were obtained using a standardized acquisition protocol and corrected for intensity nonuniformity by using the results of phantom scanning. The results of correction with 1, 3, and 11 phantom scans were compared. The image intensities in calibration tubes close to the hands were measured every time to determine the reproducibility of our method. With calibration, the reproducibility of STIR image intensity improved from 7.8% to 6.4%. Image intensity nonuniformity correction with 11 phantom scans gave significantly better results than correction with 1 or 3 scans. The image intensities in clinical STIR images acquired at different times can be compared directly, provided that the acquisition protocol is standardized and that nonuniformity correction is applied. Nonuniformity correction is preferably based on multiple phantom scans.
Osbourn, Gordon C.
1996-01-01
The shadow contrast sensitivity of the human vision system is simulated by configuring information obtained from an image sensor so that the information may be evaluated with multiple pixel widths in order to produce a machine vision system able to distinguish between shadow edges and abrupt object edges. A second difference of the image intensity for each line of the image is developed and this second difference is used to screen out high frequency noise contributions from the final edge detection signals. These edge detection signals are constructed from first differences of the image intensity where the screening conditions are satisfied. The positional coincidence of oppositely signed maxima in the first difference signal taken from the right and the second difference signal taken from the left is used to detect the presence of an object edge. Alternatively, the effective number of responding operators (ENRO) may be utilized to determine the presence of object edges.
Algorithm for Detecting a Bright Spot in an Image
NASA Technical Reports Server (NTRS)
2009-01-01
An algorithm processes the pixel intensities of a digitized image to detect and locate a circular bright spot, the approximate size of which is known in advance. The algorithm is used to find images of the Sun in cameras aboard the Mars Exploration Rovers. (The images are used in estimating orientations of the Rovers relative to the direction to the Sun.) The algorithm can also be adapted to tracking of circular bright targets in other diverse applications. The first step in the algorithm is to calculate a dark-current ramp, a correction necessitated by the scheme that governs the readout of pixel charges in the charge-coupled-device camera in the original Mars Exploration Rover application. In this scheme, the fraction of each frame period during which dark current is accumulated in a given pixel (and, hence, the dark-current contribution to the pixel image-intensity reading) is proportional to the pixel row number. For the purpose of the algorithm, the dark-current contribution to the intensity reading from each pixel is assumed to equal the average of intensity readings from all pixels in the same row, and the factor of proportionality is estimated on the basis of this assumption. Then the product of the row number and the factor of proportionality is subtracted from the reading from each pixel to obtain a dark-current-corrected intensity reading. The next step in the algorithm is to determine the best location, within the overall image, for a window of N × N pixels (where N is an odd number) large enough to contain the bright spot of interest plus a small margin. (In the original application, the overall image contains 1,024 by 1,024 pixels, the image of the Sun is about 22 pixels in diameter, and N is chosen to be 29.)
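The two steps described above can be sketched as follows. The least-squares fit used to estimate the proportionality factor and the use of a local window mean to score candidate window positions are assumptions of this example; N = 29 follows the text.

```python
# Sketch of (1) dark-current ramp removal and (2) N x N window placement.
import numpy as np
from scipy.ndimage import uniform_filter

def remove_dark_ramp(image):
    """Subtract a row-proportional dark-current estimate from each pixel."""
    img = image.astype(np.float64)
    rows = np.arange(img.shape[0], dtype=np.float64)
    row_means = img.mean(axis=1)
    slope = (rows @ row_means) / (rows @ rows)     # least-squares fit through origin
    return img - slope * rows[:, None]

def best_window_center(corrected, n=29):
    """Center (row, col) of the N x N window with the largest mean intensity."""
    local_mean = uniform_filter(corrected, size=n, mode="constant")
    return np.unravel_index(np.argmax(local_mean), corrected.shape)
```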
Eck, Simon; Wörz, Stefan; Müller-Ott, Katharina; Hahn, Matthias; Biesdorf, Andreas; Schotta, Gunnar; Rippe, Karsten; Rohr, Karl
2016-08-01
The genome is partitioned into regions of euchromatin and heterochromatin. The organization of heterochromatin is important for the regulation of cellular processes such as chromosome segregation and gene silencing, and their misregulation is linked to cancer and other diseases. We present a model-based approach for automatic 3D segmentation and 3D shape analysis of heterochromatin foci from 3D confocal light microscopy images. Our approach employs a novel 3D intensity model based on spherical harmonics, which analytically describes the shape and intensities of the foci. The model parameters are determined by fitting the model to the image intensities using least-squares minimization. To characterize the 3D shape of the foci, we exploit the computed spherical harmonics coefficients and determine a shape descriptor. We applied our approach to 3D synthetic image data as well as real 3D static and real 3D time-lapse microscopy images, and compared the performance with that of previous approaches. It turned out that our approach yields accurate 3D segmentation results and performs better than previous approaches. We also show that our approach can be used for quantifying 3D shape differences of heterochromatin foci. Copyright © 2016 Elsevier B.V. All rights reserved.
Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation
NASA Astrophysics Data System (ADS)
Tobon-Gomez, Catalina; Sukno, Federico M.; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F.
2012-07-01
Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.
Adaptive image inversion of contrast 3D echocardiography for enabling automated analysis.
Shaheen, Anjuman; Rajpoot, Kashif
2015-08-01
Contrast 3D echocardiography (C3DE) is commonly used to enhance the visual quality of ultrasound images in comparison with non-contrast 3D echocardiography (3DE). Although the image quality in C3DE is perceived to be improved for visual analysis, it actually deteriorates for the purpose of automatic or semi-automatic analysis due to higher speckle noise and intensity inhomogeneity. Therefore, LV endocardial feature extraction and segmentation from C3DE images remain a challenging problem. To address this challenge, this work proposes an adaptive pre-processing method to invert the appearance of a C3DE image. The image inversion is based on an image intensity threshold value which is automatically estimated through image histogram analysis. In the inverted appearance, the LV cavity appears dark while the myocardium appears bright, thus making it similar in appearance to a 3DE image. Moreover, the resulting inverted image has a high-contrast, low-noise appearance, yielding a strong LV endocardium boundary and facilitating feature extraction for segmentation. Our results demonstrate that the inverted appearance of the contrast image enables the subsequent LV segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
MOPEX: a software package for astronomical image processing and visualization
NASA Astrophysics Data System (ADS)
Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley
2006-06-01
We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control over the image processing and the display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback of the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though originally designed for the Spitzer Space Telescope mission, many of the functionalities are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.
ERIC Educational Resources Information Center
Blackman, Graham A.; Hall, Deborah A.
2011-01-01
Purpose: The intense sound generated during functional magnetic resonance imaging (fMRI) complicates studies of speech and hearing. This experiment evaluated the benefits of using active noise cancellation (ANC), which attenuates the level of the scanner sound at the participant's ear by up to 35 dB around the peak at 600 Hz. Method: Speech and…
Dehydrogenation involved Coulomb explosion of molecular C2H4FBr in an intense laser field
NASA Astrophysics Data System (ADS)
Pei, Minjie; Yang, Yan; Zhang, Jian; Sun, Zhenrong
2018-04-01
The dissociative double ionization (DDI) of molecular 1-fluoro-2-bromoethane (FBE) in an intense laser field has been investigated by dc-slice imaging technology. The DDI channels involving dehydrogenation are revealed, and it is believed that both the charge distribution and the bound character of the real potential energy surfaces of the parent ions play important roles in the dissociation process. The relationship between the potential energy surfaces of the precursor species and the photofragment ejection angles is also discussed and analyzed. Furthermore, the competition between the DDI channels has been studied, and the C-C bond cleavages dominate the DDI process at relatively higher laser intensities.
Concentration Measurements in a Cold Flow Model Annular Combustor Using Laser Induced Fluorescence
NASA Technical Reports Server (NTRS)
Morgan, Douglas C.
1996-01-01
A nonintrusive concentration measurement method is developed for determining the concentration distribution in a complex flow field. The measurement method consists of marking a liquid flow with a water-soluble fluorescent dye. The dye is excited by a two-dimensional sheet of laser light. The fluorescent intensity is shown to be proportional to the relative concentration level. The fluorescent field is recorded on a video cassette recorder through a video camera. The recorded images are analyzed with image processing hardware and software to obtain intensity levels. Mean and root mean square (rms) values are calculated from these intensity levels. The method is tested on a single round turbulent jet because previous concentration measurements have been made on this configuration by other investigators. The previous results were used for comparison to validate the current method. These comparisons showed that this method provides satisfactory results. The concentration measurement system was used to measure the concentrations in the complex flow field of a model gas turbine annular combustor. The model annular combustor consists of opposing primary jets and an annular jet which discharges perpendicular to the primary jets. The mixing between the different jet flows can be visualized from the calculated mean and rms profiles. Concentration field visualization images obtained from the processing provide further qualitative information about the flow field.
NASA Astrophysics Data System (ADS)
Suryani, Esti; Wiharto; Palgunadi, Sarngadi; Nurcahya Pradana, TP
2017-01-01
This study uses image processing to analyze white blood cells indicative of leukemia, including identification, analysis of shapes and sizes, and counting of the white blood cells that show symptoms of leukemia. The case study in this research was blood cells from Acute Myelogenous Leukemia (AML), subtypes M2 and M3 in particular. Image processing operations are used for segmentation by utilizing color conversion from RGB (Red, Green and Blue) to obtain white blood cell candidates. The white blood cell candidates are then separated from other cells with active contours without edges. The resulting WBC (White Blood Cell) regions may still intersect or overlap; the watershed distance transform method is used to separate overlapping WBCs. The nucleus is then separated from the cytoplasm using the HSI (Hue, Saturation, Intensity) color space. Feature extraction is performed by calculating the WBC area, WBC edge (perimeter), roundness, the ratio of the nucleus, and the mean and standard deviation of pixel intensities. The feature extraction results are used for training and testing in classifying AML M2 and M3 using the momentum backpropagation algorithm. The classification is tested on the numeric data from the feature extraction results stored in the database. K-fold validation is used to divide the training and test data for the classification of AML M2 and M3. In experiments on eight trial images, the accuracy was 94.285% per cell and 75% per image.
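For the nucleus/cytoplasm separation step, the abstract relies on the HSI colour space; a minimal sketch of the standard RGB-to-HSI conversion is shown below (textbook formulas, with channels assumed scaled to [0, 1]; it is not the authors' specific implementation).

```python
# Standard RGB -> HSI conversion used when separating nucleus from cytoplasm.
import numpy as np

def rgb_to_hsi(rgb):
    """rgb: float array in [0, 1] of shape (H, W, 3). Returns hue (radians), S, I."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-12
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b > g, 2.0 * np.pi - theta, theta)   # hue wraps when B > G
    return hue, saturation, intensity
```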
Matrix decomposition graphics processing unit solver for Poisson image editing
NASA Astrophysics Data System (ADS)
Lei, Zhao; Wei, Li
2012-10-01
In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to settle the problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS will take full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These enable MDGS to generate solutions identical to those of common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, enabling real-time image processing, low memory consumption, and extensive applications.
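For readers unfamiliar with gradient-domain editing, the sketch below shows the Poisson problem that such solvers address, solved here with plain Jacobi iteration on the CPU purely for clarity. It is not the MDGS solver, and it assumes the cloning mask does not touch the image border.

```python
# Seamless cloning: solve the discrete Poisson equation inside `mask`, using the
# source Laplacian as the guidance field and the target image as the boundary.
import numpy as np

def poisson_clone(target, source, mask, iterations=2000):
    """target, source: equal-shape float 2D arrays; mask: bool, True inside."""
    f = target.astype(np.float64).copy()
    src = source.astype(np.float64)
    # Discrete Laplacian of the source (4-neighbour stencil).
    lap = (np.roll(src, 1, 0) + np.roll(src, -1, 0)
           + np.roll(src, 1, 1) + np.roll(src, -1, 1) - 4.0 * src)
    for _ in range(iterations):
        nb = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
              + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f_new = (nb - lap) / 4.0          # Jacobi update: 4 f(p) = sum f(q) - lap(p)
        f[mask] = f_new[mask]             # pixels outside the mask stay fixed
    return f
```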
Atieh, Anas M; Rawashdeh, Nathir A; AlHazaa, Abdulaziz N
2018-05-10
Visual inspection through image processing of welding and shot-peened surfaces is necessary to overcome equipment limitations, avoid measurement errors, and accelerate processing to gain certain surface properties such as surface roughness. Therefore, it is important to design an algorithm to quantify surface properties, which enables us to overcome the aforementioned limitations. In this study, a proposed systematic algorithm is utilized to generate and compare the surface roughness of Tungsten Inert Gas (TIG) welded aluminum 6061-T6 alloy treated by two levels of shot-peening, high-intensity and low-intensity. This project is industrial in nature, and the proposed solution was originally requested by local industry to overcome equipment capabilities and limitations. In particular, surface roughness measurements are usually only possible on flat surfaces but not on other areas treated by shot-peening after welding, as in the heat-affected zone and weld beads. Therefore, those critical areas are outside of the measurement limitations. Using the proposed technique, the surface roughness measurements were possible to obtain for weld beads, high-intensity and low-intensity shot-peened surfaces. In addition, a 3D surface topography was generated and dimple size distributions were calculated for the three tested scenarios: control sample (TIG-welded only), high-intensity shot-peened, and low-intensity shot-peened TIG-welded Al6061-T6 samples. Finally, cross-sectional hardness profiles were measured for the three scenarios; in all scenarios, lower hardness measurements were obtained compared to the base metal alloy in the heat-affected zone and in the weld beads even after shot-peening treatments.
Effect of color coding and subtraction on the accuracy of contrast echocardiography
NASA Technical Reports Server (NTRS)
Pasquet, A.; Greenberg, N.; Brunken, R.; Thomas, J. D.; Marwick, T. H.
1999-01-01
BACKGROUND: Contrast echocardiography may be used to assess myocardial perfusion. However, gray scale assessment of myocardial contrast echocardiography (MCE) is difficult because of variations in regional backscatter intensity, difficulties in distinguishing varying shades of gray, and artifacts or attenuation. We sought to determine whether the assessment of rest myocardial perfusion by MCE could be improved with subtraction and color coding. METHODS AND RESULTS: MCE was performed in 31 patients with previous myocardial infarction with a 2nd generation agent (NC100100, Nycomed AS), using harmonic triggered or continuous imaging; gain settings were kept constant throughout the study. Digitized images were post-processed by subtraction of baseline from contrast data and colorized to reflect the intensity of myocardial contrast. Gray scale MCE alone, MCE images combined with baseline 2D images, and subtracted colorized images were scored independently using a 16-segment model. The presence and severity of myocardial contrast abnormalities were compared with perfusion defined by rest MIBI-SPECT. Segments that were not visualized by continuous (17%) or triggered imaging (14%) after color processing were excluded from further analysis. The specificity of gray scale MCE alone (56%) or MCE combined with baseline 2D (47%) was significantly enhanced by subtraction and color coding (76%, p<0.001) of triggered images. The accuracy of the gray scale approaches (respectively 52% and 47%) was increased to 70% (p<0.001). Similarly, for continuous images, the specificity of gray scale MCE with and without baseline comparison was 23% and 42% respectively, compared with 60% after post-processing (p<0.001). The accuracy of colorized images (59%) was also significantly greater than gray scale MCE (43% and 29%, p<0.001). The sensitivity of MCE for both acquisitions was not altered by subtraction. CONCLUSION: Post-processing with subtraction and color coding significantly improves the accuracy and specificity of MCE for detection of perfusion defects.
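A minimal sketch of the post-processing idea (baseline subtraction followed by colour coding). The clipping of negative differences and the choice of colormap are assumptions of this example, not the study's exact display pipeline.

```python
# Subtract the baseline frame from the contrast frame and colour code the result.
import numpy as np
import matplotlib.cm as cm

def colorized_subtraction(contrast_frame, baseline_frame, vmax=None):
    diff = np.clip(contrast_frame.astype(np.float64)
                   - baseline_frame.astype(np.float64), 0, None)
    vmax = vmax if vmax is not None else diff.max() + 1e-12
    return cm.inferno(diff / vmax)        # RGBA image encoding contrast intensity
```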
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
NASA Astrophysics Data System (ADS)
Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer
2011-03-01
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic, and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper thoracic, lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic, and lumbar vertebrae.
Jupiter's Auroras Acceleration Processes
2017-09-06
This image, created with data from Juno's Ultraviolet Imaging Spectrometer (UVS), marks the path of Juno's readings of Jupiter's auroras, highlighting the electron measurements that show the discovery of the so-called discrete auroral acceleration processes indicated by the "inverted Vs" in the lower panel (Figure 1). This signature points to powerful magnetic-field-aligned electric potentials that accelerate electrons toward the atmosphere to energies that are far greater than what drive the most intense aurora at Earth. Scientists are looking into why the same processes are not the main factor in Jupiter's most powerful auroras. https://photojournal.jpl.nasa.gov/catalog/PIA21937
Advanced image based methods for structural integrity monitoring: Review and prospects
NASA Astrophysics Data System (ADS)
Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.
2018-02-01
There is a growing trend in engineering to develop methods for structural integrity monitoring and characterization of in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics, brought about a paradigm change in phenomena sensing. Hence, several widely applicable optical approaches are playing a significant role in support of experiment. The current review manuscript describes advanced image based methods for structural integrity monitoring, and focuses on methods such as Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (Shearography). These non-contact full-field techniques rely on intensive image processing methods to measure mechanical behaviour, and evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hendricks, J. Lynne; Whalen, Mike F.; Bodis, James R.; Martin, Katherine
1996-01-01
This article describes the commercial implementation of ultrasonic velocity imaging methods developed and refined at NASA Lewis Research Center on the Sonix c-scan inspection system. Two velocity imaging methods were implemented: thickness-based and non-thickness-based reflector plate methods. The article demonstrates capabilities of the commercial implementation and gives the detailed operating procedures required for Sonix customers to achieve optimum velocity imaging results. This commercial implementation of velocity imaging provides a 100x speed increase in scanning and processing over the lab-based methods developed at LeRC. The significance of this cooperative effort is that the aerospace and other materials development-intensive industries which use extensive ultrasonic inspection for process control and failure analysis will now have an alternative, highly accurate imaging method commercially available.
NASA Astrophysics Data System (ADS)
Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.
2008-03-01
An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The image-processing method is based on a multi-frequency approach, wherein the input image is decomposed into different spatial frequency bands, and those bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images that had been collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage phosphor computed radiography (CR) systems. Each image was processed in two ways: with default image processing parameters such as those used in clinical settings (control), and separately with the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: the baseline scenario, representative of today's workflow (a single control image presented with the window/level adjustments enabled), vs. the test scenario (a control/test image pair presented with a toggle enabled and the window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing detection capability as good as or better than that of the baseline scenario.
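A hedged sketch of a multi-frequency enhancement of this general kind: the image is split into Gaussian band-pass layers and the finer bands (where thin tube and catheter edges live) are boosted nonlinearly before recombination. The band sigmas, gains, and tanh boosting function are illustrative assumptions, not the authors' tuned functions.

```python
# Band-pass decomposition with nonlinear boosting of the detail layers.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_bands(image, sigmas=(1, 2, 4, 8), gains=(2.0, 1.8, 1.3, 1.0)):
    img = image.astype(np.float64)
    levels = [gaussian_filter(img, s) for s in sigmas]   # progressively blurred copies
    out = levels[-1].copy()                              # coarsest (base) layer
    prev = img
    for blurred, gain in zip(levels, gains):
        band = prev - blurred                            # band-pass detail layer
        out += gain * np.tanh(band / 50.0) * 50.0        # soft nonlinear boost
        prev = blurred
    return out                                           # gains of 1 reproduce the input
```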
Experiments with recursive estimation in astronomical image processing
NASA Technical Reports Server (NTRS)
Busko, I.
1992-01-01
Recursive estimation concepts have been applied to image enhancement problems since the 1970s. However, very few applications in the particular area of astronomical image processing are known. These concepts were derived, for 2-dimensional images, from the well-known theory of Kalman filtering in one dimension. The historic reasons for application of these techniques to digital images are related to the images' scanned nature, in which the temporal output of a scanner device can be processed on-line by techniques borrowed directly from 1-dimensional recursive signal analysis. However, recursive estimation has particular properties that make it attractive even in modern days, when big computer memories make the full scanned image available to the processor at any given time. One particularly important aspect is the ability of recursive techniques to deal with non-stationary phenomena, that is, phenomena whose statistical properties vary in time (or position in a 2-D image). Many image processing methods make underlying stationarity assumptions either for the stochastic field being imaged, for the imaging system properties, or both. They will underperform, or even fail, when applied to images that deviate significantly from stationarity. Recursive methods, on the contrary, make it feasible to perform adaptive processing, that is, to process the image with a processor whose properties are tuned to the image's local statistical properties. Recursive estimation can be used to build estimates of images degraded by such phenomena as noise and blur. We show examples of recursive adaptive processing of astronomical images, using several local statistical properties to drive the adaptive processor, such as average signal intensity, signal-to-noise ratio, and the autocorrelation function. Software was developed under IRAF, and as such will be made available to interested users.
Novel wavelength diversity technique for high-speed atmospheric turbulence compensation
NASA Astrophysics Data System (ADS)
Arrasmith, William W.; Sullivan, Sean F.
2010-04-01
The defense, intelligence, and homeland security communities are driving a need for software-dominant, real-time or near-real-time atmospheric-turbulence-compensated imagery. Parallel processing capabilities are finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion, to name a few. A novel approach to the computationally intensive case of software-dominant optical and near-infrared imaging through atmospheric turbulence is addressed in this paper. Previously, the somewhat conventional wavelength diversity method has been used to compensate for atmospheric turbulence with great success. We apply a new correlation-based approach to the wavelength diversity methodology using a parallel processing architecture, enabling high-speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.
Simulation of Forward and Inverse X-ray Scattering From Shocked Materials
NASA Astrophysics Data System (ADS)
Barber, John; Marksteiner, Quinn; Barnes, Cris
2012-02-01
The next generation of high-intensity, coherent light sources should generate sufficient brilliance to perform in-situ coherent x-ray diffraction imaging (CXDI) of shocked materials. In this work, we present beginning-to-end simulations of this process. This includes the calculation of the partially-coherent intensity profiles of self-amplified stimulated emission (SASE) x-ray free electron lasers (XFELs), as well as the use of simulated, shocked molecular-dynamics-based samples to predict the evolution of the resulting diffraction patterns. In addition, we will explore the corresponding inverse problem by performing iterative phase retrieval to generate reconstructed images of the simulated sample. The development of these methods in the context of materials under extreme conditions should provide crucial insights into the design and capabilities of shocked in-situ imaging experiments.
Plenoptic mapping for imaging and retrieval of the complex field amplitude of a laser beam.
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C
2016-12-26
The plenoptic sensor has been developed to sample complicated beam distortions produced by turbulence in the low atmosphere (deep turbulence or strong turbulence) with high density data samples. In contrast with the conventional Shack-Hartmann wavefront sensor, which utilizes all the pixels under each lenslet of a micro-lens array (MLA) to obtain one data sample indicating sub-aperture phase gradient and photon intensity, the plenoptic sensor uses each illuminated pixel (with significant pixel value) under each MLA lenslet as a data point for local phase gradient and intensity. To characterize the working principle of a plenoptic sensor, we propose the concept of plenoptic mapping and its inverse mapping to describe the imaging and reconstruction process respectively. As a result, we show that the plenoptic mapping is an efficient method to image and reconstruct the complex field amplitude of an incident beam with just one image. With a proof of concept experiment, we show that adaptive optics (AO) phase correction can be instantaneously achieved without going through a phase reconstruction process under the concept of plenoptic mapping. The plenoptic mapping technology has high potential for applications in imaging, free space optical (FSO) communication and directed energy (DE) where atmospheric turbulence distortion needs to be compensated.
NASA Astrophysics Data System (ADS)
Wang, Youwen; Dai, Zhiping; Ling, Xiaohui; Chen, Liezun; Lu, Shizhuan; You, Kaiming
2016-11-01
In high-power laser systems such as petawatt lasers, the laser beam can be intense enough to cause saturation of the nonlinear refractive index of the medium. Based on the standard linearization method of small-scale self-focusing and the split-step Fourier numerical calculation method, we present analytical and numerical investigations of hot-image formation in cascaded saturable nonlinear medium slabs, to disclose the effect of nonlinearity saturation on the distribution and intensity of hot images. The analytical and numerical results are found to be in good agreement. It is shown that saturable nonlinearity does not change the distribution of hot images, but may greatly affect their intensity: for a given saturation light intensity, the intensity of hot images first increases monotonically with the intensity of the incident laser beam and eventually reaches saturation; for an incident laser beam of a given intensity, as the saturation light intensity is lowered, the intensity of hot images decreases rapidly, and some hot images may even become too weak to be visible.
NASA Astrophysics Data System (ADS)
Mita, Akifumi; Okamoto, Atsushi; Funakoshi, Hisatoshi
2004-06-01
We have proposed an all-optical authentic memory with a two-wave encryption method. In the recording process, the image data are encrypted into white noise by random phase masks added to the input beam carrying the image data and to the reference beam. Only a reading beam with the phase-conjugate distribution of the reference beam can decrypt the encrypted data. If the encrypted data are read out with an incorrect phase distribution, the output data are transformed into white noise. Moreover, during readout, reconstructions of the encrypted data interfere destructively, resulting in zero intensity. Therefore our memory has the merit that unauthorized accesses can be detected easily by measuring the output beam intensity. In our encryption method, the random phase mask on the input plane plays important roles in transforming the input image into white noise and in preventing decryption of the white noise back to the input image by the blind deconvolution method. Without this mask, when unauthorized users observe the output beam using a CCD during readout with a plane wave, exactly the same intensity distribution as that of the Fourier transform of the input image is obtained, and the encrypted image can then be decrypted easily by the blind deconvolution method. With this mask, even if unauthorized users observe the output beam using the same method, the encrypted image cannot be decrypted because the observed intensity distribution is dispersed at random by the mask. Thus the robustness is increased by this mask. In this report, we compare the correlation coefficients between the output image and the input image, which represent the degree to which the output is white noise, with and without this mask. We show that the robustness of this encryption method is increased, as the correlation coefficient is improved from 0.3 to 0.1 by using this mask.
NASA Astrophysics Data System (ADS)
Zhu, Xinjian; Wu, Ruoyu; Li, Tao; Zhao, Dawei; Shan, Xin; Wang, Puling; Peng, Song; Li, Faqi; Wu, Baoming
2016-12-01
The time-intensity curve (TIC) from a contrast-enhanced ultrasound (CEUS) image sequence of uterine fibroids provides important parameter information for qualitative and quantitative evaluation of the efficacy of treatments such as high-intensity focused ultrasound surgery. However, respiration and other physiological movements inevitably affect the CEUS imaging process, and this reduces the accuracy of TIC calculation. In this study, a method of TIC calculation for vascular perfusion of uterine fibroids based on subtraction imaging with motion correction is proposed. First, the fibroid CEUS recording video was decoded into frame images based on the recording frame rate. Next, the Brox optical flow algorithm was used to estimate the displacement field and correct the motion between frames using a warping technique. Then, subtraction imaging was performed to extract the positional distribution of vascular perfusion (PDOVP). Finally, the average gray level of all pixels in the PDOVP of each image was determined, and this was considered the TIC of the CEUS image sequence. Both the correlation coefficient and the mutual information of the results with the proposed method were larger than those determined using the original method. PDOVP extraction results were improved significantly after motion correction. The variance reduction rates were all positive, indicating that the fluctuations of the TIC became less pronounced and that the calculation accuracy was improved after motion correction. The proposed method can effectively overcome the influence of motion, mainly caused by respiration, and allows precise calculation of the TIC.
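Once motion correction and subtraction have produced a perfusion mask, the final TIC step reduces to averaging gray levels inside that mask frame by frame. A minimal sketch is shown below; reading frames with OpenCV and the mask itself are assumptions of this example, and the motion-correction and subtraction steps are not reproduced here.

```python
# TIC = mean gray level of the PDOVP pixels in every frame of the recording.
import cv2
import numpy as np

def time_intensity_curve(video_path, pdovp_mask):
    """pdovp_mask: boolean array with the same height/width as the video frames."""
    cap = cv2.VideoCapture(video_path)
    tic = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        tic.append(gray[pdovp_mask].mean())    # average gray inside the PDOVP
    cap.release()
    return np.asarray(tic)
```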
ALOS PALSAR Winter Coherence and Summer Intensities for Large Scale Forest Monitoring in Siberia
NASA Astrophysics Data System (ADS)
Thiel, Christian; Thiel, Carolin; Santoro, Maurizio; Schmullius, Christiane
2008-11-01
In this paper, summer intensity and winter coherence images are used for large-scale forest monitoring. The intensities (FBD HH/HV) were acquired during summer 2007 and feature the K&C intensity stripes [1]. The processing consisted of radiometric calibration, orthorectification, and topographic normalisation. The coherence was estimated from interferometric pairs with 46-day repeat-pass intervals. The pairs were acquired during the winters 2006/2007 and 2007/2008. For both winters, suitable weather conditions were reported. Interferometric processing consisted of SLC co-registration at sub-pixel level, common-band filtering in range and azimuth, and generation of a differential interferogram, which was used in the coherence estimation procedure based on adaptive estimation. All images were geocoded using SRTM data. The pixel size of the final SAR products is 50 m x 50 m. It has already been demonstrated that forest and non-forest can be clearly separated by using PALSAR intensities and winter coherence [2]. By combining both data types, hardly any overlap of the class signatures was detected, even though the analysis was conducted at the pixel level and no speckle filter was applied. Thus, the delineation of a forest cover mask could be executed operationally. The major difficulty is the definition of a biomass threshold above which regrowing forest is distinguished as forest.
Differential effects of cognitive load on emotion: Emotion maintenance versus passive experience.
DeFraine, William C
2016-06-01
Two separate lines of research have examined the effects of cognitive load on emotional processing with similar tasks but seemingly contradictory results. Some research has shown that the emotions elicited by passive viewing of emotional images are reduced by subsequent cognitive load. Other research has shown that such emotions are not reduced by cognitive load if the emotions are actively maintained. The present study sought to compare and resolve these two lines of research. Participants either passively viewed negative emotional images or maintained the emotions elicited by the images, and after a delay rated the intensity of the emotion they were feeling. Half of the trials included a math task during the delay to induce cognitive load, and the other half did not. Results showed that cognitive load reduced the intensity of negative emotions during passive viewing of emotional images but not during emotion maintenance. The present study replicates the findings of both lines of research and shows that the key factor is whether or not emotions are actively maintained. Also, in the context of previous emotion maintenance research, the present results support the theoretical idea of a separable emotion maintenance process. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A Method to Measure and Estimate Normalized Contrast in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2016-01-01
The paper presents further development of the normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, along with methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of the normalized temperature contrast involves a flash thermography data acquisition set-up with a high-reflectivity foil and a high-emissivity tape, such that the foil, the tape, and the test object are imaged simultaneously. Methods of assessing other quantitative parameters, such as the emissivity of the object, the afterglow heat flux, the reflection temperature change, and the surface temperature during flash thermography, are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.
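As background, one commonly used form of normalized image contrast divides each pixel's post-flash intensity rise by the rise of a sound (defect-free) reference region in the same frame. The sketch below shows only this generic definition; it does not reproduce the paper's specific normalization or the foil/tape reference scheme.

```python
# Generic normalized contrast for a flash thermography image stack.
import numpy as np

def normalized_contrast(frames, sound_mask, pre_flash_index=0):
    """frames: (T, H, W) intensity stack; sound_mask: bool (H, W) reference region."""
    frames = frames.astype(np.float64)
    pre = frames[pre_flash_index]
    rise = frames - pre                                 # pixel-wise rise above pre-flash
    sound_rise = rise[:, sound_mask].mean(axis=1)       # reference rise per frame
    return rise / (sound_rise[:, None, None] + 1e-12)   # normalized contrast stack
```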
Automated segmentation of three-dimensional MR brain images
NASA Astrophysics Data System (ADS)
Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee
2006-03-01
Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, and other organs), and spinal cord restoration. In pre-processing, we perform adaptive thresholding which takes into account the variable intensities of MR brain images corresponding to various image acquisition conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we restore the spinal cord truncated during the previous steps. Experiments were performed with fifteen 8-bit gray-scale 3D MR brain image sets. Experimental results show that the proposed algorithm is fast and provides robust and satisfactory results.
NASA Astrophysics Data System (ADS)
Liansheng, Sui; Yin, Cheng; Bing, Li; Ailing, Tian; Krishna Asundi, Anand
2018-07-01
A novel computational ghost imaging scheme based on specially designed phase-only masks, which can be efficiently applied to encrypt an original image into a series of measured intensities, is proposed in this paper. First, a Hadamard matrix of a certain order is generated, where the number of elements in each row is equal to the size of the original image to be encrypted. Each row of the matrix is rearranged into a corresponding 2D pattern. Then, each pattern is encoded into a phase-only mask by making use of an iterative phase retrieval algorithm. These specially designed masks can be wholly or partially used in the computational ghost imaging process to reconstruct the original information with high quality. When only a small number of phase-only masks is used to record the measured intensities in a single-pixel bucket detector, the information can be authenticated without clear visualization by calculating the nonlinear correlation map between the original image and its reconstruction. The results illustrate the feasibility and effectiveness of the proposed computational ghost imaging mechanism, which will provide an effective alternative for enriching the related research on the computational ghost imaging technique.
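Independently of the encryption and phase-retrieval steps, the underlying computational ghost imaging loop with Hadamard-derived patterns can be sketched as follows. The use of scipy's Hadamard constructor and the second-order correlation reconstruction are assumptions of this illustration, not the paper's exact scheme.

```python
# Computational ghost imaging with Hadamard-derived illumination patterns:
# record one bucket intensity per pattern, then reconstruct by correlation.
import numpy as np
from scipy.linalg import hadamard

def ghost_image(obj, n=32):
    """obj: (n, n) array with n a power of two; returns a correlation reconstruction."""
    H = hadamard(n * n)                                   # each row -> one pattern
    patterns = ((H + 1) / 2.0).reshape(-1, n, n)          # 0/1 illumination masks
    bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))   # measured intensities
    # Second-order correlation: <B * P> - <B><P>, evaluated pixel by pixel.
    recon = np.tensordot(bucket - bucket.mean(),
                         patterns - patterns.mean(axis=0), axes=(0, 0)) / bucket.size
    return recon
```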
Cone-beam x-ray luminescence computed tomography based on x-ray absorption dosage
NASA Astrophysics Data System (ADS)
Liu, Tianshuai; Rong, Junyan; Gao, Peng; Zhang, Wenli; Liu, Wenlei; Zhang, Yuanke; Lu, Hongbing
2018-02-01
With the advances of x-ray excitable nanophosphors, x-ray luminescence computed tomography (XLCT) has become a promising hybrid imaging modality. In particular, a cone-beam XLCT (CB-XLCT) system has demonstrated its potential in in vivo imaging with the advantage of fast imaging speed over other XLCT systems. Currently, the imaging models of most XLCT systems assume that nanophosphors emit light based on the intensity distribution of x-ray within the object, not completely reflecting the nature of the x-ray excitation process. To improve the imaging quality of CB-XLCT, an imaging model that adopts an excitation model of nanophosphors based on x-ray absorption dosage is proposed in this study. To solve the ill-posed inverse problem, a reconstruction algorithm that combines the adaptive Tikhonov regularization method with the imaging model is implemented for CB-XLCT reconstruction. Numerical simulations and phantom experiments indicate that compared with the traditional forward model based on x-ray intensity, the proposed dose-based model could improve the image quality of CB-XLCT significantly in terms of target shape, localization accuracy, and image contrast. In addition, the proposed model behaves better in distinguishing closer targets, demonstrating its advantage in improving spatial resolution.
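The reconstruction step amounts to solving a regularized linear inverse problem. A toy Tikhonov-regularized least-squares sketch is shown below; the random system matrix A, the fixed regularization weight, and the problem sizes are illustrative stand-ins, and neither the paper's adaptive choice of the regularization parameter nor its dose-based forward model is reproduced:

```python
import numpy as np

def tikhonov_reconstruct(A, b, lam=1.0):
    """Solve min ||Ax - b||^2 + lam*||x||^2 for a linearized XLCT-style forward model A."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# toy problem: 200 measurements, 100 unknown nanophosphor concentrations
rng = np.random.default_rng(0)
A = rng.random((200, 100))
x_true = np.zeros(100)
x_true[40:45] = 1.0                          # a small "target" region
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_rec = tikhonov_reconstruct(A, b, lam=1.0)
```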
Live Cell Imaging and Measurements of Molecular Dynamics
Frigault, M.; Lacoste, J.; Swift, J.; Brown, C.
2010-01-01
Live cell microscopy is becoming widespread across all fields of the life sciences, as well as many areas of the physical sciences. In order to obtain accurate live cell microscopy data, the live specimens must be properly maintained on the imaging platform. In addition, the fluorescence light path must be optimized for efficient light transmission in order to reduce the intensity of excitation light impacting the living sample. With low incident light intensities, the processes under study should not be altered by phototoxic effects, allowing long-term visualization of viable living samples. Aspects of maintaining a suitable environment for the living sample, minimizing incident light, and maximizing detection efficiency will be presented for various fluorescence-based live cell instruments. Raster Image Correlation Spectroscopy (RICS) is a technique that uses the intensity fluctuations within laser scanning confocal images, together with the well-characterized scanning dynamics of the laser beam, to extract the dynamics, concentrations, and clustering of fluorescent molecules within the cell. In addition, two-color cross-correlation RICS can be used to determine protein-protein interactions in living cells without the many technical difficulties encountered in FRET-based measurements. RICS is an ideal live cell technique for measuring cellular dynamics because the potentially damaging high-intensity laser bursts required for photobleaching recovery measurements are not needed; rather, low laser powers suitable for imaging can be used. The RICS theory will be presented along with examples of live cell applications.
Imaging model for the scintillator and its application to digital radiography image enhancement.
Wang, Qian; Zhu, Yining; Li, Hongwei
2015-12-28
Digital radiography (DR) images obtained by an OCD-based (optical coupling detector) micro-CT system usually suffer from low contrast. In this paper, a mathematical model is proposed to describe the image formation process in the scintillator. By solving the corresponding inverse problem, the quality of DR images is improved, i.e., higher contrast and spatial resolution are obtained. By analyzing the radiative transfer of visible light in the scintillator, scattering is identified as the main factor leading to low contrast. The associated blurring effect is also considered and described by a point spread function (PSF). Based on these physical processes, the scintillator imaging model is established. When solving the inverse problem, pre-correction of the x-ray intensity, a dark-channel-prior-based haze removal technique, and an effective blind deblurring approach are employed. Experiments on a variety of DR images show that the proposed approach can improve the contrast of DR images dramatically and effectively eliminate blurring. Compared with traditional contrast enhancement methods, such as CLAHE, our method preserves the relative absorption values well.
Buckler, Andrew J; Bresolin, Linda; Dunnick, N Reed; Sullivan, Daniel C; Aerts, Hugo J W L; Bendriem, Bernard; Bendtsen, Claus; Boellaard, Ronald; Boone, John M; Cole, Patricia E; Conklin, James J; Dorfman, Gary S; Douglas, Pamela S; Eidsaunet, Willy; Elsinger, Cathy; Frank, Richard A; Gatsonis, Constantine; Giger, Maryellen L; Gupta, Sandeep N; Gustafson, David; Hoekstra, Otto S; Jackson, Edward F; Karam, Lisa; Kelloff, Gary J; Kinahan, Paul E; McLennan, Geoffrey; Miller, Colin G; Mozley, P David; Muller, Keith E; Patt, Rick; Raunig, David; Rosen, Mark; Rupani, Haren; Schwartz, Lawrence H; Siegel, Barry A; Sorensen, A Gregory; Wahl, Richard L; Waterton, John C; Wolf, Walter; Zahlmann, Gudrun; Zimmerman, Brian
2011-06-01
Quantitative imaging biomarkers could speed the development of new treatments for unmet medical needs and improve routine clinical care. However, it is not clear how the various regulatory and nonregulatory (eg, reimbursement) processes (often referred to as pathways) relate, nor is it clear which data need to be collected to support these different pathways most efficiently, given the time- and cost-intensive nature of doing so. The purpose of this article is to describe current thinking regarding these pathways emerging from diverse stakeholders interested and active in the definition, validation, and qualification of quantitative imaging biomarkers and to propose processes to facilitate the development and use of quantitative imaging biomarkers. A flexible framework is described that may be adapted for each imaging application, providing mechanisms that can be used to develop, assess, and evaluate relevant biomarkers. From this framework, processes can be mapped that would be applicable to both imaging product development and to quantitative imaging biomarker development aimed at increasing the effectiveness and availability of quantitative imaging. Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.10100800/-/DC1. © RSNA, 2011
Correcting geometric and photometric distortion of document images on a smartphone
NASA Astrophysics Data System (ADS)
Simon, Christian; Williem; Park, In Kyu
2015-01-01
A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
The minimal local-asperity hypothesis of early retinal lateral inhibition.
Balboa, R M; Grzywacz, N M
2000-07-01
Recently we found that the information-theoretic theories existing in the literature cannot explain the behavior of the extent of the lateral inhibition mediated by retinal horizontal cells as a function of background light intensity. These theories can explain the fall of the extent from intermediate to high intensities, but not its rise from dim to intermediate intensities. We propose an alternative hypothesis that accounts for this bell-shaped behavior of the extent. The hypothesis proposes that lateral-inhibition adaptation in the early retina is part of a system to extract several image attributes, such as occlusion borders and contrast. To do so, this system would use prior probabilistic knowledge about the biological processing and relevant statistics of natural images. A key novel statistic used here is the probability of the presence of an occlusion border as a function of local contrast. Using this probabilistic knowledge, the retina would optimize the spatial profile of lateral inhibition to minimize attribute-extraction error. The two significant errors that this minimization process must reduce are due to the quantal noise in photoreceptors and the straddling of occlusion borders by lateral inhibition.
NASA Astrophysics Data System (ADS)
Handa, Taketo; Okano, Makoto; Tex, David M.; Shimazaki, Ai; Aharen, Tomoko; Wakamiya, Atsushi; Kanemitsu, Yoshihiko
2016-02-01
Organic-inorganic hybrid perovskite materials, CH3NH3PbX3 (X = I and Br), are considered promising candidates for emerging thin-film photovoltaics. For practical implementation, the degradation mechanism and the carrier dynamics during operation have to be clarified. We investigated the degradation mechanism and the carrier injection and recombination processes in perovskite CH3NH3PbI3 solar cells using photoluminescence (PL) and electroluminescence (EL) imaging spectroscopies. When a forward bias voltage was applied, an inhomogeneous distribution of the EL intensity was clearly observed from the CH3NH3PbI3 solar cells. By comparing the PL and EL images, we revealed that the spatial inhomogeneity of the EL intensity results from the inhomogeneous luminescence efficiency of the perovskite layer. Applying a bias voltage for several tens of minutes in air caused a decrease in the EL intensity and the conversion efficiency of the perovskite solar cells. The degradation mechanism of perovskite solar cells under bias voltage in air is discussed.
Bókkon, I; Salari, V; Tuszynski, J A; Antal, I
2010-09-02
Recently, we have proposed a redox molecular hypothesis about the natural biophysical substrate of visual perception and imagery [1,6]. Namely, the retina transforms external photon signals into electrical signals that are carried to V1 (striate cortex). Then, V1 retinotopic electrical signals (spike-related electrical signals along classical axonal-dendritic pathways) can be converted into regulated ultraweak bioluminescent photons (biophotons) through redox processes within retinotopic visual neurons, making it possible to create intrinsic biophysical pictures during visual perception and imagery. However, the consensus opinion is to consider biophotons as by-products of cellular metabolism. This paper argues that biophotons are not mere by-products but rather originate from regulated cellular radical/redox processes. It also shows that the biophoton intensity can be considerably higher inside cells than outside. Our simple calculations suggest, within their level of accuracy, that the real biophoton intensity in retinotopic neurons may be sufficient for creating an intrinsic biophysical picture representation of a single-object image during visual perception. Copyright (c) 2010 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, W; Jung, J; Kang, Y
Purpose: To quantitatively analyze the influence of image processing for Moire elimination in digital radiography by comparing images acquired with an optimized anti-scatter grid alone to images acquired with a misaligned low-frequency grid paired with software processing. Methods: A special phantom, which does not create scattered radiation, was used to acquire non-grid reference images; these were acquired without any grid. One set of images was acquired with an optimized grid aligned to the detector pixels, and another set was acquired with a misaligned low-frequency grid paired with a Moire elimination processing algorithm. The x-ray technique was based on consideration of the Bucky factor derived from the non-grid reference images. For evaluation, the pixel intensities of the images acquired with grids were compared to those of the reference images. Results: Compared to images acquired with the optimized grid, images acquired with the Moire elimination processing algorithm showed 10 to 50% lower mean contrast values in the ROI. Severe image distortion was found when the object's thickness measured 7 or fewer pixels; in this case, the contrast value measured from images acquired with the Moire elimination processing algorithm was under 30% of that taken from the reference image. Conclusion: This study shows the potential risk of Moire-compensated images in diagnosis. Images acquired with a misaligned low-frequency grid exhibit Moire noise, and the Moire compensation processing algorithm used to remove this noise actually caused image distortion. As a result, fractures and/or calcifications that span only a few pixels may not be diagnosed properly. In future work, we plan to evaluate images acquired without a grid but relying entirely on image processing, and the potential risks this entails.
Multiplexed aberration measurement for deep tissue imaging in vivo
Wang, Chen; Liu, Rui; Milkie, Daniel E.; Sun, Wenzhi; Tan, Zhongchao; Kerlin, Aaron; Chen, Tsai-Wen; Kim, Douglas S.; Ji, Na
2014-01-01
We describe a multiplexed aberration measurement method that modulates the intensity or phase of light rays at multiple pupil segments in parallel to determine their phase gradients. Applicable to fluorescent-protein-labeled structures of arbitrary complexity, it allows us to obtain diffraction-limited resolution in various samples in vivo. For the strongly scattering mouse brain, a single aberration correction improves structural and functional imaging of fine neuronal processes over a large imaging volume. PMID:25128976
A RESOLVED NEAR-INFRARED IMAGE OF THE INNER CAVITY IN THE GM Aur TRANSITIONAL DISK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oh, Daehyeon; Yang, Yi; Hashimoto, Jun
We present high-contrast H-band polarized intensity (PI) images of the transitional disk around the young solar-like star GM Aur. The near-infrared direct imaging of the disk was derived by polarimetric differential imaging using the Subaru 8.2 m Telescope and HiCIAO. An angular resolution and an inner working angle of 0.″07 and r ∼ 0.″05, respectively, were obtained. We clearly resolved a large inner cavity, with a measured radius of 18 ± 2 au, which is smaller than that of a submillimeter interferometric image (28 au). This discrepancy in the cavity radii at near-infrared and submillimeter wavelengths may be caused by a 3–4 M_Jup planet about 20 au away from the star, near the edge of the cavity. The presence of a near-infrared inner cavity is a strong constraint on hypotheses for inner cavity formation in a transitional disk. A dust filtration mechanism has been proposed to explain the large cavity in the submillimeter image, but our results suggest that this mechanism must be combined with an additional process. We found that the PI slope of the outer disk is significantly different from the intensity slope obtained from HST/NICMOS, and this difference may indicate the grain growth process in the disk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe
2015-02-15
Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
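A plain Richardson–Lucy update, which in the paper is applied to the current OSEM image estimate at each reconstruction iteration, can be sketched as follows; the wavelet-based denoising step is omitted, and the PSF, iteration count, and toy data are illustrative assumptions rather than the study's configuration:

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(image, psf, n_iter=10, eps=1e-12):
    """Plain Richardson-Lucy deconvolution of an image estimate by a known PSF."""
    est = np.full(image.shape, float(image.mean()))
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = convolve(est, psf)
        ratio = image / np.maximum(blurred, eps)    # measured / predicted
        est *= convolve(ratio, psf_mirror)          # multiplicative update
    return est

# toy usage: separable Gaussian-like PSF on a random 2D image
psf = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]).astype(float)
psf /= psf.sum()
img = np.random.rand(64, 64)
deconv = richardson_lucy(img, psf, n_iter=5)
```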
Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images
NASA Astrophysics Data System (ADS)
Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.
2016-03-01
Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture the posture of an individual in standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, therefore minimizing user input, optimizing workflow, and reducing human error. The application begins with pre-processing the input images by removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered together using a rigid-body intensity based registration algorithm. The identified registration transformations are then used to map the original sector images into the panorama image. Our method focuses primarily on the use of the anatomical content of the images to generate the panoramas as opposed to using external markers employed to aid with the alignment process. Currently, results show robust edge detection prior to registration and we have tested our approach by comparing the resulting automatically-stitched panoramas to the manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically- and manually-stitched images using 26 patient datasets.
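As a simplified stand-in for the rigid intensity-based registration step described above, the following sketch estimates only the translation between two adjacent sector images from their nominal overlap strips using phase correlation; the overlap size is an assumption and rotation is not handled:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def sector_offset(upper, lower, overlap=200):
    """Estimate the (row, col) translation aligning the lower sector to the upper
    one, using only the nominal overlap strips of the two radiographs."""
    shift, error, _ = phase_cross_correlation(upper[-overlap:, :], lower[:overlap, :])
    return shift   # apply this displacement to the lower sector before pasting

# toy usage: two strips cut from one image with a 3-row offset between them
rng = np.random.default_rng(1)
img = rng.random((600, 400))
upper, lower = img[:400], img[197:500]
print(sector_offset(upper, lower))   # prints the estimated (row, col) shift
```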
Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance
2017-01-01
This paper puts forward a novel image enhancement method, Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details better than several other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from the public CVG-UGR image database are used for comparison with other state-of-the-art methods. The experimental results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image. PMID:29403529
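A simplified reading of the segmentation-plus-equalization idea is sketched below; the segment boundaries at mu - sigma, mu, and mu + sigma and the rank-based within-segment equalization are assumptions, and the final normalization and blending with the input image are omitted:

```python
import numpy as np

def mvsihe_equalize(gray):
    """Split the histogram at mu - sigma, mu, mu + sigma and equalize each
    segment separately (a simplified MVSIHE-style rule)."""
    g = gray.astype(float)
    mu, sigma = g.mean(), g.std()
    edges = np.clip(np.array([g.min(), mu - sigma, mu, mu + sigma, g.max() + 1]),
                    g.min(), g.max() + 1)
    out = np.zeros_like(g)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (g >= lo) & (g < hi)
        if not sel.any():
            continue
        vals = g[sel]
        # rank-based equalization inside the segment, mapped back to [lo, hi)
        ranks = np.argsort(np.argsort(vals)) / max(len(vals) - 1, 1)
        out[sel] = lo + ranks * (hi - lo)
    return out
```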
GPU-Based Real-Time Volumetric Ultrasound Image Reconstruction for a Ring Array
Choe, Jung Woo; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T.
2014-01-01
Synthetic phased array (SPA) beamforming with Hadamard coding and aperture weighting is an optimal option for real-time volumetric imaging with a ring array, a particularly attractive geometry in intracardiac and intravascular applications. However, the imaging frame rate of this method is limited by the immense computational load required in synthetic beamforming. For fast imaging with a ring array, we developed graphics processing unit (GPU)-based, real-time image reconstruction software that exploits massive data-level parallelism in beamforming operations. The GPU-based software reconstructs and displays three cross-sectional images at 45 frames per second (fps). This frame rate is 4.5 times higher than that for our previously-developed multi-core CPU-based software. In an alternative imaging mode, it shows one B-mode image rotating about the axis and its maximum intensity projection (MIP), processed at a rate of 104 fps. This paper describes the image reconstruction procedure on the GPU platform and presents the experimental images obtained using this software. PMID:23529080
Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren
2018-03-14
Image fusion techniques can integrate information from different imaging modalities to obtain a composite image that is more suitable for human visual perception and for further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins, and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. Firstly, the GFP image is converted to the IHS model and its intensity component is obtained. Secondly, the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency and high-frequency subbands. Then the high-frequency subbands are merged by the absolute-maximum rule, while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and conducting the IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method provides better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
Vision-aided Monitoring and Control of Thermal Spray, Spray Forming, and Welding Processes
NASA Technical Reports Server (NTRS)
Agapakis, John E.; Bolstad, Jon
1993-01-01
Vision is one of the most powerful forms of non-contact sensing for monitoring and control of manufacturing processes. However, processes involving an arc plasma or flame such as welding or thermal spraying pose particularly challenging problems to conventional vision sensing and processing techniques. The arc or plasma is not typically limited to a single spectral region and thus cannot be easily filtered out optically. This paper presents an innovative vision sensing system that uses intense stroboscopic illumination to overpower the arc light and produce a video image that is free of arc light or glare and dedicated image processing and analysis schemes that can enhance the video images or extract features of interest and produce quantitative process measures which can be used for process monitoring and control. Results of two SBIR programs sponsored by NASA and DOE and focusing on the application of this innovative vision sensing and processing technology to thermal spraying and welding process monitoring and control are discussed.
Multimedia systems in ultrasound image boundary detection and measurements
NASA Astrophysics Data System (ADS)
Pathak, Sayan D.; Chalana, Vikram; Kim, Yongmin
1997-05-01
Ultrasound as a medical imaging modality offers the clinician a real-time, noninvasive view of the anatomy of internal organs and tissues, their movement, and flow. One application of ultrasound is monitoring fetal growth by measuring the biparietal diameter (BPD) and head circumference (HC). We have been working on automatic detection of fetal head boundaries in ultrasound images; these detected boundaries are used to measure BPD and HC. The boundary detection algorithm is based on active contour models and takes 32 seconds on an external high-end workstation, a SUN SparcStation 20/71. Our goal has been to make this tool available within an ultrasound machine and at the same time significantly improve its performance using multimedia technology. With the advent of high-performance programmable digital signal processors (DSPs), a software solution within an ultrasound machine, instead of the traditional hardwired approach or an external computer, is now possible. We have integrated our boundary detection algorithm into a programmable ultrasound image processor (PUIP) that fits into a commercial ultrasound machine. The PUIP provides both the high computing power and the flexibility needed to support computationally intensive image processing algorithms within an ultrasound machine. According to our data analysis, BPD/HC measurements made on the PUIP lie within the interobserver variability; hence, the errors in the automated BPD/HC measurements are on the same order as the average interobserver differences. On the PUIP, it takes 360 ms to measure BPD/HC on one head image. When processing multiple head images in sequence, it takes 185 ms per image, enabling 5.4 BPD/HC measurements per second. Reducing the overall execution time from 32 seconds to a fraction of a second and making this multimedia system available within an ultrasound machine will help this image processing algorithm, and other compute-intensive imaging applications, become a practical tool for sonographers in the future.
Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga
2013-01-01
Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of these data make human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images, and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correcting mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify the neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867
Weirich, S D; Cotler, H B; Narayana, P A; Hazle, J D; Jackson, E F; Coupe, K J; McDonald, C L; Langford, L A; Harris, J H
1990-07-01
Magnetic resonance imaging (MRI) provides a noninvasive method of monitoring the pathologic response to spinal cord injury. Specific MR signal intensity patterns appear to correlate with degrees of improvement in the neurologic status of spinal cord injury patients. Histologic correlation of two types of MR signal intensity patterns is confirmed in the current study using a rat animal model. Adult male Sprague-Dawley rats underwent spinal cord trauma at the midthoracic level using a weight-dropping technique. After laminectomy, 5- and 10-gm brass weights were dropped from designated heights onto a 0.1-gm impounder placed on the exposed dura. Animals allowed to regain consciousness demonstrated variable recovery of hind limb paraplegia. Magnetic resonance images were obtained from 2 hours to 1 week after injury using a 2-tesla MRI/spectrometer. Sacrifice under anesthesia was performed by perfusive fixation; spinal columns were excised en bloc, embedded, sectioned, and observed with a compound light microscope. Magnetic resonance axial images obtained during the time sequence after injury demonstrate a distinct correlation between MR signal intensity patterns and the histologic appearance of the spinal cord. Magnetic resonance imaging delineates the pathologic processes resulting from acute spinal cord injury and can be used to differentiate the type of injury and prognosis.
Artifacts in magnetic spirals retrieved by transport of intensity equation (TIE)
NASA Astrophysics Data System (ADS)
Cui, J.; Yao, Y.; Shen, X.; Wang, Y. G.; Yu, R. C.
2018-05-01
The artifacts in magnetic structures reconstructed from Lorentz transmission electron microscopy (LTEM) images with the TIE method have been analyzed in detail. Processing of simulated images of Bloch and Néel spirals indicated that improper parameters in the TIE may overestimate high-frequency information and induce false features in the retrieved images. Specimen tilting further complicates the analysis of the images, because the LTEM image contrast is not the result of the magnetization distribution within the specimen but rather the integral projection pattern of the magnetic induction filling the entire space, including the specimen.
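For reference, a single-step TIE phase retrieval under a nearly uniform in-focus intensity can be sketched as follows; sign and defocus conventions vary between implementations, the low-frequency regularizer q_reg stands in for the kind of "improper parameters" discussed above, and the conversion from retrieved phase to in-plane magnetic induction is omitted:

```python
import numpy as np

def tie_phase(I_minus, I_plus, I_focus, dz, wavelength, pixel_size, q_reg=1e-3):
    """One-step TIE phase retrieval assuming a nearly uniform in-focus intensity."""
    k = 2 * np.pi / wavelength
    dIdz = (I_plus - I_minus) / (2 * dz)            # finite-difference defocus derivative
    rhs = -(k / I_focus.mean()) * dIdz              # uniform-intensity TIE: lap(phi) = rhs
    ny, nx = I_focus.shape
    qy = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    qx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    QX, QY = np.meshgrid(qx, qy)
    q2 = QX**2 + QY**2
    phi_hat = -np.fft.fft2(rhs) / (q2 + q_reg)      # F(lap phi) = -q^2 F(phi), regularized
    return np.real(np.fft.ifft2(phi_hat))
```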
Quantification of fibre polymerization through Fourier space image analysis
Nekouzadeh, Ali; Genin, Guy M.
2011-01-01
Quantification of changes in the total length of randomly oriented and possibly curved lines appearing in an image is a necessity in a wide variety of biological applications. Here, we present an automated approach based upon Fourier space analysis. Scaled, band-pass filtered power spectral densities of greyscale images are integrated to provide a quantitative measurement of the total length of lines of a particular range of thicknesses appearing in an image. A procedure is presented to correct for changes in image intensity. The method is most accurate for two-dimensional processes with fibres that do not occlude one another. PMID:24959096
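The core measurement, integrating a band-pass-filtered power spectral density, can be sketched as follows; the band limits and the simple mean-subtraction intensity correction are illustrative and do not reproduce the paper's scaling and correction procedure:

```python
import numpy as np

def fibre_content(gray, r_lo=0.05, r_hi=0.25):
    """Integrate the band-pass-filtered power spectral density of a greyscale image
    as a relative measure of total fibre length (radii in cycles/pixel)."""
    g = gray - gray.mean()                           # crude intensity correction
    psd = np.abs(np.fft.fftshift(np.fft.fft2(g)))**2
    ny, nx = g.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    FX, FY = np.meshgrid(fx, fy)
    r = np.hypot(FX, FY)
    band = (r >= r_lo) & (r <= r_hi)                 # selects fibres in a thickness range
    return psd[band].sum() / psd.sum()
```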
Reticle stage based linear dosimeter
Berger, Kurt W [Livermore, CA
2007-03-27
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for photolithography systems that include: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.
Reticle stage based linear dosimeter
Berger, Kurt W.
2005-06-14
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for photolithography systems that include: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.
CATE 2016 Indonesia: Image Calibration, Intensity Calibration, and Drift Scan
NASA Astrophysics Data System (ADS)
Hare, H. S.; Kovac, S. A.; Jensen, L.; McKay, M. A.; Bosh, R.; Watson, Z.; Mitchell, A. M.; Penn, M. J.
2016-12-01
The citizen Continental America Telescopic Eclipse (CATE) experiment aims to provide equipment for 60 sites across the path of totality for the United States total solar eclipse of August 21st, 2017. The opportunity to gather ninety minutes of continuous images of the solar corona is unmatched by any previous eclipse event. In March of 2016, five teams were sent to Indonesia to test CATE equipment and procedures on the March 9th, 2016 total solar eclipse. Another goal of the trip was to practice and to gather data for testing data reduction methods. Of the five teams, four collected data. While in Indonesia, each group participated in community outreach at the location of its site. The 2016 eclipse allowed CATE to test the calibration techniques for the 2017 eclipse. Calibration dark-current and flat-field images were collected to remove variation across the cameras. Drift scan observations provided information to rotationally align the images from each site, and the intensity values of these images allowed for intensity calibration of each site. A GPS at each site corrected for major computer errors in the time measurement of images. Further refinement of these processes is required before the 2017 eclipse. This work was made possible through the NSO Training for the 2017 Citizen CATE Experiment funded by NASA (NASA NNX16AB92A).
Medical image registration based on normalized multidimensional mutual information
NASA Astrophysics Data System (ADS)
Li, Qi; Ji, Hongbing; Tong, Ming
2009-10-01
Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
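A histogram-based normalized mutual information between two intensity images is sketched below; the paper's multi-dimensional variant extends the joint histogram with ordinal-feature dimensions, and the affine search (noted only in a comment) uses an immune algorithm, which is not reproduced here:

```python
import numpy as np

def normalized_mi(a, b, bins=32):
    """Normalized mutual information (H(A)+H(B))/H(A,B) from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    h_ab = -np.sum(p[nz] * np.log(p[nz]))
    h_a = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_b = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_a + h_b) / h_ab

# Registration then amounts to searching affine parameters that maximize
# normalized_mi(fixed, warp(moving, params)); the paper performs this search
# with an immune algorithm over translation and rotation parameters.
```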
Registration of 2D to 3D joint images using phase-based mutual information
NASA Astrophysics Data System (ADS)
Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul
2007-03-01
Registration of two-dimensional to three-dimensional orthopaedic medical image data has important applications, particularly in the areas of image-guided surgery and sports medicine. Fluoroscopy to computed tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains only a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process, making it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating it into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI, and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that intensity-based MI performed the worst, gradient-based MI performed slightly better, and phase-based MI performed the best, consistently producing the lowest errors.
Barua, Animesh; Yellapa, Aparna; Bahr, Janice M; Adur, Malavika K; Utterback, Chet W; Bitterman, Pincas; Basu, Sanjib; Sharma, Sameer; Abramowicz, Jacques S
2015-01-01
Limited resolution of transvaginal ultrasound (TVUS) scanning is a significant barrier to early detection of ovarian cancer (OVCA). Contrast agents have been suggested to improve the resolution of TVUS scanning. Emerging evidence suggests that expression of interleukin 16 (IL-16) by the tumor epithelium and microvessels increases in association with OVCA development and offers a potential target for early OVCA detection. The goal of this study was to examine the feasibility of IL-16-targeted contrast agents in enhancing the intensity of ultrasound imaging from ovarian tumors in hens, a model of spontaneous OVCA. Contrast agents were developed by conjugating biotinylated anti-IL-16 antibodies with streptavidin coated microbubbles. Enhancement of ultrasound signal intensity was determined before and after injection of contrast agents. Following scanning, ovarian tissues were processed for the detection of IL-16 expressing cells and microvessels. Compared with precontrast, contrast imaging enhanced ultrasound signal intensity significantly in OVCA hens at early (P < 0.05) and late stages (P < 0.001). Higher intensities of ultrasound signals in OVCA hens were associated with increased frequencies of IL-16 expressing cells and microvessels. These results suggest that IL-16-targeted contrast agents improve the visualization of ovarian tumors. The laying hen may be a suitable model to test new imaging agents and develop targeted anti-OVCA therapeutics.
Rotation covariant image processing for biomedical applications.
Skibbe, Henrik; Reisert, Marco
2013-01-01
With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences.
NASA Astrophysics Data System (ADS)
Civale, John; Ter Haar, Gail; Rivens, Ian; Bamber, Jeff
2005-09-01
Currently, the intensity to be used in our clinical HIFU treatments is calculated from the acoustic path lengths in different tissues, measured on diagnostic ultrasound images of the patient in the treatment position, and from published values of ultrasound attenuation coefficients. This yields an approximate value for the acoustic power at the transducer required to give a stipulated focal intensity in situ. Methods for estimating the actual acoustic attenuation of large parts of the tissue path overlying the target volume from the backscattered ultrasound signal have been investigated for each patient (backscatter attenuation estimation: BAE). Several methods have been investigated. The backscattered echo information acquired from an Acuson scanner has been used to compute the diffraction-corrected attenuation coefficient at each frequency using two methods: a substitution method and an inverse diffraction filtering process. A homogeneous sponge phantom was used to validate the techniques. The use of BAE to determine the correct HIFU exposure parameters for lesioning has been tested in ex vivo liver. HIFU lesions created with a 1.7-MHz therapy transducer have been studied using a semiautomated image processing technique. The reproducibility of lesion size for given in situ intensities determined using BAE has been compared with that obtained using empirical techniques.
Optical image encryption scheme with multiple light paths based on compressive ghost imaging
NASA Astrophysics Data System (ADS)
Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan
2018-02-01
An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and secret images vary and can be used as the main keys with original POM and the logistic map algorithm coefficient in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security and robustness of the method are further analysed through computer simulations.
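The key-driven mask generation can be illustrated with a logistic-map sketch like the following; the map parameters, mask size, and per-mask seed perturbation are assumptions, and the ghost-imaging measurement and compressive reconstruction steps are not shown:

```python
import numpy as np

def logistic_phase_mask(shape, x0=0.3731, r=3.99):
    """Phase-only mask whose phases follow the logistic map x_{n+1} = r*x_n*(1 - x_n);
    the seed x0 and coefficient r act as the secret keys."""
    n = shape[0] * shape[1]
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    phase = 2 * np.pi * x.reshape(shape)
    return np.exp(1j * phase)                 # unit amplitude, phase-only

# M masks, each seeded with a slightly perturbed key (illustrative choice)
masks = [logistic_phase_mask((64, 64), x0=0.3731 + 1e-4 * m) for m in range(8)]
```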
Automated analysis of hot spot X-ray images at the National Ignition Facility
NASA Astrophysics Data System (ADS)
Khan, S. F.; Izumi, N.; Glenn, S.; Tommasini, R.; Benedetti, L. R.; Ma, T.; Pak, A.; Kyrala, G. A.; Springer, P.; Bradley, D. K.; Town, R. P. J.
2016-11-01
At the National Ignition Facility, the symmetry of the hot spot of imploding capsules is diagnosed by imaging the emitted x-rays using gated cameras and image plates. The symmetry of an implosion is an important factor in the yield generated from the resulting fusion process. The x-ray images are analyzed by decomposing the image intensity contours into Fourier and Legendre modes. This paper focuses on the additional protocols for the time-integrated shape analysis from image plates. For implosions with temperatures above ˜4 keV, the hard x-ray background can be utilized to infer the temperature of the hot spot.
MR techniques for guiding high-intensity focused ultrasound (HIFU) treatments.
Kuroda, Kagayaki
2018-02-01
To make full use of the ability of magnetic resonance (MR) to guide high-intensity focused ultrasound (HIFU) treatment, effort has been made to improve techniques for thermometry, motion tracking, and sound beam visualization. For monitoring rapid temperature elevation with the proton resonance frequency (PRF) shift, data acquisition and processing can be accelerated with parallel imaging and/or sparse sampling in conjunction with appropriate signal processing methods. Thermometry should be robust against tissue motion, motion-induced magnetic field variation, and susceptibility change; thus, multibaseline, referenceless, and hybrid techniques have become important. In adipose or bony tissues, for which the PRF shift cannot be used, thermometry based on relaxation times or signal intensity may be utilized. Motion tracking is crucial not only for thermometry but also for targeting the focus of the ultrasound in moving organs such as the liver, kidney, or heart. Various techniques for motion tracking, such as those based on an anatomical image atlas with optical-flow displacement detection, a navigator echo to track the diaphragm position, and/or rapid imaging to track vessel positions, have been proposed. Techniques for avoiding the ribcage and near-field heating have also been examined. MR acoustic radiation force imaging (MR-ARFI) is an alternative to thermometry that can identify the location and shape of the focal spot and the sound beam path. This technique could be useful for treating heterogeneous tissue regions or performing transcranial therapy. All of these developments, discussed further in this review, expand the applicability of HIFU treatments to a variety of clinical targets while maintaining safety and precision. Level of Evidence: 2. Technical Efficacy: Stage 4. J. Magn. Reson. Imaging 2018;47:316-331. © 2017 International Society for Magnetic Resonance in Medicine.
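For the PRF-shift thermometry mentioned above, the temperature change follows from the phase difference of two gradient-echo acquisitions; a minimal sketch, with the field strength, echo time, and thermal coefficient given as typical assumed values rather than values from this review:

```python
import numpy as np

GAMMA_HZ_PER_T = 42.58e6     # proton gyromagnetic ratio
ALPHA = -0.01e-6             # PRF thermal coefficient, approx. -0.01 ppm per degC

def prf_temperature_change(phase_now, phase_base, B0=3.0, TE=0.010):
    """Temperature change map from the phase difference of two gradient-echo images:
    dT = dphi / (2*pi * gamma * alpha * B0 * TE)."""
    dphi = np.angle(np.exp(1j * (phase_now - phase_base)))   # wrap to (-pi, pi]
    return dphi / (2 * np.pi * GAMMA_HZ_PER_T * ALPHA * B0 * TE)

# at the assumed 3 T and TE = 10 ms, one radian of phase change corresponds
# to roughly -12.5 degC; multibaseline/referenceless corrections are omitted.
```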
Intensity non-uniformity correction in MRI: existing methods and their validation.
Belaroussi, Boubakeur; Milles, Julien; Carme, Sabin; Zhu, Yue Min; Benoit-Cattin, Hugues
2006-04-01
Magnetic resonance imaging is a popular and powerful non-invasive imaging technique. Automated analysis has become mandatory to efficiently cope with the large amount of data generated using this modality. However, several artifacts, such as intensity non-uniformity, can degrade the quality of acquired data. Intensity non-uniformity consists of anatomically irrelevant intensity variation throughout the data. It can be induced by the choice of the radio-frequency coil, the acquisition pulse sequence, and the nature and geometry of the sample itself. Numerous methods have been proposed to correct this artifact. In this paper, we propose an overview of existing methods. We first sort them according to their location in the acquisition/processing pipeline. Sorting is then refined based on the assumptions those methods rely on. Next, we present the validation protocols used to evaluate these different correction schemes from both a qualitative and a quantitative point of view. Finally, the availability and usability of the presented methods are discussed.
Teistler, M; Breiman, R S; Lison, T; Bott, O J; Pretschner, D P; Aziz, A; Nowinski, W L
2008-10-01
Volumetric imaging (computed tomography and magnetic resonance imaging) provides increased diagnostic detail but is associated with the problem of navigating through large amounts of data. In an attempt to overcome this problem, a novel 3D navigation tool based on an alternative input device has been designed and developed. A 3D mouse allows simultaneous definition of the position and orientation of orthogonal or oblique multiplanar reformatted images or slabs, which are presented within a virtual 3D scene together with the volume-rendered data set and additionally as 2D images. Slabs are visualized with maximum intensity projection, average intensity projection, or the standard volume rendering technique. A prototype based on PC technology has been implemented and tested by several radiologists. It has been shown to be easily understandable and usable after a very short learning phase. Our solution may help to fully exploit the diagnostic potential of volumetric imaging by allowing a more efficient reading process compared to currently deployed solutions based on a conventional mouse and keyboard.
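The slab rendering modes mentioned above reduce to simple projections along the slab's normal; a minimal axis-aligned sketch follows (oblique reformatting and the 3D-mouse interaction are not shown, and the compositing opacity is an arbitrary assumption):

```python
import numpy as np

def slab_views(volume, axis=2, start=40, thickness=10, alpha=0.1):
    """Axis-aligned slab projections: MIP, average-intensity projection, and a toy
    front-to-back alpha composite standing in for volume rendering."""
    slab = np.moveaxis(np.take(volume, range(start, start + thickness), axis=axis), axis, 0)
    mip = slab.max(axis=0)
    aip = slab.mean(axis=0)
    comp = np.zeros_like(mip, dtype=float)
    remaining = 1.0
    for s in slab:                          # front-to-back compositing, uniform opacity
        comp += remaining * alpha * s
        remaining *= (1.0 - alpha)
    return mip, aip, comp

# toy usage on a random volume
vol = np.random.rand(128, 128, 64)
mip, aip, comp = slab_views(vol, axis=2, start=20, thickness=8)
```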
A fluorescent molecular rotor probes the kinetic process of degranulation of mast cells.
Furuno, T; Isoda, R; Inagaki, K; Iwaki, T; Noji, M; Nakanishi, M
1992-08-01
A confocal fluorescence microscope was used to study the exocytotic secretory processes of mast cells in combination with a fluorescent molecular rotor, 9-(dicyanovinyl)julolidine (DCVJ). DCVJ is a unique fluorescent dye whose quantum yield increases with decreasing intramolecular rotation. Here, DCVJ-loaded peritoneal rat mast cells were stimulated with compound 48/80 and their fluorescence images were compared with fluorescence calcium images of fluo-3-loaded mast cells. Subsequent to transient increases in the intracellular free calcium ion concentration, DCVJ fluorescence increased dramatically in the cytoplasm and formed a ring-like structure around the nucleus, suggesting that the dye bound to proteins composing the cytoskeletal architecture. Furthermore, the increases in DCVJ fluorescence intensity were mostly blocked in the presence of cytochalasin D (10 microM). However, fluo-3 fluorescence intensities still increased after addition of compound 48/80.
NASA Astrophysics Data System (ADS)
Ye, Xujiong; Siddique, Musib; Douiri, Abdel; Beddoe, Gareth; Slabaugh, Greg
2009-02-01
Automatic segmentation of medical images is a challenging problem due to the complexity and variability of human anatomy, poor contrast of the object being segmented, and noise resulting from the image acquisition process. This paper presents a novel feature-guided method for the segmentation of 3D medical lesions. The proposed algorithm combines 1) a volumetric shape feature (shape index) based on high-order partial derivatives; 2) mean shift clustering in a joint spatial-intensity-shape (JSIS) feature space; and 3) a modified expectation-maximization (MEM) algorithm on the mean shift mode map to merge the neighboring regions (modes). In such a scenario, the volumetric shape feature is integrated into the process of the segmentation algorithm. The joint spatial-intensity-shape features provide rich information for the segmentation of the anatomic structures or lesions (tumors). The proposed method has been evaluated on a clinical dataset of thoracic CT scans that contains 68 nodules. A volume overlap ratio between each segmented nodule and the ground truth annotation is calculated. Using the proposed method, the mean overlap ratio over all the nodules is 0.80. On visual inspection and using a quantitative evaluation, the experimental results demonstrate the potential of the proposed method. It can properly segment a variety of nodules including juxta-vascular and juxta-pleural nodules, which are challenging for conventional methods due to the high similarity of intensities between the nodules and their adjacent tissues. This approach could also be applied to lesion segmentation in other anatomies, such as polyps in the colon.
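A 2D stand-in for clustering in the joint spatial-intensity-shape feature space is sketched below; the feature scaling, bandwidth, and use of scikit-image's 2D shape index in place of the paper's 3D volumetric shape feature are assumptions, and the modified expectation-maximization merging step is omitted:

```python
import numpy as np
from skimage.feature import shape_index
from sklearn.cluster import MeanShift

def jsis_modes(image_2d, bandwidth=0.4):
    """Cluster pixels in a joint spatial-intensity-shape feature space (2D illustration)."""
    si = np.nan_to_num(shape_index(image_2d, sigma=1.0))      # shape feature
    yy, xx = np.mgrid[0:image_2d.shape[0], 0:image_2d.shape[1]]
    feats = np.column_stack([
        xx.ravel() / image_2d.shape[1],                       # spatial
        yy.ravel() / image_2d.shape[0],
        image_2d.ravel() / (image_2d.max() + 1e-9),           # intensity
        si.ravel(),                                           # shape
    ])
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(feats)
    return labels.reshape(image_2d.shape)

# toy usage on a small synthetic image (mean shift over all pixels is slow, so keep it small)
img = np.random.rand(32, 32)
mode_map = jsis_modes(img)
```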
A method for normalizing pathology images to improve feature extraction for quantitative pathology.
Tam, Allison; Barker, Jocelyn; Rubin, Daniel
2016-01-01
With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
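A simplified ICHE-style pipeline, intensity centering followed by CLAHE, might look like the following; the target centroid, bin count, and clip limit are illustrative, and the authors' modified CLAHE is replaced by the standard scikit-image implementation:

```python
import numpy as np
from skimage import exposure

def iche_like(gray, target_centroid=0.5):
    """Shift the intensity-histogram centroid to a common point, then apply CLAHE
    (a simplified stand-in for the ICHE pipeline described above)."""
    g = gray.astype(float)
    g = (g - g.min()) / (g.max() - g.min() + 1e-9)            # rescale to [0, 1]
    counts, edges = np.histogram(g, bins=256, range=(0.0, 1.0))
    mids = 0.5 * (edges[:-1] + edges[1:])
    centroid = np.sum(counts * mids) / counts.sum()
    shifted = np.clip(g + (target_centroid - centroid), 0.0, 1.0)
    return exposure.equalize_adapthist(shifted, clip_limit=0.01)
```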
Low-level processing for real-time image analysis
NASA Technical Reports Server (NTRS)
Eskenazi, R.; Wilf, J. M.
1979-01-01
A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
A survey of camera error sources in machine vision systems
NASA Astrophysics Data System (ADS)
Jatko, W. B.
In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well-known and can be applied to the problem of an autonomous robot vehicle. Coincidental points in the two images are located and then the location of that point in a three-dimensional space can be calculated using the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low level image processing tasks. Specifically a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
Qi, Chang; Changlin, Huang
2007-07-01
To examine the association between levels of cartilage oligomeric matrix protein (COMP), matrix metalloproteinase-1 (MMP-1), matrix metalloproteinase-3 (MMP-3), and tissue inhibitor of matrix metalloproteinases-1 (TIMP-1) in serum and synovial fluid and MR imaging of cartilage degeneration in the knee joint, and to understand the effects of movement training of different intensities on knee joint cartilage. Twenty adult canines were randomly divided into three groups (8 in the light training group, 8 in the intensive training group, 4 in the control group), and the canines of the two training groups were trained daily at different intensities. The training lasted 10 weeks in all. Magnetic resonance imaging (MRI) examinations were performed regularly (at 2, 4, 6, 8, and 10 weeks) to investigate changes of the articular cartilage in the canine knee, while concentrations of COMP, MMP-1, MMP-3, and TIMP-1 in serum and synovial fluid were measured by ELISA. Compared with the control group, imaging changes of cartilage degeneration were found in both training groups by MRI examination during the training period; however, there was no significant difference between the two training groups. Elevated levels of COMP, MMP-1, MMP-3, TIMP-1, and MMP-3/TIMP-1 were seen in serum and synovial fluid after training, and these levels were clearly associated with knee MRI grades of the cartilage lesions. Furthermore, there were statistically significant associations between biomarker levels in serum and in synovial fluid. Long-term, high-intensity movement training induces cartilage degeneration in the knee joint. Within the range of intensities applied in this study, knee cartilage degeneration caused by light or intensive training shows no difference on MR imaging but a comparatively clear difference in biomarker levels. To detect articular cartilage degeneration at an early stage and to monitor the pathological process, the combined application of several biomarkers has good practical value and can be used as a helpful supplement to MRI.
Digital imaging of autoradiographs from paintings by Georges de La Tour (1593-1652)
NASA Astrophysics Data System (ADS)
Fischer, C.-O.; Gallagher, M.; Laurenze, C.; Schmidt, Ch; Slusallek, K.
1999-11-01
The artistic work of the painter Georges de La Tour has been studied very intensively in the last few years, mainly by French and US-American art historians and natural scientists. To support the in-depth analysis of two paintings from the Kimbell Art Museum in Fort Worth, Texas, USA, two similar paintings from the Gemäldegalerie Berlin have been investigated. The method of neutron activation autoradiography has been applied using imaging plates with digital image processing.
A special vegetation index for the weed detection in sensor based precision agriculture.
Langner, Hans-R; Böttger, Hartmut; Schmidt, Helmut
2006-06-01
Many technologies in precision agriculture (PA) require image analysis and image processing with weed and background differentiation. The detection of weeds on mulched cropland is one important image-processing task for sensor-based precision herbicide applications. The article introduces a special vegetation index, the Difference Index with Red Threshold (DIRT), for weed detection on mulched croplands. Experimental investigations in weed detection on mulched areas show that the DIRT performs better than the Normalized Difference Vegetation Index (NDVI). The results of the evaluation with four different decision criteria indicate that the new DIRT gives the highest reliability in weed/background differentiation on mulched areas. While using the same spectral bands (infrared and red) as the NDVI, the new DIRT is more suitable for weed detection than the other vegetation indices and requires only a small amount of additional computation. The new vegetation index DIRT was tested on mulched areas during automatic ratings with a special weed camera system. The test results compare the new DIRT and three other decision criteria: the difference between infrared and red intensity (Diff), the soil-adjusted quotient between infrared and red intensity (Quotient), and the NDVI. The decision criteria were compared using a worst-case decision quality parameter Q defined for mulched croplands. Although this new index DIRT needs further testing, it appears to be a good decision criterion for weed detection on mulched areas and should also be useful for other image-processing applications in precision agriculture. The weed detection hardware and the PC program for weed image processing were developed with funds from the German Federal Ministry of Education and Research (BMBF).
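The abstract does not give the DIRT formula, so the sketch below pairs the textbook NDVI with a hypothetical "difference index gated by a red threshold", purely to illustrate the kind of criterion described; the function names and the `red_max` threshold are assumptions, not the published definition.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def dirt_like(nir, red, red_max=0.35):
    """Hypothetical difference index gated by a red threshold: pixels brighter
    than red_max in the red band (e.g. straw/mulch) are suppressed."""
    nir, red = nir.astype(float), red.astype(float)
    return np.where(red <= red_max, nir - red, 0.0)
```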
NASA Astrophysics Data System (ADS)
Kwok, Ngaiming; Shi, Haiyan; Peng, Yeping; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Rahman, Md Arifur
2018-04-01
Restoring images captured under low illumination is an essential front-end process for most image-based applications. The Center-Surround Retinex algorithm has been a popular approach for improving image brightness. However, in its basic form this algorithm is known to produce color degradation. To mitigate this problem, the Single-Scale Retinex algorithm is here modified to act as an edge extractor, while illumination is recovered through a non-linear intensity mapping stage. The derived edges are then integrated with the mapped image to produce the enhanced output. Furthermore, to reduce color distortion, the process is conducted in the magnitude-sorted domain instead of the conventional Red-Green-Blue (RGB) color channels. Experimental results show that improvements in mean brightness, colorfulness, saturation, and information content can be obtained.
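A minimal grayscale sketch of the pipeline outlined above: a single-scale Retinex term reused as an edge map, a gamma-style non-linear mapping to recover brightness, and a simple additive recombination. The sigma, gamma, and blend weight are placeholders, and the magnitude-sorted color handling is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_low_light(gray, sigma=40.0, gamma=0.5, edge_weight=0.3):
    """Single-scale-Retinex edge term + non-linear intensity mapping (simplified)."""
    img = gray.astype(float) / 255.0
    surround = gaussian_filter(img, sigma)                  # center-surround illumination estimate
    edges = np.log1p(img) - np.log1p(surround)              # SSR output used as an edge map
    mapped = np.power(img, gamma)                           # non-linear brightness mapping
    out = np.clip(mapped + edge_weight * edges, 0.0, 1.0)   # integrate edges with mapped image
    return (out * 255).astype(np.uint8)
```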
Kashiha, Mohammad Amin; Green, Angela R; Sales, Tatiana Glogerley; Bahr, Claudia; Berckmans, Daniel; Gates, Richard S
2014-10-01
Image processing systems have been widely used in monitoring livestock for many applications, including identification, tracking, behavior analysis, occupancy rates, and activity calculations. The primary goal of this work was to quantify image processing performance when monitoring laying hens by comparing length of stay in each compartment as detected by the image processing system with the actual occurrences registered by human observations. In this work, an image processing system was implemented and evaluated for use in an environmental animal preference chamber to detect hen navigation between 4 compartments of the chamber. One camera was installed above each compartment to produce top-view images of the whole compartment. An ellipse-fitting model was applied to captured images to detect whether the hen was present in a compartment. During a choice-test study, mean ± SD success detection rates of 95.9 ± 2.6% were achieved when considering total duration of compartment occupancy. These results suggest that the image processing system is currently suitable for determining the response measures for assessing environmental choices. Moreover, the image processing system offered a comprehensive analysis of occupancy while substantially reducing data processing time compared with the time-intensive alternative of manual video analysis. The above technique was used to monitor ammonia aversion in the chamber. As a preliminary pilot study, different levels of ammonia were applied to different compartments while hens were allowed to navigate between compartments. Using the automated monitor tool to assess occupancy, a negative trend of compartment occupancy with ammonia level was revealed, though further examination is needed. ©2014 Poultry Science Association Inc.
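A sketch of how the ellipse-fitting occupancy test might look with OpenCV: segment the bird from a top-view frame, fit an ellipse to the largest contour, and report presence when an object of plausible size is found. Threshold polarity and the area gate are assumptions, not the authors' calibration.

```python
import cv2

def hen_present(frame_bgr, min_area=2000):
    """Return (present, ellipse) for one compartment's top-view frame (illustrative only)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu threshold; assumes the hen contrasts with the compartment floor.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False, None
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < min_area or len(largest) < 5:
        return False, None                 # cv2.fitEllipse needs at least 5 contour points
    return True, cv2.fitEllipse(largest)
```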
Female pelvic synthetic CT generation based on joint intensity and shape analysis
NASA Astrophysics Data System (ADS)
Liu, Lianli; Jolly, Shruti; Cao, Yue; Vineberg, Karen; Fessler, Jeffrey A.; Balter, James M.
2017-04-01
Using MRI for radiotherapy treatment planning and image guidance is appealing as it provides superior soft tissue information over CT scans and avoids possible systematic errors introduced by aligning MR to CT images. This study presents a method that generates Synthetic CT (MRCT) volumes by performing probabilistic tissue classification of voxels from MRI data using a single imaging sequence (T1 Dixon). The intensity overlap between different tissues on MR images, a major challenge for voxel-based MRCT generation methods, is addressed by adding bone shape information to an intensity-based classification scheme. A simple pelvic bone shape model, built from principal component analysis of pelvis shape from 30 CT image volumes, is fitted to the MR volumes. The shape model generates a rough bone mask that excludes air and covers bone along with some surrounding soft tissues. Air regions are identified and masked out from the tissue classification process by intensity thresholding outside the bone mask. A regularization term is added to the fuzzy c-means classification scheme that constrains voxels outside the bone mask from being assigned memberships in the bone class. MRCT image volumes are generated by multiplying the probability of each voxel being represented in each class with assigned attenuation values of the corresponding class and summing the result across all classes. The MRCT images presented intensity distributions similar to CT images with a mean absolute error of 13.7 HU for muscle, 15.9 HU for fat, 49.1 HU for intra-pelvic soft tissues, 129.1 HU for marrow and 274.4 HU for bony tissues across 9 patients. Volumetric modulated arc therapy (VMAT) plans were optimized using MRCT-derived electron densities, and doses were recalculated using corresponding CT-derived density grids. Dose differences to planning target volumes were small with mean/standard deviation of 0.21/0.42 Gy for D0.5cc and 0.29/0.33 Gy for D99%. The results demonstrate the accuracy of the method and its potential in supporting MRI only radiotherapy treatment planning.
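The final synthesis step described above is a per-voxel expectation over tissue classes; a compact sketch follows, where the memberships would come from the regularized fuzzy c-means step and the HU assignments are illustrative round numbers rather than the paper's calibrated values.

```python
import numpy as np

def synthesize_mrct(memberships, hu_values=(-100.0, 40.0, 30.0, 130.0, 700.0)):
    """Synthetic CT = sum_k p_k(voxel) * HU_k.

    memberships: array of shape (n_classes, nx, ny, nz), class probabilities per voxel.
    hu_values:   one assigned attenuation value per class (placeholder numbers here);
                 len(hu_values) must equal n_classes.
    """
    hu = np.asarray(hu_values).reshape(-1, *([1] * (memberships.ndim - 1)))
    return np.sum(memberships * hu, axis=0)
```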
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jurrus, Elizabeth R.; Watanabe, Shigeki; Giuly, Richard J.
2013-01-01
Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of these data make human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes.
A Rare Case of Malignant Melanoma of the Mandible: CT and MRI Findings.
Ogura, Ichiro; Sasaki, Yoshihiko; Kameta, Ayako; Sue, Mikiko; Oda, Takaaki
Malignant melanoma of the mandibular gingiva is extremely rare. It is a malignant tumour of melanocytes or their precursor cells and is often misinterpreted as a benign pigmented process. A few reports have described the computed tomography (CT) and magnetic resonance imaging (MRI) findings of malignant melanoma in the oral cavity. We report a rare case of malignant melanoma of the mandible and the related CT and MRI findings. Soft tissue algorithm contrast-enhanced CT showed an expansile mass and irregular destruction of alveolar bone in the right mandibular molar area. MR images showed an enhancing mass, with the tumour exhibiting both low to intermediate and high signal intensities. Soft tissue algorithm contrast-enhanced CT and MR images showed lymphadenopathy involving the submandibular lymph nodes. Histopathological examination confirmed the diagnosis of malignant melanoma.
A new region-edge based level set model with applications to image segmentation
NASA Astrophysics Data System (ADS)
Zhi, Xuhao; Shen, Hong-Bin
2018-04-01
Level set models have advantages in handling complex shapes and topological changes and are widely used in image processing tasks. Level set models for image segmentation can be grouped into region-based models and edge-based models, both of which have merits and drawbacks. Region-based level set models rely on fitting the color intensity of the separated regions but are not sensitive to edge information. Edge-based level set models evolve by fitting local gradient information but are easily affected by noise. We propose a region-edge based level set model, which incorporates saliency information into the energy function and fuses color intensity with local gradient information. The evolution of the proposed model is implemented by a hierarchical two-stage protocol, and the experimental results show flexible initialization, robust evolution, and precise segmentation.
Banić, Nikola; Lončarić, Sven
2015-11-01
Removing the influence of illumination on image colors and adjusting the brightness across the scene are important image enhancement problems. This is achieved by applying adequate color constancy and brightness adjustment methods. One of the earliest models to deal with both of these problems was the Retinex theory. Some of the Retinex implementations tend to give high-quality results by performing local operations, but they are computationally relatively slow. One of the recent Retinex implementations is light random sprays Retinex (LRSR). In this paper, a new method is proposed for brightness adjustment and color correction that overcomes the main disadvantages of LRSR. There are three main contributions of this paper. First, a concept of memory sprays is proposed to reduce the number of LRSR's per-pixel operations to a constant regardless of the parameter values, thereby enabling a fast Retinex-based local image enhancement. Second, an effective remapping of image intensities is proposed that results in significantly higher quality. Third, the problem of LRSR's halo effect is significantly reduced by using an alternative illumination processing method. The proposed method enables a fast Retinex-based image enhancement by processing Retinex paths in a constant number of steps regardless of the path size. Due to the halo effect removal and remapping of the resulting intensities, the method outperforms many of the well-known image enhancement methods in terms of resulting image quality. The results are presented and discussed. It is shown that the proposed method outperforms most of the tested methods in terms of image brightness adjustment, color correction, and computational speed.
Girod, Marion; Shi, Yunzhou; Cheng, Ji-Xin; Cooks, R. Graham
2010-01-01
Desorption electrospray ionization (DESI) mass spectrometry is used in an imaging mode to interrogate the lipid profiles of 15 µm thin tissue cross sections of injured rat spinal cord and normal healthy tissue. Increased relative intensities of fatty acids, diacylglycerols, and lysolipids (between +120% and +240%), as well as a small decrease in intensities of lipids (−30%), were visualized in the lesion epicenter and adjacent areas after spinal cord injury. This indicates the hydrolysis of lipids during the demyelination process due to activation of the phospholipase A2 enzyme. In addition, signals corresponding to oxidative degradation products, such as prostaglandin and hydroxyeicosatetraenoic acid, exhibited a two-fold increase in signal intensity in the negative ion mode in lesions relative to normal healthy tissue. Analysis of malondialdehyde, a product of lipid peroxidation and marker of oxidative stress, was accomplished in the ambient environment using reactive DESI mass spectrometry imaging. This was achieved by electrospraying a reagent solution containing dinitrophenylhydrazine as high-velocity charged droplets onto the tissue section. The hydrazine reacts selectively and rapidly with the carbonyl groups of malondialdehyde, and a signal approximately twice as intense was detected in the lesions compared with healthy spinal cord. With a small amount of tissue sample, DESI-MS imaging provides information on the composition and distribution of specific compounds (limited by the occurrence of isomeric lipids with very similar fragmentation patterns) in lesions after spinal cord injury in comparison with normal healthy tissue, allowing identification of the extent of the lesion and its repair. PMID:21142140
Reconstruction of pulse noisy images via stochastic resonance
Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan
2015-01-01
We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911
A service protocol for post-processing of medical images on the mobile device
NASA Astrophysics Data System (ADS)
He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian
2014-03-01
With growing computing capability and display size, the mobile device has become a tool that helps clinicians view patient information and medical images anywhere and anytime. Transferring medical images with large data sizes from a picture archiving and communication system to a mobile client is difficult and time-consuming, since the wireless network is unstable and limited in bandwidth. In addition, limited computing capability, memory, and battery endurance make it hard to provide a satisfactory quality of experience for radiologists performing complex post-processing of medical images on the mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. In order to make mobile devices with different platforms able to access post-processing of medical images, the Extensible Markup Language is used to describe this protocol, which contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g. window leveling, pixel value readout) and 3D post-processing (e.g. maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol; it allows a mobile device to access post-processing services on the render server via a client application or a web page.
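As a purely hypothetical illustration of what one message in such an XML-described protocol could look like, the snippet below assembles a 2D window-level request with Python's standard library; every tag and attribute name is invented for illustration and is not the protocol defined in the paper.

```python
import xml.etree.ElementTree as ET

def build_window_level_request(session_token, series_uid, center, width):
    """Assemble a hypothetical 2D post-processing (window leveling) request message."""
    root = ET.Element("PostProcessingRequest", version="1.0")
    ET.SubElement(root, "Auth", token=session_token)          # user authentication part
    series = ET.SubElement(root, "Series", uid=series_uid)    # image query/retrieval part
    ET.SubElement(series, "WindowLevel",                      # 2D post-processing request
                  center=str(center), width=str(width))
    return ET.tostring(root, encoding="unicode")

print(build_window_level_request("abc123", "1.2.840.99999.1", center=40, width=400))
```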
Computer Vision for Artificially Intelligent Robotic Systems
NASA Astrophysics Data System (ADS)
Ma, Chialo; Ma, Yung-Lung
1987-04-01
In this paper, an Acoustic Imaging Recognition System (AIRS) is introduced, which is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up-table method, which saves a great deal of calculation time and is practicable. The AIRS consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit, and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically; the purpose of the rotation is to find the area of maximum reflection intensity. From the distance, angles, and intensity of the target, its characteristics can be determined, with all decisions processed by the main control unit. In the pulse-echo signal processing unit, the correlation method is used to overcome the limitation of short ultrasonic bursts, because a correlation system can transmit large time-bandwidth signals and obtain improved resolution and increased intensity through pulse compression in the correlation receiver. The correlator output is sampled and converted into digital data by μ-law coding, and these data, together with the delay time T and angle information (θH, θV), are sent to the main control unit for further analysis. For the recognition process, a dynamic look-up-table method is used: several recognition pattern tables are first set up, and the new pattern scanned by the transducer array is divided into several stages and compared with the stored tables. The comparison is implemented by dynamic programming and a Markovian process. All hardware control signals, such as the optimum delay time for the correlator receiver and the horizontal and vertical rotation angles of the transducer plate, are controlled by the main control unit, which also handles the pattern recognition process. The distance from the target to the transducer plate is limited by the power and beam angle of the transducer elements; in this AIRS model, a narrow-beam transducer with a 50 V peak-to-peak input voltage is used. A robot equipped with AIRS can not only measure the distance to the target but also recognize a three-dimensional image of the target from the image library in the robot's memory. Index terms: acoustic system, ultrasonic transducer, dynamic programming, look-up table, image processing, pattern recognition, quad tree, quad approach.
System for Thermal Imaging of Hot Moving Objects
NASA Technical Reports Server (NTRS)
Weinstein, Leonard; Hundley, Jason
2007-01-01
The High Altitude/Re-Entry Vehicle Infrared Imaging (HARVII) system is a portable instrumentation system for tracking and thermal imaging of a possibly distant and moving object. The HARVII is designed specifically for measuring the changing temperature distribution on a space shuttle as it reenters the atmosphere. The HARVII system, or other systems based on its design, could also be used for such purposes as determining temperature distributions in fires, on volcanoes, and on surfaces of hot models in wind tunnels. In yet another potential application, the HARVII or a similar system would be used to infer atmospheric pollution levels from images of the Sun acquired at multiple wavelengths over regions of interest. The HARVII system includes the Ratio Intensity Thermography System (RITS) and a tracking subsystem that keeps the RITS aimed at the moving object of interest. The subsystem of primary interest here is the RITS, which acquires and digitizes images of the same scene at different wavelengths in rapid succession. Assuming that the time interval between successive measurements is short enough that temperatures do not change appreciably, the digitized image data at the different wavelengths are processed to extract temperatures according to the principle of ratio-intensity thermography: the temperature at a given location in a scene is inferred from the ratios between or among intensities of infrared radiation from that location at two or more wavelengths. This principle, based on the Planck radiation law for the intensity of electromagnetic radiation as a function of wavelength and temperature, is valid as long as the observed body is a gray or black body and there is minimal atmospheric absorption of radiation.
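A sketch of the two-wavelength ratio principle: for an ideal gray body, the ratio of spectral radiances at two wavelengths depends only on temperature, so T can be recovered by inverting the Planck radiance ratio numerically. The wavelengths, search bracket, and use of SciPy's root finder are choices made for this illustration, not HARVII parameters.

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam, T):
    """Spectral radiance of a blackbody at wavelength lam (m) and temperature T (K)."""
    return (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * T)) - 1.0)

def temperature_from_ratio(ratio, lam1=3.9e-6, lam2=4.8e-6, t_lo=300.0, t_hi=3000.0):
    """Invert the measured radiance ratio I(lam1)/I(lam2) for temperature (gray body)."""
    f = lambda T: planck(lam1, T) / planck(lam2, T) - ratio
    return brentq(f, t_lo, t_hi)

T_true = 1200.0
r = planck(3.9e-6, T_true) / planck(4.8e-6, T_true)
print(round(temperature_from_ratio(r)))          # recovers ~1200 K
```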
Skeletonization of gray-scale images by gray weighted distance transform
NASA Astrophysics Data System (ADS)
Qian, Kai; Cao, Siqi; Bhattacharya, Prabir
1997-07-01
In pattern recognition, thinning algorithms are often a useful tool to represent a digital pattern by means of a skeletonized image, consisting of a set of one-pixel-wide lines that highlight its significant features. There is interest in applying thinning directly to gray-scale images, motivated by the desire to process images whose meaningful information is distributed over different levels of gray intensity. In this paper, a new algorithm is presented which can skeletonize both black-and-white and gray-scale pictures. The algorithm is based on the gray-weighted distance transform, can process gray-scale pictures whose intensity is not uniformly distributed, and preserves the topology of the original picture. The process includes a preliminary phase of investigation of the 'hollows' in the gray-scale image; these hollows are or are not treated as topological constraints for the skeleton structure, depending on whether their depth is statistically significant. The algorithm can also be executed on a parallel machine, since all operations are local. Some examples are discussed to illustrate the algorithm.
Optimal design of photoreceptor mosaics: why we do not see color at night.
Manning, Jeremy R; Brainard, David H
2009-01-01
While color vision mediated by rod photoreceptors in dim light is possible (Kelber & Roth, 2006), most animals, including humans, do not see in color at night. This is because their retinas contain only a single class of rod photoreceptors. Many of these same animals have daylight color vision, mediated by multiple classes of cone photoreceptors. We develop a general formulation, based on Bayesian decision theory, to evaluate the efficacy of various retinal photoreceptor mosaics. The formulation evaluates each mosaic under the assumption that its output is processed to optimally estimate the image. It also explicitly takes into account the statistics of the environmental image ensemble. Using the general formulation, we consider the trade-off between monochromatic and dichromatic retinal designs as a function of overall illuminant intensity. We are able to demonstrate a set of assumptions under which the prevalent biological pattern represents optimal processing. These assumptions include an image ensemble characterized by high correlations between image intensities at nearby locations, as well as high correlations between intensities in different wavelength bands. They also include a constraint on receptor photopigment biophysics and/or the information carried by different wavelengths that produces an asymmetry in the signal-to-noise ratio of the output of different receptor classes. Our results thus provide an optimality explanation for the evolution of color vision for daylight conditions and monochromatic vision for nighttime conditions. An additional result from our calculations is that regular spatial interleaving of two receptor classes in a dichromatic retina yields performance superior to that of a retina where receptors of the same class are clumped together.
NASA Astrophysics Data System (ADS)
Kerekes, Ryan A.; Gleason, Shaun S.; Trivedi, Niraj; Solecki, David J.
2010-03-01
Segmentation, tracking, and tracing of neurons in video imagery are important steps in many neuronal migration studies and can be inaccurate and time-consuming when performed manually. In this paper, we present an automated method for tracing the leading and trailing processes of migrating neurons in time-lapse image stacks acquired with a confocal fluorescence microscope. In our approach, we first locate and track the soma of the cell of interest by smoothing each frame and tracking the local maxima through the sequence. We then trace the leading process in each frame by starting at the center of the soma and stepping repeatedly in the most likely direction of the leading process. This direction is found at each step by examining second derivatives of fluorescent intensity along curves of constant radius around the current point. Tracing terminates after a fixed number of steps or when fluorescent intensity drops below a fixed threshold. We evolve the resulting trace to form an improved trace that more closely follows the approximate centerline of the leading process. We apply a similar algorithm to the trailing process of the cell by starting the trace in the opposite direction. We demonstrate our algorithm on two time-lapse confocal video sequences of migrating cerebellar granule neurons (CGNs). We show that the automated traces closely approximate ground truth traces to within 1 or 2 pixels on average. Additionally, we compute line intensity profiles of fluorescence along the automated traces and quantitatively demonstrate their similarity to manually generated profiles in terms of fluorescence peak locations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, X; Yang, D
Purpose: To investigate a method to automatically recognize the treatment site in X-ray portal images. It could be useful for detecting potential treatment errors and for providing guidance to subsequent tasks, e.g. automatically verifying the patient's daily setup. Methods: The portal images were exported from MOSAIQ as DICOM files and were 1) processed with a threshold-based intensity transformation algorithm to enhance contrast, and 2) then down-sampled (from 1024×768 to 128×96) using bi-cubic interpolation. An appearance-based vector space model (VSM) was used to rearrange the images into vectors. A principal component analysis (PCA) method was used to reduce the vector dimensions. A multi-class support vector machine (SVM), with a radial basis function kernel, was used to build the treatment site recognition models. These models were then used to recognize the treatment sites in the portal images. Portal images of 120 patients were included in the study. The images were selected to cover six treatment sites: brain, head and neck, breast, lung, abdomen and pelvis. Each site had images from twenty patients. Cross-validation experiments were performed to evaluate the performance. Results: The MATLAB Image Processing Toolbox and scikit-learn (a machine learning library in Python) were used to implement the proposed method. The average accuracies using the AP and RT images separately were 95% and 94%, respectively. The average accuracy using AP and RT images together was 98%. Computation time was ∼0.16 seconds per patient with an AP or RT image and ∼0.33 seconds per patient with both AP and RT images. Conclusion: The proposed method of treatment site recognition is efficient and accurate. It is not sensitive to differences in image intensity, size, or patient position in the portal images. It could be useful for patient safety assurance. The work was partially supported by a research grant from Varian Medical Systems.
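A condensed sketch of the recognition chain described above (down-sampled image vectors, PCA, RBF-kernel SVM, cross-validation) using scikit-learn, which the abstract names. Random arrays stand in for the image data, and the number of PCA components and SVM hyper-parameters are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 128 * 96))       # one down-sampled portal image per patient (stand-in data)
y = np.repeat(np.arange(6), 20)            # six treatment sites, twenty patients each

model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf", gamma="scale", C=10.0))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```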
Radar image processing for rock-type discrimination
NASA Technical Reports Server (NTRS)
Blom, R. G.; Daily, M.
1982-01-01
Image processing and enhancement techniques for improving the geologic utility of digital satellite radar images are reviewed. Preprocessing techniques include mean and variance correction on a range or azimuth line-by-line basis to provide uniformly illuminated swaths, median-value filtering of four-look imagery to eliminate speckle, and geometric rectification using a priori elevation data. Examples are presented of the application of preprocessing methods to Seasat and Landsat data, and Seasat SAR imagery was coregistered with Landsat imagery to form composite scenes. A polynomial was developed to warp the radar picture to fit the Landsat image of a 90 x 90 km grid, using Landsat color ratios together with Seasat intensities. Subsequent linear discriminant analysis was employed to discriminate rock types in known areas. The Seasat additions to the Landsat data improved rock identification by 7%.
Mulkern, Robert; Haker, Steven; Mamata, Hatsuho; Lee, Edward; Mitsouras, Dimitrios; Oshio, Koichi; Balasubramanian, Mukund; Hatabu, Hiroto
2014-03-01
Lung parenchyma is challenging to image with proton MRI. The large air space results in ~1/5th as many signal-generating protons compared to other organs. Air/tissue magnetic susceptibility differences lead to strong magnetic field gradients throughout the lungs and to broad frequency distributions, much broader than within other organs. Such distributions have been the subject of experimental and theoretical analyses which may reveal aspects of lung microarchitecture useful for diagnosis. Their most immediate relevance to current imaging practice is to cause rapid signal decays, commonly discussed in terms of short T2* values of 1 ms or lower at typical imaging field strengths. Herein we provide a brief review of previous studies describing and interpreting proton lung spectra. We then link these broad frequency distributions to rapid signal decays, though not necessarily the exponential decays generally used to define T2* values. We examine how these decays influence observed signal intensities and spatial mapping features associated with the most prominent torso imaging sequences, including spoiled gradient and spin echo sequences. Effects of imperfect refocusing pulses on the multiple echo signal decays in single shot fast spin echo (SSFSE) sequences and effects of broad frequency distributions on balanced steady state free precession (bSSFP) sequence signal intensities are also provided. The theoretical analyses are based on the concept of explicitly separating the effects of reversible and irreversible transverse relaxation processes, thus providing a somewhat novel and more general framework from which to estimate lung signal intensity behavior in modern imaging practice.
NASA Astrophysics Data System (ADS)
Hellebust, Anne; Rosbach, Kelsey; Wu, Jessica Keren; Nguyen, Jennifer; Gillenwater, Ann; Vigneswaran, Nadarajah; Richards-Kortum, Rebecca
2013-12-01
In this longitudinal study, a mouse model of 4-nitroquinoline 1-oxide chemically induced tongue carcinogenesis was used to assess the ability of optical imaging with exogenous and endogenous contrast to detect neoplastic lesions in a heterogeneous mucosal surface. Widefield autofluorescence and fluorescence images of intact 2-NBDG-stained and proflavine-stained tissues were acquired at multiple time points in the carcinogenesis process. Confocal fluorescence images of transverse fresh tissue slices from the same specimens were acquired to investigate how changes in tissue microarchitecture affect widefield fluorescence images of intact tissue. Widefield images were analyzed to develop and evaluate an algorithm to delineate areas of dysplasia and cancer. A classification algorithm for the presence of neoplasia based on the mean fluorescence intensity of 2-NBDG staining and the standard deviation of the fluorescence intensity of proflavine staining was found to separate moderate dysplasia, severe dysplasia, and cancer from non-neoplastic regions of interest with 91% sensitivity and specificity. Results suggest this combination of noninvasive optical imaging modalities can be used in vivo to discriminate non-neoplastic from neoplastic tissue in this model with the potential to translate this technology to the clinic.
Study on polarization image methods in turbid medium
NASA Astrophysics Data System (ADS)
Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong
2014-11-01
In addition to conventional intensity information, polarization imaging provides multi-dimensional polarization information and thereby improves the probability of target detection and recognition. Applying image fusion to polarization images of targets in turbid media helps to obtain high-quality images. Using laser polarization imaging at visible wavelengths, linearly polarized intensity images were acquired by rotating the polarizer angle, and the polarization parameters of targets in turbid media with concentrations ranging from 5% to 10% were obtained. Image fusion techniques were then applied to the acquired polarization images; several fusion methods with superior performance for turbid media were compared, and the resulting performance was analyzed and tabulated. Pixel-level, feature-level, and decision-level fusion algorithms were applied to the degree-of-linear-polarization (DOLP) images. The results show that, as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, whereas the fused images show clearly improved contrast compared with any single image; the reasons for the contrast improvement are analyzed in terms of the polarization of the light.
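For reference, DOLP images of the kind fused in this study are conventionally computed from intensities measured at several polarizer angles; a standard four-angle Stokes formulation is sketched below (the angle set and variable names are assumptions, not the authors' acquisition scheme).

```python
import numpy as np

def dolp_from_four_angles(i0, i45, i90, i135, eps=1e-9):
    """Degree of linear polarization from intensities at polarizer angles 0/45/90/135 deg."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (Stokes S0)
    s1 = i0 - i90                        # Stokes S1
    s2 = i45 - i135                      # Stokes S2
    return np.sqrt(s1**2 + s2**2) / (s0 + eps)
```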
Statistical intensity variation analysis for rapid volumetric imaging of capillary network flux
Lee, Jonghwan; Jiang, James Y.; Wu, Weicheng; Lesage, Frederic; Boas, David A.
2014-01-01
We present a novel optical coherence tomography (OCT)-based technique for rapid volumetric imaging of red blood cell (RBC) flux in capillary networks. Previously we reported that OCT can capture individual RBC passage within a capillary, where the OCT intensity signal at a voxel fluctuates when an RBC passes the voxel. Based on this finding, we defined a metric of statistical intensity variation (SIV) and validated that the mean SIV is proportional to the RBC flux [RBC/s] through simulations and measurements. From rapidly scanned volume data, we used Hessian matrix analysis to vectorize a segment path of each capillary and estimate its flux from the mean of the SIVs gathered along the path. Repeating this process led to a 3D flux map of the capillary network. The present technique enabled us to trace the RBC flux changes over hundreds of capillaries with a temporal resolution of ~1 s during functional activation. PMID:24761298
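A schematic numpy version of the idea described above: for each voxel, the fluctuation of the OCT intensity across rapidly repeated frames is summarized, and the mean of that statistic along a vectorized capillary path estimates RBC flux up to a calibration factor. The coefficient-of-variation-style statistic and the calibration constant are assumptions, not the paper's exact definition of SIV.

```python
import numpy as np

def siv_map(frames, eps=1e-9):
    """Per-voxel intensity-variation statistic from a (time, z, x[, y]) OCT stack."""
    frames = frames.astype(float)
    return frames.std(axis=0) / (frames.mean(axis=0) + eps)

def capillary_flux(frames, path_indices, calib=1.0):
    """Mean variation statistic along one vectorized capillary path, scaled by a calibration factor."""
    siv = siv_map(frames)
    return calib * np.mean([siv[idx] for idx in path_indices])
```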
Combustion behaviors of GO2/GH2 swirl-coaxial injector using non-intrusive optical diagnostics
NASA Astrophysics Data System (ADS)
GuoBiao, Cai; Jian, Dai; Yang, Zhang; NanJia, Yu
2016-06-01
This research evaluates the combustion behaviors of a single-element, swirl-coaxial injector in an atmospheric combustion chamber with gaseous oxygen and gaseous hydrogen (GO2/GH2) as the propellants. A brief comparison of simulated flow field schematics for a shear-coaxial injector and the swirl-coaxial injector reveals the distribution characteristics of the temperature field and streamline patterns. Advanced optical diagnostics, i.e., OH planar laser-induced fluorescence and high-speed imaging, are simultaneously employed to determine the OH radical spatial distribution and flame fluctuations, respectively. The present study focuses on the flame structures under varying O/F mixing ratios and center oxygen swirl intensities. The combined use of several image-processing methods applied to instantaneous OH images, including time-averaging, root-mean-square, and gradient transformation, provides detailed information regarding the distribution of the flow field. The results indicate that the shear layers anchored on the oxygen injector lip are the main zones of chemical heat release and that the O/F mixing ratio significantly affects the flame shape. Furthermore, high-speed imaging of the ignition process and of several consecutive steady-state images reveals that lean conditions more readily drive combustion instabilities and that the center swirl intensity has a moderate influence on the flame oscillation strength. The results of this study provide a visualized analysis for future optimal swirl-coaxial injector designs.
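The three image-processing operations named above are simple per-pixel statistics over the stack of instantaneous OH-PLIF frames; a compact sketch follows, with the Sobel operator standing in for whatever gradient transform the authors used.

```python
import numpy as np
from scipy import ndimage

def oh_statistics(frames):
    """frames: (n_frames, ny, nx) stack of instantaneous OH-PLIF images."""
    frames = frames.astype(float)
    mean_img = frames.mean(axis=0)          # time-averaged flame structure
    rms_img = frames.std(axis=0)            # root-mean-square of the fluctuations
    gx = ndimage.sobel(mean_img, axis=1)
    gy = ndimage.sobel(mean_img, axis=0)
    grad_img = np.hypot(gx, gy)             # gradient transformation (flame-front location)
    return mean_img, rms_img, grad_img
```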
[Research on Spectral Polarization Imaging System Based on Static Modulation].
Zhao, Hai-bo; Li, Huan; Lin, Xu-ling; Wang, Zheng
2015-04-01
The main disadvantages of traditional spectral polarization imaging systems are a complex structure, moving parts, and low throughput. A novel spectral polarization imaging system is discussed, based on static polarization intensity modulation combined with Savart polariscope interference imaging. The imaging system can obtain spectral information and all four Stokes polarization parameters in real time. Compared with conventional methods, the advantages of the imaging system are compactness, low mass, no moving parts, no electrical control, no slit, and high throughput. The system structure and the basic theory are introduced. The experimental system was established in the laboratory and consists of reimaging optics, a polarization intensity module, an interference imaging module, and a CCD data collecting and processing module. The spectral range is visible and near-infrared (480-950 nm). A white board and a toy plane were imaged using the experimental system, verifying its ability to obtain spectral polarization imaging information. A calibration system for the static polarization modulation was set up, and the statistical error of the degree-of-polarization measurement is less than 5%. The validity and feasibility of the basic principle are demonstrated by the experimental results. The spectral polarization data captured by the system can be applied to object identification, object classification and remote sensing detection.
Cone-beam x-ray luminescence computed tomography based on x-ray absorption dosage.
Liu, Tianshuai; Rong, Junyan; Gao, Peng; Zhang, Wenli; Liu, Wenlei; Zhang, Yuanke; Lu, Hongbing
2018-02-01
With the advances of x-ray excitable nanophosphors, x-ray luminescence computed tomography (XLCT) has become a promising hybrid imaging modality. In particular, a cone-beam XLCT (CB-XLCT) system has demonstrated its potential in in vivo imaging with the advantage of fast imaging speed over other XLCT systems. Currently, the imaging models of most XLCT systems assume that nanophosphors emit light based on the intensity distribution of x-ray within the object, not completely reflecting the nature of the x-ray excitation process. To improve the imaging quality of CB-XLCT, an imaging model that adopts an excitation model of nanophosphors based on x-ray absorption dosage is proposed in this study. To solve the ill-posed inverse problem, a reconstruction algorithm that combines the adaptive Tikhonov regularization method with the imaging model is implemented for CB-XLCT reconstruction. Numerical simulations and phantom experiments indicate that compared with the traditional forward model based on x-ray intensity, the proposed dose-based model could improve the image quality of CB-XLCT significantly in terms of target shape, localization accuracy, and image contrast. In addition, the proposed model behaves better in distinguishing closer targets, demonstrating its advantage in improving spatial resolution. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
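The Tikhonov-regularized step at the heart of such a reconstruction solves a damped least-squares problem; a generic (non-adaptive) version is sketched below for a small dense system purely to show the structure. The stand-in system matrix and regularization weight are placeholders, and the paper's adaptive choice of the regularization parameter is not reproduced.

```python
import numpy as np

def tikhonov_solve(A, b, lam=1e-2):
    """Solve min_x ||Ax - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 50))                  # stand-in forward (system) matrix
x_true = np.zeros(50); x_true[10:15] = 1.0      # sparse luminescent target
b = A @ x_true + 0.01 * rng.normal(size=200)    # simulated noisy measurements
x_rec = tikhonov_solve(A, b, lam=0.1)
```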
Auroral Observations from the POLAR Ultraviolet Imager (UVI)
NASA Technical Reports Server (NTRS)
Germany, G. A.; Spann, J. F.; Parks, G. K.; Brittnacher, M. J.; Elsen, R.; Chen, L.; Lummerzheim, D.; Rees, M. H.
1998-01-01
Because of the importance of the auroral regions as a remote diagnostic of near-Earth plasma processes and magnetospheric structure, space-based instrumentation for imaging the auroral regions has been designed and operated for the last twenty-five years. The latest generation of imagers, including those flown on the POLAR satellite, extends this quest for multispectral resolution by providing three separate imagers for visible, ultraviolet, and X-ray imaging of the aurora. The ability to observe extended regions allows imaging missions to significantly extend the observations available from in situ or ground-based instrumentation. The complementary nature of imaging and other observations is illustrated below using results from the GGS Ultraviolet Imager (UVI). Details of the requisite energy and intensity analysis are also presented.
NASA Astrophysics Data System (ADS)
Zhang, Dongqing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.
2016-03-01
Cochlear Implants (CIs) are electrode arrays that are surgically inserted into the cochlea. Individual contacts stimulate frequency-mapped nerve endings thus replacing the natural electro-mechanical transduction mechanism. CIs are programmed post-operatively by audiologists but this is currently done using behavioral tests without imaging information that permits relating electrode position to inner ear anatomy. We have recently developed a series of image processing steps that permit the segmentation of the inner ear anatomy and the localization of individual contacts. We have proposed a new programming strategy that uses this information and we have shown in a study with 68 participants that 78% of long term recipients preferred the programming parameters determined with this new strategy. A limiting factor to the large scale evaluation and deployment of our technique is the amount of user interaction still required in some of the steps used in our sequence of image processing algorithms. One such step is the rough registration of an atlas to target volumes prior to the use of automated intensity-based algorithms when the target volumes have very different fields of view and orientations. In this paper we propose a solution to this problem. It relies on a random forest-based approach to automatically localize a series of landmarks. Our results obtained from 83 images with 132 registration tasks show that automatic initialization of an intensity-based algorithm proves to be a reliable technique to replace the manual step.
POLYSITE - An interactive package for the selection and refinement of Landsat image training sites
NASA Technical Reports Server (NTRS)
Mack, Marilyn J. P.
1986-01-01
A versatile multifunction package, POLYSITE, developed for Goddard's Land Analysis System, is described, which simplifies the process of interactively selecting and correcting the sites used to study Landsat TM and MSS images. Image switching between the zoomed and nonzoomed image, cursor color and shape change and location display, and bit plane erase or color change are global functions that are active at all times. Local functions include manipulation of intensive study areas, new site definition, mensuration, and new image copying. The program is illustrated with the example of a full TM scene of metropolitan Washington, DC.
Rohlfing, Torsten; Schaupp, Frank; Haddad, Daniel; Brandt, Robert; Haase, Axel; Menzel, Randolf; Maurer, Calvin R
2005-01-01
Confocal microscopy (CM) is a powerful image acquisition technique that is well established in many biological applications. It provides 3-D acquisition with high spatial resolution and can acquire several different channels of complementary image information. Due to the specimen extraction and preparation process, however, the shapes of imaged objects may differ considerably from their in vivo appearance. Magnetic resonance microscopy (MRM) is an evolving variant of magnetic resonance imaging, which achieves microscopic resolutions using a high magnetic field and strong magnetic gradients. Compared to CM imaging, MRM allows for in situ imaging and is virtually free of geometrical distortions. We propose to combine the advantages of both methods by unwarping CM images using a MRM reference image. Our method incorporates a sequence of image processing operators applied to the MRM image, followed by a two-stage intensity-based registration to compute a nonrigid coordinate transformation between the CM images and the MRM image. We present results obtained using CM images from the brains of 20 honey bees and a MRM image of an in situ bee brain. Copyright 2005 Society of Photo-Optical Instrumentation Engineers.
Pseudo-color coding method for high-dynamic single-polarization SAR images
NASA Astrophysics Data System (ADS)
Feng, Zicheng; Liu, Xiaolin; Pei, Bingzhi
2018-04-01
A raw synthetic aperture radar (SAR) image usually has a 16-bit or higher bit depth, which cannot be directly visualized on 8-bit displays. In this study, we propose a pseudo-color coding method for high-dynamic single-polarization SAR images. The method considers the characteristics of both SAR images and human perception. In HSI (hue, saturation and intensity) color space, the method carries out high-dynamic range tone mapping and pseudo-color processing simultaneously in order to avoid loss of details and to improve object identifiability. It is a highly efficient global algorithm.
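A toy version of the idea: compress the 16-bit dynamic range with a log-style tone mapping, drive hue from the mapped intensity while keeping saturation fixed, and convert to RGB for display. HSV is used here as a convenient stand-in for the paper's HSI space, and the hue range and saturation value are arbitrary choices.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def pseudo_color_sar(sar16, hue_range=(0.66, 0.0), saturation=0.8):
    """Map a 16-bit SAR amplitude image to an 8-bit pseudo-color RGB image."""
    amp = sar16.astype(float)
    tone = np.log1p(amp) / np.log1p(amp.max())           # high-dynamic-range tone mapping to [0, 1]
    h0, h1 = hue_range
    hsv = np.stack([h0 + (h1 - h0) * tone,               # hue follows the mapped intensity
                    np.full_like(tone, saturation),       # fixed saturation
                    tone], axis=-1)                       # value/intensity channel
    return (hsv_to_rgb(hsv) * 255).astype(np.uint8)
```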
Detection of a concealed object
Keller, Paul E [Richland, WA; Hall, Thomas E [Kennewick, WA; McMakin, Douglas L [Richland, WA
2010-11-16
Disclosed are systems, methods, devices, and apparatus to determine if a clothed individual is carrying a suspicious, concealed object. This determination includes establishing data corresponding to an image of the individual through interrogation with electromagnetic radiation in the 200 MHz to 1 THz range. In one form, image data corresponding to intensity of reflected radiation and differential depth of the reflecting surface is received and processed to detect the suspicious, concealed object.
Detection of a concealed object
Keller, Paul E [Richland, WA; Hall, Thomas E [Kennewick, WA; McMakin, Douglas L [Richland, WA
2008-04-29
Disclosed are systems, methods, devices, and apparatus to determine if a clothed individual is carrying a suspicious, concealed object. This determination includes establishing data corresponding to an image of the individual through interrogation with electromagnetic radiation in the 200 MHz to 1 THz range. In one form, image data corresponding to intensity of reflected radiation and differential depth of the reflecting surface is received and processed to detect the suspicious, concealed object.
NASA Astrophysics Data System (ADS)
Retheesh, R.; Ansari, Md. Zaheer; Radhakrishnan, P.; Mujeeb, A.
2018-03-01
This study demonstrates the feasibility of a view-based method, the motion history image (MHI), to map biospeckle activity around the scar region in a green orange fruit. The comparison of MHI with routine intensity-based methods validated the effectiveness of the proposed method. The results show that MHI can be implemented as an alternative online image processing tool in biospeckle analysis.
Ezquerro, Fernando; Moffa, Adriano H; Bikson, Marom; Khadka, Niranjan; Aparicio, Luana V M; de Sampaio-Junior, Bernardo; Fregni, Felipe; Bensenor, Isabela M; Lotufo, Paulo A; Pereira, Alexandre Costa; Brunoni, Andre R
2017-04-01
To evaluate whether and to what extent skin redness (erythema) affects investigator blinding in transcranial direct current stimulation (tDCS) trials. Twenty-six volunteers received sham and active tDCS, which was applied with saline-soaked sponges of different thicknesses. High-resolution skin images, taken before and 5, 15, and 30 min after stimulation, were randomized and presented to experienced raters who evaluated erythema intensity and judged the likelihood of the stimulation condition (sham vs. active). In addition, semi-automated image processing generated probability heatmaps and surface area coverage of erythema. Adverse events were also collected. Erythema was present, but less intense, in the sham group compared with the active groups. Erythema intensity was inversely and directly associated with correct sham and active stimulation group allocation, respectively. Our image analyses found that erythema also occurs after sham stimulation and that its distribution is homogeneous below the electrodes. Tingling frequency was higher using thin compared to thick sponges, whereas erythema was more intense under thick sponges. Optimal investigator blinding is achieved when erythema after tDCS is mild. Erythema distribution under the electrode is patchy, occurs after sham tDCS, and varies according to sponge thickness. We discuss methods to address skin erythema-related tDCS unblinding. © 2016 International Neuromodulation Society.
Research on the lesion segmentation of breast tumor MR images based on FCM-DS theory
NASA Astrophysics Data System (ADS)
Zhang, Liangbin; Ma, Wenjun; Shen, Xing; Li, Yuehua; Zhu, Yuemin; Chen, Li; Zhang, Su
2017-03-01
Magnetic resonance imaging (MRI) plays an important role in the treatment of breast tumors by high-intensity focused ultrasound (HIFU). Doctors evaluate the size, distribution, and benign or malignant status of a breast tumor by analyzing various MRI modalities, such as T2, DWI, and DCE images, to make an accurate preoperative treatment plan and to evaluate the effect of the operation. This paper presents a method for lesion segmentation of breast tumors based on FCM-DS theory. The fuzzy c-means (FCM) clustering algorithm combined with Dempster-Shafer (DS) theory is used to handle the uncertainty of information, segmenting the lesion areas on the DWI and DCE modalities of MRI and reducing the extent of the uncertain parts. Experimental results show that FCM-DS can fuse the DWI and DCE images to achieve accurate segmentation and indicate whether the lesion area is benign or malignant through the time-intensity curve (TIC), which could be beneficial for making a preoperative treatment plan and evaluating the effect of the therapy.
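For readers unfamiliar with the clustering component, the following is a minimal fuzzy c-means sketch on scalar pixel intensities; the Dempster-Shafer fusion of the DWI and DCE memberships described in the paper is not reproduced, and the cluster count and fuzzifier are assumptions.

```python
# Minimal fuzzy c-means (FCM) sketch on pixel intensities only.
import numpy as np

def fcm(intensities, n_clusters=3, m=2.0, n_iter=50, seed=0):
    x = intensities.reshape(-1, 1).astype(np.float64)
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                       # fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]      # weighted cluster centres
        d = np.abs(x - centers.T) + 1e-12                   # distances to each centre
        u = 1.0 / (d ** (2.0 / (m - 1.0)))                  # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers
```

The returned membership matrix (one row per pixel, one column per cluster) is what a DS-style fusion step would then combine across modalities.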
Application of image converter camera to measure flame propagation in S. I. engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakamura, A.; Ishii, K.; Sasaki, T.
1989-01-01
A combustion flame visualization system, for use as an engine diagnostics tool, was developed in order to evaluate combustion chamber shapes in the development stage of mass-produced spark ignition (S.I.) engines. The system consists of an image converter camera and a computer-aided image processing system. The system is capable of high-speed photography (10,000 fps) at low light intensity (1,000 cd/m²), and of real-time display of the raw images of combustion flames. Using this system, the flame structure estimated from the brightness level on a photograph and the direction of flame propagation in a mass-produced 4-valve engine were measured. Differences in the structure and propagation of the flame between the 4-valve and quasi-2-valve combustion chambers, which have the same pressure diagram, were detected. The quasi-2-valve configuration was adopted in order to improve swirl intensity.
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter
2001-08-01
In the last few years, more and more university hospitals as well as private hospitals have changed to digital information systems for patient records, diagnostic files, and digital images. Not only does patient management become easier, it is also remarkable how much clinical research can profit from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially from image databases. Although images are available at the fingertips, difficulties arise when image data need to be processed, e.g. segmented, classified or co-registered, which usually demands a lot of computational power. Today's clinical environment supports PACS very well, but real image processing is still under-developed. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique, we present our clinical experience with the crucial but cost-intensive motion correction of clinical routine and research functional MRI (fMRI) data, as it is processed in our lab on a daily basis.
Efficiency analysis for 3D filtering of multichannel images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2016-10-01
Modern remote sensing systems typically acquire images that are multichannel (dual- or multi-polarization, multi- and hyperspectral), where noise, usually with different characteristics, is present in all components. If the noise is intensive, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches - component-wise or vectorial (3D) filtering. The second approach has shown higher efficiency when there is essential correlation between multichannel image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images united into a 3D data array influences the efficiency of filtering and whether the observed tendencies can be exploited in the processing of images with a rather large number of channels.
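To make the vectorial (3D) filtering idea concrete, here is a minimal block-wise 3-D DCT hard-threshold denoising sketch; the block size, threshold, and noise level are assumptions, and the specific filter studied in the paper is not reproduced.

```python
# Illustrative 3-D DCT hard-threshold denoising of a multichannel image stack,
# with the channel dimension grouped into the transform.
import numpy as np
from scipy.fft import dctn, idctn

def dct3d_denoise(stack, block=8, threshold=3.0, sigma=1.0):
    """stack: (H, W, C) array of co-registered channel images."""
    out = np.zeros_like(stack, dtype=np.float64)
    h, w, _ = stack.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            cube = stack[i:i + block, j:j + block, :].astype(np.float64)
            coef = dctn(cube, norm='ortho')                 # 3-D DCT of the block
            coef[np.abs(coef) < threshold * sigma] = 0.0    # hard thresholding
            out[i:i + block, j:j + block, :] = idctn(coef, norm='ortho')
    return out
```

Component-wise filtering corresponds to applying the same thresholding with a 2-D DCT per channel; the 3-D variant exploits inter-channel correlation by transforming across channels as well.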
Automatic anatomy recognition in whole-body PET/CT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Huiqian; Udupa, Jayaram K., E-mail: jay@mail.med.upenn.edu; Odhner, Dewey
Purpose: Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. However, body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organwise. This latter process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic details in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., "Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images," Med. Image Anal. 18, 752–771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images. Methods: The authors advance the AAR approach in this work in three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to whole body; (ii) from the use of image intensity in optimal object recognition in the work of Udupa et al. to intensity plus object-specific texture properties; and (iii) from the intramodality model-building-recognition strategy to the intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image, and in the process, to bring performance to the level achieved on diagnostic CT and MR images in body-region-wise approaches. The intermodality approach fosters the use of already existing fuzzy models, previously created from diagnostic CT images, on PET/CT and other derived images, thus truly separating the modality-independent object assembly anatomy from modality-specific tissue property portrayal in the image. Results: Key ways of combining the above three basic ideas lead them to 15 different strategies for recognizing objects in PET/CT images. Utilizing 50 diagnostic CT image data sets from the thoracic and abdominal body regions and 16 whole-body PET/CT image data sets, the authors compare the recognition performance among these 15 strategies on 18 objects from the thorax, abdomen, and pelvis in terms of object localization error and size estimation error. Particularly on texture membership images, object localization is within three voxels on whole-body low-dose CT images and two voxels on body-region-wise low-dose images of the known true locations. Surprisingly, even on direct body-region-wise PET images, localization error within three voxels seems possible. Conclusions: The previous body-region-wise approach can be extended to the whole-body torso with similar object localization performance. Combined use of image texture and intensity properties yields the best object localization accuracy. In both body-region-wise and whole-body approaches, recognition performance on low-dose CT images reaches levels previously achieved on diagnostic CT images.
The best object recognition strategy varies among objects; the proposed framework, however, allows employing a strategy that is optimal for each object.
Coherent diffraction surface imaging in reflection geometry.
Marathe, Shashidhara; Kim, S S; Kim, S N; Kim, Chan; Kang, H C; Nickles, P V; Noh, D Y
2010-03-29
We present a reflection-based coherent diffraction imaging method which can be used to reconstruct a nonperiodic surface image from a diffraction amplitude measured in reflection geometry. Using a He-Ne laser, we demonstrated that a surface image can be reconstructed solely from the intensity reflected from a surface, without relying on any prior knowledge of the sample object or the object support. The reconstructed phase image of the exit wave is particularly interesting since it can be used to obtain quantitative information on the surface depth profile or the phase change during the reflection process. We believe that this work will broaden the application areas of coherent diffraction imaging techniques using light sources with limited penetration depth.
Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.
2017-01-01
Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully-automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kemp, B.
2016-06-15
Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kofler, J.
2016-06-15
Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pooley, R.
2016-06-15
Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.
An incompressible fluid flow model with mutual information for MR image registration
NASA Astrophysics Data System (ADS)
Tsai, Leo; Chang, Herng-Hua
2013-03-01
Image registration is one of the fundamental and essential tasks within image processing. It is the process of determining the correspondence between structures in two images, called the template image and the reference image, respectively. The challenge of registration is to find an optimal geometric transformation between corresponding image data. This paper develops a new MR image registration algorithm that uses a closed incompressible viscous fluid model associated with mutual information. In our approach, we treat the image pixels as the fluid elements of a viscous fluid flow governed by the nonlinear Navier-Stokes partial differential equation (PDE). We replace the pressure term with a body force, mainly used to guide the transformation, whose weighting coefficient is expressed by the mutual information between the template and reference images. To solve this modified Navier-Stokes PDE, we adopted the fast numerical techniques proposed by Seibold. The registration process of updating the body force, the velocity, and the deformation fields is repeated until the mutual information weight reaches a prescribed threshold. We applied our approach to the BrainWeb and real MR images. Consistent with the theory of the proposed fluid model, we found that our method accurately transformed the template images into the reference images based on the intensity flow. Experimental results indicate that our method has potential in a wide variety of medical image registration applications.
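The mutual-information weight that drives the body force can be computed from a joint intensity histogram. The sketch below shows only that term (the fluid-model PDE solver is not reproduced), and the bin count is an assumption.

```python
# Minimal mutual-information estimate between template and reference images,
# computed from a joint intensity histogram.
import numpy as np

def mutual_information(template, reference, bins=64):
    joint, _, _ = np.histogram2d(template.ravel(), reference.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In the scheme the abstract describes, this scalar would be re-evaluated after each deformation update and used as the weighting coefficient of the body force until it crosses the prescribed threshold.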
Lateral scattered light used to study laser light propagation in turbid media phantoms
NASA Astrophysics Data System (ADS)
Valdes, Claudia; Solarte, Efrain
2010-02-01
Laser light propagation in soft tissues is important because of the growing biomedical applications of lasers and the need to optically characterize biological media. Following previous developments by the group, we have developed low-cost models (phantoms) of soft tissue. The process was carried out in a clean room to avoid contamination of the medium. Each model was characterized by measuring the refractive index and the spectral reflectance and transmittance. To study laser light propagation, each model was illuminated with a clean beam of laser light, using sources such as a He-Ne laser (632 nm) and a DPSSL (473 nm). Laterally scattered light was imaged, and these images were digitally processed. We analyzed the intensity distribution of the scattered radiation in order to obtain details of the beam evolution in the medium. Line profiles taken from the intensity distribution surface allow measurement of the beam spread and the derivation of expressions for the longitudinal (along the incident beam direction) and transverse (across the incident beam direction) intensity distributions. From these behaviors, the radiation penetration depth and the total extinction coefficient have been determined. The multiple scattering effects were remarkable, especially for the shorter-wavelength laser beam.
Qing, Zhao-shen; Ji, Bao-ping; Shi, Bo-lin; Zhu, Da-zhou; Tu, Zhen-hua; Zude, Manuela
2008-06-01
In the present study, improved laser-induced light backscattering imaging was studied regarding its potential for analyzing apple SSC and fruit flesh firmness. Images of the diffuse reflection of light on the fruit surface were obtained from Fuji apples using laser diodes emitting at five wavelength bands (680, 780, 880, 940 and 980 nm). Image processing algorithms were tested to correct for the dissimilar equator and shape of the fruit, and partial least squares (PLS) regression analysis was applied to calibrate on the fruit quality parameters. In comparison to models built from raw data, calibration based on corrected frequency improved r from 0.78 to 0.80 and from 0.87 to 0.89 for predicting SSC and firmness, respectively. Comparing models based on the mean value of intensities with results obtained from the frequency of intensities, the latter gave higher performance for predicting Fuji SSC and firmness. Comparing the calibration for predicting SSC based on the corrected frequency of intensities with the results obtained from the raw data set, the former improved the root mean square error of prediction (RMSEP) from 1.28 to 0.84 degrees Brix. On the other hand, in comparison to models for analyzing flesh firmness built from the corrected frequency of intensities versus calibrations based on raw data, the former improved the RMSEP from 8.23 to 6.17 N x cm(-2).
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to adaptively enhance low-contrast images. Contrast enhancement is obtained by a global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
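The incomplete-Beta transformation at the core of the enhancement can be written compactly. The sketch below applies it for fixed example parameters (a, b) and omits the CS-PSO search and the fitness criterion described in the abstract.

```python
# Sketch of the incomplete-Beta grey-level transform used as the enhancement
# mapping; the parameters (a, b) are illustrative, not optimized values.
import numpy as np
from scipy.special import betainc

def beta_transform(image, a=2.0, b=1.5):
    x = image.astype(np.float64)
    u = (x - x.min()) / (x.max() - x.min() + 1e-12)   # normalise intensities to [0, 1]
    v = betainc(a, b, u)                              # regularised incomplete Beta mapping
    return (v * 255).astype(np.uint8)
```

In the paper's scheme, the optimizer would search over (a, b) (and any local-enhancement parameters) to maximize the quality criterion; the transform itself stays the same.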
Rotation Covariant Image Processing for Biomedical Applications
Reisert, Marco
2013-01-01
With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands an automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied on a variety of different 3D data modalities stemming from medical and biological sciences. PMID:23710255
Classification of melanoma lesions using sparse coded features and random forests
NASA Astrophysics Data System (ADS)
Rastgoo, Mojdeh; Lemaître, Guillaume; Morel, Olivier; Massich, Joan; Garcia, Rafael; Meriaudeau, Fabrice; Marzani, Franck; Sidibé, Désiré
2016-03-01
Malignant melanoma is the most dangerous type of skin cancer, yet it is also the most treatable, provided it is diagnosed early, which is a challenging task for clinicians and dermatologists. In this regard, CAD systems based on machine learning and image processing techniques have been developed to differentiate melanoma lesions from benign and dysplastic nevi using dermoscopic images. Generally, these frameworks are composed of sequential processes: pre-processing, segmentation, and classification. This architecture faces two main challenges: (i) each process is complex, requires tuning a set of parameters, and is specific to a given dataset; and (ii) the performance of each process depends on the previous one, so errors accumulate throughout the framework. In this paper, we propose a framework for melanoma classification based on sparse coding which does not rely on any pre-processing or lesion segmentation. Our framework uses a Random Forests classifier and sparse representations of three features: SIFT, Hue and Opponent angle histograms, and RGB intensities. The experiments are carried out on the public PH2 dataset using 10-fold cross-validation. The results show that the SIFT sparse-coded feature achieves the highest performance, with sensitivity and specificity of 100% and 90.3%, respectively, with a dictionary size of 800 atoms and a sparsity level of 2. Furthermore, the descriptor based on RGB intensities achieves similar results, with sensitivity and specificity of 100% and 71.3%, respectively, for a smaller dictionary size of 100 atoms. In conclusion, dictionary learning techniques encode strong structures of dermoscopic images and provide discriminant descriptors.
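A hedged sketch of the sparse-coding plus Random Forests pipeline follows: a dictionary is learned on descriptor vectors, each image is encoded with a sparsity level of 2 and max-pooled, and a Random Forest is trained on the pooled codes. Descriptor extraction (SIFT, colour histograms) is assumed to happen elsewhere, and the max-pooling choice is an assumption rather than the paper's exact design.

```python
# Dictionary learning + OMP sparse coding + Random Forest classification sketch.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode
from sklearn.ensemble import RandomForestClassifier

def encode_images(descriptor_sets, n_atoms=100, sparsity=2):
    """descriptor_sets: list with one (n_desc, dim) array per image."""
    all_desc = np.vstack(descriptor_sets)
    dico = DictionaryLearning(n_components=n_atoms, random_state=0, max_iter=200)
    dico.fit(all_desc)
    pooled = []
    for d in descriptor_sets:
        codes = sparse_encode(d, dico.components_, algorithm='omp',
                              n_nonzero_coefs=sparsity)
        pooled.append(np.abs(codes).max(axis=0))            # max-pooling over descriptors
    return np.vstack(pooled)

def train_classifier(features, labels):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return clf.fit(features, labels)
```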
NASA Astrophysics Data System (ADS)
Murakoshi, Dai; Hirota, Kazuhiro; Ishii, Hiroyasu; Hashimoto, Atsushi; Ebata, Tetsurou; Irisawa, Kaku; Wada, Takatsugu; Hayakawa, Toshiro; Itoh, Kenji; Ishihara, Miya
2018-02-01
Photoacoustic (PA) imaging technology is expected to be applied to the clinical assessment of peripheral vascularity. We started a clinical evaluation with the prototype PA imaging system we recently developed. The prototype PA imaging system was composed of an in-house Q-switched Alexandrite laser emitting short pulses at a wavelength of 750 nm, a handheld ultrasound transducer with integrated illumination optics, and signal processing for PA image reconstruction implemented in the clinical ultrasound (US) system. For the purpose of quantitative assessment of PA images, an image analysis function was developed and applied to clinical PA images. In this analysis function, vascularity, derived from PA signal intensity within a prescribed threshold range, was defined as a numerical index of vessel fulfillment and calculated for a prescribed region of interest (ROI). The skin surface was automatically detected by utilizing the B-mode image acquired simultaneously with the PA image. The skin-surface position is utilized to place the ROI objectively while avoiding unwanted signals such as artifacts caused by melanin pigment in the epidermal layer, which absorbs the laser emission and generates strong PA signals. Multiple images were available to support the scanned image set for 3D viewing. PA images of several fingers of patients with systemic sclerosis (SSc) were quantitatively assessed. Since the artifact region is trimmed off in the PA images, the visibility of vessels with rather low PA signal intensity in the 3D projection image was enhanced and the reliability of the quantitative analysis was improved.
Gamma activity modulated by naming of ambiguous and unambiguous images: intracranial recording
Cho-Hisamoto, Yoshimi; Kojima, Katsuaki; Brown, Erik C; Matsuzaki, Naoyuki; Asano, Eishi
2014-01-01
OBJECTIVE: Humans sometimes need to recognize objects based on vague and ambiguous silhouettes. Recognition of such images may require an intuitive guess. We determined the spatial-temporal characteristics of intracranially-recorded gamma activity (at 50–120 Hz) augmented differentially by naming of ambiguous and unambiguous images. METHODS: We studied ten patients who underwent epilepsy surgery. Ambiguous and unambiguous images were presented during extraoperative electrocorticography recording, and patients were instructed to overtly name the object as it was first perceived. RESULTS: Both naming tasks were commonly associated with gamma-augmentation sequentially involving the occipital and occipital-temporal regions, bilaterally, within 200 ms after the onset of image presentation. Naming of ambiguous images elicited gamma-augmentation specifically involving portions of the inferior-frontal, orbitofrontal, and inferior-parietal regions at 400 ms and after. Unambiguous images were associated with more intense gamma-augmentation in portions of the occipital and occipital-temporal regions. CONCLUSIONS: Frontal-parietal gamma-augmentation specific to ambiguous images may reflect the additional cortical processing involved in exerting an intuitive guess. Occipital gamma-augmentation enhanced during naming of unambiguous images can be explained by visual processing of stimuli with richer detail. SIGNIFICANCE: Our results support the theoretical model that guessing processes in the visual domain occur following the accumulation of sensory evidence resulting from bottom-up processing in the occipital-temporal visual pathways. PMID:24815577
High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces
NASA Astrophysics Data System (ADS)
Jiang, Hongzhi; Zhao, Huijie; Li, Xudong
2012-10-01
This paper presents a novel 3-D scanning technique for high-reflective surfaces based on the phase-shifting fringe projection method. A high dynamic range fringe acquisition (HDRFA) technique is developed to process the fringe images reflected from shiny surfaces; it generates a synthetic fringe image by fusing raw fringe patterns acquired with different camera exposure times and different illumination fringe intensities from the projector. A fringe image fusion algorithm is introduced to avoid saturation and under-illumination by choosing, for each pixel, the raw fringe with the highest fringe modulation intensity. A method for the automatic selection of HDRFA parameters is developed, which greatly increases measurement automation. By optimizing the HDRFA parameters, the synthetic fringes attain a higher signal-to-noise ratio (SNR) under ambient light. Experimental results show that the proposed technique can successfully measure objects with high-reflective surfaces and is insensitive to ambient light.
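The pixel-wise fusion rule can be illustrated as follows; the sketch uses the standard N-step phase-shifting modulation formula and a simple saturation check, and is not the paper's exact parameter-selection procedure.

```python
# Fringe fusion sketch: for each pixel, keep the phase-shifted fringe set from the
# exposure whose fringe modulation is highest, excluding saturated exposures.
import numpy as np

def fringe_modulation(fringes):
    """fringes: (N, H, W) stack of N-step phase-shifted images."""
    n = fringes.shape[0]
    phases = 2 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(phases), fringes, axes=1)
    c = np.tensordot(np.cos(phases), fringes, axes=1)
    return 2.0 / n * np.sqrt(s ** 2 + c ** 2)               # standard modulation estimate

def fuse_exposures(exposure_stacks, saturation=250):
    """exposure_stacks: list of (N, H, W) stacks, one per exposure/intensity setting."""
    mods = []
    for stack in exposure_stacks:
        m = fringe_modulation(stack.astype(np.float64))
        m[np.any(stack >= saturation, axis=0)] = -1.0       # reject saturated pixels
        mods.append(m)
    best = np.argmax(np.stack(mods), axis=0)                # (H, W) index of best exposure
    return np.choose(best[None, :, :],
                     [s.astype(np.float64) for s in exposure_stacks])
```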
RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy
NASA Astrophysics Data System (ADS)
Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.
2016-02-01
We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.
A framework for small infrared target real-time visual enhancement
NASA Astrophysics Data System (ADS)
Sun, Xiaoliang; Long, Gucan; Shang, Yang; Liu, Xiaolin
2015-03-01
This paper proposes a framework for real-time visual enhancement of small infrared targets. The framework consists of three parts: energy accumulation for small infrared target enhancement, noise suppression, and weighted fusion. A dynamic-programming-based track-before-detect algorithm is adopted in the energy accumulation to detect the target accurately and enhance the target's intensity notably. In the noise suppression, the target region is weighted by a Gaussian mask according to the target's Gaussian shape. In order to fuse the processed target region and the unprocessed background smoothly, the intensity in the target region is treated as the weight in the fusion. Experiments on real small-infrared-target images indicate that the proposed framework enhances the small infrared target markedly and improves the image's visual quality notably. The proposed framework outperforms traditional algorithms in enhancing small infrared targets, especially for images in which the target is hardly visible.
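The weighted-fusion step can be sketched as below, blending the enhanced target region into the unprocessed background with a Gaussian mask whose weight follows the normalised enhanced intensity; the mask extent and sigma are assumptions, not the paper's values.

```python
# Gaussian-weighted fusion sketch for blending an enhanced target region into the
# unprocessed background.
import numpy as np

def gaussian_mask(shape, center, sigma):
    y, x = np.ogrid[:shape[0], :shape[1]]
    return np.exp(-((y - center[0]) ** 2 + (x - center[1]) ** 2) / (2 * sigma ** 2))

def fuse_target(background, enhanced, center, sigma=5.0):
    mask = gaussian_mask(background.shape, center, sigma)
    w = mask * (enhanced / (enhanced.max() + 1e-12))      # intensity-driven fusion weight
    return (1.0 - w) * background + w * enhanced
```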
NASA Astrophysics Data System (ADS)
Jamlongkul, P.; Wannawichian, S.
2017-12-01
Earth's aurora in the low-latitude region was studied via time variations of oxygen emission spectra, simultaneously with solar wind data. The behavior of the spectral intensity, in correspondence with solar wind conditions, could be a trace of aurora in the low-latitude region, including some effects of highly energetic auroral particles. Oxygen emission spectral lines were observed with the Medium Resolution Echelle Spectrograph (MRES) on the 2.4-m diameter telescope at the Thai National Observatory, Inthanon Mountain, Chiang Mai, Thailand, during 1-5 LT on 5 and 6 February 2017. The observed spectral lines were calibrated with the Dech95 2D image processing program and the Dech-Fits spectra processing program, for spectrum image processing and spectrum wavelength calibration, respectively. The variations of the observed intensities on each day were compared with solar wind parameters, namely the magnitude of the IMF (|BIMF|) and its components in RTN coordinates (BR, BT, BN), the ion density (ρ), the plasma flow pressure (P), and the speed (v). The correlation coefficients between the oxygen spectral emissions and the different solar wind parameters were found to vary in both positive and negative directions.
Theoretical Studies of a Transient Stimulated Raman Amplifier
1988-04-19
The plotted results are organized as follows: (I) contour plot of pump intensity, with sections of pump intensity, pump phase, and pump amplitude (real/imag); (II) contour plot of pump FFT intensity, with sections of pump FFT intensity, phase, and amplitude (real/imag); (III) contour plot of Stokes intensity, with sections of Stokes intensity, phase, and amplitude (real/imag); (IV) contour plot ...
Quantifying facial expression signal and intensity use during development.
Rodger, Helen; Lao, Junpeng; Caldara, Roberto
2018-06-12
Behavioral studies investigating facial expression recognition during development have applied various methods to establish by which age emotional expressions can be recognized. Most commonly, these methods employ static images of expressions at their highest intensity (apex) or morphed expressions of different intensities, but they have not previously been compared. Our aim was to (a) quantify the intensity and signal use for recognition of six emotional expressions from early childhood to adulthood and (b) compare both measures and assess their functional relationship to better understand the use of different measures across development. Using a psychophysical approach, we isolated the quantity of signal necessary to recognize an emotional expression at full intensity and the quantity of expression intensity (using neutral expression image morphs of varying intensities) necessary for each observer to recognize the six basic emotions while maintaining performance at 75%. Both measures revealed that fear and happiness were the most difficult and easiest expressions to recognize across age groups, respectively, a pattern already stable during early childhood. The quantity of signal and intensity needed to recognize sad, angry, disgust, and surprise expressions decreased with age. Using a Bayesian update procedure, we then reconstructed the response profiles for both measures. This analysis revealed that intensity and signal processing are similar only during adulthood and, therefore, cannot be straightforwardly compared during development. Altogether, our findings offer novel methodological and theoretical insights and tools for the investigation of the developing affective system. Copyright © 2018 Elsevier Inc. All rights reserved.
WE-G-18C-05: Characterization of Cross-Vendor, Cross-Field Strength MR Image Intensity Variations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, E; Prah, D
2014-06-15
Purpose: Variations in MR image intensity and image intensity nonuniformity (IINU) can challenge the accuracy of intensity-based image segmentation and registration algorithms commonly applied in radiotherapy. The goal of this work was to characterize MR image intensity variations across scanner vendors and field strengths commonly used in radiotherapy. Methods: ACR-MRI phantom images were acquired at 1.5T and 3.0T on GE (450w and 750, 23.1), Siemens (Espree and Verio, VB17B), and Philips (Ingenia, 4.1.3) scanners using commercial spin-echo sequences with matched parameters (TE/TR: 20/500 ms, rBW: 62.5 kHz, TH/skip: 5/5 mm). Two radiofrequency (RF) coil combinations were used for each scanner: body coil alone, and combined body and phased-array head coils. Vendor-specific B1- corrections (PURE/Pre-Scan Normalize/CLEAR) were applied in all head coil cases. Images were transferred offline, corrected for IINU using the MNI N3 algorithm, and normalized. Coefficients of variation (CV=σ/μ) and peak image uniformity (PIU = 1−(Smax−Smin)/(Smax+Smin)) estimates were calculated for one homogeneous phantom slice. Kruskal-Wallis and Wilcoxon matched-pairs tests compared mean MR signal intensities and differences between original and N3 image CV and PIU. Results: Wide variations in both MR image intensity and IINU were observed across scanner vendors, field strengths, and RF coil configurations. Applying the MNI N3 correction for IINU resulted in significant improvements in both CV and PIU (p=0.0115, p=0.0235). However, wide variations in overall image intensity persisted, requiring image normalization to improve consistency across vendors, field strengths, and RF coils. These results indicate that B1- correction routines alone may be insufficient in compensating for IINU and image scaling, warranting additional corrections prior to use of MR images in radiotherapy. Conclusions: MR image intensities and IINU vary as a function of scanner vendor, field strength, and RF coil configuration. A two-step strategy consisting of MNI N3 correction followed by normalization was required to improve MR image consistency. Funding provided by Advancing a Healthier Wisconsin.
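The two uniformity metrics quoted above are straightforward to compute over a homogeneous phantom ROI. In the sketch below, the percentile clipping of Smax and Smin is an assumption added to reduce sensitivity to single noisy voxels, not part of the abstract's definition.

```python
# Coefficient of variation (CV = sigma/mu) and peak image uniformity
# (PIU = 1 - (Smax - Smin) / (Smax + Smin)) over a homogeneous phantom ROI.
import numpy as np

def coefficient_of_variation(roi):
    roi = np.asarray(roi, dtype=np.float64)
    return roi.std() / roi.mean()

def peak_image_uniformity(roi, clip=0.1):
    roi = np.asarray(roi, dtype=np.float64)
    s_min = np.percentile(roi, clip)          # robust stand-ins for Smin and Smax
    s_max = np.percentile(roi, 100 - clip)
    return 1.0 - (s_max - s_min) / (s_max + s_min)
```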
Method and apparatus for acoustic imaging of objects in water
Deason, Vance A.; Telschow, Kenneth L.
2005-01-25
A method, system and underwater camera for acoustic imaging of objects in water or other liquids includes an acoustic source for generating an acoustic wavefront for reflecting from a target object as a reflected wavefront. The reflected acoustic wavefront deforms a screen on an acoustic side and correspondingly deforms the opposing optical side of the screen. An optical processing system is optically coupled to the optical side of the screen and converts the deformations on the optical side of the screen into an optical intensity image of the target object.
Thermal imaging of afterburning plumes
NASA Astrophysics Data System (ADS)
Ajdari, E.; Gutmark, E.; Parr, T. P.; Wilson, K. J.; Schadow, K. C.
1989-01-01
Afterburning and nonafterburning exhaust plumes were studied experimentally for underexpanded sonic and supersonic conical circular nozzles. The plume structure was visualized using a thermal imaging camera and conventional photography. IR emission by the plume is mainly dependent on the presence of afterburning. The temperature and reducing power of the exhaust gases, in addition to the nozzle configuration, determine the structure of the plume core, the location where afterburning is initiated, and its size and intensity. Comparison between single-shot and averaged thermal images of the plume shows that afterburning is a highly turbulent combustion process.
Smartphone-based low light detection for bioluminescence application
USDA-ARS?s Scientific Manuscript database
We report a smartphone-based device and an associated image-processing algorithm that maximize the sensitivity of standard smartphone cameras, enabling detection of radiant flux intensities in the single-digit pW range. The proposed hardware and software, called bioluminescent-based analyte quantitation ...
NASA Astrophysics Data System (ADS)
Neuhoff, John G.
2003-04-01
Increasing acoustic intensity is a primary cue to looming auditory motion. Perceptual overestimation of increasing intensity could provide an evolutionary selective advantage by specifying that an approaching sound source is closer than actual, thus affording advanced warning and more time than expected to prepare for the arrival of the source. Here, multiple lines of converging evidence for this evolutionary hypothesis are presented. First, it is shown that intensity change specifying accelerating source approach changes in loudness more than equivalent intensity change specifying decelerating source approach. Second, consistent with evolutionary hunter-gatherer theories of sex-specific spatial abilities, it is shown that females have a significantly larger bias for rising intensity than males. Third, using functional magnetic resonance imaging in conjunction with approaching and receding auditory motion, it is shown that approaching sources preferentially activate a specific neural network responsible for attention allocation, motor planning, and translating perception into action. Finally, it is shown that rhesus monkeys also exhibit a rising intensity bias by orienting longer to looming tones than to receding tones. Together these results illustrate an adaptive perceptual bias that has evolved because it provides a selective advantage in processing looming acoustic sources. [Work supported by NSF and CDC.]
Intensity standardisation of 7T MR images for intensity-based segmentation of the human hypothalamus
Schreiber, Jan; Bazin, Pierre-Louis; Trampel, Robert; Anwander, Alfred; Geyer, Stefan; Schönknecht, Peter
2017-01-01
The high spatial resolution of 7T MRI enables us to identify subtle volume changes in brain structures, providing potential biomarkers of mental disorders. Most volumetric approaches require that similar intensity values represent similar tissue types across different persons. By applying colour-coding to T1-weighted MP2RAGE images, we found that the high measurement accuracy achieved by high-resolution imaging may be compromised by inter-individual variations in the image intensity. To address this issue, we analysed the performance of five intensity standardisation techniques in high-resolution T1-weighted MP2RAGE images. Twenty images with extreme intensities in the GM and WM were standardised to a representative reference image. We performed a multi-level evaluation with a focus on the hypothalamic region—analysing the intensity histograms as well as the actual MR images, and requiring that the correlation between the whole-brain tissue volumes and subject age be preserved during standardisation. The results were compared with T1 maps. Linear standardisation using subcortical ROIs of GM and WM provided good results for all evaluation criteria: it improved the histogram alignment within the ROIs and the average image intensity within the ROIs and the whole-brain GM and WM areas. This method reduced the inter-individual intensity variation of the hypothalamic boundary by more than half, outperforming all other methods, and kept the original correlation between the GM volume and subject age intact. Mixed results were obtained for the other four methods, which sometimes came at the expense of unwarranted changes in the age-related pattern of the GM volume. The mapping of the T1 relaxation time with the MP2RAGE sequence is advertised as being especially robust to bias field inhomogeneity. We found little evidence that substantiated the T1 map’s theoretical superiority over the T1-weighted images regarding the inter-individual image intensity homogeneity. PMID:28253330
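The best-performing linear standardisation can be illustrated as a two-point mapping: a scale and offset are chosen so that the mean intensities in subcortical GM and WM ROIs of the input image match those of the reference image. Anything beyond this two-point linear mapping is an assumption and not taken from the paper.

```python
# Two-point linear intensity standardisation sketch using GM and WM ROI means.
import numpy as np

def linear_standardise(image, gm_mask, wm_mask, ref_gm_mean, ref_wm_mean):
    gm_mean = image[gm_mask].mean()
    wm_mean = image[wm_mask].mean()
    scale = (ref_wm_mean - ref_gm_mean) / (wm_mean - gm_mean)   # match ROI mean spacing
    offset = ref_gm_mean - scale * gm_mean                      # anchor the GM mean
    return scale * image + offset
```

Because the transform is a single global scale and offset, it cannot reintroduce spatial nonuniformity, which is consistent with the paper's observation that it preserved the age-related pattern of the GM volume.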
Color pictorial serpentine halftone for secure embedded data
NASA Astrophysics Data System (ADS)
Curry, Douglas N.
1998-04-01
This paper introduces a new rotatable glyph shape for trusted printing applications that has excellent image rendering, data storage and counterfeit deterrence properties. Referred to as a serpentine because it tiles into a meandering line screen, it can produce high quality images independent of its ability to embed data. The halftone cell is constructed with hyperbolic curves to enhance its dynamic range, and generates low distortion because of rotational tone invariance with its neighbors. An extension to the process allows the data to be formatted into human readable text patterns, viewable with a magnifying glass, and therefore not requiring input scanning. The resultant embedded halftone patterns can be recognized as simple numbers (0 - 9) or alphanumerics (a - z). The pattern intensity can be offset from the surrounding image field intensity, producing a watermarking effect. We have been able to embed words such as 'original' or license numbers into the background halftone pattern of images which can be readily observed in the original image, and which conveniently disappear upon copying. We have also embedded data blocks with self-clocking codes and error correction data which are machine-readable. Finally, we have successfully printed full color images with both the embedded data and text, simulating a trusted printing application.
Agte, Silke; Savvinov, Alexey; Karl, Anett; Zayas-Santiago, Astrid; Ulbricht, Elke; Makarov, Vladimir I; Reichenbach, Andreas; Bringmann, Andreas; Skatchkov, Serguei N
2018-05-16
In this study, we show the capability of Müller glial cells to transport light through the inverted retina of reptiles, specifically the retina of the spectacled caiman, thus confirming that Müller cells of lower vertebrates also improve retinal light transmission. Confocal imaging of freshly isolated retinal wholemounts, which preserved the refractive index landscape of the tissue, indicated that the retina of the spectacled caiman is adapted for vision under dim light conditions. For light transmission experiments, we used a setup with two axially aligned objectives imaging the retina from both sides, to project the light onto the inner (vitreal) surface and to detect the transmitted light behind the retina at the receptor layer. Simultaneously, a confocal microscope obtained images of the Müller cells embedded within the vital tissue. Projections of light onto several representative Müller cell trunks within the inner plexiform layer, i.e. (i) trunks with a straight orientation, (ii) trunks formed by the inner processes, and (iii) trunks that split into inner processes, were associated with increases in the intensity of the transmitted light. Projections of light onto the periphery of the Müller cell endfeet resulted in a lower intensity of transmitted light. In this way, retinal glial (Müller) cells support dim light vision by improving the signal-to-noise ratio, which increases the sensitivity to light. The field of illuminated photoreceptors mainly includes rods, reflecting the rod dominance of the tissue. A subpopulation of Müller cells with downstream cone cells led to a high-intensity illumination of the cones, while the surrounding rods were illuminated by light of lower intensity. Therefore, Müller cells that lie in front of cones may adapt the intensity of the transmitted light to the different sensitivities of cones and rods, presumably allowing simultaneous vision with both receptor types under dim light conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Fundamental techniques for resolution enhancement of average subsampled images
NASA Astrophysics Data System (ADS)
Shen, Day-Fann; Chiu, Chui-Wen
2012-07-01
Although single-image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still exists in the LR version, through the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre- and post-process, respectively, used with an interpolation algorithm to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods, including bilinear, bicubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple to implement and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR content. Finally, the idea of MLF and ICP can also be applied to average-subsampled one-dimensional signals.
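One plausible reading of the ICP is that it enforces the averaging-and-subsampling model after interpolation. The sketch below shifts each s x s block of the SR image so that its mean equals the corresponding LR pixel, and approximates the MLF by a standard Laplacian sharpening; neither is claimed to be the authors' exact procedure.

```python
# Hedged sketch: Laplacian pre-sharpening, interpolation, then a block-average
# intensity correction consistent with the averaging-and-subsampling model.
import numpy as np
from scipy.ndimage import laplace, zoom

def super_resolve(lr, scale=2, sharpen=0.5):
    pre = lr.astype(np.float64) - sharpen * laplace(lr.astype(np.float64))
    sr = zoom(pre, scale, order=3)                           # cubic-spline interpolation
    for i in range(lr.shape[0]):                             # per-block intensity correction
        for j in range(lr.shape[1]):
            block = sr[i * scale:(i + 1) * scale, j * scale:(j + 1) * scale]
            block += lr[i, j] - block.mean()                 # enforce the LR block average
    return sr
```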
Wen, Xuejiao; Qiu, Xiaolan; Han, Bing; Ding, Chibiao; Lei, Bin; Chen, Qi
2018-05-07
Range ambiguity is one of the factors that affect SAR image quality. Alternately transmitting up- and down-chirp modulated pulses is one method used to suppress range ambiguity. However, the defocused range-ambiguous signal can still have a stronger backscattering intensity than the mainlobe imaging area in some cases, which severely degrades the visual quality and subsequent applications. In this paper, a novel hybrid range ambiguity suppression method for up- and down-chirp modulation is proposed. The method obtains an image of the ambiguity area and reduces the ambiguity signal power appropriately by applying pulse compression with the opposite modulation rate and a CFAR detection method. The effectiveness and correctness of the approach are demonstrated by processing archived images acquired by the Chinese Gaofen-3 SAR sensor in full-polarization mode.
Techniques to derive geometries for image-based Eulerian computations
Dillard, Seth; Buchholz, James; Vigmostad, Sarah; Kim, Hyunggun; Udaykumar, H.S.
2014-01-01
Purpose: The performance of three frequently used level set-based segmentation methods is examined for the purpose of defining features and boundary conditions for image-based Eulerian fluid and solid mechanics models. The focus of the evaluation is to identify an approach that produces the best geometric representation from a computational fluid/solid modeling point of view. In particular, extraction of geometries from a wide variety of imaging modalities and noise intensities, to supply to an immersed boundary approach, is targeted. Design/methodology/approach: Two- and three-dimensional images, acquired from optical, X-ray CT, and ultrasound imaging modalities, are segmented with active contours, k-means, and adaptive clustering methods. Segmentation contours are converted to level sets and smoothed as necessary for use in fluid/solid simulations. Results produced by the three approaches are compared visually and with contrast ratio, signal-to-noise ratio, and contrast-to-noise ratio measures. Findings: While the active contours method possesses built-in smoothing and regularization and produces continuous contours, the clustering methods (k-means and adaptive clustering) produce discrete (pixelated) contours that require smoothing using speckle-reducing anisotropic diffusion (SRAD). Thus, for images with high contrast and low to moderate noise, active contours are generally preferable. However, adaptive clustering is found to be far superior to the other two methods for images possessing high levels of noise and global intensity variations, due to its more sophisticated use of local pixel/voxel intensity statistics. Originality/value: It is often difficult to know a priori which segmentation will perform best for a given image type, particularly when geometric modeling is the ultimate goal. This work offers insight into the algorithm selection process, as well as outlining a practical framework for generating useful geometric surfaces in an Eulerian setting. PMID:25750470
Single image super-resolution via an iterative reproducing kernel Hilbert space method.
Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu
2016-11-01
Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from solely one low-resolution image without using a training data set. We solve the problem from image intensity function estimation perspective and assume the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.
Robust algebraic image enhancement for intelligent control systems
NASA Technical Reports Server (NTRS)
Lerner, Bao-Ting; Morrelli, Michael
1993-01-01
Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem, capable of compensating for the wide variety of real-world degradations, must exist between the image capturing and object recognition subsystems. This enhancement stage must be adaptive and must operate with consistency in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front-end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.
Chen, Kuo-mei; Chen, Yu-wei
2011-04-07
For photo-initiated inelastic and reactive collisions, dynamic information can be extracted from central sliced images of state-selected Newton spheres of product species. An analysis framework has been established to determine differential cross sections and the kinetic energy release of co-products from experimental images. When one of the reactants exhibits a high recoil speed in a photo-initiated dynamic process, the present theory can be employed to analyze central sliced images from ion imaging or three-dimensional sliced fluorescence imaging experiments. It is demonstrated that the differential cross section of a scattering process can be determined from the central sliced image by a double Legendre moment analysis, for either a fixed or a continuously distributed recoil speed in the center-of-mass reference frame. Simultaneous equations which lead to the determination of the kinetic energy release of co-products can be established from the second-order Legendre moment of the experimental image once the differential cross section is extracted. The intensity distribution of the central sliced image, along with its outer and inner ring sizes, provides all the clues needed to decipher the differential cross section and the kinetic energy release of co-products.
Three-beam interferogram analysis method for surface flatness testing of glass plates and wedges
NASA Astrophysics Data System (ADS)
Sunderland, Zofia; Patorski, Krzysztof
2015-09-01
When testing transparent plates with high-quality flat surfaces and a small angle between them, the three-beam interference phenomenon is observed. Since the reference beam and the object beams reflected from both the front and back surface of a sample are detected, the recorded intensity distribution may be regarded as a sum of three fringe patterns. Images of that type cannot be successfully analyzed with standard interferogram analysis methods. They contain, however, useful information on the tested plate surface flatness and its optical thickness variations. Several methods were elaborated to decode the plate parameters. Our technique represents a competitive solution which allows for retrieval of the phase components of the three-beam interferogram. It requires recording two images: a three-beam interferogram and the two-beam one with the reference beam blocked. Mutually subtracting these images leads to an intensity distribution which, under some assumptions, provides access to the two component fringe sets that encode surface flatness. At various stages of processing we take advantage of nonlinear operations as well as single-frame interferogram analysis methods. The two-dimensional continuous wavelet transform (2D CWT) is used to separate a particular fringe family from the overall interferogram intensity distribution as well as to estimate the phase distribution from a pattern. We distinguish two processing paths depending on the relative density of the fringe sets, which is connected with the geometry of the sample and the optical setup. The proposed method is tested on simulated data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, J; Dossa, D; Gokhale, M
Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared software-only performance to GPU-accelerated performance. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order of magnitude speedup over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.
Lee, Heung-Rae
1997-01-01
A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object.
NASA Astrophysics Data System (ADS)
Mori, Shinichiro; Kanematsu, Nobuyuki; Asakura, Hiroshi; Endo, Masahiro
2007-02-01
The concept of internal target volume (ITV) is highly significant in radiotherapy for the lung, where treatment is hampered by organ motion. To date, different methods to obtain the ITV have been published and are therefore available. To define the ITV, we developed a new method by adapting a time filter to the four-dimensional CT (4DCT) scan technique that operates on projection data (4D projection-data maximum attenuation, 4DPM), and compared it with reconstructed-image processing (4D image maximum intensity projection, 4DIM) using phantom and clinical evaluations. Both 4DIM and 4DPM easily captured an accurate maximum intensity volume (MIV), that is, the tumour-encompassing volume. Although 4DIM increased the CT number to 1.8 times that of 4DPM, 4DPM provided the original tumour CT number for the MIV via its reconstruction algorithm. In a patient with honeycomb lung fibrosis, the MIV with 4DIM was 0.7 cm larger than that for cine imaging in the cranio-caudal direction. 4DPM therefore provided an accurate MIV independent of patient characteristics and reconstruction conditions. These findings indicate the usefulness of 4DPM in determining the ITV in radiotherapy.
Larimer, Curtis; Winder, Eric; Jeters, Robert; Prowant, Matthew; Nettleship, Ian; Addleman, Raymond Shane; Bonheyo, George T
2016-01-01
The accumulation of bacteria in surface-attached biofilms can be detrimental to human health, dental hygiene, and many industrial processes. Natural biofilms are soft and often transparent, and they have heterogeneous biological composition and structure over micro- and macroscales. As a result, it is challenging to quantify the spatial distribution and overall intensity of biofilms. In this work, a new method was developed to enhance the visibility and quantification of bacterial biofilms. First, broad-spectrum biomolecular staining was used to enhance the visibility of the cells, nucleic acids, and proteins that make up biofilms. Then, an image analysis algorithm was developed to objectively and quantitatively measure biofilm accumulation from digital photographs and results were compared to independent measurements of cell density. This new method was used to quantify the growth intensity of Pseudomonas putida biofilms as they grew over time. This method is simple and fast, and can quantify biofilm growth over a large area with approximately the same precision as the more laborious cell counting method. Stained and processed images facilitate assessment of spatial heterogeneity of a biofilm across a surface. This new approach to biofilm analysis could be applied in studies of natural, industrial, and environmental biofilms.
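A minimal sketch of the kind of image quantification described above, assuming scikit-image and a single photograph of a stained surface: coverage is estimated as the fraction of pixels darker than an Otsu threshold. The channel choice and thresholding rule are assumptions, not the study's published algorithm.

```python
import numpy as np
from skimage import io, color, filters

def biofilm_coverage(image_path):
    """Return (area fraction covered, mean gray level inside the stained region)."""
    rgb = io.imread(image_path)
    gray = color.rgb2gray(rgb)                 # stained biofilm assumed darker than background
    thresh = filters.threshold_otsu(gray)      # objective, image-derived threshold
    biofilm_mask = gray < thresh
    coverage = biofilm_mask.mean()             # fraction of the imaged surface covered
    mean_intensity = gray[biofilm_mask].mean() if biofilm_mask.any() else 0.0
    return coverage, mean_intensity
```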
Automatic detection of zebra crossings from mobile LiDAR data
NASA Astrophysics Data System (ADS)
Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.
2015-07-01
An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested to be applied for road management purposes. The algorithm consists of several subsequent processes, starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques in order to detect zebra crossings using the Standard Hough Transform and logical constraints. To optimize the results, image processing algorithms are applied to the intensity images from the point cloud. These algorithms include binarization to separate the painted area from the rest of the pavement, median filtering to avoid noisy points, and mathematical morphology to fill the gaps between the pixels in the border of white marks. Once the road marking is detected, its position is calculated. This information is valuable for inventorying purposes of road managers that use Geographic Information Systems. The performance of the algorithm has been evaluated over several mobile LiDAR strips accounting for a total of 30 zebra crossings. That test showed a completeness of 83%. Non-detected marks mainly come from painting deterioration of the zebra crossing or from occlusions in the point cloud produced by other vehicles on the road.
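A hedged sketch of the intensity-image stage using OpenCV: binarization, median filtering, morphological closing, and a Standard Hough Transform on the rasterized LiDAR intensity. All parameter values and the function name are illustrative, not those of the published algorithm.

```python
import cv2
import numpy as np

def detect_zebra_stripes(intensity_raster):
    """Return Hough (rho, theta) lines from a rasterized LiDAR intensity image."""
    img = cv2.normalize(intensity_raster, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    img = cv2.medianBlur(img, 5)                                # suppress noisy points
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # fill gaps in the paint marks
    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
    return lines  # groups of roughly parallel, evenly spaced lines suggest a crossing
```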
Medical diagnosis imaging systems: image and signal processing applications aided by fuzzy logic
NASA Astrophysics Data System (ADS)
Hata, Yutaka
2010-04-01
First, we describe an automated procedure for segmenting an MR image of a human brain based on fuzzy logic for diagnosing Alzheimer's disease. The intensity thresholds for segmenting the whole brain of a subject are automatically determined by finding the peaks of the intensity histogram. After these thresholds are evaluated in a region-growing step, the whole brain can be identified. Next, we describe a procedure for decomposing the obtained whole brain into the left and right cerebral hemispheres, the cerebellum and the brain stem. Our method then identifies the whole brain, the left cerebral hemisphere, the right cerebral hemisphere, the cerebellum and the brain stem. Secondly, we describe a transskull sonography system that can visualize the shape of the skull and brain surface from any point to examine skull fractures and some brain diseases. We employ fuzzy signal processing to determine the skull and brain surface. A phantom model, an animal model with soft tissue, an animal model with brain tissue, and a human subject's forehead are used in our system. All the shapes of the skin surface, skull surface, skull bottom, and brain tissue surface are successfully determined.
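A minimal sketch of histogram-peak thresholding followed by region growing, assuming SciPy and scikit-image; the peak-selection rule, tolerance, and seed handling are assumptions and do not reproduce the paper's fuzzy-logic formulation.

```python
import numpy as np
from scipy.signal import find_peaks
from skimage.segmentation import flood

def brain_mask(mr_slice, seed):
    """Grow a region around `seed` using an intensity band derived from histogram peaks."""
    hist, bin_edges = np.histogram(mr_slice, bins=256)
    peaks, _ = find_peaks(hist, prominence=hist.max() * 0.05)
    brain_peak = bin_edges[peaks[-1]]          # assume the brightest major peak is brain tissue
    tolerance = 0.25 * brain_peak              # accepted intensity band half-width (assumption)
    return flood(mr_slice, seed_point=seed, tolerance=tolerance)
```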
NASA Astrophysics Data System (ADS)
Miccoli, M.; Usai, A.; Tafuto, A.; Albertoni, A.; Togna, F.
2016-10-01
The propagation environment around airborne platforms may significantly degrade the performance of Electro-Optical (EO) self-protection systems installed onboard. To ensure a sufficient level of protection, it is necessary to understand which sensor/effector installation positions best guarantee that the aeromechanical turbulence generated by the engine exhausts and the rotor downwash does not interfere with the imaging systems' normal operation. Since radiation propagation in turbulence is a hardly predictable process, a high-level approach was proposed in which, instead of studying the medium under turbulence, the turbulence effects on the imaging systems' processing are assessed by means of an equivalent statistical model representation, allowing the definition of a turbulence index to classify different levels of turbulence intensity. Hence, a general measurement methodology for the degradation of imaging system performance in turbulent conditions was developed. The analysis of the performance degradation started by evaluating the effects of turbulence with a given index on the image processing chain (i.e., thresholding, blob analysis). The processing-in-turbulence (PIT) index is then derived by combining the effects of the given turbulence on the different image processing primitive functions. By evaluating the corresponding PIT index for a sufficient number of testing directions, it is possible to map the performance degradation around the aircraft installation for a generic imaging system, and to identify the best installation positions for the sensors/effectors composing the EO self-protection suite.
Images as embedding maps and minimal surfaces: Movies, color, and volumetric medical images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimmel, R.; Malladi, R.; Sochen, N.
A general geometrical framework for image processing is presented. The authors consider intensity images as surfaces in the (x,I) space. The image is thereby a two dimensional surface in three dimensional space for gray level images. The new formulation unifies many classical schemes, algorithms, and measures via choices of parameters in a 'master' geometrical measure. More important, it is a simple and efficient tool for the design of natural schemes for image enhancement, segmentation, and scale space. Here the authors give the basic motivation and apply the scheme to enhance images. They present the concept of an image as a surface in dimensions higher than the three dimensional intuitive space. This will help them handle movies, color, and volumetric medical images.
Image Alignment for Multiple Camera High Dynamic Range Microscopy.
Eastwood, Brian S; Childs, Elisabeth C
2012-01-09
This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
NASA Astrophysics Data System (ADS)
Cruz, Francisco; Sevilla, Raquel; Zhu, Joe; Vanko, Amy; Lee, Jung Hoon; Dogdas, Belma; Zhang, Weisheng
2014-03-01
Bone mineral density (BMD) obtained from a CT image is an imaging biomarker used pre-clinically for characterizing the Rheumatoid arthritis (RA) phenotype. We use this biomarker in animal studies for evaluating disease progression and for testing various compounds. In the current setting, BMD measurements are obtained manually by selecting the regions of interest from three-dimensional (3-D) CT images of rat legs, which results in a laborious and low-throughput process. Combining image processing techniques, such as intensity thresholding and skeletonization, with mathematical techniques in curve fitting and curvature calculations, we developed an algorithm for quick, consistent, and automatic detection of joints in large CT data sets. The implemented algorithm has reduced analysis time for a study with 200 CT images from 10 days to 3 days and has improved the robust detection of the obtained regions of interest compared with manual segmentation. This algorithm has been used successfully in over 40 studies.
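A hedged sketch of the thresholding-plus-skeletonization step, assuming scikit-image; the intensity threshold choice is an assumption, and the curvature-based joint criterion is only indicated in a comment.

```python
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize   # handles 3-D input in recent scikit-image releases

def bone_centerline(ct_volume):
    """Segment bone by intensity and reduce it to a voxel-thin centerline."""
    bone_mask = ct_volume > threshold_otsu(ct_volume)   # bone is the high-intensity class
    return skeletonize(bone_mask)

# Joints would then be located where a curve fitted through the skeleton voxels
# shows high curvature (e.g. via local polynomial fits), as the abstract describes.
```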
Combined FLIM and reflectance confocal microscopy for epithelial imaging
NASA Astrophysics Data System (ADS)
Jabbour, Joey M.; Cheng, Shuna; Shrestha, Sebina; Malik, Bilal; Jo, Javier A.; Applegate, Brian; Maitland, Kristen C.
2012-03-01
Current methods for detection of oral cancer lack the ability to delineate between normal and precancerous tissue with adequate sensitivity and specificity. The usual diagnostic mechanism involves visual inspection and palpation followed by tissue biopsy and histopathology, a process both invasive and time-intensive. A more sensitive and objective screening method can greatly facilitate the overall process of detection of early cancer. To this end, we present a multimodal imaging system with fluorescence lifetime imaging (FLIM) for wide field of view guidance and reflectance confocal microscopy for sub-cellular resolution imaging of epithelial tissue. Moving from a 12 x 12 mm2 field of view with 157 μm lateral resolution using FLIM to a 275 x 200 μm2 field of view with 2.2 μm lateral resolution using confocal microscopy, the hamster cheek pouch model is imaged both in vivo and ex vivo. The results indicate that our dual modality imaging system can identify and distinguish between different tissue features, and, therefore, can potentially serve as a guide in early oral cancer detection.
Refraction-based X-ray Computed Tomography for Biomedical Purpose Using Dark Field Imaging Method
NASA Astrophysics Data System (ADS)
Sunaguchi, Naoki; Yuasa, Tetsuya; Huo, Qingkai; Ichihara, Shu; Ando, Masami
We have proposed a tomographic x-ray imaging system using DFI (dark field imaging) optics along with a data-processing method to extract information on refraction from the measured intensities, and a reconstruction algorithm to reconstruct a refractive-index field from the projections generated from the extracted refraction information. The DFI imaging system consists of a tandem optical system of Bragg- and Laue-case crystals, a positioning device system for a sample, and two CCD (charge coupled device) cameras. Then, we developed a software code to simulate the data-acquisition, data-processing, and reconstruction methods to investigate the feasibility of the proposed methods. Finally, in order to demonstrate its efficacy, we imaged a sample with DCIS (ductal carcinoma in situ) excised from a breast cancer patient using a system constructed at the vertical wiggler beamline BL-14C in KEK-PF. Its CT images depicted a variety of fine histological structures, such as milk ducts, duct walls, secretions, adipose and fibrous tissue. They correlate well with histological sections.
Extracting the Data From the LCM vk4 Formatted Output File
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: vk4 file produced by Keyence VK Software, custom analysis, no off the shelf way to read the file, reading the binary data in a vk4 file, various offsets in decimal lines, finding the height image data, directly in MATLAB, binary output beginning of height image data, color image information, color image binary data, color image decimal and binary data, MATLAB code to read vk4 file (choose a file, read the file, compute offsets, read optical image, laser optical image, read and compute laser intensity image, read height image, timing, display height image, display laser intensity image, display RGB laser optical images, display RGB optical images, display beginning data and save images to workspace, gamma correction subroutine), reading intensity from the vk4 file, linear in the low range, linear in the high range, gamma correction for vk4 files, computing the gamma intensity correction, observations.
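A hedged sketch of reading fixed-size binary image blocks at given offsets, in the spirit of the MATLAB workflow summarized above but written in Python. The vk4 layout is proprietary; every offset, dtype, and dimension here is a placeholder supplied by the caller, not the actual Keyence vk4 specification, and the gamma value is likewise an assumption.

```python
import numpy as np

def read_image_block(path, data_offset, width, height, dtype=np.uint32):
    """Read a width x height block of pixel data starting at data_offset (caller-supplied)."""
    with open(path, 'rb') as f:
        f.seek(data_offset)                       # jump to the (assumed) start of the image data
        raw = f.read(width * height * np.dtype(dtype).itemsize)
    return np.frombuffer(raw, dtype=dtype).reshape(height, width)

def gamma_correct(intensity, gamma=2.2):
    """Simple gamma correction of a laser-intensity image (gamma value is an assumption)."""
    scaled = intensity.astype(float) / intensity.max()
    return scaled ** (1.0 / gamma)
```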
Color line scan camera technology and machine vision: requirements to consider
NASA Astrophysics Data System (ADS)
Paernaenen, Pekka H. T.
1997-08-01
Color machine vision has shown a dynamic uptrend in use within the past few years, as the introduction of new cameras and scanner technologies underscores. In the future, the movement from monochrome imaging to color will hasten, as machine vision system users demand more knowledge about their product stream. As color has come to machine vision, certain requirements apply to the equipment used to digitize color images. Color machine vision needs not only good color separation but also a high dynamic range and a good linear response from the camera used. The importance of these features becomes even more apparent when the image is converted to another color space, since some information is always lost when converting integer data to another form. Traditionally, color image processing has been a much slower technique than gray-level image processing due to the three times greater data amount per image; the same applies to the three times larger memory requirement. Advancements in computers, memory and processing units have made it possible to handle even large color images cost-efficiently today. In some cases the image analysis in color images can in fact be easier and faster than with a similar gray-level image because of the greater information per pixel. Color machine vision sets new requirements for lighting, too. High-intensity, white light is required in order to acquire good images for further image processing or analysis. New developments in lighting technology are eventually bringing solutions for color imaging.
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina
2012-03-01
Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
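A minimal sketch of a leave-one-woman-out linear model for PD% estimation, assuming scikit-learn and SciPy; the actual feature set (acquisition physics, patient characteristics, gray-level features) and the GLM link used in the study are not specified in the abstract, so ordinary least squares stands in here.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

def loo_pd_estimates(X, y):
    """X: (n_women, n_features) image/patient features; y: radiologist-assigned PD% values."""
    preds = np.empty_like(y, dtype=float)
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train_idx], y[train_idx])   # one held-out woman per fold
        preds[test_idx] = model.predict(X[test_idx])
    r, p = pearsonr(preds, y)                                        # Pearson correlation, as reported
    return preds, r, p
```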
Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture
NASA Astrophysics Data System (ADS)
Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans
2017-04-01
Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor’s pixel frequency and immediate usage of each input pixel for the feature-construction process avoids the dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at 1.8 V supply voltage is achieved during VGA video processing at 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated by the practical application of vehicle recognition, achieving the expected high accuracy, which is comparable to previous work.
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-01-01
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, feature extraction algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system. PMID:29462855
Enhanced Automated Guidance System for Horizontal Auger Boring Based on Image Processing.
Wu, Lingling; Wen, Guojun; Wang, Yudan; Huang, Lei; Zhou, Jiang
2018-02-15
Horizontal auger boring (HAB) is a widely used trenchless technology for the high-accuracy installation of gravity or pressure pipelines on line and grade. Differing from other pipeline installations, HAB requires a more precise and automated guidance system for use in a practical project. This paper proposes an economic and enhanced automated optical guidance system, based on optimization research of light-emitting diode (LED) light target and five automated image processing bore-path deviation algorithms. An LED light target was optimized for many qualities, including light color, filter plate color, luminous intensity, and LED layout. The image preprocessing algorithm, direction location algorithm, angle measurement algorithm, deflection detection algorithm, and auto-focus algorithm, compiled in MATLAB, are used to automate image processing for deflection computing and judging. After multiple indoor experiments, this guidance system is applied in a project of hot water pipeline installation, with accuracy controlled within 2 mm in 48-m distance, providing accurate line and grade controls and verifying the feasibility and reliability of the guidance system.
Analysis of intensity variability in multislice and cone beam computed tomography.
Nackaerts, Olivia; Maes, Frederik; Yan, Hua; Couto Souza, Paulo; Pauwels, Ruben; Jacobs, Reinhilde
2011-08-01
The aim of this study was to evaluate the variability of intensity values in cone beam computed tomography (CBCT) imaging compared with multislice computed tomography Hounsfield units (MSCT HU) in order to assess the reliability of density assessments using CBCT images. A quality control phantom was scanned with an MSCT scanner and five CBCT scanners. In one CBCT scanner, the phantom was scanned repeatedly in the same and in different positions. Images were analyzed using registration to a mathematical model. MSCT images were used as a reference. Density profiles of MSCT showed stable HU values, whereas in CBCT imaging the intensity values were variable over the profile. Repositioning of the phantom resulted in large fluctuations in intensity values. The use of intensity values in CBCT images is not reliable, because the values are influenced by device, imaging parameters and positioning. © 2011 John Wiley & Sons A/S.
DOT National Transportation Integrated Search
1997-01-01
The rational allocation of pavement maintenance resources requires the periodic assessment of the condition of all pavements. Traditional manual pavement distress surveys, which are based on visual inspection, are labor intensive, slow, and expensive...
WE-G-209-00: Identifying Image Artifacts, Their Causes, and How to Fix Them
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.
Matsunaga, Tomoko M; Ogawa, Daisuke; Taguchi-Shiobara, Fumio; Ishimoto, Masao; Matsunaga, Sachihiro; Habu, Yoshiki
2017-06-01
Leaf color is an important indicator when evaluating plant growth and responses to biotic/abiotic stress. Acquisition of images by digital cameras allows analysis and long-term storage of the acquired images. However, under field conditions, where light intensity can fluctuate and other factors (shade, reflection, and background, etc.) vary, stable and reproducible measurement and quantification of leaf color are hard to achieve. Digital scanners provide fixed conditions for obtaining image data, allowing stable and reliable comparison among samples, but require detached plant materials to capture images, and the destructive processes involved often induce deformation of plant materials (curled leaves and faded colors, etc.). In this study, by using a lightweight digital scanner connected to a mobile computer, we obtained digital image data from intact plant leaves grown in natural-light greenhouses without detaching the targets. We took images of soybean leaves infected by Xanthomonas campestris pv. glycines, and distinctively quantified two disease symptoms (brown lesions and yellow halos) using freely available image processing software. The image data were amenable to quantitative and statistical analyses, allowing precise and objective evaluation of disease resistance.
Milles, Julien; Zhu, Yue Min; Gimenez, Gérard; Guttmann, Charles R G; Magnin, Isabelle E
2007-03-01
A novel approach for correcting intensity nonuniformity in magnetic resonance imaging (MRI) is presented. This approach is based on the simultaneous use of spatial and gray-level histogram information. Spatial information about intensity nonuniformity is obtained using cubic B-spline smoothing. Gray-level histogram information of the image corrupted by intensity nonuniformity is exploited from a frequential point of view. The proposed correction method is illustrated using both physical phantom and human brain images. The results are consistent with theoretical prediction, and demonstrate a new way of dealing with intensity nonuniformity problems. They are all the more significant as the ground truth on intensity nonuniformity is unknown in clinical images.
Liu, Xiaozheng; Yuan, Zhenming; Zhu, Junming; Xu, Dongrong
2013-12-07
The demons algorithm is a popular algorithm for non-rigid image registration because of its computational efficiency and simple implementation. The deformation forces of the classic demons algorithm were derived from image gradients by considering the deformation to decrease the intensity dissimilarity between images. However, methods using the difference of image intensity for medical image registration are easily affected by image artifacts, such as image noise, non-uniform imaging and partial volume effects. The gradient magnitude image is constructed from the local information of an image, so the difference in a gradient magnitude image can be regarded as more reliable and robust against these artifacts. Registering medical images by considering the differences in both image intensity and gradient magnitude is therefore a straightforward choice. In this paper, based on a diffeomorphic demons algorithm, we propose a chain-type diffeomorphic demons algorithm by combining the differences in both image intensity and gradient magnitude for medical image registration. Previous work has shown that the classic demons algorithm can be considered an approximation of a second-order gradient descent on the sum of the squared intensity differences. By optimizing the new dissimilarity criteria, we also present a set of new demons forces derived from the gradients of the image and of the gradient magnitude image. We show in controlled experiments that this advantage is confirmed and yields fast convergence.
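For reference, a minimal sketch of the classic demons force that the paper builds on, using the well-known update u = (m - f) grad(f) / (|grad(f)|^2 + (m - f)^2); the chain-type gradient-magnitude force proposed in the paper itself is not reproduced here.

```python
import numpy as np

def demons_force(fixed, moving, eps=1e-6):
    """Per-pixel displacement estimate of the classic demons algorithm for a 2-D pair."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)                 # gradients of the fixed image
    denom = gx**2 + gy**2 + diff**2 + eps       # eps avoids division by zero in flat regions
    ux = diff * gx / denom
    uy = diff * gy / denom
    return ux, uy  # in practice this field is Gaussian-smoothed and composed iteratively
```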
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lance, C.; Eather, R.
1993-09-30
A low-light-level monochromatic imaging system was designed and fabricated which was optimized to detect and record optical emissions associated with high-power rf heating of the ionosphere. The instrument is capable of detecting very low intensities, of the order of 1 Rayleigh, from typical ionospheric atomic and molecular emissions. This is achieved through co-adding of ON images during heater pulses and subtraction of OFF (background) images between pulses. Images can be displayed and analyzed in real time and stored on optical disc for later analysis. Full image processing software is provided which was customized for this application and uses menu or mouse user interaction.
NASA Astrophysics Data System (ADS)
Liang, Shanshan; Saidi, Arya; Jing, Joe; Liu, Gangjun; Li, Jiawen; Zhang, Jun; Sun, Changsen; Narula, Jagat; Chen, Zhongping
2012-07-01
We developed a multimodality fluorescence and optical coherence tomography probe based on a double-clad fiber (DCF) combiner. The probe is composed of a DCF combiner, a grin lens, and a micromotor in the distal end. An integrated swept-source optical coherence tomography and fluorescence intensity imaging system was developed based on the combined probe for the early diagnosis of atherosclerosis. This system is capable of real-time data acquisition and processing as well as image display. For fluorescence imaging, the inflammation of atherosclerosis and the formed necrotic core were imaged with annexin V-conjugated Cy5.5. Ex vivo imaging of New Zealand white rabbit arteries demonstrated the capability of the combined system.
A study of image quality for radar image processing. [synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.
1982-01-01
Methods developed for image quality metrics are reviewed with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as a way of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized-mean-square-error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
Acousto-optic laser projection systems for displaying TV information
NASA Astrophysics Data System (ADS)
Gulyaev, Yu V.; Kazaryan, M. A.; Mokrushin, Yu M.; Shakin, O. V.
2015-04-01
This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation.
Zaitsev, Vladimir Y; Matveyev, Alexandr L; Matveev, Lev A; Gelikonov, Grigory V; Gelikonov, Valentin M; Vitkin, Alex
2015-07-01
Feasibility of speckle tracking in optical coherence tomography (OCT) based on digital image correlation (DIC) is discussed in the context of elastography problems. Specifics of applying DIC methods to OCT, compared to processing of photographic images in mechanical engineering applications, are emphasized and main complications are pointed out. Analytical arguments are augmented by accurate numerical simulations of OCT speckle patterns. In contrast to DIC processing for displacement and strain estimation in photographic images, the accuracy of correlational speckle tracking in deformed OCT images is strongly affected by the coherent nature of speckles, for which strain-induced complications of speckle “blinking” and “boiling” are typical. The tracking accuracy is further compromised by the usually more pronounced pixelated structure of OCT scans compared with digital photographic images in classical DIC applications. Processing of complex-valued OCT data (comprising both amplitude and phase) compared to intensity-only scans mitigates these deleterious effects to some degree. Criteria of the attainable speckle tracking accuracy and its dependence on the key OCT system parameters are established.
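A hedged sketch of patch-wise speckle tracking between two OCT intensity scans, using sub-pixel phase correlation from scikit-image as a stand-in for the correlation tracking discussed; the window size and upsampling factor are illustrative.

```python
from skimage.registration import phase_cross_correlation

def track_patch(ref_scan, def_scan, top_left, size=32):
    """Estimate the (row, col) displacement of one speckle patch between two B-scans."""
    r, c = top_left
    ref_patch = ref_scan[r:r + size, c:c + size]
    def_patch = def_scan[r:r + size, c:c + size]
    shift, error, _ = phase_cross_correlation(ref_patch, def_patch, upsample_factor=10)
    return shift  # sub-pixel displacement; accuracy degrades as the speckle decorrelates
```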
NASA Technical Reports Server (NTRS)
Sowers, J.; Mehrotra, R.; Sethi, I. K.
1989-01-01
A method for extracting road boundaries using the monochrome image of a visual road scene is presented. The statistical information regarding the intensity levels present in the image, along with some geometrical constraints concerning the road, forms the basis of this approach. Results and advantages of this technique are discussed. The major advantages of this technique, when compared to others, are its ability to process the image in only one pass, to limit the area searched in the image using only knowledge of the road geometry and previous boundary information, and to dynamically adjust for inconsistencies in the located boundary information, all of which helps to increase the efficacy of this technique.
Schwalenberg, Simon
2005-06-01
The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.
NASA Astrophysics Data System (ADS)
McCracken, Katherine E.; Angus, Scott V.; Reynolds, Kelly A.; Yoon, Jeong-Yeol
2016-06-01
Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference point normalization and a fast-Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants - Cr(VI), total chlorine, caffeine, and E. coli K12 - at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring.
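A minimal sketch of reference-based normalization of a green-channel assay signal, assuming white and dark reference regions imaged on the same μPAD card; the paper's specific triple-reference scheme and FFT pre-processing are not reproduced here.

```python
import numpy as np

def normalized_signal(image_rgb, assay_mask, white_mask, dark_mask):
    """Lighting-compensated assay signal from masked regions of one smartphone image."""
    green = image_rgb[..., 1].astype(float)        # green channel, as used in the study
    assay = green[assay_mask].mean()
    white = green[white_mask].mean()               # bright reference under the current lighting
    dark = green[dark_mask].mean()                 # dark reference under the current lighting
    return (assay - dark) / (white - dark + 1e-9)  # comparable across images and devices
```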
NASA Technical Reports Server (NTRS)
Grubbs, Guy II; Michell, Robert; Samara, Marilia; Hampton, Don; Jahn, Jorg-Micha
2016-01-01
A technique is presented for the periodic and systematic calibration of ground-based optical imagers. It is important to have a common system of units (Rayleighs or photon flux) for cross comparison as well as self-comparison over time. With the advancement in technology, the sensitivity of these imagers has improved so that stars can be used for more precise calibration. Background subtraction, flat fielding, star mapping, and other common techniques are combined in deriving a calibration technique appropriate for a variety of ground-based imager installations. Spectral (4278, 5577, and 8446 Å) ground-based imager data with multiple fields of view (19, 47, and 180 deg) are processed and calibrated using the techniques developed. The calibration techniques applied result in intensity measurements in agreement between different imagers using identical spectral filtering, and the intensity at each wavelength observed is within the expected range of auroral measurements. The application of these star calibration techniques, which convert raw imager counts into units of photon flux, makes it possible to do quantitative photometry. The computed photon fluxes, in units of Rayleighs, can be used for the absolute photometry between instruments or as input parameters for auroral electron transport models.
Ortiz-Rascón, E; Bruce, N C; Rodríguez-Rosales, A A; Garduño-Mejía, J
2016-03-01
We describe the behavior of linearity in diffuse imaging by evaluating the differences between time-resolved images produced by photons arriving at the detector at different times. Two approaches are considered: Monte Carlo simulations and experimental results. The images of two completely opaque bars embedded in a transparent or in a turbid medium with a slab geometry are analyzed; the optical properties of the turbid medium sample are close to those of breast tissue. A simple linearity test was designed involving a direct comparison between the intensity profile produced by two bars scanned at the same time and the intensity profile obtained by adding two profiles of each bar scanned one at a time. It is shown that the linearity improves substantially when short time-of-flight photons are used in the imaging process, but even then the nonlinear behavior prevails. As the edge response function (ERF) has been used widely for testing the spatial resolution in imaging systems, the main implication of a time-dependent linearity is the weakness of the linearity assumption when evaluating the spatial resolution through the ERF in diffuse imaging systems, and the need to evaluate the spatial resolution by other methods.
The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging
NASA Astrophysics Data System (ADS)
Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.
2018-06-01
Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In the parallel focusing a high intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low intensity signals from individual elements enter the sample and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.
Yuan, Shuai; Roney, Celeste A.; Wierwille, Jerry; Chen, Chao-Wei; Xu, Biying; Jiang, James; Ma, Hongzhou; Cable, Alex; Summers, Ronald M.; Chen, Yu
2010-01-01
Optical coherence tomography (OCT) provides high-resolution, cross-sectional imaging of tissue microstructure in situ and in real-time, while fluorescence molecular imaging (FMI) enables the visualization of basic molecular processes. There is great interest in combining these two modalities so that the tissue's structural and molecular information can be obtained simultaneously. This could greatly benefit biomedical applications such as detecting early diseases and monitoring therapeutic interventions. In this research, an optical system that combines OCT and FMI was developed. The system demonstrated that it could co-register en face OCT and FMI images with a 2.4 × 2.4 mm field of view. The transverse resolutions of OCT and FMI of the system are both ~10 μm. Capillary tubes filled with fluorescent dye Cy 5.5 in different concentrations under a scattering medium are used as the phantom. En face OCT images of the phantoms were obtained and successfully co-registered with FMI images that were acquired simultaneously. A linear relationship between FMI intensity and dye concentration was observed. The relationship between FMI intensity and target fluorescence tube depth measured by OCT images was also observed and compared with theoretical modeling. This relationship could help in correcting reconstructed dye concentration. Imaging of colon polyps of the APCmin mouse model is presented as an example of biological applications of this co-registered OCT/FMI system. PMID:20009192
Wavefront sensorless adaptive optics ophthalmoscopy in the human eye
Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason
2011-01-01
Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
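A minimal sketch of one stochastic parallel gradient descent (SPGD) iteration maximizing an image-sharpness metric, as used for the sensorless control described above; `apply_and_measure`, the gain, and the perturbation amplitude are stand-ins for the hardware loop and are assumptions.

```python
import numpy as np

def spgd_step(voltages, apply_and_measure, perturb=0.02, gain=0.5):
    """One SPGD update of deformable-mirror voltages toward higher mean image intensity.

    apply_and_measure(v) is assumed to set the mirror to voltages v, grab a frame,
    and return a scalar quality metric (e.g. mean retinal-image intensity).
    """
    delta = perturb * np.random.choice([-1.0, 1.0], size=voltages.shape)  # random +/- perturbation
    j_plus = apply_and_measure(voltages + delta)
    j_minus = apply_and_measure(voltages - delta)
    return voltages + gain * (j_plus - j_minus) * delta

# usage: repeat spgd_step at the frame rate until the sharpness metric plateaus
```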
Rapacchi, Stanislas; Wen, Han; Viallon, Magalie; Grenier, Denis; Kellman, Peter; Croisille, Pierre; Pai, Vinay M
2011-12-01
Diffusion-weighted imaging (DWI) using low b-values permits imaging of intravoxel incoherent motion in tissues. However, low b-value DWI of the human heart has been considered too challenging because of additional signal loss due to physiological motion, which reduces both signal intensity and the signal-to-noise ratio (SNR). We address these signal loss concerns by analyzing cardiac motion during a heartbeat to determine the time-window during which cardiac bulk motion is minimal. Using this information to optimize the acquisition of DWI data and combining it with a dedicated image processing approach has enabled us to develop a novel low b-value diffusion-weighted cardiac magnetic resonance imaging approach, which significantly reduces intravoxel incoherent motion measurement bias introduced by motion. Simulations from displacement encoded motion data sets permitted the delineation of an optimal time-window with minimal cardiac motion. A number of single-shot repetitions of low b-value DWI cardiac magnetic resonance imaging data were acquired during this time-window under free-breathing conditions with bulk physiological motion corrected for by using nonrigid registration. Principal component analysis (PCA) was performed on the registered images to improve the SNR, and temporal maximum intensity projection (TMIP) was applied to recover signal intensity from time-fluctuant motion-induced signal loss. This PCATMIP method was validated with experimental data, and its benefits were evaluated in volunteers before being applied to patients. Optimal time-window cardiac DWI in combination with PCATMIP postprocessing yielded significant benefits for signal recovery, contrast-to-noise ratio, and SNR in the presence of bulk motion for both numerical simulations and human volunteer studies. Analysis of mean apparent diffusion coefficient (ADC) maps showed homogeneous values among volunteers and good reproducibility between free-breathing and breath-hold acquisitions. The PCATMIP DWI approach also indicated its potential utility by detecting ADC variations in acute myocardial infarction patients. Studying cardiac motion may provide an appropriate strategy for minimizing the impact of bulk motion on cardiac DWI. Applying PCATMIP image processing improves low b-value DWI and enables reliable analysis of ADC in the myocardium. The use of a limited number of repetitions in a free-breathing mode also enables easier application in clinical conditions.
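A hedged sketch of the PCA-plus-temporal-MIP idea on a stack of registered low b-value repetitions: truncate the principal components of the temporal stack to suppress noise, then take the pixel-wise maximum over time to recover motion-attenuated signal. The number of retained components is an assumption.

```python
import numpy as np

def pca_tmip(frames, n_components=3):
    """frames: (n_rep, ny, nx) array of registered single-shot low-b DWI repetitions."""
    n_rep, ny, nx = frames.shape
    X = frames.reshape(n_rep, -1)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    S[n_components:] = 0                                 # keep only the dominant temporal components
    denoised = (U * S) @ Vt + mean
    return denoised.reshape(n_rep, ny, nx).max(axis=0)   # temporal maximum intensity projection
```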
Remote sensing applications in evaluation of cadmium pollution effects
NASA Astrophysics Data System (ADS)
Kozma-Bognar, Veronika; Martin, Gizella; Berke, Jozsef
2013-04-01
In line with 21st-century developments in information technology, remote sensing applications open new perspectives for collecting data about our environment. Using images in different spectral bands, we obtain more reliable and accurate information about the condition, processes, and phenomena of the Earth's surface than with traditional aerial imaging technologies (RGB images). The effects of particulate pollution originating from road traffic were analysed by the research team of the Department of Meteorology and Water Management (University of Pannonia, Georgikon Faculty) using visible, near-infrared, and thermal-infrared remote sensing aerial images. The scope of our research was to detect and monitor the effects of heavy metal contamination in the plant-atmosphere system in field experiments. The test area was situated at the Agro-meteorological Research Station in Keszthely (Hungary), where maize crops were treated once a week with cadmium (0.5 M concentration). We studied the effects of cadmium pollution because this element is one of the most common toxic heavy metals in our environment. During two growing seasons (2011, 2012), time-series analyses were carried out based on the remote sensing data and variables collected in parallel field measurements. In each phenological phase of the plants we took aerial images in order to follow the changes in the structure and intensity values of the plot images. The spatial resolution of these images was under 10x10 cm, which allowed a plot-level evaluation. Structural and intensity-based evaluation methods were applied to examine cadmium-polluted and control maize canopy after data pre-processing. Research activities also focused on the influence of irrigation and the comparison of aerial and terrain parameters. In conclusion, the quantification of cadmium pollution effects on maize plants is possible using remote sensing technologies. The adverse effects on maize do not appear immediately: during the growing season, cadmium accumulation in the plants caused slow changes and disorders that in turn altered the structure and intensity values of the images. Consequently, the cadmium-polluted and control plants could be differentiated by the average intensity values. In accordance with our expectations, the average intensity values showed a decreasing tendency under cadmium pollution, and irrigation influenced the effect of cadmium contamination. This research was realized in the frames of TÁMOP 4.2.4. A/1-11-1-2012-0001 "National Excellence Program - Elaborating and operating an inland student and researcher personal support system". The project was subsidized by the European Union and co-financed by the European Social Fund. This article was made partly under the project TÁMOP-4.2.2/B-10/1-2010-0025. This project is supported by the European Union and co-financed by the European Social Fund.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gratama van Andel, H. A. F.; Venema, H. W.; Streekstra, G. J.
For clear visualization of vessels in CT angiography (CTA) images of the head and neck using maximum intensity projection (MIP) or volume rendering (VR) bone has to be removed. In the past we presented a fully automatic method to mask the bone [matched mask bone elimination (MMBE)] for this purpose. A drawback is that vessels adjacent to bone may be partly masked as well. We propose a modification, multiscale MMBE, which reduces this problem by using images at two scales: a higher resolution than usual for image processing and a lower resolution to which the processed images are transformed for use in the diagnostic process. A higher in-plane resolution is obtained by the use of a sharper reconstruction kernel. The out-of-plane resolution is improved by deconvolution or by scanning with narrower collimation. The quality of the mask that is used to remove bone is improved by using images at both scales. After masking, the desired resolution for the normal clinical use of the images is obtained by blurring with Gaussian kernels of appropriate widths. Both methods (multiscale and original) were compared in a phantom study and with clinical CTA data sets. With the multiscale approach the width of the strip of soft tissue adjacent to the bone that is masked can be reduced from 1.0 to 0.2 mm without reducing the quality of the bone removal. The clinical examples show that vessels adjacent to bone are less affected and therefore better visible. Images processed with multiscale MMBE have a slightly higher noise level or slightly reduced resolution compared with images processed by the original method and the reconstruction and processing time is also somewhat increased. Nevertheless, multiscale MMBE offers a way to remove bone automatically from CT angiography images without affecting the integrity of the blood vessels. The overall image quality of MIP or VR images is substantially improved relative to images processed with the original MMBE method.
Removal of bone in CT angiography by multiscale matched mask bone elimination.
Gratama van Andel, H A F; Venema, H W; Streekstra, G J; van Straten, M; Majoie, C B L M; den Heeten, G J; Grimbergen, C A
2007-10-01
For clear visualization of vessels in CT angiography (CTA) images of the head and neck using maximum intensity projection (MIP) or volume rendering (VR) bone has to be removed. In the past we presented a fully automatic method to mask the bone [matched mask bone elimination (MMBE)] for this purpose. A drawback is that vessels adjacent to bone may be partly masked as well. We propose a modification, multiscale MMBE, which reduces this problem by using images at two scales: a higher resolution than usual for image processing and a lower resolution to which the processed images are transformed for use in the diagnostic process. A higher in-plane resolution is obtained by the use of a sharper reconstruction kernel. The out-of-plane resolution is improved by deconvolution or by scanning with narrower collimation. The quality of the mask that is used to remove bone is improved by using images at both scales. After masking, the desired resolution for the normal clinical use of the images is obtained by blurring with Gaussian kernels of appropriate widths. Both methods (multiscale and original) were compared in a phantom study and with clinical CTA data sets. With the multiscale approach the width of the strip of soft tissue adjacent to the bone that is masked can be reduced from 1.0 to 0.2 mm without reducing the quality of the bone removal. The clinical examples show that vessels adjacent to bone are less affected and therefore better visible. Images processed with multiscale MMBE have a slightly higher noise level or slightly reduced resolution compared with images processed by the original method and the reconstruction and processing time is also somewhat increased. Nevertheless, multiscale MMBE offers a way to remove bone automatically from CT angiography images without affecting the integrity of the blood vessels. The overall image quality of MIP or VR images is substantially improved relative to images processed with the original MMBE method.
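A highly simplified sketch of the masking-then-blurring step: bone is segmented at the high-resolution scale, replaced, and the masked volume is blurred back toward the resolution used clinically. The Hounsfield threshold, fill value, and Gaussian width below are placeholders, and the sketch ignores the matched-mask and deconvolution machinery of the actual MMBE method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mask_bone_and_downblur(ct_hires, bone_hu=300.0, fill_hu=0.0, sigma_out=1.5):
    """Very simplified sketch: threshold bone at high resolution, replace it,
    then blur to the lower resolution used clinically (all values are assumptions)."""
    bone_mask = ct_hires > bone_hu                      # crude bone segmentation by HU threshold
    masked = np.where(bone_mask, fill_hu, ct_hires)     # remove bone from the high-resolution volume
    return gaussian_filter(masked, sigma=sigma_out)     # Gaussian blur back to the target resolution
```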
Comparison of approaches for mobile document image analysis using server supported smartphones
NASA Astrophysics Data System (ADS)
Ozarslan, Suleyman; Eren, P. Erhan
2014-03-01
With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcome these limitations is performing resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is the Optical Character Recognition (OCR) process, which is used to extract text from images captured by the mobile phone. In this study, our goal is to compare the in-phone and the remote server processing approaches for mobile document image analysis in order to explore their trade-offs. For the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. On the other hand, in the remote-server approach, the core OCR process runs on the remote server and other processes run on the mobile phone. Results of the experiments show that the remote server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and extra delays, the remote server approach overall outperforms the in-phone approach in terms of the selected speed and correct-recognition metrics, provided the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most favourable settings, the remote server approach performs better than the in-phone approach in terms of speed while maintaining acceptable correct-recognition rates.
Quantification of intensity variations in functional MR images using rotated principal components
NASA Astrophysics Data System (ADS)
Backfrieder, W.; Baumgartner, R.; Sámal, M.; Moser, E.; Bergmann, H.
1996-08-01
In functional MRI (fMRI), the changes in cerebral haemodynamics related to stimulated neural brain activity are measured using standard clinical MR equipment. Small intensity variations in fMRI data have to be detected and distinguished from non-neural effects by careful image analysis. Based on multivariate statistics we describe an algorithm involving oblique rotation of the most significant principal components for an estimation of the temporal and spatial distribution of the stimulated neural activity over the whole image matrix. This algorithm takes advantage of strong local signal variations. A mathematical phantom was designed to generate simulated data for the evaluation of the method. In simulation experiments, the potential of the method to quantify small intensity changes, especially when processing data sets containing multiple sources of signal variations, was demonstrated. In vivo fMRI data collected in both visual and motor stimulation experiments were analysed, showing a proper location of the activated cortical regions within well known neural centres and an accurate extraction of the activation time profile. The suggested method yields accurate absolute quantification of in vivo brain activity without the need of extensive prior knowledge and user interaction.
Ghassemi, Rezwan; Brown, Robert; Narayanan, Sridar; Banwell, Brenda; Nakamura, Kunio; Arnold, Douglas L
2015-01-01
Intensity variation between magnetic resonance images (MRI) hinders comparison of tissue intensity distributions in multicenter MRI studies of brain diseases. The available intensity normalization techniques generally work well in healthy subjects but not in the presence of pathologies that affect tissue intensity. One such disease is multiple sclerosis (MS), which is associated with lesions that prominently affect white matter (WM). To develop a T1-weighted (T1w) image intensity normalization method that is independent of WM intensity, and to quantitatively evaluate its performance. We calculated median intensity of grey matter and intraconal orbital fat on T1w images. Using these two reference tissue intensities we calculated a linear normalization function and applied this to the T1w images to produce normalized T1w (NT1) images. We assessed performance of our normalization method for interscanner, interprotocol, and longitudinal normalization variability, and calculated the utility of the normalization method for lesion analyses in clinical trials. Statistical modeling showed marked decreases in T1w intensity differences after normalization (P < .0001). We developed a WM-independent T1w MRI normalization method and tested its performance. This method is suitable for longitudinal multicenter clinical studies for the assessment of the recovery or progression of disease affecting WM. Copyright © 2014 by the American Society of Neuroimaging.
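The normalization itself is a two-point linear mapping: the median grey-matter and intraconal-orbital-fat intensities are sent to two fixed reference values, and the same affine transform is applied to the whole T1w volume. A minimal sketch, with reference values chosen arbitrarily rather than taken from the paper:

```python
import numpy as np

def normalize_t1w(t1w, gm_mask, fat_mask, gm_ref=1000.0, fat_ref=2000.0):
    """Sketch of a two-reference-tissue linear normalization: map the median
    grey-matter and intraconal-orbital-fat intensities onto fixed reference
    values (gm_ref and fat_ref are arbitrary assumptions, not the paper's values)."""
    gm_med = np.median(t1w[gm_mask])
    fat_med = np.median(t1w[fat_mask])
    scale = (fat_ref - gm_ref) / (fat_med - gm_med)   # linear normalization function
    offset = gm_ref - scale * gm_med
    return scale * t1w + offset
```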
Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.
Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang
2017-03-01
Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive, thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. A semi-variance, γ(h), is defined as half of the expected squared differences of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor.
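For readers wanting a reference (software) computation, the empirical semivariogram of an image window can be written directly from the definition gamma(h) = 0.5 * E[(I(x) - I(x+h))^2]. The sketch below estimates it over horizontal and vertical pixel pairs only, which is a simplification of the isotropic averaging used in practice.

```python
import numpy as np

def semivariogram(window, max_lag):
    """Axis-aligned empirical semivariogram of a 2-D window:
    gamma(h) = 0.5 * E[(I(x) - I(x+h))^2], estimated over horizontal and
    vertical pixel pairs at each integer lag h (a simplification of the
    fully isotropic estimate)."""
    gamma = np.zeros(max_lag)
    for h in range(1, max_lag + 1):
        diffs = np.concatenate([
            (window[:, h:] - window[:, :-h]).ravel(),   # horizontal pairs at lag h
            (window[h:, :] - window[:-h, :]).ravel(),   # vertical pairs at lag h
        ])
        gamma[h - 1] = 0.5 * np.mean(diffs ** 2)
    return gamma

g = semivariogram(np.random.rand(64, 64), max_lag=10)
```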
Semivariogram Analysis of Bone Images Implemented on FPGA Architectures
Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang
2016-01-01
Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive, thereby creating an impediment for introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. A semi-variance, γ(h), is defined as half of the expected squared differences of pixel values between any two data locations with a lag distance of h. Due to the need to examine each pair of pixels in the image or sub-image being processed, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability. FPGAs also tend to operate at relatively modest clock rates measured in a few hundreds of megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units. This allows for high throughput due to concurrent processing of pixel pairs. The current implementation is focused on isotropic semivariogram computations only. The algorithm is benchmarked using VHDL on a Xilinx XUPV5-LX110T development kit, which utilizes the Virtex5 FPGA. Medical image data from DXA scans are utilized for the experiments. Implementation results show that a significant advantage in computational speed is attained by the architectures with respect to implementation on a personal computer with an Intel i7 multi-core processor. PMID:28428829
Functional dissociation of stimulus intensity encoding and predictive coding of pain in the insula
Geuter, Stephan; Boll, Sabrina; Eippert, Falk; Büchel, Christian
2017-01-01
The computational principles by which the brain creates a painful experience from nociception are still unknown. Classic theories suggest that cortical regions reflect either stimulus intensity or additive effects of intensity and expectations. By contrast, predictive coding theories provide a unified framework explaining how perception is shaped by the integration of beliefs about the world with mismatches resulting from the comparison of these beliefs against sensory input. Using functional magnetic resonance imaging during a probabilistic heat pain paradigm, we investigated which computations underlie pain perception. Skin conductance, pupil dilation, and anterior insula responses to cued pain stimuli strictly followed the response patterns hypothesized by the predictive coding model, whereas the posterior insula encoded stimulus intensity. This functional dissociation of pain processing within the insula, together with previously observed alterations in chronic pain, offers a novel interpretation of aberrant pain processing as a disturbed weighting of predictions and prediction errors. DOI: http://dx.doi.org/10.7554/eLife.24770.001 PMID:28524817
Smartphones as image processing systems for prosthetic vision.
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J
2013-01-01
The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capability and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported on portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old devices to recent ones. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.
A method for normalizing pathology images to improve feature extraction for quantitative pathology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tam, Allison; Barker, Jocelyn; Rubin, Daniel
Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The proposed method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
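A rough sketch of the two ICHE stages, intensity centering followed by CLAHE, using scikit-image; the centering here simply shifts the image so its mean (histogram centroid) lands on a common target, which is a simplification of the scaling described in the paper, and the target value and clip limit are assumptions.

```python
import numpy as np
from skimage import exposure

def iche_sketch(img, target_centroid=0.5, clip_limit=0.01):
    """Rough sketch of intensity centering + histogram equalization (ICHE):
    shift the image so its intensity-histogram centroid (mean) sits at a common
    point, then apply contrast-limited adaptive histogram equalization (CLAHE).
    The exact centering/scaling used in the paper may differ."""
    x = img.astype(np.float64)
    x = (x - x.min()) / (x.max() - x.min())                    # scale to [0, 1]
    x = np.clip(x + (target_centroid - x.mean()), 0.0, 1.0)    # center the histogram
    return exposure.equalize_adapthist(x, clip_limit=clip_limit)
```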
Nonlinear spike-and-slab sparse coding for interpretable image encoding.
Shelton, Jacquelyn A; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg
2015-01-01
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process.
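To make the generative assumptions concrete, the sketch below samples image patches from a spike-and-slab prior combined with the point-wise max rule: each dictionary element is switched on by a Bernoulli "spike", given a Gaussian "slab" amplitude, and the patch is the pixel-wise maximum over active components plus noise. The dictionary, sparsity level, and noise scale are arbitrary placeholders; inference (the Gibbs sampler) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
H, D = 8, 64                               # number of dictionary elements, pixels per patch
W = rng.normal(0, 1, size=(H, D))          # placeholder dictionary (e.g. edge-like components)

def sample_patch(pi=0.2, sigma_z=1.0, sigma_noise=0.05):
    """Draw one patch from a spike-and-slab prior with nonlinear max combination."""
    s = rng.random(H) < pi                          # spike: which components are present
    z = rng.normal(0, sigma_z, size=H)              # slab: amplitude of each present component
    active = (s * z)[:, None] * W                   # contribution of each component, shape (H, D)
    y = active.max(axis=0) + rng.normal(0, sigma_noise, size=D)  # pixel-wise max + noise
    return y, s, z

patch, spikes, slabs = sample_patch()
```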
Nonlinear Spike-And-Slab Sparse Coding for Interpretable Image Encoding
Shelton, Jacquelyn A.; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg
2015-01-01
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process. PMID:25954947
NASA Astrophysics Data System (ADS)
Laher, Russ
2012-08-01
Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It has a graphical user interface (GUI) which allows the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. Mouse-clicking on a source in the displayed image draws a circular or elliptical aperture and sky annulus around the source and computes the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs, including image histogram, aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has functions for customizing calculations, including outlier rejection, pixel “picking” and “zapping,” and a selection of source and sky models. The radial-profile-interpolation source model, accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.
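The core computation behind such a tool is straightforward: sum the pixels inside a circular aperture and subtract the local sky level estimated from a surrounding annulus. A minimal sketch, ignoring sub-pixel aperture edges, source models, and the more careful uncertainty terms APT reports:

```python
import numpy as np

def aperture_photometry(img, x0, y0, r_ap=5.0, r_in=8.0, r_out=12.0):
    """Minimal circular-aperture photometry: source flux inside r_ap minus the
    median sky estimated in the annulus [r_in, r_out]. Sub-pixel effects ignored."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)
    ap = r <= r_ap                               # aperture pixels
    ann = (r >= r_in) & (r <= r_out)             # sky annulus pixels
    sky_per_pix = np.median(img[ann])
    sky_sigma = np.std(img[ann])
    n_ap = ap.sum()
    flux = img[ap].sum() - n_ap * sky_per_pix    # background-subtracted source intensity
    err = sky_sigma * np.sqrt(n_ap)              # crude uncertainty from sky scatter only
    return flux, err
```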
The MRI appearances of cancellous allograft bone chips after the excision of bone tumours.
Kang, S; Han, I; Hong, S H; Cho, H S; Kim, W; Kim, H-S
2015-01-01
Cancellous allograft bone chips are commonly used in the reconstruction of defects in bone after removal of benign tumours. We investigated the MRI features of grafted bone chips and their change over time, and compared them with those with recurrent tumour. We retrospectively reviewed 66 post-operative MRIs from 34 patients who had undergone curettage and grafting with cancellous bone chips to fill the defect after excision of a tumour. All grafts showed consistent features at least six months after grafting: homogeneous intermediate or low signal intensities with or without scattered hyperintense foci (speckled hyperintensities) on T1 images; high signal intensities with scattered hypointense foci (speckled hypointensities) on T2 images, and peripheral rim enhancement with or without central heterogeneous enhancements on enhanced images. Incorporation of the graft occurred from the periphery to the centre, and was completed within three years. Recurrent lesions consistently showed the same signal intensities as those of pre-operative MRIs of the primary lesions. There were four misdiagnoses, three of which were chondroid tumours. We identified typical MRI features and clarified the incorporation process of grafted cancellous allograft bone chips. The most important characteristics of recurrent tumours were that they showed the same signal intensities as the primary tumours. It might sometimes be difficult to differentiate grafted cancellous allograft bone chips from a recurrent chondroid tumour. ©2015 The British Editorial Society of Bone & Joint Surgery.
Assessing the scale of tumor heterogeneity by complete hierarchical segmentation of MRI.
Gensheimer, Michael F; Hawkins, Douglas S; Ermoian, Ralph P; Trister, Andrew D
2015-02-07
In many cancers, intratumoral heterogeneity has been found in histology, genetic variation and vascular structure. We developed an algorithm to interrogate different scales of heterogeneity using clinical imaging. We hypothesize that heterogeneity of perfusion at coarse scale may correlate with treatment resistance and propensity for disease recurrence. The algorithm recursively segments the tumor image into increasingly smaller regions. Each dividing line is chosen so as to maximize signal intensity difference between the two regions. This process continues until the tumor has been divided into single voxels, resulting in segments at multiple scales. For each scale, heterogeneity is measured by comparing each segmented region to the adjacent region and calculating the difference in signal intensity histograms. Using digital phantom images, we showed that the algorithm is robust to image artifacts and various tumor shapes. We then measured the primary tumor scales of contrast enhancement heterogeneity in MRI of 18 rhabdomyosarcoma patients. Using Cox proportional hazards regression, we explored the influence of heterogeneity parameters on relapse-free survival. Coarser scale of maximum signal intensity heterogeneity was prognostic of shorter survival (p = 0.05). By contrast, two fractal parameters and three Haralick texture features were not prognostic. In summary, our algorithm produces a biologically motivated segmentation of tumor regions and reports the amount of heterogeneity at various distance scales. If validated on a larger dataset, this prognostic imaging biomarker could be useful to identify patients at higher risk for recurrence and candidates for alternative treatment.
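A simplified, axis-aligned stand-in for the recursive splitting step is sketched below: each rectangular region is cut along the row or column that maximizes the difference in mean intensity between the two halves, and the recursion continues down to single pixels, recording segments at every depth (scale). The paper's dividing lines and heterogeneity measure (histogram differences between adjacent segments) are more general; this is intended only to convey the recursion, and is practical only for small arrays.

```python
import numpy as np

def split_region(img, r0, r1, c0, c1, min_size=1, segments=None, depth=0):
    """Recursively split a rectangular region along the row/column cut that
    maximizes the absolute difference in mean intensity between the two halves
    (a simplified stand-in for the paper's hierarchical segmentation)."""
    if segments is None:
        segments = []
    segments.append((depth, (r0, r1, c0, c1)))           # record this segment at its scale
    if (r1 - r0) <= min_size and (c1 - c0) <= min_size:
        return segments
    best = None
    for r in range(r0 + 1, r1):                          # candidate horizontal cuts
        d = abs(img[r0:r, c0:c1].mean() - img[r:r1, c0:c1].mean())
        if best is None or d > best[0]:
            best = (d, "row", r)
    for c in range(c0 + 1, c1):                          # candidate vertical cuts
        d = abs(img[r0:r1, c0:c].mean() - img[r0:r1, c:c1].mean())
        if best is None or d > best[0]:
            best = (d, "col", c)
    _, axis, pos = best
    if axis == "row":
        split_region(img, r0, pos, c0, c1, min_size, segments, depth + 1)
        split_region(img, pos, r1, c0, c1, min_size, segments, depth + 1)
    else:
        split_region(img, r0, r1, c0, pos, min_size, segments, depth + 1)
        split_region(img, r0, r1, pos, c1, min_size, segments, depth + 1)
    return segments

segs = split_region(np.random.rand(16, 16), 0, 16, 0, 16)   # toy example
```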
Robust image modeling techniques with an image restoration application
NASA Astrophysics Data System (ADS)
Kashyap, Rangasami L.; Eom, Kie-Bum
1988-08-01
A robust parameter-estimation algorithm for a nonsymmetric half-plane (NSHP) autoregressive model, where the driving noise is a mixture of a Gaussian and an outlier process, is presented. The convergence of the estimation algorithm is proved. An algorithm to estimate parameters and original image intensity simultaneously from the impulse-noise-corrupted image, where the model governing the image is not available, is also presented. The robustness of the parameter estimates is demonstrated by simulation. Finally, an algorithm to restore realistic images is presented. The entire image generally does not obey a simple image model, but a small portion (e.g., 8 x 8) of the image is assumed to obey an NSHP model. The original image is divided into windows and the robust estimation algorithm is applied for each window. The restoration algorithm is tested by comparing it to traditional methods on several different images.
Principal curve detection in complicated graph images
NASA Astrophysics Data System (ADS)
Liu, Yuncai; Huang, Thomas S.
2001-09-01
Finding principal curves in an image is an important low-level processing task in computer vision and pattern recognition. Principal curves are those curves in an image that represent boundaries or contours of objects of interest. In general, a principal curve should be smooth with a certain length constraint and allow either smooth or sharp turning. In this paper, we present a method that can efficiently detect principal curves in complicated map images. For a given feature image, obtained from edge detection of an intensity image or a thinning operation on a pictorial map image, the feature image is first converted to a graph representation. In the graph image domain, the operation of principal curve detection is performed to identify useful image features. Shortest-path and directional-deviation schemes are used in our algorithm for principal curve detection, which has proven to be very efficient on real graph images.
A robust close-range photogrammetric target extraction algorithm for size and type variant targets
NASA Astrophysics Data System (ADS)
Nyarko, Kofi; Thomas, Clayton; Torres, Gilbert
2016-05-01
The Photo-G program conducted by Naval Air Systems Command at the Atlantic Test Range in Patuxent River, Maryland, uses photogrammetric analysis of large amounts of real-world imagery to characterize the motion of objects in a 3-D scene. Current approaches involve several independent processes including target acquisition, target identification, 2-D tracking of image features, and 3-D kinematic state estimation. Each process has its own inherent complications and corresponding degrees of both human intervention and computational complexity. One approach being explored for automated target acquisition relies on exploiting the pixel intensity distributions of photogrammetric targets, which tend to be patterns with bimodal intensity distributions. The bimodal distribution partitioning algorithm utilizes this distribution to automatically deconstruct a video frame into regions of interest (ROI) that are merged and expanded to target boundaries, from which ROI centroids are extracted to mark target acquisition points. This process has proved to be scale, position and orientation invariant, as well as fairly insensitive to global uniform intensity disparities.
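As a loose illustration of intensity-based target acquisition, the sketch below uses an Otsu threshold as a stand-in for the bimodal distribution partitioning, labels the resulting regions of interest, and returns their centroids as candidate acquisition points; the bright-target assumption and minimum region size are placeholders, not details from the Photo-G program.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def acquire_targets(frame, min_pixels=20):
    """Sketch of intensity-based target acquisition: split the frame's roughly
    bimodal histogram with an Otsu threshold (a stand-in for the bimodal
    distribution partitioning step), label the resulting regions of interest,
    and return their centroids as candidate acquisition points."""
    mask = frame > threshold_otsu(frame)                 # assumes bright targets
    labels, n = ndi.label(mask)                          # connected regions of interest
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return ndi.center_of_mass(mask, labels, keep)        # list of (row, col) centroids
```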
Maram, Reza; Van Howe, James; Li, Ming; Azaña, José
2014-01-01
Amplification of signal intensity is essential for initiating physical processes, diagnostics, sensing, communications and measurement. During traditional amplification, the signal is amplified by multiplying the signal carriers through an active gain process, requiring the use of an external power source. In addition, the signal is degraded by noise and distortions that typically accompany active gain processes. We show noiseless intensity amplification of repetitive optical pulse waveforms with gain from 2 to ~20 without using active gain. The proposed method uses a dispersion-induced temporal self-imaging (Talbot) effect to redistribute and coherently accumulate energy of the original repetitive waveforms into fewer replica waveforms. In addition, we show how our passive amplifier performs a real-time average of the wave-train to reduce its original noise fluctuation, as well as enhances the extinction ratio of pulses to stand above the noise floor. Our technique is applicable to repetitive waveforms in any spectral region or wave system. PMID:25319207
Synthesizing parallel imaging applications using the CAP (computer-aided parallelization) tool
NASA Astrophysics Data System (ADS)
Gennart, Benoit A.; Mazzariol, Marc; Messerli, Vincent; Hersch, Roger D.
1997-12-01
Imaging applications such as filtering, image transforms and compression/decompression require vast amounts of computing power when applied to large data sets. These applications would potentially benefit from the use of parallel processing. However, dedicated parallel computers are expensive and their processing power per node lags behind that of the most recent commodity components. Furthermore, developing parallel applications remains a difficult task: writing and debugging the application is difficult (deadlocks), programs may not be portable from one parallel architecture to another, and performance often falls short of expectations. In order to facilitate the development of parallel applications, we propose the CAP computer-aided parallelization tool, which enables application programmers to specify, at a high level of abstraction, the flow of data between pipelined-parallel operations. In addition, the CAP tool supports the programmer in developing parallel imaging and storage operations. CAP enables parallel storage access routines and sequential image processing operations to be combined efficiently. This paper shows how processing- and I/O-intensive imaging applications must be implemented to take advantage of parallelism and pipelining between data access and processing. This paper's contribution is (1) to show how such implementations can be compactly specified in CAP, and (2) to demonstrate that CAP-specified applications achieve the performance of custom parallel code. The paper theoretically analyzes the performance of CAP-specified applications and demonstrates the accuracy of the theoretical analysis through experimental measurements.
NASA Technical Reports Server (NTRS)
Jones, Robert E.; Kramarchuk, Ihor; Williams, Wallace D.; Pouch, John J.; Gilbert, Percy
1989-01-01
Computer-controlled thermal-wave microscope developed to investigate III-V compound semiconductor devices and materials. Is nondestructive technique providing information on subsurface thermal features of solid samples. Furthermore, because this is subsurface technique, three-dimensional imaging also possible. Microscope uses intensity-modulated electron beam of modified scanning electron microscope to generate thermal waves in sample. Acoustic waves generated by thermal waves received by transducer and processed in computer to form images displayed on video display of microscope or recorded on magnetic disk.
Applications in Digital Image Processing
ERIC Educational Resources Information Center
Silverman, Jason; Rosen, Gail L.; Essinger, Steve
2013-01-01
Students are immersed in a mathematically intensive, technological world. They engage daily with iPods, HDTVs, and smartphones--technological devices that rely on sophisticated but accessible mathematical ideas. In this article, the authors provide an overview of four lab-type activities that have been used successfully in high school mathematics…
NASA Astrophysics Data System (ADS)
Mehrübeoğlu, Mehrübe; McLauchlan, Lifford
2006-02-01
The goal of this project was to detect the intensity of traffic on a road at different times of the day during daytime. Although the work presented utilized images from a section of a highway, the results of this project are intended for making decisions on the type of intervention necessary on any given road at different times for traffic control, such as installation of traffic signals, duration of red, green and yellow lights at intersections, and assignment of traffic control officers near school zones or other relevant locations. In this project, directional patterns are used to detect and count the number of cars in traffic images over a fixed area of the road to determine local traffic intensity. Directional patterns are chosen because they are simple and common to almost all moving vehicles. Perspective vision effects specific to each camera orientation have to be considered, as they affect the size and direction of patterns to be recognized. In this work, a simple and fast algorithm has been developed based on horizontal directional pattern matching and perspective vision adjustment. The results of the algorithm under various conditions are presented and compared in this paper. Using the developed algorithm, the traffic intensity can accurately be determined on clear days with average-sized cars. The accuracy is reduced on rainy days when the camera lens contains raindrops, when there are very long vehicles, such as trucks or tankers, in the view, and when there is very low light around dusk or dawn.
Affect-laden imagery and risk taking: the mediating role of stress and risk perception.
Traczyk, Jakub; Sobkow, Agata; Zaleskiewicz, Tomasz
2015-01-01
This paper investigates how affect-laden imagery that evokes emotional stress influences risk perception and risk taking in real-life scenarios. In a series of three studies, we instructed participants to imagine the consequences of risky scenarios and then rate the intensity of the experienced stress, perceived risk and their willingness to engage in risky behavior. Study 1 showed that people spontaneously imagine negative rather than positive risk consequences, which are directly related to their lower willingness to take risk. Moreover, this relationship was mediated by feelings of stress and risk perception. Study 2 replicated and extended these findings by showing that imagining negative risk consequences evokes psychophysiological stress responses observed in elevated blood pressure. Finally, in Study 3, we once again demonstrated that a higher intensity of mental images of negative risk consequences, as measured by enhanced brain activity in the parieto-occipital lobes, leads to a lower propensity to take risk. Furthermore, individual differences in creating vivid and intense negative images of risk consequences moderated the strength of the relationship between risk perception and risk taking. Participants who created more vivid and intense images of negative risk consequences paid less attention to the assessments of riskiness in rating their likelihood to take risk. To summarize, we showed that feelings of emotional stress and perceived riskiness mediate the relationship between mental imagery and risk taking, whereas individual differences in abilities to create vivid mental images may influence the degree to which more cognitive risk assessments are used in the risk-taking process.
Affect-Laden Imagery and Risk Taking: The Mediating Role of Stress and Risk Perception
2015-01-01
This paper investigates how affect-laden imagery that evokes emotional stress influences risk perception and risk taking in real-life scenarios. In a series of three studies, we instructed participants to imagine the consequences of risky scenarios and then rate the intensity of the experienced stress, perceived risk and their willingness to engage in risky behavior. Study 1 showed that people spontaneously imagine negative rather than positive risk consequences, which are directly related to their lower willingness to take risk. Moreover, this relationship was mediated by feelings of stress and risk perception. Study 2 replicated and extended these findings by showing that imagining negative risk consequences evokes psychophysiological stress responses observed in elevated blood pressure. Finally, in Study 3, we once again demonstrated that a higher intensity of mental images of negative risk consequences, as measured by enhanced brain activity in the parieto-occipital lobes, leads to a lower propensity to take risk. Furthermore, individual differences in creating vivid and intense negative images of risk consequences moderated the strength of the relationship between risk perception and risk taking. Participants who created more vivid and intense images of negative risk consequences paid less attention to the assessments of riskiness in rating their likelihood to take risk. To summarize, we showed that feelings of emotional stress and perceived riskiness mediate the relationship between mental imagery and risk taking, whereas individual differences in abilities to create vivid mental images may influence the degree to which more cognitive risk assessments are used in the risk-taking process. PMID:25816238
Hanyuda, Hitoshi; Otonari-Yamamoto, Mika; Imoto, Kenichi; Sakamoto, Junichiro; Kodama, Sayaka; Kamio, Takashi; Sano, Tsukasa
2013-01-01
The aim of this study was to elucidate possible elements in minimal amounts of fluid (MF) in the temporomandibular joint by analyzing signal intensities in T2-weighted and fluid-attenuated inversion recovery (FLAIR) magnetic resonance (MR) images. Fifteen joints (15 patients) with MF were subjected to MR imaging to obtain T2-weighted and FLAIR images. Regions of interest were placed on MF, cerebrospinal fluid (CSF), and gray matter (GM), and their signal intensities were measured on both images. The signal intensity ratio (SIR), calculated relative to the signal intensity of GM, was compared between MF and CSF on T2-weighted and FLAIR images. The average SIR of MF was lower than that of CSF on T2-weighted images, whereas it was higher on FLAIR images. The average suppression ratio of the signal intensity was lower for MF (24.1%) than for CSF (71.4%). MF may contain elements such as protein that are capable of inducing a shortened T1 relaxation time on MR images. Copyright © 2013 Elsevier Inc. All rights reserved.
Embedding intensity image into a binary hologram with strong noise resistant capability
NASA Astrophysics Data System (ADS)
Zhuang, Zhaoyong; Jiao, Shuming; Zou, Wenbin; Li, Xia
2017-11-01
A digital hologram can be employed as a host image for image watermarking applications to protect information security. Past research demonstrates that a gray level intensity image can be embedded into a binary Fresnel hologram by the error diffusion method or the bit truncation coding method. However, the fidelity of the watermark image retrieved from the binary hologram is generally not satisfactory, especially when the binary hologram is contaminated with noise. To address this problem, we propose a JPEG-BCH encoding method in this paper. First, we employ the JPEG standard to compress the intensity image into a binary bit stream. Next, we encode the binary bit stream with a BCH code to obtain error correction capability. Finally, the JPEG-BCH code is embedded into the binary hologram. In this way, the intensity image can be retrieved with high fidelity by a BCH-JPEG decoder even if the binary hologram suffers from serious noise contamination. Numerical simulation results show that the image quality of the intensity image retrieved with our proposed method is superior to that of state-of-the-art work reported previously.
Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa; Yasuno, Yoshiaki
2017-01-01
Jones matrix-based polarization sensitive optical coherence tomography (JM-OCT) simultaneously measures optical intensity, birefringence, degree of polarization uniformity, and OCT angiography. The statistics of the optical features in a local region, such as the local mean of the OCT intensity, are frequently used for image processing and the quantitative analysis of JM-OCT. Conventionally, local statistics have been computed with fixed-size rectangular kernels. However, this results in a trade-off between image sharpness and statistical accuracy. We introduce a superpixel method to JM-OCT for generating the flexible kernels of local statistics. A superpixel is a cluster of image pixels that is formed by the pixels’ spatial and signal value proximities. An algorithm for superpixel generation specialized for JM-OCT and its optimization methods are presented in this paper. The spatial proximity is in two-dimensional cross-sectional space and the signal values are the four optical features. Hence, the superpixel method is a six-dimensional clustering technique for JM-OCT pixels. The performance of the JM-OCT superpixels and its optimization methods are evaluated in detail using JM-OCT datasets of posterior eyes. The superpixels were found to well preserve tissue structures, such as layer structures, sclera, vessels, and retinal pigment epithelium. And hence, they are more suitable for local statistics kernels than conventional uniform rectangular kernels. PMID:29082073
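The idea of clustering pixels jointly on spatial position and the four optical features can be sketched with an ordinary k-means in the combined 6-D space; this ignores the SLIC-style locality constraints and the JM-OCT-specific optimizations of the paper, and the spatial weighting and cluster count below are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def jmoct_superpixels(features, n_segments=200, spatial_weight=0.5):
    """Sketch of superpixel generation as joint spatial/feature clustering.
    `features` has shape (H, W, 4): intensity, birefringence, DOPU, angiography,
    all assumed pre-scaled to comparable ranges. A plain k-means in the 6-D
    (y, x, four features) space stands in for the paper's specialized algorithm."""
    H, W, _ = features.shape
    yy, xx = np.mgrid[0:H, 0:W]
    coords = np.stack([yy, xx], axis=-1).astype(np.float64)
    coords *= spatial_weight / max(H, W)                 # weight spatial vs. signal-value proximity
    X = np.concatenate([coords, features.astype(np.float64)], axis=-1).reshape(-1, 6)
    labels = KMeans(n_clusters=n_segments, n_init=4, random_state=0).fit_predict(X)
    return labels.reshape(H, W)                          # superpixel label map
```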
Image enhancement of real-time television to benefit the visually impaired.
Wolffsohn, James S; Mukhopadhyay, Ditipriya; Rubinstein, Martin
2007-09-01
To examine the use of real-time, generic edge detection, image processing techniques to enhance the television viewing of the visually impaired. Prospective, clinical experimental study. One hundred and two sequential visually impaired participants (average age 73.8 +/- 14.8 years; 59% female) at a single center optimized a dynamic television image with respect to edge detection filter (Prewitt, Sobel, or the two combined), color (red, green, blue, or white), and intensity (one to 15 times) of the overlaid edges. They then rated the original television footage compared with a black-and-white image displaying the edges detected and the original television image with the detected edges overlaid in the chosen color and at the intensity selected. Footage of news, an advertisement, and the end of program credits were subjectively assessed in a random order. A Prewitt filter was preferred (44%) compared with the Sobel filter (27%) or a combination of the two (28%). Green and white were equally popular for displaying the detected edges (32%), with blue (22%) and red (14%) less so. The average preferred edge intensity was 3.5 +/- 1.7 times. The image-enhanced television was significantly preferred to the original (P < .001), which in turn was preferred to viewing the detected edges alone (P < .001) for each of the footage clips. Preference was not dependent on the condition causing visual impairment. Seventy percent were definitely willing to buy a set-top box that could achieve these effects for a reasonable price. Simple generic edge detection image enhancement options can be performed on television in real-time and significantly enhance the viewing of the visually impaired.
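The enhancement itself can be reproduced approximately with standard filters: compute a Prewitt or Sobel edge map, scale it by the chosen intensity factor, and blend it into the original frame in the selected colour. A minimal per-frame sketch; the study's exact scaling and overlay rules are not specified here, so the blending and default values are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def enhance_frame(gray, rgb, filt="prewitt", color=(0, 255, 0), gain=3.5):
    """Sketch of the edge-overlay enhancement: detect edges with a Prewitt or
    Sobel filter, scale them by the chosen intensity gain, and overlay them in
    the chosen colour on the original RGB frame."""
    op = ndi.prewitt if filt == "prewitt" else ndi.sobel
    gx = op(gray.astype(float), axis=1)
    gy = op(gray.astype(float), axis=0)
    edges = np.hypot(gx, gy)
    edges = np.clip(gain * edges / edges.max(), 0.0, 1.0)     # normalised edge strength
    out = rgb.astype(float)
    for ch in range(3):                                       # blend coloured edges over the frame
        out[..., ch] = (1 - edges) * out[..., ch] + edges * color[ch]
    return out.astype(np.uint8)
```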
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riegel, Adam C. B.A.; Chang, Joe Y.; Vedam, Sastry S.
2009-02-01
Purpose: To determine whether cine computed tomography (CT) can serve as an alternative to four-dimensional (4D)-CT by providing tumor motion information and producing equivalent target volumes when used to contour in radiotherapy planning without a respiratory surrogate. Methods and Materials: Cine CT images from a commercial CT scanner were used to form maximum intensity projection and respiratory-averaged CT image sets. These image sets then were used together to define the targets for radiotherapy. Phantoms oscillating under irregular motion were used to assess the differences between contouring using cine CT and 4D-CT. We also retrospectively reviewed the image sets for 26 patients (27 lesions) at our institution who had undergone stereotactic radiotherapy for Stage I non-small-cell lung cancer. The patients were included if the tumor motion was >1 cm. The lesions were first contoured using maximum intensity projection and respiratory-averaged CT image sets processed from cine CT and then with 4D-CT maximum intensity projection and 10-phase image sets. The mean ratios of the volume magnitude were compared with intraobserver variation, the mean centroid shifts were calculated, and the volume overlap was assessed with the normalized Dice similarity coefficient index. Results: The phantom studies demonstrated that cine CT captured a greater extent of irregular tumor motion than did 4D-CT, producing a larger tumor volume. The patient studies demonstrated that the gross tumor defined using cine CT imaging was similar to, or slightly larger than, that defined using 4D-CT. Conclusion: The results of our study have shown that cine CT is a promising alternative to 4D-CT for stereotactic radiotherapy planning.
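Forming the two derived image sets from a cine acquisition is, at its core, a reduction over the time axis: the maximum intensity projection takes the voxel-wise maximum across the cine frames, and the respiratory-averaged image takes the voxel-wise mean. A toy sketch with placeholder data:

```python
import numpy as np

# cine acquisition at one couch position: shape (n_time, n_rows, n_cols), toy data
cine = np.random.rand(20, 128, 128).astype(np.float32)

mip_image = cine.max(axis=0)    # maximum intensity projection over the breathing cycle
avg_image = cine.mean(axis=0)   # respiratory-averaged CT
```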
Establishment of Imaging Spectroscopy of Nuclear Gamma-Rays based on Geometrical Optics
Tanimori, Toru; Mizumura, Yoshitaka; Takada, Atsushi; Miyamoto, Shohei; Takemura, Taito; Kishimoto, Tetsuro; Komura, Shotaro; Kubo, Hidetoshi; Kurosawa, Shunsuke; Matsuoka, Yoshihiro; Miuchi, Kentaro; Mizumoto, Tetsuya; Nakamasu, Yuma; Nakamura, Kiseki; Parker, Joseph D.; Sawano, Tatsuya; Sonoda, Shinya; Tomono, Dai; Yoshikawa, Kei
2017-01-01
Since the discovery of nuclear gamma-rays, their imaging has been limited to pseudo imaging, such as Compton Camera (CC) and coded mask. Pseudo imaging does not keep physical information (intensity, or brightness in Optics) along a ray, and thus is capable of no more than qualitative imaging of bright objects. To attain quantitative imaging, cameras that realize geometrical optics are essential; for nuclear MeV gammas, this is only possible via complete reconstruction of the Compton process. Recently we have revealed that “Electron Tracking Compton Camera” (ETCC) provides a well-defined Point Spread Function (PSF). The information of an incoming gamma is kept along a ray with the PSF and that is equivalent to geometrical optics. Here we present an imaging-spectroscopic measurement with the ETCC. Our results highlight the intrinsic difficulty with CCs in performing accurate imaging, and show that the ETCC surmounts this problem. The imaging capability also helps the ETCC suppress the noise level dramatically by ~3 orders of magnitude without a shielding structure. Furthermore, full reconstruction of the Compton process with the ETCC provides spectra free of Compton edges. These results mark the first proper imaging of nuclear gammas based on the genuine geometrical optics. PMID:28155870
Establishment of Imaging Spectroscopy of Nuclear Gamma-Rays based on Geometrical Optics.
Tanimori, Toru; Mizumura, Yoshitaka; Takada, Atsushi; Miyamoto, Shohei; Takemura, Taito; Kishimoto, Tetsuro; Komura, Shotaro; Kubo, Hidetoshi; Kurosawa, Shunsuke; Matsuoka, Yoshihiro; Miuchi, Kentaro; Mizumoto, Tetsuya; Nakamasu, Yuma; Nakamura, Kiseki; Parker, Joseph D; Sawano, Tatsuya; Sonoda, Shinya; Tomono, Dai; Yoshikawa, Kei
2017-02-03
Since the discovery of nuclear gamma-rays, their imaging has been limited to pseudo imaging, such as Compton Camera (CC) and coded mask. Pseudo imaging does not keep physical information (intensity, or brightness in Optics) along a ray, and thus is capable of no more than qualitative imaging of bright objects. To attain quantitative imaging, cameras that realize geometrical optics are essential; for nuclear MeV gammas, this is only possible via complete reconstruction of the Compton process. Recently we have revealed that "Electron Tracking Compton Camera" (ETCC) provides a well-defined Point Spread Function (PSF). The information of an incoming gamma is kept along a ray with the PSF and that is equivalent to geometrical optics. Here we present an imaging-spectroscopic measurement with the ETCC. Our results highlight the intrinsic difficulty with CCs in performing accurate imaging, and show that the ETCC surmounts this problem. The imaging capability also helps the ETCC suppress the noise level dramatically by ~3 orders of magnitude without a shielding structure. Furthermore, full reconstruction of the Compton process with the ETCC provides spectra free of Compton edges. These results mark the first proper imaging of nuclear gammas based on the genuine geometrical optics.
Deng, Hang; Fitts, Jeffrey P.; Peters, Catherine A.
2016-02-01
This paper presents a new method—the Technique of Iterative Local Thresholding (TILT)—for processing 3D X-ray computed tomography (xCT) images for visualization and quantification of rock fractures. The TILT method includes the following advancements. First, custom masks are generated by a fracture-dilation procedure, which significantly amplifies the fracture signal on the intensity histogram used for local thresholding. Second, TILT is particularly well suited for fracture characterization in granular rocks because the multi-scale Hessian fracture (MHF) filter has been incorporated to distinguish fractures from pores in the rock matrix. Third, TILT wraps the thresholding and fracture isolation steps in an optimized iterative routine for binary segmentation, minimizing human intervention and enabling automated processing of large 3D datasets. As an illustrative example, we applied TILT to 3D xCT images of reacted and unreacted fractured limestone cores. Other segmentation methods were also applied to provide insights regarding variability in image processing. The results show that TILT significantly enhanced separability of grayscale intensities, outperformed the other methods in automation, and was successful in isolating fractures from the porous rock matrix. Because the other methods are more likely to misclassify fracture edges as void and/or have limited capacity in distinguishing fractures from pores, those methods estimated larger fracture volumes (up to 80 %), surface areas (up to 60 %), and roughness (up to a factor of 2). In conclusion, these differences in fracture geometry would lead to significant disparities in hydraulic permeability predictions, as determined by 2D flow simulations.
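A stripped-down sketch of the iterative local-thresholding loop: the current fracture estimate is dilated to form a local mask, a threshold is recomputed from the voxels inside that mask (so the fracture peak dominates the histogram), and the fracture estimate is updated until it stops changing. The Otsu threshold, dilation radius, and the assumption that fractures are the dark phase are placeholders, and the MHF filtering step is omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu

def tilt_sketch(img, n_iter=10, dilation_radius=5):
    """Simplified iterative local thresholding for fracture segmentation."""
    mask = img < threshold_otsu(img)                     # initial global guess: dark fracture voxels
    struct = ndi.generate_binary_structure(img.ndim, 1)
    for _ in range(n_iter):
        # dilate the current fracture estimate to build a local mask; the dilated
        # neighbourhood amplifies the fracture signal in the local histogram
        local = ndi.binary_dilation(mask, structure=struct, iterations=dilation_radius)
        t = threshold_otsu(img[local])                   # threshold from local voxels only
        new_mask = local & (img < t)
        if np.array_equal(new_mask, mask):               # converged
            break
        mask = new_mask
    return mask
```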
Exploiting range imagery: techniques and applications
NASA Astrophysics Data System (ADS)
Armbruster, Walter
2009-07-01
Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.
Deformable registration of CT and cone-beam CT with local intensity matching.
Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon
2017-02-07
Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently and is more accurate than existing algorithms, and also computationally efficient.
Deformable registration of CT and cone-beam CT with local intensity matching
NASA Astrophysics Data System (ADS)
Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon
2017-02-01
Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently and is more accurate than existing algorithms, and also computationally efficient.
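A minimal numpy sketch of the slice-wise histogram matching idea described above (the array names and the quantile-based mapping are assumptions; the actual method alternates this correction with deformable registration until the result converges):

    import numpy as np

    def match_slice_histogram(cbct_slice, ct_slice, n_quantiles=256):
        # Map CBCT intensities onto the CT intensity distribution of the same slice
        q = np.linspace(0.0, 1.0, n_quantiles)
        cbct_q = np.quantile(cbct_slice, q)
        ct_q = np.quantile(ct_slice, q)
        return np.interp(cbct_slice, cbct_q, ct_q)

    def correct_cbct(cbct_vol, ct_vol):
        # Slice-by-slice correction along the axial (first) axis
        return np.stack([match_slice_histogram(c, r)
                         for c, r in zip(cbct_vol, ct_vol)])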
The Coronal Monsoon: Thermal Nonequilibrium Revealed by Periodic Coronal Rain
NASA Astrophysics Data System (ADS)
Auchère, Frédéric; Froment, Clara; Soubrié, Elie; Antolin, Patrick; Oliver, Ramon; Pelouze, Gabriel
2018-02-01
We report on the discovery of periodic coronal rain in an off-limb sequence of Solar Dynamics Observatory/Atmospheric Imaging Assembly images. The showers are co-spatial and in phase with periodic (6.6 hr) intensity pulsations of coronal loops of the sort described by Auchère et al. and Froment et al. These new observations make possible a unified description of both phenomena. Coronal rain and periodic intensity pulsations of loops are two manifestations of the same physical process: evaporation/condensation cycles resulting from a state of thermal nonequilibrium. The fluctuations around coronal temperatures produce the intensity pulsations of loops, and rain falls along their legs if thermal runaway cools the periodic condensations down and below transition-region temperatures. This scenario is in line with the predictions of numerical models of quasi-steadily and footpoint heated loops. The presence of coronal rain—albeit non-periodic—in several other structures within the studied field of view implies that this type of heating is at play on a large scale.
Lithographic image simulation for the 21st century with 19th-century tools
NASA Astrophysics Data System (ADS)
Gordon, Ronald L.; Rosenbluth, Alan E.
2004-01-01
Simulation of lithographic processes in semiconductor manufacturing has gone from a crude learning tool 20 years ago to a critical part of yield enhancement strategy today. Although many disparate models, championed by equally disparate communities, exist to describe various photoresist development phenomena, these communities would all agree that the one piece of the simulation picture that can, and must, be computed accurately is the image intensity in the photoresist. The imaging of a photomask onto a thin-film stack is one of the only phenomena in the lithographic process that is described fully by well-known, definitive physical laws. Although many approximations are made in the derivation of the Fourier transform relations between the mask object, the pupil, and the image, these and their impacts are well-understood and need little further investigation. The imaging process in optical lithography is modeled as a partially-coherent, Kohler illumination system. As Hopkins has shown, we can separate the computation into 2 pieces: one that takes information about the illumination source, the projection lens pupil, the resist stack, and the mask size or pitch, and the other that only needs the details of the mask structure. As the latter piece of the calculation can be expressed as a fast Fourier transform, it is the first piece that dominates. This piece involves computation of a potentially large number of numbers called Transmission Cross-Coefficients (TCCs), which are correlations of the pupil function weighted with the illumination intensity distribution. The advantage of performing the image calculations this way is that the computation of these TCCs represents an up-front cost, not to be repeated if one is only interested in changing the mask features, which is the case in Model-Based Optical Proximity Correction (MBOPC). The down side, however, is that the number of these expensive double integrals that must be performed increases as the square of the mask unit cell area; this number can cause even the fastest computers to balk if one needs to study medium- or long-range effects. One can reduce this computational burden by approximating with a smaller area, but accuracy is usually a concern, especially when building a model that will purportedly represent a manufacturing process. This work will review the current methodologies used to simulate the intensity distribution in air above the resist and address the above problems. More to the point, a methodology has been developed to eliminate the expensive numerical integrations in the TCC calculations, as the resulting integrals in many cases of interest can be either evaluated analytically, or replaced by analytical functions accurate to within machine precision. With the burden of computing these numbers lightened, more accurate representations of the image field can be realized, and better overall models are then possible.
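For reference, the Hopkins decomposition alluded to above is commonly written in the following form (standard notation, not taken from the paper), with J the effective source intensity distribution, K the pupil function, and \hat{M} the mask spectrum:

    I(\mathbf{x}) = \iint \mathrm{TCC}(\mathbf{f}',\mathbf{f}'')\,\hat{M}(\mathbf{f}')\,\hat{M}^{*}(\mathbf{f}'')\,e^{-2\pi i(\mathbf{f}'-\mathbf{f}'')\cdot\mathbf{x}}\,d\mathbf{f}'\,d\mathbf{f}''

    \mathrm{TCC}(\mathbf{f}',\mathbf{f}'') = \int J(\mathbf{f})\,K(\mathbf{f}+\mathbf{f}')\,K^{*}(\mathbf{f}+\mathbf{f}'')\,d\mathbf{f}

The TCC integrals are the source- and pupil-dependent "up-front cost" discussed above; only the mask-dependent factor changes during MBOPC iterations.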
NASA Astrophysics Data System (ADS)
Hoffmann, A.; Zimmermann, F.; Scharr, H.; Krömker, S.; Schulz, C.
2005-01-01
A laser-based technique for measuring instantaneous three-dimensional species concentration distributions in turbulent flows is presented. The laser beam from a single laser is formed into two crossed light sheets that illuminate the area of interest. The laser-induced fluorescence (LIF) signal emitted from excited species within both planes is detected with a single camera via a mirror arrangement. Image processing enables the reconstruction of the three-dimensional data set in close proximity to the cutting line of the two light sheets. Three-dimensional intensity gradients are computed and compared to the two-dimensional projections obtained from the two directly observed planes. Volume visualization by digital image processing gives unique insight into the three-dimensional structures within the turbulent processes. We apply this technique to measurements of toluene-LIF in a turbulent, non-reactive mixing process of toluene and air and to hydroxyl (OH) LIF in a turbulent methane-air flame upon excitation at 248 nm with a tunable KrF excimer laser.
Robust active contour via additive local and global intensity information based on local entropy
NASA Astrophysics Data System (ADS)
Yuan, Shuai; Monkam, Patrice; Zhang, Feng; Luan, Fangjun; Koomson, Ben Alfred
2018-01-01
Active contour-based image segmentation can be a very challenging task due to many factors, such as high intensity inhomogeneity, presence of noise, complex shapes, objects with weak boundaries, and dependence on the position of the initial contour. We propose a level set-based active contour method to segment complex-shaped objects from images corrupted by noise and high intensity inhomogeneity. The energy function of the proposed method results from combining global intensity information and local intensity information with some regularization factors. First, the global intensity term is based on a formulation that considers two intensity values for each region instead of one, which outperforms the well-known Chan-Vese model in delineating the image information. Second, the local intensity term is formulated based on local entropy, computed from the distribution of the image brightness using the generalized Gaussian distribution as the kernel function. It can therefore accurately handle high intensity inhomogeneity and noise. Moreover, our model does not depend on the position of the initial curve. Finally, extensive experiments using various images have been carried out to illustrate the performance of the proposed method.
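For orientation, the global term that the proposed model generalizes is the classical Chan-Vese fitting energy (standard form; the paper's two-value-per-region variant and the entropy-based local term are not reproduced here):

    E_{\mathrm{CV}}(c_1, c_2, \phi) = \lambda_1 \int_{\Omega} |I(\mathbf{x}) - c_1|^2 \, H(\phi(\mathbf{x}))\, d\mathbf{x} + \lambda_2 \int_{\Omega} |I(\mathbf{x}) - c_2|^2 \, \bigl(1 - H(\phi(\mathbf{x}))\bigr)\, d\mathbf{x}

where H is the Heaviside function, \phi the level set function, and c_1, c_2 the mean intensities inside and outside the contour.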
Computerized image analysis for acetic acid induced intraepithelial lesions
NASA Astrophysics Data System (ADS)
Li, Wenjing; Ferris, Daron G.; Lieberman, Rich W.
2008-03-01
Cervical Intraepithelial Neoplasia (CIN) exhibits certain morphologic features that can be identified during a visual inspection exam. Immature and dysplastic cervical squamous epithelium turns white after application of acetic acid during the exam. The whitening process occurs visually over several minutes and subjectively discriminates between dysplastic and normal tissue. Digital imaging technologies allow us to assist the physician in analyzing the acetic acid-induced lesions (acetowhite regions) in a fully automatic way. This paper reports a study designed to measure multiple parameters of the acetowhitening process from two images captured with a digital colposcope. One image is captured before the acetic acid application, and the other is captured after the acetic acid application. The spatial change of the acetowhitening is extracted using color and texture information in the post acetic acid image; the temporal change is extracted from the intensity and color changes between the post acetic acid and pre acetic acid images with an automatic alignment. The imaging and data analysis system has been evaluated with a total of 99 human subjects and demonstrates its potential for screening underserved women where access to skilled colposcopists is limited.
WE-G-209-01: Digital Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schueler, B.
Digital radiography, CT, PET, and MR are complicated imaging modalities which are composed of many hardware and software components. These components work together in a highly coordinated chain of events with the intent to produce high quality images. Acquisition, processing and reconstruction of data must occur in a precise way for optimum image quality to be achieved. Any error or unexpected event in the entire process can produce unwanted pixel intensities in the final images which may contribute to visible image artifacts. The diagnostic imaging physicist is uniquely qualified to investigate and contribute to resolution of image artifacts. This course will teach the participant to identify common artifacts found clinically in digital radiography, CT, PET, and MR, to determine the causes of artifacts, and to make recommendations for how to resolve artifacts. Learning Objectives: Identify common artifacts found clinically in digital radiography, CT, PET and MR. Determine causes of various clinical artifacts from digital radiography, CT, PET and MR. Describe how to resolve various clinical artifacts from digital radiography, CT, PET and MR.
Applications of LANDSAT data to the integrated economic development of Mindoro, Phillipines
NASA Technical Reports Server (NTRS)
Wagner, T. W.; Fernandez, J. C.
1977-01-01
LANDSAT data is seen as providing essential up-to-date resource information for the planning process. LANDSAT data of Mindoro Island in the Philippines was processed to provide thematic maps showing patterns of agriculture, forest cover, terrain, wetlands and water turbidity. A hybrid approach using both supervised and unsupervised classification techniques resulted in 30 different scene classes which were subsequently color-coded and mapped at a scale of 1:250,000. In addition, intensive image analysis is being carried out to evaluate the images. The images, maps, and aerial statistics are being used to provide data to seven technical departments in planning the economic development of Mindoro. Multispectral aircraft imagery was collected to complement the application of LANDSAT data and validate the classification results.
NASA Astrophysics Data System (ADS)
Picard de Muller, Gaël; Ait-Belkacem, Rima; Bonnel, David; Longuespée, Rémi; Stauber, Jonathan
2017-12-01
Mass spectrometry imaging datasets are mostly analyzed in terms of average intensity in regions of interest. However, biological tissues have different morphologies with several sizes, shapes, and structures. The important biological information, contained in this highly heterogeneous cellular organization, could be hidden by analyzing the average intensities. Finding an analytical process for morphology would help to find such information, describe tissue models, and support the identification of biomarkers. This study describes an informatics approach for the extraction and identification of mass spectrometry image features and its application to sample analysis and modeling. For the proof of concept, two different tissue types (healthy kidney and CT-26 xenograft tumor tissues) were imaged and analyzed. A mouse kidney model and a tumor model were generated using morphometric information (number of objects and total surface). The morphometric information was used to identify m/z values that have a heterogeneous distribution. This seems to be a worthwhile pursuit, as clonal heterogeneity in a tumor is of clinical relevance. This study provides a new approach to find biomarkers or to support tissue classification with more information.
Sifting Through SDO's AIA Cosmic Ray Hits to Find Treasure
NASA Astrophysics Data System (ADS)
Kirk, M. S.; Thompson, B. J.; Viall, N. M.; Young, P. R.
2017-12-01
The Solar Dynamics Observatory's Atmospheric Imaging Assembly (SDO AIA) has revolutionized solar imaging with its high temporal and spatial resolution, unprecedented spatial and temporal coverage, and seven EUV channels. Automated algorithms routinely clean these images to remove cosmic ray intensity spikes as a part of its preprocessing algorithm. We take a novel approach to survey the entire set of AIA "spike" data to identify and group compact brightenings across the entire SDO mission. The AIA team applies a de-spiking algorithm to remove magnetospheric particle impacts on the CCD cameras, but it has been found that compact, intense solar brightenings are often removed as well. We use the spike database to mine the data and form statistics on compact solar brightenings without having to process large volumes of full-disk AIA data. There are approximately 3 trillion "spiked pixels" removed from images over the mission to date. We estimate that 0.001% of those are of solar origin and removed by mistake, giving us a pre-segmented dataset of 30 million events. We explore the implications of these statistics and the physical qualities of the "spikes" of solar origin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kundu, B.K.; Stolin, A.V.; Pole, J.
Our group is developing a scanner that combines x-ray, single gamma, and optical imaging on the same rotating gantry. Two functional modalities (SPECT and optical) are included because they have different strengths and weaknesses in terms of spatial and temporal decay lengths in the context of in vivo imaging, and because of the recent advent of multiple reporter gene constructs. The effect of attenuation by biological tissue on the detected intensity of the emitted signal was measured for both gamma and optical imaging. Attenuation by biological tissue was quantified both for the bioluminescent emission of luciferase and for the emission light of the near-infrared fluorophore cyanine 5.5, using a fixed excitation light intensity. Experiments were performed to test the feasibility of using either single gamma or x-ray imaging to make depth-dependent corrections to the measured optical signal. Our results suggest that significant improvements in quantitation of optical emission are possible using straightforward correction techniques based on information from other modalities. Development of an integrated scanner in which data from each modality are obtained with the animal in a common configuration will greatly simplify this process.
Multi-mode of Four and Six Wave Parametric Amplified Process
NASA Astrophysics Data System (ADS)
Zhu, Dayu; Yang, Yiheng; Zhang, Da; Liu, Ruizhou; Ma, Danmeng; Li, Changbiao; Zhang, Yanpeng
2017-03-01
Multiple quantum modes in correlated fields are essential for future quantum information processing and quantum computing. Here we report the generation of multi-mode phenomena through parametrically amplified four- and six-wave mixing processes in a rubidium atomic ensemble. The multi-mode properties in both the frequency and spatial domains are studied. On one hand, in the frequency domain, the multi-mode behavior is dominantly controlled by the intensity of the external dressing effect, or by the nonlinear phase shift through the internal dressing effect; on the other hand, in the spatial domain, the multi-mode behavior is demonstrated visually, directly from the images of the biphoton fields. In addition, the correlation of the two output fields is demonstrated in both domains. Our approach supports efficient applications for scalable quantum correlated imaging.
Multi-mode of Four and Six Wave Parametric Amplified Process.
Zhu, Dayu; Yang, Yiheng; Zhang, Da; Liu, Ruizhou; Ma, Danmeng; Li, Changbiao; Zhang, Yanpeng
2017-03-03
Multiple quantum modes in correlated fields are essential for future quantum information processing and quantum computing. Here we report the generation of multi-mode phenomena through parametrically amplified four- and six-wave mixing processes in a rubidium atomic ensemble. The multi-mode properties in both the frequency and spatial domains are studied. On one hand, in the frequency domain, the multi-mode behavior is dominantly controlled by the intensity of the external dressing effect, or by the nonlinear phase shift through the internal dressing effect; on the other hand, in the spatial domain, the multi-mode behavior is demonstrated visually, directly from the images of the biphoton fields. In addition, the correlation of the two output fields is demonstrated in both domains. Our approach supports efficient applications for scalable quantum correlated imaging.
Integration of optical imaging with a small animal irradiator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weersink, Robert A., E-mail: robert.weersink@rmp.uhn.on.ca; Ansell, Steve; Wang, An
Purpose: The authors describe the integration of optical imaging with a targeted small animal irradiator device, focusing on design, instrumentation, 2D to 3D image registration, 2D targeting, and the accuracy of recovering and mapping the optical signal to a 3D surface generated from the cone-beam computed tomography (CBCT) imaging. The integration of optical imaging will improve targeting of the radiation treatment and offer longitudinal tracking of tumor response of small animal models treated using the system. Methods: The existing image-guided small animal irradiator consists of a variable kilovolt (peak) x-ray tube mounted opposite an aSi flat panel detector, both mounted on a c-arm gantry. The tube is used for both CBCT imaging and targeted irradiation. The optical component employs a CCD camera perpendicular to the x-ray treatment/imaging axis with a computer controlled filter for spectral decomposition. Multiple optical images can be acquired at any angle as the gantry rotates. The optical to CBCT registration, which uses a standard pinhole camera model, was modeled and tested using phantoms with markers visible in both optical and CBCT images. Optically guided 2D targeting in the anterior/posterior direction was tested on an anthropomorphic mouse phantom with embedded light sources. The accuracy of the mapping of optical signal to the CBCT surface was tested using the same mouse phantom. A surface mesh of the phantom was generated based on the CBCT image and optical intensities projected onto the surface. The measured surface intensity was compared to the calculated surface intensity for a point source at the actual source position. The point-source position was also optimized to provide the closest match between measured and calculated intensities, and the distance between the optimized and actual source positions was then calculated. This process was repeated for multiple wavelengths and sources. Results: The optical to CBCT registration error was 0.8 mm. Two-dimensional targeting of a light source in the mouse phantom based on optical imaging along the anterior/posterior direction was accurate to 0.55 mm. The mean square residual error in the normalized measured projected surface intensities versus the calculated normalized intensities ranged between 0.0016 and 0.006. Optimizing the position reduced this error to between 0.00016 and 0.0004, with distances ranging between 0.7 and 1 mm between the actual and calculated source positions. Conclusions: The integration of optical imaging on an existing small animal irradiation platform has been accomplished. A targeting accuracy of 1 mm can be achieved in rigid, homogeneous phantoms. The combination of optical imaging with a CBCT image-guided small animal irradiator offers the potential to deliver functionally targeted dose distributions, as well as monitor spatial and temporal functional changes that occur with radiation therapy.
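The optical-to-CBCT registration above rests on the standard pinhole camera model. A generic numpy sketch of that projection (the calibration matrices K, R, t here are placeholders, not the system's actual calibration):

    import numpy as np

    def project_pinhole(X_world, K, R, t):
        # x ~ K [R | t] X : map 3D surface points (n, 3) to 2D pixel coordinates (n, 2)
        Xc = (R @ X_world.T).T + t        # world -> camera frame
        uvw = (K @ Xc.T).T                # apply camera intrinsics
        return uvw[:, :2] / uvw[:, 2:3]   # perspective divide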
3D imaging LADAR with linear array devices: laser, detector and ROIC
NASA Astrophysics Data System (ADS)
Kameyama, Shumpei; Imaki, Masaharu; Tamagawa, Yasuhisa; Akino, Yosuke; Hirai, Akihito; Ishimura, Eitaro; Hirano, Yoshihito
2009-07-01
This paper introduces the recent development of 3D imaging LADAR (LAser Detection And Ranging) at Mitsubishi Electric Corporation. The system consists of in-house-made key devices, all in linear array form: the laser, the detector, and the ROIC (Read-Out Integrated Circuit). The laser transmitter is a high-power, compact planar waveguide array laser at a wavelength of 1.5 micron. The detector array consists of low-excess-noise Avalanche Photo Diodes (APD) using an InAlAs multiplication layer. The analog ROIC array, which is fabricated in a SiGe-BiCMOS process, includes the Trans-Impedance Amplifiers (TIA), the peak intensity detectors, the Time-Of-Flight (TOF) detectors, and the multiplexers for read-out. A notable feature of this device is its ability to detect small signals, achieved by optimizing the peak intensity detection circuit. By combining these devices with a one-dimensional fast scanner, real-time 3D range images can be obtained. After describing the key devices, some 3D imaging results obtained with single-element versions of the key devices are demonstrated. Imaging with the developed array devices is planned in the near future.
Interferometric synthetic aperture radar (InSAR)—its past, present and future
Lu, Zhong; Kwoun, Oh-Ig; Rykhus, R.P.
2007-01-01
Very simply, interferometric synthetic aperture radar (InSAR) involves the use of two or more synthetic aperture radar (SAR) images of the same area to extract landscape topography and its deformation patterns. A SAR system transmits electromagnetic waves at a wavelength that can range from a few millimeters to tens of centimeters and therefore can operate during day and night under all-weather conditions. Using SAR processing technique (Curlander and McDonough, 1991), both the intensity and phase of the reflected (or backscattered) radar signal of each ground resolution element (a few meters to tens of meters) can be calculated in the form of a complex-valued SAR image that represents the reflectivity of the ground surface. The amplitude or intensity of the SAR image is determined primarily by terrain slope, surface roughness, and dielectric constants, whereas the phase of the SAR image is determined primarily by the distance between the satellite antenna and the ground targets. InSAR imaging utilizes the interaction of electromagnetic waves, referred to as interference, to measure precise distances between the satellite antenna and ground resolution elements to derive landscape topography and its subtle change in elevation.
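To make the phase-to-distance relationship explicit (a standard repeat-pass relation, not quoted from the article): for a target at slant range R, the two-way propagation contributes a phase of roughly 4\pi R / \lambda, so the interferometric phase between two acquisitions is, up to sign convention,

    \Delta\phi = \frac{4\pi}{\lambda}\,(R_1 - R_2)

which is why range changes at the millimetre to centimetre level become measurable at typical radar wavelengths.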
Sub-pixel localisation of passive micro-coil fiducial markers in interventional MRI.
Rea, Marc; McRobbie, Donald; Elhawary, Haytham; Tse, Zion T H; Lamperth, Michael; Young, Ian
2009-04-01
Electromechanical devices enable increased accuracy in surgical procedures, and the recent development of MRI-compatible mechatronics permits the use of MRI for real-time image guidance. Integrated imaging of resonant micro-coil fiducials provides an accurate method of tracking devices in a scanner with increased flexibility compared to gradient tracking. Here we report on the ability of ten different image-processing algorithms to track micro-coil fiducials with sub-pixel accuracy. Five algorithms: maximum pixel, barycentric weighting, linear interpolation, quadratic fitting and Gaussian fitting were applied both directly to the pixel intensity matrix and to the cross-correlation matrix obtained by 2D convolution with a reference image. Using images of a 3 mm fiducial marker and a pixel size of 1.1 mm, intensity linear interpolation, which calculates the position of the fiducial centre by interpolating the pixel data to find the fiducial edges, was found to give the best performance for minimal computing power; a maximum error of 0.22 mm was observed in fiducial localisation for displacements up to 40 mm. The inherent standard deviation of fiducial localisation was 0.04 mm. This work enables greater accuracy to be achieved in passive fiducial tracking.
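As an example of one of the simpler estimators listed above, a barycentric (centre-of-mass) sub-pixel peak estimate can be sketched as follows (the 5x5 window size is an assumption; the study also evaluated interpolation, quadratic-fit and Gaussian-fit variants, both directly on the intensity matrix and on the cross-correlation matrix):

    import numpy as np

    def barycentric_peak(img, r=2):
        # Intensity-weighted centroid in a (2r+1)x(2r+1) window around the brightest pixel
        y0, x0 = np.unravel_index(np.argmax(img), img.shape)
        ys = slice(max(y0 - r, 0), y0 + r + 1)
        xs = slice(max(x0 - r, 0), x0 + r + 1)
        win = img[ys, xs].astype(float)
        w = win - win.min()                      # suppress the background offset
        yy, xx = np.mgrid[0:win.shape[0], 0:win.shape[1]]
        cy = (yy * w).sum() / w.sum()
        cx = (xx * w).sum() / w.sum()
        return ys.start + cy, xs.start + cx      # sub-pixel fiducial centre (row, col)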
A hybrid multimodal non-rigid registration of MR images based on diffeomorphic demons.
Lu, Huanxiang; Cattin, Philippe C; Reyes, Mauricio
2010-01-01
In this paper we present a novel hybrid approach for multimodal medical image registration based on diffeomorphic demons. Diffeomorphic demons have proven to be a robust and efficient approach to intensity-based image registration. A very recent extension even allows the use of mutual information (MI) as a similarity measure to register multimodal images. However, due to the intensity correspondence uncertainty existing in some anatomical parts, it is difficult for a purely intensity-based algorithm to solve the registration problem. Therefore, we propose to combine the resulting transformations from both intensity-based and landmark-based methods for multimodal non-rigid registration based on diffeomorphic demons. Several experiments on different types of MR images were conducted, for which we show that a better anatomical correspondence between the images can be obtained using the hybrid approach than using either intensity information or landmarks alone.
Plenoptic Ophthalmoscopy: A Novel Imaging Technique.
Adam, Murtaza K; Aenchbacher, Weston; Kurzweg, Timothy; Hsu, Jason
2016-11-01
This prospective retinal imaging case series was designed to establish feasibility of plenoptic ophthalmoscopy (PO), a novel mydriatic fundus imaging technique. A custom variable intensity LED array light source adapter was created for the Lytro Gen1 light-field camera (Lytro, Mountain View, CA). Initial PO testing was performed on a model eye and rabbit fundi. PO image acquisition was then performed on dilated human subjects with a variety of retinal pathology and images were subjected to computational enhancement. The Lytro Gen1 light-field camera with custom LED array captured fundus images of eyes with diabetic retinopathy, age-related macular degeneration, retinal detachment, and other diagnoses. Post-acquisition computational processing allowed for refocusing and perspective shifting of retinal PO images, resulting in improved image quality. The application of PO to image the ocular fundus is feasible. Additional studies are needed to determine its potential clinical utility. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:1038-1043.]. Copyright 2016, SLACK Incorporated.
NASA Astrophysics Data System (ADS)
Zhang, Rongxiao; Jee, Kyung-Wook; Cascio, Ethan; Sharp, Gregory C.; Flanz, Jacob B.; Lu, Hsiao-Ming
2018-01-01
Proton radiography, which images patients with the same type of particles as those with which they are to be treated, is a promising approach to image guidance and water equivalent path length (WEPL) verification in proton radiation therapy. We have shown recently that proton radiographs could be obtained by measuring time-resolved dose rate functions (DRFs) using an x-ray amorphous silicon flat panel. The WEPL values were derived solely from the root-mean-square (RMS) of DRFs, while the intensity information in the DRFs was filtered out. In this work, we explored the use of such intensity information for potential improvement in WEPL accuracy and imaging quality. Three WEPL derivation methods based on, respectively, the RMS only, the intensity only, and the intensity-weighted RMS were tested and compared in terms of the quality of obtained radiograph images and the accuracy of WEPL values. A Gammex CT calibration phantom containing inserts made of various tissue substitute materials with independently measured relative stopping powers (RSP) was used to assess the imaging performances. Improved image quality with enhanced interfaces was achieved while preserving the accuracy by using intensity information in the calibration. Other objects, including an anthropomorphic head phantom, a proton therapy range compensator, a frozen lamb’s head and an ‘image quality phantom’ were also imaged. Both the RMS only and the intensity-weighted RMS methods derived RSPs within ± 1% for most of the Gammex phantom inserts, with a mean absolute percentage error of 0.66% for all inserts. In the case of the insert with a titanium rod, the method based on RMS completely failed, whereas that based on the intensity-weighted RMS was qualitatively valid. The use of intensity greatly enhanced the interfaces between different materials in the obtained WEPL images, suggesting the potential for image guidance in areas such as patient positioning and tumor tracking by proton radiography.
Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel
2015-01-01
Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured intensity and the calculated intensity and the constraint function for an infinite light source model. Once the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we truncate the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch this range into the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face or reflective surface model. Experimental results using the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data
Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.
2005-01-01
The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
Salvaging and Conserving Water Damaged Photographic Materials
NASA Astrophysics Data System (ADS)
Suzuki, Ryuji
Degradation of water-damaged photographic materials is discussed; the most vulnerable elements are the gelatin layers and the silver image. A simple and inexpensive chemical treatment is proposed, consisting of a bath containing a gelatin-protecting biocide and a silver-image-protecting agent. These ingredients were selected from among those used in the manufacturing of silver halide photographic emulsions or processing chemicals. Experiments confirmed that this treatment significantly reduced oxidative attacks on the silver image and bacterial degradation of the gelatin layers. The treated material was also stable under an intense light-fading test. A method of hardening gelatin to suppress swelling is also discussed.
Resolution enhancement using simultaneous couple illumination
NASA Astrophysics Data System (ADS)
Hussain, Anwar; Martínez Fuentes, José Luis
2016-10-01
A super-resolution technique based on structured illumination created by a liquid crystal on silicon spatial light modulator (LCOS-SLM) is presented. Single and simultaneous pairs of tilted beams are generated to illuminate a target object. Resolution enhancement of an optical 4f system is demonstrated by using numerical simulations. The resulting intensity images are recorded by a charge-coupled device (CCD) and stored in computer memory for further processing. One-dimensional enhancement can be performed with only 15 images, while complete two-dimensional improvement requires 153 different images. The resolution of the optical system is extended three times compared to the band-limited system.
Determining Object Orientation from a Single Image Using Multiple Information Sources.
1984-06-01
Location of the image ellipse is accomplished by exploiting knowledge about object boundaries and image intensity gradients. The orientation information from each of these three methods is combined using a "plausibility" function.
Robust generative asymmetric GMM for brain MR image segmentation.
Ji, Zexuan; Xia, Yong; Zheng, Yuhui
2017-11-01
Accurate segmentation of brain tissues from magnetic resonance (MR) images based on unsupervised statistical models such as the Gaussian mixture model (GMM) has been widely studied during the last decades. However, most GMM-based segmentation methods suffer from limited accuracy due to the influence of noise and intensity inhomogeneity in brain MR images. To further improve the accuracy of brain MR image segmentation, this paper presents a Robust Generative Asymmetric GMM (RGAGMM) for simultaneous brain MR image segmentation and intensity inhomogeneity correction. First, we develop an asymmetric distribution to fit the data shapes, and thus construct a spatially constrained asymmetric model. Then, we incorporate two pseudo-likelihood quantities and bias field estimation into the model's log-likelihood, aiming to exploit the within-cluster and between-cluster neighboring priors and to alleviate the impact of intensity inhomogeneity, respectively. Finally, an expectation maximization algorithm is derived to iteratively maximize an approximation of the data log-likelihood function, overcoming the intensity inhomogeneity in the image and segmenting the brain MR images simultaneously. To demonstrate the performance of the proposed algorithm, we first applied it to a synthetic brain MR image to show intermediate results and the estimated distribution. A second group of experiments was carried out on clinical 3T brain MR images that contain severe intensity inhomogeneity and noise. We then quantitatively compared our algorithm to state-of-the-art segmentation approaches using the Dice coefficient (DC) on benchmark images obtained from IBSR and BrainWeb with different levels of noise and intensity inhomogeneity. The comparison results on various brain MR images demonstrate the superior performance of the proposed algorithm in dealing with noise and intensity inhomogeneity. The RGAGMM algorithm can simply and efficiently incorporate spatial constraints into an EM framework to simultaneously segment brain MR images and estimate the intensity inhomogeneity. The proposed algorithm is flexible enough to fit the data shapes, can simultaneously overcome the influence of noise and intensity inhomogeneity, and hence improves segmentation accuracy by over 5% compared with several state-of-the-art algorithms. Copyright © 2017 Elsevier B.V. All rights reserved.
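For contrast with the proposed RGAGMM, a plain (symmetric, spatially unconstrained) GMM baseline for intensity-based tissue labelling can be written in a few lines (illustrative only; it ignores the bias field, the asymmetric distribution, and the neighborhood priors the paper adds):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_baseline_segmentation(mr_volume, n_tissues=3):
        # Fit a Gaussian mixture to voxel intensities and assign each voxel to a component
        x = mr_volume.reshape(-1, 1).astype(float)
        gmm = GaussianMixture(n_components=n_tissues, covariance_type="full", random_state=0)
        labels = gmm.fit_predict(x)
        return labels.reshape(mr_volume.shape)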
Intensity ratio to improve black hole assessment in multiple sclerosis.
Adusumilli, Gautam; Trinkaus, Kathryn; Sun, Peng; Lancia, Samantha; Viox, Jeffrey D; Wen, Jie; Naismith, Robert T; Cross, Anne H
2018-01-01
Improved imaging methods are critical to assess neurodegeneration and remyelination in multiple sclerosis. Chronic hypointensities observed on T1-weighted brain MRI, "persistent black holes," reflect severe focal tissue damage. Present measures consist of determining persistent black hole numbers and volumes, but do not quantify the severity of individual lesions. Our aim was to develop a method to differentiate black and gray holes and estimate the severity of individual multiple sclerosis lesions using standard magnetic resonance imaging. Thirty-eight multiple sclerosis patients contributed images. Intensities of lesions on T1-weighted scans were assessed relative to cerebrospinal fluid intensity using commercial software. Magnetization transfer imaging, diffusion tensor imaging and clinical testing were performed to assess associations with T1w intensity-based measures. Intensity-based assessments of T1w hypointensities were reproducible and achieved > 90% concordance with expert rater determinations of "black" and "gray" holes. Intensity ratio values correlated with magnetization transfer ratios (R = 0.473) and diffusion tensor imaging metrics (R values ranging from 0.283 to -0.531) that have been associated with demyelination and axon loss. Intensity ratio values incorporated into T1w hypointensity volumes correlated with clinical measures of cognition. This method of determining the degree of hypointensity within multiple sclerosis lesions can add information to conventional imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
Colour image segmentation using unsupervised clustering technique for acute leukemia images
NASA Astrophysics Data System (ADS)
Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.
2015-05-01
Colour image segmentation has become more popular in computer vision due to its importance in most medical analysis tasks. This paper proposes a comparison between different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models used to segment acute leukemia images. First, partial contrast stretching is applied to the leukemia images to improve the visual appearance of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models in order to segment the blast cells from the red blood cells and background regions in the leukemia image. The different colour components of the RGB and HSI colour models have been analyzed in order to identify the colour component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model is the best at segmenting the nuclei of the blast cells in acute leukemia images compared with the other colour components of the RGB and HSI colour models.
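A rough sketch of the saturation-based clustering step (using ordinary k-means as a stand-in for the moving k-means algorithm, and the usual HSI saturation formula; contrast stretching and post-processing are omitted):

    import numpy as np
    from sklearn.cluster import KMeans

    def hsi_saturation(rgb):
        # HSI saturation: S = 1 - 3*min(R,G,B) / (R+G+B)
        rgb = rgb.astype(float)
        return 1.0 - 3.0 * rgb.min(axis=-1) / (rgb.sum(axis=-1) + 1e-8)

    def cluster_on_saturation(rgb, k=3):
        s = hsi_saturation(rgb).reshape(-1, 1)
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(s)
        return labels.reshape(rgb.shape[:2])   # cluster map: blast nuclei vs. other regions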
Muscle segmentation in time series images of Drosophila metamorphosis.
Yadav, Kuleesha; Lin, Feng; Wasser, Martin
2015-01-01
In order to study genes associated with muscular disorders, we characterize the phenotypic changes in Drosophila muscle cells during metamorphosis caused by genetic perturbations. We collect in vivo images of muscle fibers during remodeling of larval to adult muscles. In this paper, we focus on the new image processing pipeline designed to quantify the changes in shape and size of muscles. We propose a new two-step approach to muscle segmentation in time series images. First, we implement a watershed algorithm to divide the image into edge-preserving regions, and then we classify these regions into muscle and non-muscle classes on the basis of shape and intensity. The advantage of our method is two-fold: first, better results are obtained because the classification of regions is constrained by the shape of the muscle cell from the previous time point; and second, minimal user intervention results in faster processing times. The segmentation results are used to compare the changes in cell size between controls and reduction of the autophagy-related gene Atg9 during Drosophila metamorphosis.
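The second step of the two-step approach, classifying watershed regions by intensity and size, could be sketched as follows (the thresholds are hypothetical, and the shape constraint from the previous time point is omitted):

    import numpy as np

    def classify_regions(labels, image, min_mean_intensity, min_area):
        # labels: integer watershed label image (0 = background); returns a muscle mask
        muscle = np.zeros(labels.shape, dtype=bool)
        for lab in np.unique(labels):
            if lab == 0:
                continue
            region = labels == lab
            if region.sum() >= min_area and image[region].mean() >= min_mean_intensity:
                muscle |= region
        return muscle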
Jian, Yifan; Xu, Jing; Gradowski, Martin A.; Bonora, Stefano; Zawadzki, Robert J.; Sarunic, Marinko V.
2014-01-01
We present wavefront sensorless adaptive optics (WSAO) Fourier domain optical coherence tomography (FD-OCT) for in vivo small animal retinal imaging. WSAO is attractive especially for mouse retinal imaging because it simplifies optical design and eliminates the need for wavefront sensing, which is difficult in the small animal eye. GPU accelerated processing of the OCT data permitted real-time extraction of image quality metrics (intensity) for arbitrarily selected retinal layers to be optimized. Modal control of a commercially available segmented deformable mirror (IrisAO Inc.) provided rapid convergence using a sequential search algorithm. Image quality improvements with WSAO OCT are presented for both pigmented and albino mouse retinal data, acquired in vivo. PMID:24575347
Observation of FeGe skyrmions by electron phase microscopy with hole-free phase plate
NASA Astrophysics Data System (ADS)
Kotani, Atsuhiro; Harada, Ken; Malac, Marek; Salomons, Mark; Hayashida, Misa; Mori, Shigeo
2018-05-01
We report the application of a hole-free phase plate (HFPP) to the imaging of magnetic skyrmion lattices. Using HFPP imaging, we observed skyrmions in FeGe and succeeded in obtaining phase contrast images that reflect the sample magnetization distribution. According to the Aharonov-Bohm effect, the electron phase is shifted by the magnetic flux due to the sample magnetization. Differential processing of the intensity in an HFPP image allows us to successfully reconstruct the magnetization map of the skyrmion lattice. Furthermore, the calculated phase shift due to the magnetization of the thin film was consistent with that measured by an electron holography experiment, which demonstrates that HFPP imaging can be utilized for the analysis of magnetic fields and electrostatic potential distributions at the nanoscale.
Smooth 2D manifold extraction from 3D image stack
Shihavuddin, Asm; Basu, Sreetama; Rexhepaj, Elton; Delestro, Felipe; Menezes, Nikita; Sigoillot, Séverine M; Del Nery, Elaine; Selimi, Fekrije; Spassky, Nathalie; Genovesio, Auguste
2017-01-01
Three-dimensional fluorescence microscopy followed by image processing is routinely used to study biological objects at various scales such as cells and tissue. However, maximum intensity projection, the most broadly used rendering tool, extracts a discontinuous layer of voxels, inadvertently creating important artifacts and possibly misleading interpretation. Here we propose smooth manifold extraction, an algorithm that produces a continuous focused 2D extraction from a 3D volume, hence preserving local spatial relationships. We demonstrate the usefulness of our approach by applying it to various biological applications using confocal and wide-field microscopy 3D image stacks. We provide a parameter-free ImageJ/Fiji plugin that allows 2D visualization and interpretation of 3D image stacks with maximum accuracy. PMID:28561033
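The artifact described above is easy to see in code: a maximum intensity projection keeps, for each (x, y), only the brightest voxel along z, so the implicitly selected depth map can jump arbitrarily between neighbouring pixels (a minimal numpy illustration, not the smooth manifold extraction algorithm itself):

    import numpy as np

    def mip_and_depth(stack, axis=0):
        # stack: 3D image (z, y, x)
        mip = stack.max(axis=axis)       # rendered 2D image
        depth = stack.argmax(axis=axis)  # the (possibly discontinuous) layer the MIP samples
        return mip, depth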
Cardiac phase detection in intravascular ultrasound images
NASA Astrophysics Data System (ADS)
Matsumoto, Monica M. S.; Lemos, Pedro Alves; Yoneyama, Takashi; Furuie, Sergio Shiguemi
2008-03-01
Image gating applies to imaging modalities that involve quasi-periodically moving organs. During intravascular ultrasound (IVUS) examination, therefore, there is cardiac movement interference. In this paper, we aim to obtain gated IVUS images based on the images themselves. This would allow the reconstruction of 3D coronaries with temporal accuracy for any cardiac phase, which is an advantage over ECG-gated acquisition, which captures only a single phase. It is also important for retrospective studies, as existing IVUS databases contain no additional reference signals (ECG). From the images, we calculated signals based on average intensity (AI) and, from consecutive frames, average intensity difference (AID), cross-correlation coefficient (CC), and mutual information (MI). The process includes a wavelet-based filtering step and ascending zero-crossing detection in order to obtain the phase information. First, we tested 90 simulated sequences with 1025 frames each. Our method achieved more than 95.0% true positives and less than 2.3% false positives for all signals. We then tested the method on a real examination, with 897 frames and ECG as the gold standard, achieving 97.4% true positives (CC and MI) and 2.5% false positives. In future work, the methodology should be tested on a wider range of IVUS examinations.
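A compact numpy sketch of the frame-derived signals listed above (AI, AID and CC; the MI signal, the wavelet filtering and the zero-crossing detection are omitted, and the array layout is an assumption):

    import numpy as np

    def gating_signals(frames):
        # frames: IVUS sequence of shape (n_frames, height, width)
        f = frames.reshape(len(frames), -1).astype(float)
        ai = f.mean(axis=1)                               # average intensity per frame
        aid = np.abs(np.diff(f, axis=0)).mean(axis=1)     # average intensity difference
        a = f[:-1] - f[:-1].mean(axis=1, keepdims=True)
        b = f[1:] - f[1:].mean(axis=1, keepdims=True)
        cc = (a * b).sum(axis=1) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12
        )                                                 # correlation of consecutive frames
        return ai, aid, cc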
A statistical pixel intensity model for segmentation of confocal laser scanning microscopy images.
Calapez, Alexandre; Rosa, Agostinho
2010-09-01
Confocal laser scanning microscopy (CLSM) has been widely used in the life sciences for the characterization of cell processes because it allows the recording of the distribution of fluorescence-tagged macromolecules on a section of the living cell. It is in fact the cornerstone of many molecular transport and interaction quantification techniques where the identification of regions of interest through image segmentation is usually a required step. In many situations, because of the complexity of the recorded cellular structures or because of the amounts of data involved, image segmentation either is too difficult or inefficient to be done by hand and automated segmentation procedures have to be considered. Given the nature of CLSM images, statistical segmentation methodologies appear as natural candidates. In this work we propose a model to be used for statistical unsupervised CLSM image segmentation. The model is derived from the CLSM image formation mechanics and its performance is compared to the existing alternatives. Results show that it provides a much better description of the data on classes characterized by their mean intensity, making it suitable not only for segmentation methodologies with known number of classes but also for use with schemes aiming at the estimation of the number of classes through the application of cluster selection criteria.
Novel image processing method study for a label-free optical biosensor
NASA Astrophysics Data System (ADS)
Yang, Chenhao; Wei, Li'an; Yang, Rusong; Feng, Ying
2015-10-01
Optical biosensors are generally divided into labeled and label-free types; the former mainly comprise fluorescence-labeled and radioactive-labeled methods, with the fluorescence-labeled method being the more mature in application. The main image processing methods for fluorescence-labeled biosensors include smoothing filters, artificial gridding, and constant thresholding. Since some fluorescent molecules may influence the biological reaction, label-free methods have become the main development direction of optical biosensors. The use of a wider field of view and a larger angle of incidence in the light path, which can effectively improve the sensitivity of a label-free biosensor, also makes image processing more difficult than for fluorescence-labeled biosensors. Otsu's method, widely applied in machine vision, chooses the threshold that minimizes the intra-class variance of the thresholded black and white pixels. As a global threshold segmentation, however, its performance is limited when the intensity distribution of the image is asymmetrical. In order to handle the irregularity of light intensity on the transducer, we improved the algorithm. In this paper, we present a new image processing algorithm for a reflectance modulation biosensor platform, which mainly comprises a sliding normalization algorithm for image rectification and an improved Otsu's method for image segmentation, in order to recognize target areas automatically. Finally, we used an adaptive gridding method to extract the target parameters for analysis. These methods can improve the efficiency of image processing, reduce human intervention, enhance the reliability of experiments, and lay the foundation for high-throughput label-free optical biosensors.
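For reference, the classical (global) Otsu threshold that the paper improves upon can be computed from the histogram as follows (a baseline sketch; the sliding normalization and the authors' modifications are not reproduced):

    import numpy as np

    def otsu_threshold(img, nbins=256):
        # Choose the threshold maximizing between-class variance
        # (equivalent to minimizing intra-class variance).
        hist, edges = np.histogram(img.ravel(), bins=nbins)
        p = hist.astype(float) / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                 # class-0 probability up to each bin
        w1 = 1.0 - w0
        mu = np.cumsum(p * centers)       # cumulative mean
        mu_t = mu[-1]                     # global mean
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b2 = (mu_t * w0 - mu) ** 2 / (w0 * w1)
        return centers[np.nanargmax(sigma_b2)]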
Object extraction method for image synthesis
NASA Astrophysics Data System (ADS)
Inoue, Seiki
1991-11-01
The extraction of component objects from images is fundamentally important for image synthesis. In TV program production, one useful method is the Video-Matte technique for specifying the necessary boundary of an object. This, however, involves some intricate and tedious manual processes. A new method proposed in this paper can reduce the needed level of operator skill and simplify object extraction. The object is automatically extracted by just a simple drawing of a thick boundary line. The basic principle involves a thinning of the thick boundary line binary image using the edge intensity of the original image. This method has many practical advantages, including the simplicity of specifying an object, the high accuracy of the thinned-out boundary line, its ease of application to moving images, and the lack of any need for adjustment.
In vivo multiphoton imaging of bile duct ligation
NASA Astrophysics Data System (ADS)
Liu, Yuan; Li, Feng-Chieh; Chen, Hsiao-Chin; Chang, Po-shou; Yang, Shu-Mei; Lee, Hsuan-Shu; Dong, Chen-Yuan
2008-02-01
Bile is the exocrine secretion of the liver and is synthesized by hepatocytes. It is drained into the duodenum for digestion or into the gallbladder for storage. Bile duct obstruction is a blockage in the tubes that carry bile to the gallbladder and small intestine. Bile duct ligation results in changes of bile acids in serum, liver, urine, and feces [1, 2]. In this work, we demonstrate a novel technique to image this pathological condition by using a newly developed in vivo imaging system, which includes multiphoton microscopy and an intravital hepatic imaging chamber. The images we acquired demonstrate the uptake and processing of 6-CFDA in hepatocytes and the excretion of CF into the bile canaliculi. In addition to imaging, we can also measure the kinetics of the green fluorescence intensity.
Acousto-optic laser projection systems for displaying TV information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gulyaev, Yu V; Kazaryan, M A; Mokrushin, Yu M
2015-04-30
This review addresses various approaches to television projection imaging on large screens using lasers. Results are presented of theoretical and experimental studies of an acousto-optic projection system operating on the principle of projecting an image of an entire amplitude-modulated television line in a single laser pulse. We consider characteristic features of image formation in such a system and the requirements for its individual components. Particular attention is paid to nonlinear distortions of the image signal, which show up most severely at low modulation signal frequencies. We discuss the feasibility of improving the process efficiency and image quality using acousto-optic modulators and pulsed lasers. Real-time projectors with pulsed line imaging can be used for controlling high-intensity laser radiation. (review)
Joint level-set and spatio-temporal motion detection for cell segmentation.
Boukari, Fatima; Makrogiannis, Sokratis
2016-08-10
Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4 % compared to the nonlinear spatio-temporal diffusion method. Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.
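Since the Dice similarity coefficient is the headline metric of this evaluation, a small reference implementation may be useful; it is a generic sketch of the standard formula, not code from the study.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total > 0 else 1.0
```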
Model based estimation of image depth and displacement
NASA Technical Reports Server (NTRS)
Damour, Kevin T.
1992-01-01
Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. With the reliance of such systems on visual image characteristics, a need to overcome image degradations, such as random image-capture noise, motion, and quantization effects, is clearly necessary. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements on the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields allows a means of incorporating the temporal information into the restoration process. A summary on the conditions that indicate which type of filtering should be applied to a field is provided.
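The core idea of recursively filtering a noisy displacement estimate can be seen in miniature in the sketch below: a one-dimensional, constant-value Kalman filter applied to a noisy displacement trace. It is a deliberately simplified stand-in under assumed noise variances; the ROMKF used in the paper operates on 2-D and 3-D fields with an adaptive, edge-preserving model.

```python
import numpy as np

def kalman_smooth(measurements, process_var=1e-3, meas_var=1e-1):
    """Recursively filter a noisy 1-D displacement trace."""
    x, p = float(measurements[0]), 1.0     # state estimate and its variance
    filtered = []
    for z in measurements:
        p += process_var                   # predict: uncertainty grows between samples
        k = p / (p + meas_var)             # Kalman gain
        x += k * (z - x)                   # update with the new measurement
        p *= (1.0 - k)
        filtered.append(x)
    return np.array(filtered)
```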
Kostopoulos, Spiros A; Asvestas, Pantelis A; Kalatzis, Ioannis K; Sakellaropoulos, George C; Sakkis, Theofilos H; Cavouras, Dionisis A; Glotsos, Dimitris T
2017-09-01
The aim of this study was to propose features that evaluate pictorial differences between melanocytic nevus (mole) and melanoma lesions by computer-based analysis of plain photography images, and to design a cross-platform, tunable decision support system to discriminate moles from melanomas with high accuracy in different publicly available image databases. Digital plain photography images of verified mole and melanoma lesions were downloaded (i) from Edinburgh University Hospital, UK (Dermofit, 330 moles/70 melanomas, under signed agreement), (ii) from 5 different centers (Multicenter, 63 moles/25 melanomas, publicly available), and (iii) from Groningen University, Netherlands (Groningen, 100 moles/70 melanomas, publicly available). Images were processed for outlining the lesion border and isolating the lesion from the surrounding background. Fourteen features were generated from each lesion, evaluating texture (4), structure (5), shape (4) and color (1). Features were subjected to statistical analysis to determine differences in pictorial properties between moles and melanomas. The Probabilistic Neural Network (PNN) classifier, exhaustive-search feature selection, the leave-one-out (LOO) method, and the external cross-validation (ECV) method were used to design the PR-system for discriminating between moles and melanomas. Statistical analysis revealed that melanomas, as compared to moles, were of lower intensity, had less homogeneous surfaces, had more dark pixels with intensities spanning larger spectra of gray values, contained more objects of different sizes and gray levels, had more asymmetrical shapes and irregular outlines, had abrupt intensity transitions from lesion to background tissue, and had more distinct colors. The PR-system designed with the Dermofit images scored, using the ECV, 94.1% overall accuracy, 82.9% sensitivity and 96.5% specificity on the Dermofit images; 92.0%, 88%, and 93.7% on the Multicenter images; and 76.2%, 73.9%, and 77.8% on the Groningen images, respectively. The PR-system as designed on the Dermofit image database could be fine-tuned to classify, with good accuracy, plain-photography mole/melanoma images from other databases employing different image-capturing equipment and protocols. Copyright © 2017 Elsevier B.V. All rights reserved.
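To make the evaluation protocol concrete, the sketch below pairs a toy Gaussian-kernel classifier (the probabilistic neural network is essentially a Parzen-window classifier) with leave-one-out scoring. The kernel width, the absence of feature selection and the scikit-learn splitter are assumptions made for illustration; this is not the study's PR-system.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

def pnn_predict(X_train, y_train, x, sigma=1.0):
    """Toy PNN: the class with the largest mean Gaussian-kernel response wins."""
    scores = {}
    for c in np.unique(y_train):
        d2 = np.sum((X_train[y_train == c] - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

def loo_accuracy(X, y, sigma=1.0):
    """Leave-one-out accuracy of the toy PNN over a feature matrix X."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    hits = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        pred = pnn_predict(X[train_idx], y[train_idx], X[test_idx][0], sigma)
        hits += int(pred == y[test_idx][0])
    return hits / len(y)
```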
Dual-mode transducers for ultrasound imaging and thermal therapy.
Owen, N R; Chapelon, J Y; Bouchoux, G; Berriet, R; Fleury, G; Lafon, C
2010-02-01
Medical imaging is a vital component of high intensity focused ultrasound (HIFU) therapy, which is gaining clinical acceptance for tissue ablation and cancer therapy. Imaging is necessary to plan and guide the application of therapeutic ultrasound, and to monitor the effects it induces in tissue. Because they can transmit high intensity continuous wave ultrasound for treatment and pulsed ultrasound for imaging, dual-mode transducers aim to improve the guidance and monitoring stages. Their primary advantage is implicit registration between the imaging and treatment axes, and so they can help ensure before treatment that the therapeutic beam is correctly aligned with the planned treatment volume. During treatment, imaging signals can be processed in real-time to assess acoustic properties of the tissue that are related to thermal ablation. Piezocomposite materials are favorable for dual-mode transducers because of their improved bandwidth, which in turn improves imaging performance while maintaining high efficiency for treatment. Here we present our experiences with three dual-mode transducers for interstitial applications. The first was an 11-MHz monoelement designed for use in the bile duct. It had a 25x7.5 mm(2) aperture that was cylindrically focused to 10 mm. The applicator motion was step-wise rotational for imaging and therapy over a 360-degree, or smaller, sector. The second transducer had 5 elements, each measuring 3.0x3.8 mm(2) for a total aperture of 3.0x20 mm(2). It operated at 5.6 MHz, was cylindrically focused to 14 mm, and was integrated with a servo-controlled oscillating probe designed for sector imaging and directive therapy in the liver. The last transducer was a 5-MHz, 64-element linear array designed for beam-formed imaging and therapy. The aperture was 3.0x18 mm(2) with a pitch of 0.280 mm. Characterization results included conversion efficiencies above 50%, pulse-echo bandwidths above 50%, surface intensities up to 30 W/cm(2), and axial imaging resolutions down to 0.2 mm. The second transducer was evaluated in vivo using porcine liver, where coagulation necrosis was induced up to a depth of 20 mm in 120 s. B-mode and M-mode images displayed a hypoechoic region that agreed well with lesion depth observed by gross histology. These feasibility studies demonstrate that the dual-mode transducers had imaging performance that was sufficient to aid the guidance and monitoring of treatment, and could sustain high intensities to induce coagulation necrosis in vivo.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu Jian; Kim, Minho; Peters, Jorg
2009-12-15
Purpose: Rigid 2D-3D registration is an alternative to 3D-3D registration for cases where largely bony anatomy can be used for patient positioning in external beam radiation therapy. In this article, the authors evaluated seven similarity measures for use in the intensity-based rigid 2D-3D registration using a variation in Skerl's similarity measure evaluation protocol. Methods: The seven similarity measures are partitioned intensity uniformity, normalized mutual information (NMI), normalized cross correlation (NCC), entropy of the difference image, pattern intensity (PI), gradient correlation (GC), and gradient difference (GD). In contrast to traditional evaluation methods that rely on visual inspection or registration outcomes, the similarity measure evaluation protocol probes the transform parameter space and computes a number of similarity measure properties, which is objective and optimization-method independent. The variation in protocol offers an improved property in the quantification of the capture range. The authors used this protocol to investigate the effects of the downsampling ratio, the region of interest, and the method of the digitally reconstructed radiograph (DRR) calculation [i.e., the incremental ray-tracing method implemented on a central processing unit (CPU) or the 3D texture rendering method implemented on a graphics processing unit (GPU)] on the performance of the similarity measures. The studies were carried out using both the kilovoltage (kV) and the megavoltage (MV) images of an anthropomorphic cranial phantom and the MV images of a head-and-neck cancer patient. Results: Both the phantom and the patient studies showed the 2D-3D registration using the GPU-based DRR calculation yielded better robustness, while providing similar accuracy compared to the CPU-based calculation. The phantom study using kV imaging suggested that NCC has the best accuracy and robustness, but its slow function value change near the global maximum requires a stricter termination condition for an optimization method. The phantom study using MV imaging indicated that PI, GD, and GC have the best accuracy, while NCC and NMI have the best robustness. The clinical study using MV imaging showed that NCC and NMI have the best robustness. Conclusions: The authors evaluated the performance of seven similarity measures for use in 2D-3D image registration using the variation in Skerl's similarity measure evaluation protocol. The generalized methodology can be used to select the best similarity measures, determine the optimal or near-optimal choice of parameter, and choose the appropriate registration strategy for the end user in his specific registration applications in medical imaging.
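Of the seven measures, normalized cross correlation is the most compact to state; the sketch below gives a generic NCC between a DRR and an acquired 2-D image of the same size. It is a textbook formulation included for orientation only, not the implementation evaluated in the study.

```python
import numpy as np

def normalized_cross_correlation(drr, image):
    """NCC between a digitally reconstructed radiograph and a 2-D image of equal shape."""
    a = drr.astype(np.float64).ravel()
    b = image.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0
```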
The effect of light intensity on image quality in endoscopic ear surgery.
McCallum, R; McColl, J; Iyer, A
2018-05-16
Endoscopic ear surgery is a rapidly developing field with many advantages. But endoscopes can reach temperatures of over 110°C at the tip, raising safety concerns. Reducing the intensity of the light source reduces temperatures produced. However, quality of images at lower light intensities has not yet been studied. We set out to study the effect of light intensity on image quality in EES. Prospective study of patients undergoing EES from April to October 2016. Consecutive images of the same operative field at 10%, 30%, 50% and 100% light intensities were taken. Eight international experts were asked to each evaluate 100 anonymised, randomised images. District General Hospital. Twenty patients. Images were evaluated on a 5-point Likert scale (1 = significantly worse than average; 5 = significantly better than average) for detail of anatomy; colour contrast; overall quality; and suitability for operating. Mean scores for photographs at 10%, 30%, 50% and 100% light intensity were 3.22 (SD 0.93), 3.15 (SD 0.84), 3.08 (SD 0.88) and 3.10 (SD 0.86), respectively. In ANOVA models for the scores on each of the scales (anatomy, colour contrast, overall quality and suitability for operating), the effects of rater and patient were highly significant (P < .0005) but light intensity was non-significant (P = .34, .32, .21, .15, respectively). Images taken during surgery by our endoscope and operative camera have no loss of quality when taken at lower light intensities. We recommend the surgeon considers use of lower light intensities in endoscopic ear surgery. © 2018 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Aida, S.; Matsuno, T.; Hasegawa, T.; Tsuji, K.
2017-07-01
Micro X-ray fluorescence (micro-XRF) analysis is routinely used as a means of producing elemental maps. In some cases, however, the XRF images obtained for trace elements are not clear due to high background intensity. To solve this problem, we applied principal component analysis (PCA) to the XRF spectra, focusing on improving the quality of the XRF images. XRF images of the dried residue of a standard solution on a glass substrate were taken, and the XRF intensities for the dried residue were analyzed before and after PCA. The standard deviations of the XRF intensities in the PCA-filtered images were improved, leading to clearer contrast in the images. This improvement of the XRF images was effective in cases where the XRF intensity was weak.
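A minimal sketch of this kind of PCA filtering is shown below: pixel spectra are projected onto a few leading principal components and reconstructed, discarding the noise-dominated remainder. The number of retained components and the scikit-learn API are assumptions made for the example and do not reproduce the authors' processing chain.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_filter_spectra(spectra, n_components=5):
    """Reconstruct pixel spectra from their leading principal components.

    spectra : array of shape (n_pixels, n_channels), one XRF spectrum per pixel
    """
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(spectra)
    return pca.inverse_transform(scores)

# An element map can then be re-formed by summing, for each pixel, the
# filtered channels covering that element's fluorescence line.
```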
NASA Astrophysics Data System (ADS)
Scherbak, Aleksandr; Yulmetova, Olga
2018-05-01
A pulsed fiber laser with a wavelength of 1.06 μm was used to treat titanium nitride films deposited on beryllium substrates in air, with intensities below the ablation threshold, in order to induce oxide formation. The laser oxidation results were predicted by the chemical thermodynamic method and confirmed by experimental techniques (X-ray diffraction). The developed technology of contrast image formation is intended for use in optoelectronic read-out systems.
An improved level set method for brain MR images segmentation and bias correction.
Chen, Yunjie; Zhang, Jianwei; Macione, Jim
2009-10-01
Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function carry a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated application. The proposed method has been used for images of various modalities with promising results.
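The interplay of a smooth multiplicative bias with a piecewise-constant tissue model can be illustrated with a much cruder alternation than the paper's variational level-set formulation: cluster the bias-corrected intensities into two classes, rebuild a piecewise-constant model, and re-estimate the bias as a heavily smoothed image/model ratio. Everything below (two classes, the Gaussian smoothing width, the iteration counts) is an assumption made for the toy example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_bias_correction(image, n_iter=10, sigma=20.0):
    """Alternate two-class clustering with a smooth bias re-estimate."""
    img = image.astype(np.float64)
    bias = np.ones_like(img)
    for _ in range(n_iter):
        corrected = img / bias
        t = corrected.mean()
        for _ in range(5):                          # simple 1-D two-class K-means
            c0 = corrected[corrected <= t].mean()
            c1 = corrected[corrected > t].mean()
            t = 0.5 * (c0 + c1)
        model = np.where(corrected > t, c1, c0)     # piecewise-constant tissue model
        bias = gaussian_filter(img / (model + 1e-9), sigma)  # bias assumed smooth
        bias /= bias.mean()                         # fix the arbitrary overall scale
    return img / bias
```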
NASA Astrophysics Data System (ADS)
Hildreth, E. C.
1985-09-01
For both biological systems and machines, vision begins with a large and unwieldy array of measurements of the amount of light reflected from surfaces in the environment. The goal of vision is to recover physical properties of objects in the scene, such as the location of object boundaries and the structure, color and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. This goal is not achieved in a single step: vision proceeds in stages, with each stage producing increasingly more useful descriptions of the image and then the scene. The first clues about the physical properties of the scene are provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description and use, both in computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges, and the methods used to carry out this analysis.
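One classical edge detector in this line of work locates zero crossings of a Laplacian-of-Gaussian filtered image. The sketch below is a generic rendering of that idea; the filter scale and the neighbour-pair zero-crossing test are choices made for the example, not a reconstruction of any specific detector discussed in the article.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def zero_crossing_edges(image, sigma=2.0):
    """Mark pixels where the Laplacian-of-Gaussian response changes sign."""
    log = gaussian_laplace(image.astype(np.float64), sigma=sigma)
    edges = np.zeros(log.shape, dtype=bool)
    edges[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])   # horizontal neighbours
    edges[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])   # vertical neighbours
    return edges
```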
Three-dimensional nanometre localization of nanoparticles to enhance super-resolution microscopy
NASA Astrophysics Data System (ADS)
Bon, Pierre; Bourg, Nicolas; Lécart, Sandrine; Monneret, Serge; Fort, Emmanuel; Wenger, Jérôme; Lévêque-Fort, Sandrine
2015-07-01
Meeting the nanometre resolution promised by super-resolution microscopy techniques (pointillist: PALM, STORM; scanning: STED) requires stabilizing the sample drifts in real time during the whole acquisition process. Metal nanoparticles are excellent probes for tracking lateral drifts, as they provide crisp and photostable information. However, achieving nanometre axial super-localization is still a major challenge, as diffraction imposes large depths of field. Here we demonstrate fast, full three-dimensional nanometre super-localization of gold nanoparticles through simultaneous intensity and phase imaging with a wavefront-sensing camera based on quadriwave lateral shearing interferometry. We show how to combine the intensity and phase information to provide the key to the third, axial dimension. We demonstrate, even in the presence of large three-dimensional fluctuations of several microns, unprecedented sub-nanometre localization accuracies down to 0.7 nm in the lateral and 2.7 nm in the axial direction at 50 frames per second. We demonstrate that nanoscale stabilization greatly enhances the image quality and resolution in direct stochastic optical reconstruction microscopy imaging.
NASA Astrophysics Data System (ADS)
Hall, David J.; Han, Sung-Ho; Dugan, Laura
2009-02-01
Reactive oxygen species (ROS) are believed to be involved in many diseases and injuries of the brain, but the molecular processes are not well understood due to a lack of in vivo imaging techniques for evaluating ROS. The fluorescent oxidation products of dihydroethidium (DHE) can monitor ROS production in vivo. Here we demonstrate novel optical imaging of the brain in live mice to measure ROS production via the generation of fluorescent DHE oxidation products (ox-DHE) by ROS. We show that ox-DHE fluorescence intensity was significantly higher in Sod2+/- mice, which have partial loss of the key antioxidant enzyme superoxide dismutase-2, than in hSOD1 mice, which have four-fold overexpression of superoxide dismutase-1 activity and showed almost no ox-DHE fluorescence, confirming the specificity of ox-DHE for ROS production. The DHE oxidation products were also confirmed by detecting a characteristic fluorescence lifetime of the oxidation product, which was validated with ex vivo measurements.
NASA Astrophysics Data System (ADS)
Zahedi, Sulmaz
This study aims to demonstrate the feasibility of using ultrasound-guided high intensity focused ultrasound (USg-HIFU) to create thermal lesions in neurosurgical applications, allowing precise ablation of brain tissue while simultaneously providing real-time imaging. To test the feasibility of the system, an optically transparent, HIFU-compatible tissue-mimicking phantom model was produced. USg-HIFU was then used for ablation of the phantom, with and without targets. Finally, ex vivo lamb brain tissue was imaged and ablated using the USg-HIFU system. Real-time ultrasound images and videos obtained throughout the ablation process showed clear lesion formation at the focal point of the HIFU transducer. Post-ablation gross and histopathology examinations were conducted to verify thermal and mechanical damage in the ex vivo lamb brain tissue. Finally, thermocouple readings were obtained and HIFU-field computer simulations were conducted to verify the findings. The results of the study demonstrated the reproducibility of USg-HIFU thermal lesions for neurosurgical applications.
Van de Moortele, Pierre-François; Auerbach, Edwards J; Olman, Cheryl; Yacoub, Essa; Uğurbil, Kâmil; Moeller, Steen
2009-06-01
At high magnetic field, MR images exhibit large, undesirable signal intensity variations commonly referred to as "intensity field bias". Such inhomogeneities mostly originate from heterogeneous RF coil B(1) profiles and, with no appropriate correction, are further pronounced when utilizing rooted sum of square reconstruction with receive coil arrays. These artifacts can significantly alter whole brain high resolution T(1)-weighted (T(1)w) images that are extensively utilized for clinical diagnosis, for gray/white matter segmentation as well as for coregistration with functional time series. In T(1) weighted 3D-MPRAGE sequences, it is possible to preserve a bulk amount of T(1) contrast through space by using adiabatic inversion RF pulses that are insensitive to transmit B(1) variations above a minimum threshold. However, large intensity variations persist in the images, which are significantly more difficult to address at very high field where RF coil B(1) profiles become more heterogeneous. Another characteristic of T(1)w MPRAGE sequences is their intrinsic sensitivity to Proton Density and T(2)(*) contrast, which cannot be removed with post-processing algorithms utilized to correct for receive coil sensitivity. In this paper, we demonstrate a simple technique capable of producing normalized, high resolution T(1)w 3D-MPRAGE images that are devoid of receive coil sensitivity, Proton Density and T(2)(*) contrast. These images, which are suitable for routinely obtaining whole brain tissue segmentation at 7 T, provide higher T(1) contrast specificity than standard MPRAGE acquisitions. Our results show that removing the Proton Density component can help in identifying small brain structures and that T(2)(*) induced artifacts can be removed from the images. The resulting unbiased T(1)w images can also be used to generate Maximum Intensity Projection angiograms, without additional data acquisition, that are inherently registered with T(1)w structural images. In addition, we introduce a simple technique to reduce residual signal intensity variations induced by transmit B(1) heterogeneity. Because this approach requires two 3D images, one divided by the other, head motion could create serious problems, especially at high spatial resolution. To alleviate such inter-scan motion problems, we developed a new sequence where the two contrast acquisitions are interleaved within a single scan. This interleaved approach however comes with greater risk of intra-scan motion issues because of a longer single scan time. Users can choose between these two trade-offs depending on specific protocols and patient populations. We believe that the simplicity and the robustness of this double contrast based approach to address intensity field bias at high field and improve T(1) contrast specificity, together with the capability of simultaneously obtaining angiography maps, advantageously counterbalance the potential drawbacks of the technique, mainly a longer acquisition time and a moderate reduction in signal to noise ratio.
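The central normalization step, dividing the T(1)w volume by a second acquisition so that receive-coil sensitivity and the shared Proton Density/T(2)(*) weighting cancel in the ratio, reduces to a guarded voxel-wise division. The sketch below shows only that arithmetic, under assumed epsilon and clipping values; acquisition details such as the interleaved second contrast are not modelled.

```python
import numpy as np

def ratio_normalize(t1w, reference, eps=1e-6, clip=10.0):
    """Voxel-wise ratio of a T1w volume to a second, PD-like acquisition."""
    ratio = t1w.astype(np.float64) / (reference.astype(np.float64) + eps)
    return np.clip(ratio, 0.0, clip)   # limit blow-ups where the reference is near zero
```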
Characteristics of nonlinear imaging of broadband laser stacked by chirped pulses
NASA Astrophysics Data System (ADS)
Wang, Youwen; You, Kaiming; Chen, Liezun; Lu, Shizhuan; Dai, Zhiping; Ling, Xiaohui
2014-11-01
Nanosecond-level pulses of specific shape are usually generated by stacking chirped pulses for high-power inertial confinement fusion drivers, in which nonlinear imaging of scatterers may damage precious optical elements. We present a numerical study of the characteristics of nonlinear imaging of scatterers in a broadband laser stacked from chirped pulses, to disclose the dependence of the location and intensity of the images on the parameters of the stacked pulse. It is shown that, for sub-nanosecond chirped sub-pulses or transform-limited sub-pulses, the time-mean intensity and location of the images through normally dispersive and anomalously dispersive self-focusing medium slabs are almost identical, whereas for picosecond-level chirped sub-pulses the time-mean intensity of the images for weak normal dispersion is slightly higher than that for weak anomalous dispersion in a thin nonlinear slab; the result is the opposite for strong dispersion in a thick nonlinear slab. Furthermore, for a given time delay between neighboring sub-pulses, the time-mean intensity of the images varies periodically as the chirp of the sub-pulses increases; for a given sub-pulse width, the time-mean intensity of the images decreases as the time delay between neighboring sub-pulses increases; additionally, there is little difference in the time-mean intensity of the images when the laser is stacked from different numbers of sub-pulses. Finally, physical explanations are given for the obtained results.
Parsons, Matthew S; Sharma, Aseem; Hildebolt, Charles
2018-06-12
To test whether an image-processing algorithm can aid in visualization of mesial temporal sclerosis on magnetic resonance imaging by selectively increasing contrast-to-noise ratio (CNR) between abnormal hippocampus and normal brain. In this Institutional Review Board-approved and Health Insurance Portability and Accountability Act-compliant study, baseline coronal fluid-attenuated inversion recovery images of 18 adults (10 females, eight males; mean age 41.2 years) with proven mesial temporal sclerosis were processed using a custom algorithm to produce corresponding enhanced images. Average (Hmean) and maximum (Hmax) CNR for abnormal hippocampus were calculated relative to normal ipsilateral white matter. CNR values for normal gray matter (GM) were similarly calculated using ipsilateral cingulate gyrus as the internal control. To evaluate effect of image processing on visual conspicuity of hippocampal signal alteration, a neuroradiologist masked to the side of hippocampal abnormality rated signal intensity (SI) of hippocampi on baseline and enhanced images using a five-point scale (definitely abnormal to definitely normal). Differences in Hmean, Hmax, GM, and SI ratings for abnormal hippocampi on baseline and enhanced images were assessed for statistical significance. Both Hmean and Hmax were significantly higher in enhanced images as compared to baseline images (p < 0.0001 for both). There was no significant difference in the GM between baseline and enhanced images (p = 0.9375). SI ratings showed a more confident identification of abnormality on enhanced images (p = 0.0001). Image-processing resulted in increased CNR of abnormal hippocampus without affecting the CNR of normal gray matter. This selective increase in conspicuity of abnormal hippocampus was associated with more confident identification of hippocampal signal alteration. Copyright © 2018 Academic Radiology. Published by Elsevier Inc. All rights reserved.
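For reference, the contrast-to-noise ratio used in studies of this kind reduces to a mean-signal difference divided by a noise estimate. The sketch below states that generic formula with a background-ROI noise estimate; the study's exact ROI definitions and its mean/maximum variants are not reproduced here.

```python
import numpy as np

def contrast_to_noise(roi_lesion, roi_reference, roi_noise):
    """CNR of a lesion ROI relative to a reference ROI (e.g. ipsilateral white matter)."""
    signal_diff = np.mean(roi_lesion) - np.mean(roi_reference)
    return float(signal_diff / np.std(roi_noise))
```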
Grid Computing Application for Brain Magnetic Resonance Image Processing
NASA Astrophysics Data System (ADS)
Valdivia, F.; Crépeault, B.; Duchesne, S.
2012-02-01
This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.
NASA Astrophysics Data System (ADS)
Ward, Jacob Wolfgang; Nave, Gillian
2016-01-01
Recent measurements of four-times-ionized iron and nickel (Fe V & Ni V) wavelengths in the vacuum ultraviolet (VUV) have been made using the National Institute of Standards and Technology (NIST) Normal Incidence Vacuum Spectrograph (NIVS) with a sliding-spark light source with invar electrodes. Those measurements made use of high-resolution photographic plates, with the majority of observed lines having uncertainties of approximately 3 mÅ. In addition to the observations made with photographic plates, the same wavelength region was observed with phosphor image plates, which have been demonstrated to be accurate as a method of intensity calibration when used with a deuterium light source. This work will evaluate the use of phosphor image plates and deuterium lamps as an intensity calibration method for the Ni V spectrum in the 1200-1600 Å region of the VUV. Additionally, by pairing the observed wavelengths of Ni V with accurate line intensities, it is possible to create an energy level optimization for Ni V, providing high-accuracy Ritz wavelengths. This process has previously been applied to Fe V and produced Ritz wavelengths that agreed with the above experimental observations.
Novelli, M D; Barreto, E; Matos, D; Saad, S S; Borra, R C
1997-01-01
The authors present the experimental results of computerized quantification of the tissue structures involved in the reparative process of colonic anastomoses performed by manual suture and by biofragmentable ring. The variables quantified in this study were: oedema fluid, myofiber tissue, blood vessels and cellular nuclei. Image processing software developed at the Laboratório de Informática Dedicado à Odontologia (LIDO) was used to quantify the pathognomonic alterations of the inflammatory process in colonic anastomoses performed in 14 dogs. The results were compared, as a counterproof, to those obtained through traditional diagnosis by two pathologists. The criteria for these diagnoses were defined in levels (absent, light, moderate and intense), which were compared to the analysis performed by the computer. There was a statistically significant difference between the two techniques: the biofragmentable ring technique exhibited little oedema fluid, organized myofiber tissue and a higher number of elongated cellular nuclei relative to the manual suture technique. The analysis of histometric variables through computational image processing was considered efficient and powerful for quantifying the main tissue inflammatory and reparative changes.
Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen
2006-04-01
Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
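Independently of the cache-level optimizations that are the subject of the paper, the MIP operation itself is a one-line reduction; the sketch below is the naive reference version against which such optimizations would be measured (the axis choice is an assumption).

```python
import numpy as np

def maximum_intensity_projection(volume, axis=0):
    """Project a 3-D volume to 2-D by keeping the brightest voxel along each ray."""
    return np.asarray(volume).max(axis=axis)
```

For large volumes, the traversal order through memory, rather than the arithmetic, dominates the cost, which is exactly the effect the paper quantifies and optimizes across the memory hierarchy.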
Study on the luminous characteristics of a natural ball lightning
NASA Astrophysics Data System (ADS)
Wang, Hao; Yuan, Ping; Cen, Jianyong; Liu, Guorong
2018-02-01
According to the optical images of the whole process of a natural ball lightning recorded by two slit-less spectrographs on the Qinghai plateau of China, a simulated observation experiment on the luminous intensity of a spherical light source was carried out. The luminous intensity and the optical power of the natural ball lightning in the wavelength range of 400-690 nm were estimated based on the experimental data and the Lambert-Beer law. The results show that the maximum luminous intensity was about 1.24 × 10⁵ cd in the initial stage of the natural ball lightning, and that the maximum luminous intensity and the maximum optical power over most of its lifetime were about 5.9 × 10⁴ cd and 4.2 × 10³ W, respectively.
Molecular Ultrasound Imaging for the Detection of Neural Inflammation
NASA Astrophysics Data System (ADS)
Volz, Kevin R.
Molecular imaging is a form of nanotechnology that enables the noninvasive examination of biological processes in vivo. Radiopharmaceutical agents are used to selectively target biochemical markers, which permits their detection and evaluation. Early visualization of molecular variations indicative of pathophysiological processes can aid in patient diagnoses and management decisions. Molecular imaging is performed by introducing molecular probes into the body. Molecular probes are often contrast agents that have been nanoengineered to selectively target and tether to molecules, enabling their radiologic identification. Ultrasound contrast agents have been demonstrated as an effective method of detecting perfusion at the tissue level. Through a nanoengineering process, ultrasound contrast agents can be targeted to specific molecules, thereby extending ultrasound's capabilities from the tissue to molecular level. Molecular ultrasound, or targeted contrast enhanced ultrasound (TCEUS), has recently emerged as a popular molecular imaging technique due to its ability to provide real-time anatomical and functional information in the absence of ionizing radiation. However, molecular ultrasound represents a novel form of molecular imaging, and consequently remains largely preclinical. A review of the TCEUS literature revealed multiple preclinical studies demonstrating its success in detecting inflammation in a variety of tissues. Although, a gap was identified in the existing evidence, as TCEUS effectiveness for detection of neural inflammation in the spinal cord was unable to be uncovered. This gap in knowledge, coupled with the profound impacts that this TCEUS application could have clinically, provided rationale for its exploration, and use as contributory evidence for the molecular ultrasound body of literature. An animal model that underwent a contusive spinal cord injury was used to establish preclinical evidence of TCEUS to detect neural inflammation. Imaging was performed while targeting three early inflammatory markers (P-selectin, VCAM-1, ICAM-1). Imaging protocols and outcome measures of previous TCEUS investigations of inflammation were replicated to aid in comparisons of outcomes. Signal intensity data was used to generate time intensity curves for qualitative and quantitative analysis of contrast agent temporal behavior. A proof of principle study established preclinical evidence to support the ability of TCEUS to detect acute neural inflammation. Substantial increases in signal intensities were observed while targeting inflammatory markers compared to controls. Further investigations consisted of examining molecular ultrasound sensitivity, and were accomplished by examining targeted contrast agent dosing parameters, and the ability of TCEUS to longitudinally evaluate neural inflammation. Qualitative analysis of TCEUS imaging performed with both administered doses revealed marked increases in signal intensities during acute inflammation, where inflammatory marker expression was presumably at its highest. This was in comparison to measures obtained in the absence of, and during, chronic inflammation. This research contributes much needed empirical evidence to the molecular ultrasound body of literature, and represents the first steps towards advancing this TCEUS application to clinical practice. Future studies are necessary to further these findings and effectively build upon this evidence. 
Increasing evidence of TCEUS use for the detection of neural inflammation will aid in its eventual clinical translation, where it will likely have a positive impact on patient care.
Ochoa-Marín, Sandra C; Cristancho-Marulanda, Sergio; González-López, José Rafael
2011-04-01
Analysing the self-image and social image of migrants' female partners (MFP) and their relationship with the search for sexual and reproductive health services (SRHS) in communities having a high US migratory intensity index. 60 MFP were subjected to in-depth interviews between October 2004 and May 2005, and 19 semi-structured interviews were held with members of their families, 14 representatives from social organisations, 10 health service representatives and 31 men and women residing in the community. MFP self-image and social image regard these women as being "vulnerable", "alone", "lacking a sexual partner" and thus sexually inactive. Consequently, "they must not contract sexually-transmitted diseases (STD), use contraceptives or become pregnant" when their partners are in the USA. The search for SRHS services was found to be related to self-image, social image and the notion of family or social control predominating in the behaviour expected of these women, which, in turn, was related to whether or not they lived with their families. MFP living with their family or their partner's family were subject to greater "family" control in their search for SRHS services. On the contrary, MFP living alone were subjected to greater "social" control over such a process. Sexually-inactive women's self-image and social image seem to have a bearing on such women's social behaviour and could become an obstacle to the timely search for SRHS services in communities having high migratory intensity.
Lee, H.R.
1997-11-18
A three-dimensional image reconstruction method comprises treating the object of interest as a group of elements with a size that is determined by the resolution of the projection data, e.g., as determined by the size of each pixel. One of the projections is used as a reference projection. A fictitious object is arbitrarily defined that is constrained by such reference projection. The method modifies the known structure of the fictitious object by comparing and optimizing its four projections to those of the unknown structure of the real object and continues to iterate until the optimization is limited by the residual sum of background noise. The method is composed of several sub-processes that acquire four projections from the real data and the fictitious object: generate an arbitrary distribution to define the fictitious object, optimize the four projections, generate a new distribution for the fictitious object, and enhance the reconstructed image. The sub-process for the acquisition of the four projections from the input real data is simply the function of acquiring the four projections from the data of the transmitted intensity. The transmitted intensity represents the density distribution, that is, the distribution of absorption coefficients through the object. 5 figs.
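The iterate-until-the-projections-match idea can be conveyed with a toy loop using only two orthogonal projections (row and column sums) instead of the patent's four, and a fixed number of sweeps instead of a noise-limited stopping rule; every detail below is a simplifying assumption, not the patented method.

```python
import numpy as np

def iterative_reconstruction(row_sums, col_sums, n_iter=50):
    """Grow a fictitious object whose projections approach the measured ones."""
    n_rows, n_cols = len(row_sums), len(col_sums)
    obj = np.full((n_rows, n_cols), row_sums.sum() / (n_rows * n_cols))  # flat initial guess
    for _ in range(n_iter):
        obj += (row_sums - obj.sum(axis=1))[:, None] / n_cols   # spread row residual along each row
        obj += (col_sums - obj.sum(axis=0))[None, :] / n_rows   # spread column residual along each column
        np.clip(obj, 0.0, None, out=obj)                        # keep densities non-negative
    return obj
```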