Sample records for image correlation algorithm

  1. Autofocus algorithm using one-dimensional Fourier transform and Pearson correlation

    NASA Astrophysics Data System (ADS)

    Bueno Mario, A.; Alvarez-Borrego, Josue; Acho, L.

    2004-10-01

    A new autofocus algorithm based on the one-dimensional Fourier transform and Pearson correlation is proposed for a Z-automated microscope. Our goal is to determine the best-focused plane quickly and accurately. We capture, in bright and dark field, several image sets at different Z distances from a biological organism sample. The algorithm uses the one-dimensional Fourier transform to obtain the frequency content of a previously defined pattern of image vectors; by comparing the Pearson correlation of these frequency vectors against the frequency vector of the reference image (the most out-of-focus image), we find the best focus. Experimental results showed that the algorithm finds the best focal plane from the captured images quickly and accurately. In conclusion, the algorithm can be implemented in real-time systems owing to its fast response time, accuracy and robustness. It can be used to obtain focused images in bright and dark field, and it can be extended with fusion techniques to construct multifocus final images, which is beyond the scope of this paper.
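
    The record above scores each frame of a Z-stack by Pearson-correlating its 1-D frequency content against that of the most defocused reference frame. The following is a minimal sketch of that idea, assuming grayscale frames stacked in a NumPy array and a single central line profile as the "vector pattern"; the exact pattern used in the paper is not specified here, so these choices are illustrative assumptions.

    ```python
    import numpy as np

    def focus_scores(frames, reference):
        """Score each frame of a Z-stack against the most defocused reference.

        For every frame, a 1-D Fourier spectrum is taken along a fixed line
        profile (an assumption; the paper defines its own vector pattern) and
        its magnitude is Pearson-correlated with the spectrum of the reference
        (most out-of-focus) frame.
        """
        row = frames.shape[1] // 2                    # central line profile
        ref_spec = np.abs(np.fft.rfft(reference[row, :]))
        scores = []
        for frame in frames:
            spec = np.abs(np.fft.rfft(frame[row, :]))
            r = np.corrcoef(spec, ref_spec)[0, 1]     # Pearson correlation
            scores.append(r)
        return np.array(scores)

    # The best-focused frame is the one least correlated with the defocused
    # reference, e.g.: best_index = np.argmin(focus_scores(stack, stack[0]))
    ```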

  2. A Palmprint Recognition Algorithm Using Phase-Only Correlation

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper presents a palmprint recognition algorithm using Phase-Only Correlation (POC). The use of phase components in 2D (two-dimensional) discrete Fourier transforms of palmprint images makes it possible to achieve highly robust image registration and matching. In the proposed algorithm, POC is used to align scaling, rotation and translation between two palmprint images, and evaluate similarity between them. Experimental evaluation using a palmprint image database clearly demonstrates efficient matching performance of the proposed algorithm.
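
    Phase-Only Correlation, as used in the record above, is the inverse DFT of the normalized cross-phase spectrum of two images; a sharp peak indicates their relative translation. A minimal sketch follows (the rotation/scaling alignment stage of the paper is omitted, and the sign convention of the recovered shift depends on the argument order).

    ```python
    import numpy as np

    def phase_only_correlation(f, g, eps=1e-12):
        """Phase-Only Correlation (POC) surface between two same-size images."""
        F = np.fft.fft2(f)
        G = np.fft.fft2(g)
        cross = F * np.conj(G)
        # Normalize to unit magnitude so only phase information remains.
        poc = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
        return np.fft.fftshift(poc)

    def estimate_shift(f, g):
        """Integer (dy, dx) displacement between f and g from the POC peak."""
        poc = phase_only_correlation(f, g)
        peak = np.unravel_index(np.argmax(poc), poc.shape)
        center = np.array(poc.shape) // 2
        return tuple(np.array(peak) - center)
    ```

    The peak height of the POC surface also serves as the similarity score used for matching.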

  3. Evaluation of methods to produce an image library for automatic patient model localization for dose mapping during fluoroscopically guided procedures

    NASA Astrophysics Data System (ADS)

    Kilian-Meneghin, Josh; Xiong, Z.; Rudin, S.; Oines, A.; Bednarek, D. R.

    2017-03-01

    The purpose of this work is to evaluate methods for producing a library of 2D-radiographic images to be correlated to clinical images obtained during a fluoroscopically-guided procedure for automated patient-model localization. The localization algorithm will be used to improve the accuracy of the skin-dose map superimposed on the 3D patient-model of the real-time Dose-Tracking-System (DTS). For the library, 2D images were generated from CT datasets of the SK-150 anthropomorphic phantom using two methods: Schmid's 3D-visualization tool and Plastimatch's digitally-reconstructed-radiograph (DRR) code. Those images, as well as a standard 2D-radiographic image, were correlated to a 2D-fluoroscopic image of a phantom, which represented the clinical-fluoroscopic image, using the Corr2 function in Matlab. The Corr2 function takes two images and outputs the relative correlation between them, which is fed into the localization algorithm. Higher correlation means better alignment of the 3D patient-model with the patient image. In this instance, it was determined that the localization algorithm will succeed when Corr2 returns a correlation of at least 50%. The 3D-visualization tool images returned 55-80% correlation relative to the fluoroscopic-image, which was comparable to the correlation for the radiograph. The DRR images returned 61-90% correlation, again comparable to the radiograph. Both methods prove to be sufficient for the localization algorithm and can be produced quickly; however, the DRR method produces more accurate grey-levels. Using the DRR code, a library at varying angles can be produced for the localization algorithm.
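
    MATLAB's Corr2, used above as the match score, is the plain 2-D correlation coefficient between two equal-size images. An equivalent NumPy sketch:

    ```python
    import numpy as np

    def corr2(a, b):
        """2-D correlation coefficient between two equal-size images,
        analogous to MATLAB's corr2 used in the record above."""
        a = np.asarray(a, dtype=float) - np.mean(a)
        b = np.asarray(b, dtype=float) - np.mean(b)
        return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

    # e.g. accept a candidate library image when corr2(drr, fluoro) >= 0.5,
    # mirroring the 50% threshold reported in the record.
    ```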

  4. Phi-s correlation and dynamic time warping - Two methods for tracking ice floes in SAR images

    NASA Technical Reports Server (NTRS)

    Mcconnell, Ross; Kober, Wolfgang; Kwok, Ronald; Curlander, John C.; Pang, Shirley S.

    1991-01-01

    The authors present two algorithms for performing shape matching on ice floe boundaries in SAR (synthetic aperture radar) images. These algorithms quickly produce a set of ice motion and rotation vectors that can be used to guide a pixel value correlator. The algorithms match a shape descriptor known as the Phi-s curve. The first algorithm uses normalized correlation to match the Phi-s curves, while the second uses dynamic programming to compute an elastic match that better accommodates ice floe deformation. Some empirical data on the performance of the algorithms on Seasat SAR images are presented.
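
    The second method in the record above matches Phi-s boundary curves elastically with dynamic programming. The following is a minimal dynamic-time-warping sketch over two 1-D Phi-s curves; cost and step choices are generic assumptions rather than the paper's exact formulation.

    ```python
    import numpy as np

    def dtw_distance(phi_s_a, phi_s_b):
        """Dynamic time warping distance between two Phi-s boundary curves.

        An elastic match tolerates local stretching of the curves, which
        accommodates ice-floe deformation better than rigid correlation.
        """
        n, m = len(phi_s_a), len(phi_s_b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(phi_s_a[i - 1] - phi_s_b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]
    ```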

  5. Mapped Landmark Algorithm for Precision Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.

  6. Noise-immune complex correlation for optical coherence angiography based on standard and Jones matrix optical coherence tomography

    PubMed Central

    Makita, Shuichi; Kurokawa, Kazuhiro; Hong, Young-Joo; Miura, Masahiro; Yasuno, Yoshiaki

    2016-01-01

    This paper describes a complex correlation mapping algorithm for optical coherence angiography (cmOCA). The proposed algorithm avoids signal-to-noise ratio dependence and exhibits low noise in vasculature imaging. The complex correlation coefficient of the signals, rather than that of the measured data, is estimated, and two-step averaging is introduced. Motion-artifact removal algorithms based on the detection of non-perfusing tissue using correlation are developed. The algorithms are implemented with Jones-matrix OCT. Simultaneous imaging of pigmented tissue and vasculature is also achieved using degree of polarization uniformity imaging with cmOCA. An application of cmOCA to in vivo posterior human eyes is presented to demonstrate that high-contrast images of patients’ eyes can be obtained. PMID:27446673

  7. SU-F-J-77: Variations in the Displacement Vector Fields Calculated by Different Deformable Image Registration Algorithms Used in Helical, Axial and Cone-Beam CT Images of a Mobile Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Jaskowiak, J; Ahmad, S

    Purpose: To quantitatively investigate the displacement vector fields (DVF) obtained from different deformable image registration (DIR) algorithms in helical (HCT), axial (ACT) and cone-beam CT (CBCT) used to register CT images of a mobile phantom, and their correlation with motion amplitudes and frequencies. Methods: HCT, ACT and CBCT are used to image a mobile phantom which includes three targets of different sizes, manufactured from water-equivalent material and embedded in low-density foam. The phantom is moved with controlled motion patterns covering a range of motion amplitudes (0-40 mm) and frequencies (0.125-0.5 Hz). The CT images obtained from scanning the mobile phantom are registered with the stationary CT images using four deformable image registration algorithms: demons, fast demons, Horn-Schunck and Lucas-Kanade from the DIRART software. Results: The DVF calculated by the different algorithms correlate well with the motion amplitudes applied to the mobile phantom, where the maximal DVF increase linearly with the motion amplitudes of the mobile phantom in CBCT. Similarly, in HCT the DVF increase linearly with motion amplitude, although the correlation is weaker than for CBCT. In ACT, the DVFs do not correlate well with the motion amplitudes because motion induces strong image artifacts, and the DIR algorithms are not able to deform the ACT image of the mobile targets onto the stationary targets. Three DIR algorithms produce comparable values and patterns of the DVF for a given CT imaging modality; however, the DVF from fast demons deviated strongly from the other algorithms at large motion amplitudes. Conclusion: In CBCT and HCT, the DVF correlate well with the motion amplitude of the mobile phantom. However, in ACT, the DVF do not correlate with motion amplitudes. Correlations of DVF with motion amplitude, as in the CBCT and HCT imaging techniques, can provide information about unknown motion parameters of mobile organs in real patients, as demonstrated in this phantom visibility study.

  8. Adaptive Cross-correlation Algorithm and Experiment of Extended Scene Shack-Hartmann Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Morgan, Rhonda M.; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.

    2007-01-01

    We have developed a new, adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene images captured by a Shack-Hartmann wavefront sensor (SH-WFS). It determines the positions of all of the extended-scene image cells relative to a reference cell using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have also set up a testbed for an extended-scene SH-WFS and tested the ACC algorithm with measured data from both point-source and extended-scene images. In this paper we describe our algorithm and present our experimental results.

  9. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards integer pixels; these errors are called systematic bias errors (Sjödahl 1994) and are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed from a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold centre of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005) and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold centre of gravity behaves better at low SNR, although its systematic errors are large. No single algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented on the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel grid, limiting the field of search to 4 × 4 pixels centred at the initial position delivered by the first step. The sub-pixel-grid region-of-interest images are generated with bi-cubic interpolation. Correlation matching on a sub-pixel grid was previously reported in electronic speckle photography (Sjödahl 1994); this technique is applied here to solar wavefront sensing. A large dynamic range and better measurement accuracy are achieved by combining original-pixel-grid correlation matching over a large field of view with sub-pixel interpolated-grid correlation matching within a small field of view. The results reveal that the proposed method outperforms all of the peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5-times-improved image sampling is used, at the expense of twice the computational cost. With the 5-times-improved image sampling, the wavefront accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wavefront sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. Also, by choosing an appropriate increase in image sampling as a trade-off between computational speed and the targeted sub-pixel image-shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source, a laser guide star, and a Galactic Centre extended scene). The results are planned for submission to the Optics Express journal.
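
    The two-step coarse-to-fine shift estimate described above can be sketched as follows. For brevity this sketch upsamples a small neighbourhood of the correlation surface with bicubic interpolation rather than interpolating the sub-aperture images themselves, which differs from the paper's exact procedure but illustrates the same coarse-plus-sub-pixel-grid idea; it also assumes the correlation peak lies away from the image border.

    ```python
    import numpy as np
    from scipy.ndimage import zoom
    from scipy.signal import correlate2d

    def subpixel_shift(reference, target, upsample=5, window=4):
        """Two-step shift estimate: integer-pixel cross-correlation followed
        by a finer search on a bicubic-upsampled neighbourhood of the peak."""
        # Step 1: integer-pixel estimate from the full correlation surface.
        ref = reference - reference.mean()
        tgt = target - target.mean()
        cc = correlate2d(tgt, ref, mode='same')
        iy, ix = np.unravel_index(np.argmax(cc), cc.shape)

        # Step 2: upsample a small window around the peak and relocate it.
        half = window // 2
        patch = cc[iy - half:iy + half + 1, ix - half:ix + half + 1]
        fine = zoom(patch, upsample, order=3)        # bicubic interpolation
        fy, fx = np.unravel_index(np.argmax(fine), fine.shape)
        dy = iy - cc.shape[0] // 2 + (fy / upsample - half)
        dx = ix - cc.shape[1] // 2 + (fx / upsample - half)
        return dy, dx
    ```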

  10. Robust mosaics of close-range high-resolution images

    NASA Astrophysics Data System (ADS)

    Song, Ran; Szymanski, John E.

    2008-03-01

    This paper presents a robust algorithm, relying only on the information contained within the captured images, for constructing massive composite mosaic images from close-range, high-resolution originals, such as those obtained when imaging architectural and heritage structures. We first apply the Harris algorithm to extract a selection of corners and then employ both the intensity correlation and the spatial correlation between corresponding corners to match them. We then estimate the eight-parameter projective transformation matrix with a genetic algorithm. Lastly, image fusion using a weighted blending function together with intensity compensation produces an effective seamless mosaic image.

  11. Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2008-01-01

    Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have previously demonstrated, using measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and that it yields similar results for both point-source spot images and extended-scene images. The shift-estimate accuracy of the ACC algorithm depends on the illumination level, background, and scene content, in addition to the size of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC-algorithm-based extended-scene Shack-Hartmann sensors.

  12. Joint demosaicking and zooming using moderate spectral correlation and consistent edge map

    NASA Astrophysics Data System (ADS)

    Zhou, Dengwen; Dong, Weiming; Chen, Wengang

    2014-07-01

    The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance may therefore degrade significantly on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is used consistently in both demosaicking and zooming. It also makes moderate use of the spectral correlation between color planes. The experimental results confirm that the proposed algorithm performs excellently on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency, providing a better tradeoff among adaptability, performance, and computational cost than the existing algorithms.

  13. Phase retrieval using regularization method in intensity correlation imaging

    NASA Astrophysics Data System (ADS)

    Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin

    2014-11-01

    Intensity correlation imaging (ICI) can obtain high-resolution images with ground-based, low-precision mirrors; in the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image. However, the algorithms in current use (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, while the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build a mathematical model of phase retrieval and simplify it into a constrained optimization problem over a multi-dimensional function. A new error function is designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and yield better images, especially under low-SNR conditions.

  14. An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications

    PubMed Central

    Tong, Mingsi; Song, John; Chu, Wei

    2015-01-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation. PMID:26958441

  15. An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications.

    PubMed

    Tong, Mingsi; Song, John; Chu, Wei

    2015-01-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation.
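
    The two records above divide a measured surface image into small cells and count cell pairs whose correlations are mutually congruent. The sketch below is a heavily simplified, illustrative reading of that idea: it checks only peak correlation and offset congruence, whereas the published CMC method also uses a registration-angle parameter and, in the improved algorithm, combines forward and backward correlations.

    ```python
    import numpy as np
    from skimage.feature import match_template

    def cmc_count(img_a, img_b, cell=64, corr_thresh=0.5, pos_tol=3):
        """Simplified Congruent Matching Cells count (illustrative only).

        img_a is tiled into square cells; each cell is matched into img_b by
        normalized cross-correlation, and a cell is counted as congruent when
        its peak correlation exceeds corr_thresh and its registration offset
        agrees, within pos_tol pixels, with the median offset of all cells.
        """
        offsets, scores = [], []
        for y in range(0, img_a.shape[0] - cell + 1, cell):
            for x in range(0, img_a.shape[1] - cell + 1, cell):
                ncc = match_template(img_b, img_a[y:y + cell, x:x + cell])
                py, px = np.unravel_index(np.argmax(ncc), ncc.shape)
                offsets.append((py - y, px - x))
                scores.append(ncc.max())
        offsets = np.asarray(offsets)
        median = np.median(offsets, axis=0)
        congruent = np.all(np.abs(offsets - median) <= pos_tol, axis=1)
        return int(np.sum(congruent & (np.asarray(scores) >= corr_thresh)))
    ```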

  16. Optical image hiding based on computational ghost imaging

    NASA Astrophysics Data System (ADS)

    Wang, Le; Zhao, Shengmei; Cheng, Weiwen; Gong, Longyan; Chen, Hanwu

    2016-05-01

    Image hiding schemes play an important role in the current big-data era, providing copyright protection for digital images. In this paper, we propose a novel image hiding scheme based on computational ghost imaging that offers strong robustness and high security. The watermark is encrypted with the configuration of a computational ghost imaging system, and the random speckle patterns compose a secret key. A least-significant-bit algorithm is adopted to embed the watermark, and both the second-order correlation algorithm and a compressed sensing (CS) algorithm are used to extract it. The experimental and simulation results show that authorized users can recover the watermark with the secret key. The watermark image cannot be retrieved when the eavesdropping ratio is less than 45% with the second-order correlation algorithm, or less than 20% with the TVAL3 CS reconstruction algorithm. In addition, the proposed scheme is robust against 'salt and pepper' noise and image-cropping degradations.
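
    The second-order correlation extraction mentioned above reconstructs the hidden watermark from the speckle patterns (the key) and the recorded bucket intensities. A minimal sketch of that ensemble correlation, independent of any particular hardware model, follows.

    ```python
    import numpy as np

    def second_order_correlation(patterns, intensities):
        """Ghost-imaging reconstruction by second-order correlation.

        patterns    : (N, H, W) array of random speckle patterns (the key)
        intensities : (N,) bucket-detector values recorded for each pattern
        Returns the estimate <I * P> - <I><P> of the hidden image.
        """
        patterns = np.asarray(patterns, dtype=float)
        intensities = np.asarray(intensities, dtype=float)
        mean_pattern = patterns.mean(axis=0)
        weighted = np.tensordot(intensities, patterns, axes=(0, 0)) / len(intensities)
        return weighted - intensities.mean() * mean_pattern
    ```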

  17. Correlation between automatic detection of malaria on thin film and experts' parasitaemia scores

    NASA Astrophysics Data System (ADS)

    Sunarko, Budi; Williams, Simon; Prescott, William R.; Byker, Scott M.; Bottema, Murk J.

    2017-03-01

    An algorithm was developed to diagnose the presence of malaria and to estimate the depth of infection by automatically counting individual normal and infected erythrocytes in images of thin blood smears. During the training stage, the parameters of the algorithm were optimized to maximize correlation with estimates of parasitaemia from expert human observers. The correlation was tested on a set of 1590 images from seven thin-film blood smears. The correlation between the results from the algorithm and the expert human readers was r = 0.836. The results indicate that reliable estimates of parasitaemia may be achieved by computational image analysis applied to images of thin-film smears. Compared to the biological experiments, the algorithm fitted the three high-parasitaemia slides and a mid-level parasitaemia slide well, and overestimated the three low-parasitaemia slides. To improve the parasitaemia estimation, the sources of the overestimation were identified. Emphasis is laid on the importance of further research to identify parasites independently of their erythrocyte hosts.

  18. Study of image matching algorithm and sub-pixel fitting algorithm in target tracking

    NASA Astrophysics Data System (ADS)

    Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu

    2015-03-01

    Image correlation matching is a tracking method that searches for the region most similar to a target template according to a correlation measure between two images. Because there is no need to segment the image, the computational load of this method is small, and image correlation matching is a basic method of target tracking. This paper mainly studies a gray-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used in this paper is the SAD (Sum of Absolute Differences) method, which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms. These fitting algorithms cannot be used in real-time systems because they are too complex, yet target tracking often requires high real-time performance. Based on this consideration, we put forward a fitting algorithm named the paraboloidal fitting algorithm, which is simple and easily realized in a real-time system. The result of this algorithm is compared with that of a surface fitting algorithm through image matching simulation; the precision difference between the two algorithms is small, less than 0.01 pixel. To study the influence of target rotation on the precision of image matching, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector. It was fixed to an arc pendulum table, and pictures were taken with the camera rotated to different angles. A sub-area of the original picture was chosen as the template, and the best matching spot was found using the image matching algorithm mentioned above. The results show that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was investigated. Gaussian noise and salt-and-pepper noise were added to the image, the image was processed by a mean filter and a median filter respectively, and image matching was then performed. The results show that when the noise is small, the mean filter and the median filter achieve good results, but when the noise density of the salt-and-pepper noise exceeds 0.4, or the variance of the Gaussian noise exceeds 0.0015, the image matching result will be wrong.
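
    A minimal sketch of SAD block matching followed by a separable parabolic (paraboloidal) sub-pixel refinement, in the spirit of the record above; the brute-force search loop is kept deliberately simple and is not optimized for real-time use.

    ```python
    import numpy as np

    def sad_match(image, template):
        """Integer-pixel match by minimising the Sum of Absolute Differences."""
        H, W = image.shape
        h, w = template.shape
        sad = np.empty((H - h + 1, W - w + 1))
        for y in range(H - h + 1):
            for x in range(W - w + 1):
                sad[y, x] = np.abs(image[y:y + h, x:x + w] - template).sum()
        y, x = np.unravel_index(np.argmin(sad), sad.shape)
        return y, x, sad

    def parabola_refine(sad, y, x):
        """Sub-pixel refinement with a separable parabola fitted through the
        SAD minimum and its two neighbours along each axis."""
        def vertex(l, c, r):
            d = l - 2.0 * c + r
            return 0.5 * (l - r) / d if d != 0 else 0.0
        dy = vertex(sad[y - 1, x], sad[y, x], sad[y + 1, x]) if 0 < y < sad.shape[0] - 1 else 0.0
        dx = vertex(sad[y, x - 1], sad[y, x], sad[y, x + 1]) if 0 < x < sad.shape[1] - 1 else 0.0
        return y + dy, x + dx
    ```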

  19. Microseismic reverse time migration with a multi-cross-correlation staining algorithm for fracture imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Congcong; Jia, Xiaofeng; Liu, Shishuo; Zhang, Jie

    2018-02-01

    Accurate characterization of hydraulic fracturing zones is currently becoming increasingly important in production optimization, since hydraulic fracturing may increase the porosity and permeability of the reservoir significantly. Recently, the feasibility of the reverse time migration (RTM) method has been studied for the application in imaging fractures during borehole microseismic monitoring. However, strong low-frequency migration noise, poorly illuminated areas, and the low signal to noise ratio (SNR) data can degrade the imaging results. To improve the quality of the images, we propose a multi-cross-correlation staining algorithm to incorporate into the microseismic reverse time migration for imaging fractures using scattered data. Under the modified RTM method, our results are revealed in two images: one is the improved RTM image using the multi-cross-correlation condition, and the other is an image of the target region using the generalized staining algorithm. The numerical examples show that, compared with the conventional RTM, our method can significantly improve the spatial resolution of images, especially for the image of target region.

  20. Development and Evaluation of Semiautomated Quantification of Lissamine Green Staining of the Bulbar Conjunctiva From Digital Images.

    PubMed

    Bunya, Vatinee Y; Chen, Min; Zheng, Yuanjie; Massaro-Giordano, Mina; Gee, James; Daniel, Ebenezer; O'Sullivan, Ryan; Smith, Eli; Stone, Richard A; Maguire, Maureen G

    2017-10-01

    Lissamine green (LG) staining of the conjunctiva is a key biomarker in evaluating ocular surface disease. The disease currently is assessed using relatively coarse subjective scales. Objective assessment would standardize comparisons over time and between clinicians. To develop a semiautomated, quantitative system to assess lissamine green staining of the bulbar conjunctiva on digital images. Using a standard photography protocol, 35 digital images of the conjunctiva of 11 patients with a diagnosis of dry eye disease based on characteristic signs and symptoms were obtained after topical administration of preservative-free LG, 1%, solution. Images were scored independently by 2 masked ophthalmologists in an academic medical center using the van Bijsterveld and National Eye Institute (NEI) scales. The region of interest was identified by manually marking 7 anatomic landmarks on the images. An objective measure was developed by segmenting the images, forming a vector of key attributes, and then performing a random forest regression. Subjective scores were correlated with the output from a computer algorithm using a cross-validation technique. The ranking of images from least to most staining was compared between the algorithm and the ophthalmologists. The study was conducted from April 26, 2012, through June 2, 2016. Correlation and level of agreement among computerized algorithm scores, van Bijsterveld scale clinical scores, and NEI scale clinical scores. The scores from the automated algorithm correlated well with the mean scores obtained from the gradings of 2 ophthalmologists for the 35 images using the van Bijsterveld scale (Spearman correlation coefficient, rs = 0.79), and moderately with the NEI scale (rs = 0.61) scores. For qualitative ranking of staining, the correlation between the automated algorithm and the 2 ophthalmologists was rs = 0.78 and rs = 0.83. The algorithm performed well when evaluating LG staining of the conjunctiva, as evidenced by good correlation with subjective gradings using 2 different grading scales. Future longitudinal studies are needed to assess the responsiveness of the algorithm to change of conjunctival staining over time.

  1. Image reconstruction through thin scattering media by simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Fang, Longjie; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Zhang, Xicheng; Zhu, Jianhua

    2018-07-01

    A method for reconstructing the image of an object behind thin scattering media by phase modulation is proposed. An optimized phase mask is obtained by modulating the scattered light with a simulated annealing algorithm, using the correlation coefficient as the fitness function to evaluate the quality of the reconstructed image. Reconstructed images optimized by the simulated annealing algorithm and by a genetic algorithm are compared in detail. The experimental results show that our proposed method gives better definition and higher speed than the genetic algorithm.
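
    A generic simulated-annealing loop of the kind described above can be sketched as follows. The physical forward model (scattering medium, camera, reconstruction) is abstracted into a user-supplied fitness function returning the correlation coefficient, and the binary-mask parameterization and cooling schedule are illustrative assumptions.

    ```python
    import numpy as np

    def simulated_annealing(fitness, shape, iters=5000, t0=1.0, cooling=0.999,
                            rng=np.random.default_rng(0)):
        """Search for a binary phase mask that maximizes a correlation fitness.

        fitness(mask) is assumed to return the correlation coefficient between
        the image reconstructed with this mask and the desired object.
        """
        mask = rng.integers(0, 2, size=shape)
        score = fitness(mask)
        temp = t0
        for _ in range(iters):
            candidate = mask.copy()
            i, j = rng.integers(0, shape[0]), rng.integers(0, shape[1])
            candidate[i, j] ^= 1                      # flip one mask segment
            cand_score = fitness(candidate)
            # Accept improvements always; accept worse moves with Boltzmann probability.
            if cand_score > score or rng.random() < np.exp((cand_score - score) / temp):
                mask, score = candidate, cand_score
            temp *= cooling
        return mask, score
    ```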

  2. Joint Transform Correlation for face tracking: elderly fall detection application

    NASA Astrophysics Data System (ADS)

    Katz, Philippe; Aron, Michael; Alfalou, Ayman

    2013-03-01

    In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. The algorithm is based on the computation of a correlation plane in which the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i); the reference image to be used in the next frame (frame i+1) is then updated according to the previous one (frame i). To validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences covering several situations and different JTC parameters is carried out in order to quantify their effects on tracking performance (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...); (ii) the tracking algorithm is integrated into an elderly fall detection application. The first reference image is a face detected by means of Haar descriptors and then localized in the new video image by our tracking method. In order to avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated in our algorithm; this step ensures robust tracking of the reference frame. This article focuses on the optimisation and evaluation of the face tracking step. A supplementary fall detection step, based on vertical acceleration and position, will be added and studied in further work.

  3. Evaluation of a metal artifact reduction algorithm applied to post-interventional flat detector CT in comparison to pre-treatment CT in patients with acute subarachnoid haemorrhage.

    PubMed

    Mennecke, Angelika; Svergun, Stanislav; Scholz, Bernhard; Royalty, Kevin; Dörfler, Arnd; Struffert, Tobias

    2017-01-01

    Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. • After coiling subarachnoid haemorrhage, metal artefacts seriously reduce FD-CT image quality. • This new metal artefact reduction algorithm is feasible for flat-detector CT. • After coiling, MAR is necessary for diagnostic quality of affected slices. • Slice-wise Pearson correlation is introduced to evaluate improvement of MAR in future studies. • Metal-unaffected parts of image are not modified by this MAR algorithm.

  4. Comparison among Reconstruction Algorithms for Quantitative Analysis of 11C-Acetate Cardiac PET Imaging.

    PubMed

    Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li

    2018-01-01

    Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to depend on the PET reconstruction method. This study aims to investigate the impact of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Suspected alcoholic cardiomyopathy patients (N = 24) underwent 11C-acetate dynamic PET imaging after a low-dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in the myocardium and the blood pools of the ventricles were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using the iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. Kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, in the correlation analysis, OSEM reconstruction presented a relatively higher residual in correlation with FBP reconstruction than did TOF and TPSF, while TOF and TPSF reconstructions were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging. TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP; OSEM was relatively less reliable. Both TOF and TPSF are recommended for cardiac 11C-acetate kinetic analysis.

  5. A novel measure and significance testing in data analysis of cell image segmentation.

    PubMed

    Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L

    2017-03-14

    Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing the CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, computing the standard errors (SE) of the measures and their correlation coefficient is not described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in the supervised evaluation. The TER statistically aggregates all misclassification error rates (MER) by taking cell sizes as weights. The MERs are for segmenting each single cell in the population. The TER is fully supported by the pairwise comparisons of MERs using 106 manually segmented ground-truth cells with different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of TER are computed based on the SE of MER that is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms. The SEs of TERs and their correlation coefficient can be employed to conduct the hypothesis testing, while the CIs overlap, to determine the statistical significance of the performance differences between CIS algorithms. A novel measure TER of CIS is proposed. The TER's SEs and correlation coefficient are computed. Thereafter, CIS algorithms can be evaluated and compared statistically by conducting the significance testing.
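
    The record above aggregates per-cell misclassification error rates (MER) into a total error rate (TER) using cell sizes as weights. A minimal, size-weighted reading of that aggregation is sketched below; the published method additionally supplies bootstrap standard errors and correlation coefficients for significance testing, which are not reproduced here.

    ```python
    import numpy as np

    def total_error_rate(mer, cell_sizes):
        """Aggregate per-cell misclassification error rates into a TER,
        weighting each cell by its size (e.g. pixel count)."""
        mer = np.asarray(mer, dtype=float)
        w = np.asarray(cell_sizes, dtype=float)
        return float(np.sum(w * mer) / np.sum(w))
    ```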

  6. Multiscale Anomaly Detection and Image Registration Algorithms for Airborne Landmine Detection

    DTIC Science & Technology

    2008-05-01

    ... with the sensed image. The two-dimensional correlation coefficient r for two matrices A and B, both of size M × N, is given by r = ∑_m ∑_n (A_mn ... correlation-based method by matching features in a high-dimensional feature space. The current implementation of the SIFT algorithm uses a brute-force ... by repeatedly convolving the image with a Gaussian kernel. Each plane of the scale ...

  7. On dealing with multiple correlation peaks in PIV

    NASA Astrophysics Data System (ADS)

    Masullo, A.; Theunissen, R.

    2018-05-01

    A novel algorithm to analyse PIV images in the presence of strong in-plane displacement gradients and reduce sub-grid filtering is proposed in this paper. Interrogation windows subjected to strong in-plane displacement gradients often produce correlation maps presenting multiple peaks. Standard multi-grid procedures discard such ambiguous correlation windows using a signal to noise (SNR) filter. The proposed algorithm improves the standard multi-grid algorithm allowing the detection of splintered peaks in a correlation map through an automatic threshold, producing multiple displacement vectors for each correlation area. Vector locations are chosen by translating images according to the peak displacements and by selecting the areas with the strongest match. The method is assessed on synthetic images of a boundary layer of varying intensity and a sinusoidal displacement field of changing wavelength. An experimental case of a flow exhibiting strong velocity gradients is also provided to show the improvements brought by this technique.
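
    The record above extracts several displacement candidates from a single PIV correlation map instead of discarding it as ambiguous. A minimal multi-peak detector is sketched below; the fixed relative threshold stands in for the paper's automatic threshold and is an illustrative assumption.

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter

    def correlation_peaks(corr, rel_thresh=0.5, neighborhood=3):
        """Return all local maxima of a correlation map whose height exceeds
        rel_thresh times the global maximum, as (row, col, value) tuples.

        Each retained peak can then seed its own displacement vector."""
        local_max = corr == maximum_filter(corr, size=neighborhood)
        strong = corr >= rel_thresh * corr.max()
        ys, xs = np.nonzero(local_max & strong)
        return list(zip(ys, xs, corr[ys, xs]))
    ```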

  8. EIT image reconstruction with four dimensional regularization.

    PubMed

    Dai, Tao; Soleimani, Manuchehr; Adler, Andy

    2008-09-01

    Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated especially in high speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance, in comparison to simpler image models.

  9. TU-D-209-03: Alignment of the Patient Graphic Model Using Fluoroscopic Images for Skin Dose Mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oines, A; Oines, A; Kilian-Meneghin, J

    2016-06-15

    Purpose: The Dose Tracking System (DTS) was developed to provide real-time feedback of skin dose and dose rate during interventional fluoroscopic procedures. A color map on a 3D graphic of the patient represents the cumulative dose distribution on the skin. Automated image correlation algorithms are described which use the fluoroscopic procedure images to align and scale the patient graphic for more accurate dose mapping. Methods: Currently, the DTS employs manual patient-graphic selection and alignment. To improve the accuracy of dose mapping and automate the software, various methods are explored to extract information about the beam location and patient morphology from the procedure images. To match patient anatomy with a reference projection image, preprocessing is first used, including edge enhancement, edge detection, and contour detection. Template matching algorithms from OpenCV are then employed to find the location of the beam. Once a match is found, the reference graphic is scaled and rotated to fit the patient using image registration correlation functions in Matlab. The algorithm runs correlation functions for all points and maps all correlation confidences to a surface map. The highest point of correlation is used for alignment and scaling. The transformation data is saved for later model scaling. Results: Anatomic recognition is used to find matching features between model and image, and image registration correlation provides alignment and scaling at any rotation angle with less than one second of runtime, at noise levels in excess of 150% of those found in normal procedures. Conclusion: The algorithm provides the necessary scaling and alignment tools to improve the accuracy of dose distribution mapping on the patient graphic with the DTS. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.

  10. SU-E-J-115: Correlation of Displacement Vector Fields Calculated by Deformable Image Registration Algorithms with Motion Parameters of CT Images with Well-Defined Targets and Controlled-Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaskowiak, J; Ahmad, S; Ali, I

    Purpose: To investigate the correlation of displacement vector fields (DVF) calculated by deformable image registration algorithms with motion parameters in helical, axial and cone-beam CT images with motion artifacts. Methods: A mobile thorax phantom with well-known targets of different sizes, made from water-equivalent material and inserted in foam to simulate lung lesions, was used. The thorax phantom was imaged with helical, axial and cone-beam CT. The phantom was moved with a cyclic motion with different motion amplitudes and frequencies along the superior-inferior direction. Different deformable image registration algorithms, including demons, fast demons, Horn-Schunck and iterative optical flow from the DIRART software, were used to deform the CT images of the phantom with different motion patterns. The CT images of the mobile phantom were deformed to the CT images of the stationary phantom. Results: The values of the displacement vectors calculated by the deformable image registration algorithms correlated strongly with the motion amplitude, where large displacement vectors were calculated for CT images with large motion amplitudes. For example, the maximal displacement vectors were nearly equal to the motion amplitudes (5 mm, 10 mm or 20 mm) at the interfaces between the mobile targets and lung tissue, while the minimal displacement vectors were nearly equal to the negative of the motion amplitudes. The maximal and minimal displacement vectors matched the edges of the blurred targets along the Z-axis (the motion direction), while the DVFs were small in the other directions. This indicates that the edges blurred by phantom motion were shifted largely to match the actual target edges; these shifts were nearly equal to the motion amplitude. Conclusions: The DVF from deformable image registration algorithms correlated well with the motion amplitude of well-defined mobile targets. This can be used to extract motion parameters such as amplitude. However, as motion amplitudes increased, image artifacts increased significantly, which limited image quality and led to poor correlation between the motion amplitude and the DVF.

  11. New perspectives in face correlation: discrimination enhancement in face recognition based on iterative algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Alfalou, A.; Brosseau, C.

    2016-04-01

    Here, we report a brief review on the recent developments of correlation algorithms. Several implementation schemes and specific applications proposed in recent years are also given to illustrate powerful applications of these methods. Following a discussion and comparison of the implementation of these schemes, we believe that all-numerical implementation is the most practical choice for application of the correlation method because the advantages of optical processing cannot compensate the technical and/or financial cost needed for an optical implementation platform. We also present a simple iterative algorithm to optimize the training images of composite correlation filters. By making use of three or four iterations, the peak-to-correlation energy (PCE) value of correlation plane can be significantly enhanced. A simulation test using the Pointing Head Pose Image Database (PHPID) illustrates the effectiveness of this statement. Our method can be applied in many composite filters based on linear composition of training images as an optimization means.
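
    The optimization described above is driven by the peak-to-correlation energy (PCE) of the correlation plane. A minimal sketch of one common form of this metric follows; definitions of PCE vary slightly across the literature (some exclude the peak region from the denominator), so treat this as one illustrative variant.

    ```python
    import numpy as np

    def peak_to_correlation_energy(corr_plane):
        """PCE of a (possibly complex) correlation plane: squared peak
        magnitude divided by the total energy of the plane."""
        c = np.abs(np.asarray(corr_plane, dtype=complex))
        return float(c.max() ** 2 / np.sum(c ** 2))
    ```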

  12. Radar correlated imaging for extended target by the combination of negative exponential restraint and total variation

    NASA Astrophysics Data System (ADS)

    Qian, Tingting; Wang, Lianlian; Lu, Guanghua

    2017-07-01

    Radar correlated imaging (RCI) introduces optical correlated imaging technology into traditional microwave imaging and has attracted widespread attention recently. Conventional RCI methods neglect the structural information of complex extended targets, which limits the quality of the recovered results; thus, a novel combination of negative exponential restraint and total variation (NER-TV) algorithm for extended-target imaging is proposed in this paper. The sparsity is measured by a sequential order-one negative exponential function, and the 2D total variation technique is then introduced to design a novel optimization problem for extended-target imaging. The well-established alternating direction method of multipliers is applied to solve the new problem. Experimental results show that the proposed algorithm can efficiently realize high-resolution imaging of extended targets.

  13. Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.

    PubMed

    Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J

    2015-06-01

    Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (ie, comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. © The Author(s) 2014.

  14. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by selecting an adaptive disparity search range, and it can also increase the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair using the proposed algorithm, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of the reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesizing time of the reconstructed image by about 7.02 s compared with conventional algorithms.

  15. Enhanced Visualization of Subtle Outer Retinal Pathology by En Face Optical Coherence Tomography and Correlation with Multi-Modal Imaging

    PubMed Central

    Chew, Avenell L.; Lamey, Tina; McLaren, Terri; De Roach, John

    2016-01-01

    Purpose To present en face optical coherence tomography (OCT) images generated by graph-search theory algorithm-based custom software and examine correlation with other imaging modalities. Methods En face OCT images derived from high density OCT volumetric scans of 3 healthy subjects and 4 patients using a custom algorithm (graph-search theory) and commercial software (Heidelberg Eye Explorer software (Heidelberg Engineering)) were compared and correlated with near infrared reflectance, fundus autofluorescence, adaptive optics flood-illumination ophthalmoscopy (AO-FIO) and microperimetry. Results Commercial software was unable to generate accurate en face OCT images in eyes with retinal pigment epithelium (RPE) pathology due to segmentation error at the level of Bruch’s membrane (BM). Accurate segmentation of the basal RPE and BM was achieved using custom software. The en face OCT images from eyes with isolated interdigitation or ellipsoid zone pathology were of similar quality between custom software and Heidelberg Eye Explorer software in the absence of any other significant outer retinal pathology. En face OCT images demonstrated angioid streaks, lesions of acute macular neuroretinopathy, hydroxychloroquine toxicity and Bietti crystalline deposits that correlated with other imaging modalities. Conclusions Graph-search theory algorithm helps to overcome the limitations of outer retinal segmentation inaccuracies in commercial software. En face OCT images can provide detailed topography of the reflectivity within a specific layer of the retina which correlates with other forms of fundus imaging. Our results highlight the need for standardization of image reflectivity to facilitate quantification of en face OCT images and longitudinal analysis. PMID:27959968

  16. Algorithm for Automatic Segmentation of Nuclear Boundaries in Cancer Cells in Three-Channel Luminescent Images

    NASA Astrophysics Data System (ADS)

    Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.

    2015-09-01

    We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.

  17. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    PubMed

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. However, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm achieves a total parallel access probability approximately 10-15% higher than that of other algorithms, and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
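
    As a rough sketch of the first step described above, the snippet below builds a symmetric access correlation matrix by counting how often two image files appear close together in an access log. The representation (file indices in a flat list, a fixed co-access window) is an assumption for illustration, not the paper's actual log format or algorithm.

    ```python
    import numpy as np

    def access_correlation_matrix(access_log, n_files, window=10):
        """Count how often two geospatial image files are requested within
        `window` consecutive log entries of each other; the symmetric count
        matrix approximates their access correlation."""
        corr = np.zeros((n_files, n_files), dtype=np.int64)
        for i, a in enumerate(access_log):
            for b in access_log[i + 1:i + 1 + window]:
                if a != b:
                    corr[a, b] += 1
                    corr[b, a] += 1
        return corr

    # Example: files 0 and 1 are frequently accessed together.
    log = [0, 1, 2, 0, 1, 3, 0, 1]
    print(access_correlation_matrix(log, n_files=4))
    ```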

  18. A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing

    NASA Technical Reports Server (NTRS)

    Overmeyer, Austin D.

    2015-01-01

    A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.

  19. Tmax Determined Using a Bayesian Estimation Deconvolution Algorithm Applied to Bolus Tracking Perfusion Imaging: A Digital Phantom Validation Study.

    PubMed

    Uwano, Ikuko; Sasaki, Makoto; Kudo, Kohsuke; Boutelier, Timothé; Kameda, Hiroyuki; Mori, Futoshi; Yamashita, Fumio

    2017-01-10

    The Bayesian estimation algorithm improves the precision of bolus tracking perfusion imaging. However, this algorithm cannot directly calculate Tmax, the time scale widely used to identify ischemic penumbra, because Tmax is a non-physiological, artificial index that reflects the tracer arrival delay (TD) and other parameters. We calculated Tmax from the TD and mean transit time (MTT) obtained by the Bayesian algorithm and determined its accuracy in comparison with Tmax obtained by singular value decomposition (SVD) algorithms. The TD and MTT maps were generated by the Bayesian algorithm applied to digital phantoms with time-concentration curves that reflected a range of values for various perfusion metrics using a global arterial input function. Tmax was calculated from the TD and MTT using constants obtained by a linear least-squares fit to Tmax obtained from the two SVD algorithms that showed the best benchmarks in a previous study. Correlations between the Tmax values obtained by the Bayesian and SVD methods were examined. The Bayesian algorithm yielded accurate TD and MTT values relative to the true values of the digital phantom. Tmax calculated from the TD and MTT values with the least-squares fit constants showed excellent correlation (Pearson's correlation coefficient = 0.99) and agreement (intraclass correlation coefficient = 0.99) with Tmax obtained from SVD algorithms. Quantitative analyses of Tmax values calculated from Bayesian-estimation algorithm-derived TD and MTT from a digital phantom correlated and agreed well with Tmax values determined using SVD algorithms.

  20. Comparison Of Eigenvector-Based Statistical Pattern Recognition Algorithms For Hybrid Processing

    NASA Astrophysics Data System (ADS)

    Tian, Q.; Fainman, Y.; Lee, Sing H.

    1989-02-01

    The pattern recognition algorithms based on eigenvector analysis (group 2) are theoretically and experimentally compared in this part of the paper. Group 2 consists of the Foley-Sammon (F-S) transform, Hotelling trace criterion (HTC), Fukunaga-Koontz (F-K) transform, linear discriminant function (LDF) and generalized matched filter (GMF). It is shown that all eigenvector-based algorithms can be represented in a generalized eigenvector form. However, the calculations of the discriminant vectors are different for different algorithms. Summaries of how to calculate the discriminant functions for the F-S, HTC and F-K transforms are provided. Especially for the more practical, underdetermined case, where the number of training images is less than the number of pixels in each image, the calculations usually require the inversion of a large, singular, pixel correlation (or covariance) matrix. We suggest solving this problem by finding its pseudo-inverse, which requires inverting only the smaller, non-singular image correlation (or covariance) matrix plus multiplying several non-singular matrices. We also theoretically compare the classification effectiveness of the discriminant functions from F-S, HTC and F-K with that of LDF and GMF, and compare the linear-mapping-based algorithms with the eigenvector-based algorithms. Experimentally, we compare the eigenvector-based algorithms using a set of image databases, with each image consisting of 64 x 64 pixels.

  1. Quantum Color Image Encryption Algorithm Based on A Hyper-Chaotic System and Quantum Fourier Transform

    NASA Astrophysics Data System (ADS)

    Tan, Ru-Chao; Lei, Tong; Zhao, Qing-Min; Gong, Li-Hua; Zhou, Zhi-Hong

    2016-12-01

    To improve the slow processing speed of classical image encryption algorithms and enhance the security of private color images, a new quantum color image encryption algorithm based on a hyper-chaotic system is proposed, in which the three components of the original color image are scrambled and diffused with sequences generated by Chen's hyper-chaotic system. Subsequently, the quantum Fourier transform is exploited to complete the encryption. Numerical simulations show that the presented quantum color image encryption algorithm possesses a large key space to resist illegal attacks, sensitive dependence on the initial keys, a uniform distribution of gray values in the encrypted image and weak correlation between adjacent pixels in the cipher-image.

  2. Self-calibration of a noisy multiple-sensor system with genetic algorithms

    NASA Astrophysics Data System (ADS)

    Brooks, Richard R.; Iyengar, S. Sitharama; Chen, Jianhua

    1996-01-01

    This paper explores an image processing application of optimization techniques which entails interpreting noisy sensor data. The application is a generalization of image correlation; we attempt to find the optimal congruence which matches two overlapping gray-scale images corrupted with noise. Both tabu search and genetic algorithms are used to find the parameters which match the two images. A genetic algorithm approach using an elitist reproduction scheme is found to provide significantly superior results. The paper includes a graphical depiction of the paths taken by tabu search and genetic algorithms when trying to find the best possible match between two corrupted images.

  3. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one-dimensional form and then performs matching and identification through one-dimensional correlation; moreover, because normalization is applied, correct matching is still achieved when the image brightness or signal amplitude increases proportionally. Experimental results show that the projection-based image registration method proposed in this article greatly improves matching speed while ensuring matching accuracy.
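
    A minimal sketch of the projection-plus-1D-correlation idea is given below, assuming grayscale NumPy arrays; the function names are illustrative and only horizontal offsets are searched. Because both projections are mean-subtracted and variance-normalized, a proportional change in brightness does not change the score, which mirrors the normalization property claimed above.

    ```python
    import numpy as np

    def normalized_1d_corr(a, b):
        """Normalized correlation of two equal-length 1-D vectors; invariant to
        proportional changes in brightness or signal amplitude."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    def match_by_projection(image, template):
        """Project both arrays onto the x-axis (column sums), then slide the
        template projection along the image projection and return the offset
        with the highest normalized correlation."""
        ip = image.sum(axis=0).astype(np.float64)      # 2-D -> 1-D
        tp = template.sum(axis=0).astype(np.float64)
        scores = [normalized_1d_corr(ip[x:x + tp.size], tp)
                  for x in range(ip.size - tp.size + 1)]
        best = int(np.argmax(scores))
        return best, scores[best]
    ```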

  4. An overview of the stereo correlation and triangulation formulations used in DICe.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Daniel Z.

    This document provides a detailed overview of the stereo correlation algorithm and triangulation formulation used in the Digital Image Correlation Engine (DICe) to triangulate three dimensional motion in space given the image coordinates and camera calibration parameters.

  5. A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na

    2013-01-01

    We propose a new image encryption algorithm on the basis of the fractional-order hyperchaotic Lorenz system. In the process of generating the key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance its security. The algorithm is detailed in terms of security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.
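
    One of the security analyses listed above, the correlation between adjacent pixels, can be estimated with a few lines of NumPy. The sketch below samples random horizontally adjacent pixel pairs from an image and computes their correlation coefficient; the sampling scheme and parameter values are assumptions for illustration only.

    ```python
    import numpy as np

    def adjacent_pixel_correlation(img, n_pairs=3000, seed=0):
        """Correlation coefficient of horizontally adjacent pixels estimated from
        random samples. A natural image typically gives a value close to 1, while
        a well-encrypted cipher image should give a value near 0."""
        rng = np.random.default_rng(seed)
        h, w = img.shape
        ys = rng.integers(0, h, n_pairs)
        xs = rng.integers(0, w - 1, n_pairs)
        a = img[ys, xs].astype(np.float64)
        b = img[ys, xs + 1].astype(np.float64)
        return float(np.corrcoef(a, b)[0, 1])
    ```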

  6. Two-dimensional compression of surface electromyographic signals using column-correlation sorting and image encoders.

    PubMed

    Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O

    2009-01-01

    We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.

  7. Tensor Fukunaga-Koontz transform for small target detection in infrared images

    NASA Astrophysics Data System (ADS)

    Liu, Ruiming; Wang, Jingzhuo; Yang, Huizhen; Gong, Chenglong; Zhou, Yuanshen; Liu, Lipeng; Zhang, Zhen; Shen, Shuli

    2016-09-01

    Infrared small target detection plays a crucial role in warning and tracking systems. Some novel methods based on pattern recognition technology have attracted much attention from researchers. However, these classic methods must reshape images into vectors of high dimensionality, and vectorizing breaks the natural structure and correlations in the image data. Image representation based on tensors treats images as matrices and can preserve the natural structure and correlation information, so tensor algorithms have better classification performance than vector algorithms. The Fukunaga-Koontz transform is one such classification algorithm, but it is a vector-based method and shares the disadvantages of all vector algorithms. In this paper, we first extend the Fukunaga-Koontz transform into its tensor version, the tensor Fukunaga-Koontz transform. We then design a target detection method based on the tensor Fukunaga-Koontz transform and use it to detect small targets in infrared images. The experimental results, compared in terms of signal-to-clutter ratio, signal-to-clutter gain and background suppression factor, validate the advantage of target detection based on the tensor Fukunaga-Koontz transform over that based on the Fukunaga-Koontz transform.

  8. Joint reconstruction of multiview compressed images.

    PubMed

    Thirumalai, Vijayaraghavan; Frossard, Pascal

    2013-05-01

    Distributed representation of correlated multiview images is an important problem that arises in vision sensor networks. This paper concentrates on the joint reconstruction problem where the distributively compressed images are decoded together in order to take benefit from the image correlation. We consider a scenario where the images captured at different viewpoints are encoded independently using common coding solutions (e.g., JPEG) with a balanced rate distribution among different cameras. A central decoder first estimates the inter-view image correlation from the independently compressed data. The joint reconstruction is then cast as a constrained convex optimization problem that reconstructs total-variation (TV) smooth images, which comply with the estimated correlation model. At the same time, we add constraints that force the reconstructed images to be as close as possible to their compressed versions. We show through experiments that the proposed joint reconstruction scheme outperforms independent reconstruction in terms of image quality, for a given target bit rate. In addition, the decoding performance of our algorithm compares advantageously to state-of-the-art distributed coding schemes based on motion learning and on the DISCOVER algorithm.

  9. Implementation of digital image encryption algorithm using logistic function and DNA encoding

    NASA Astrophysics Data System (ADS)

    Suryadi, MT; Satria, Yudi; Fauzi, Muhammad

    2018-03-01

    Cryptography is a method to secure information that might be in the form of a digital image. Based on past research, in order to increase the security level of chaos-based and DNA-based encryption algorithms, an encryption algorithm using a logistic function and DNA encoding was proposed. The digital image encryption algorithm using a logistic function and DNA encoding scrambles the pixel values into DNA bases and operates on them with DNA addition, DNA complement, and XOR operations. The logistic function in this algorithm is used as the random number generator needed in the DNA complement and XOR operations. The test results show that the PSNR values of the cipher images are 7.98-7.99 dB, the entropy values are close to 8, the histograms of the cipher images are uniformly distributed, and the correlation coefficients of the cipher images are near 0. Thus, the cipher image can be decrypted perfectly and the encryption algorithm has good resistance to entropy attack and statistical attack.
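
    To illustrate the role of the logistic function as a key-stream generator, the sketch below iterates the logistic map to produce pseudo-random bytes and applies a simple XOR diffusion; the DNA encoding, DNA addition and DNA complement steps of the actual algorithm are deliberately omitted, and the key values shown are placeholders.

    ```python
    import numpy as np

    def logistic_keystream(n_bytes, x0=0.3456, r=3.99, burn_in=1000):
        """Pseudo-random byte stream from the logistic map x_{k+1} = r*x_k*(1-x_k);
        the initial value x0 and parameter r act as the secret key."""
        x = x0
        for _ in range(burn_in):          # discard the transient iterations
            x = r * x * (1.0 - x)
        out = np.empty(n_bytes, dtype=np.uint8)
        for i in range(n_bytes):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) % 256
        return out

    def xor_diffuse(image_bytes, key_stream):
        """XOR diffusion of flattened uint8 image bytes with the key stream."""
        return np.bitwise_xor(image_bytes, key_stream)
    ```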

  10. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problems of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on a parallax constraint and clustering analysis is proposed. First, the Harris corner detection algorithm is used to extract the feature points of the two images. Second, a Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes feature point pairs with obvious errors from the approximate matching process. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the proposed image registration algorithm improves matching accuracy while ensuring the real-time performance of the algorithm.
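
    The NCC step in the pipeline above can be sketched as follows, assuming two grayscale NumPy images and lists of Harris corner coordinates that lie at least `half` pixels away from the borders; the greedy pairing strategy and threshold are illustrative simplifications rather than the authors' exact procedure.

    ```python
    import numpy as np

    def ncc(patch_a, patch_b):
        """Normalized cross-correlation of two equal-sized patches, in [-1, 1]."""
        a = patch_a.astype(np.float64).ravel()
        b = patch_b.astype(np.float64).ravel()
        a -= a.mean()
        b -= b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def match_features(img1, img2, pts1, pts2, half=7, thresh=0.8):
        """Greedy approximate matching: pair each corner in img1 with the corner in
        img2 whose surrounding patch gives the highest NCC above `thresh`."""
        pairs = []
        for (y1, x1) in pts1:
            p1 = img1[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1]
            best, best_pt = thresh, None
            for (y2, x2) in pts2:
                p2 = img2[y2 - half:y2 + half + 1, x2 - half:x2 + half + 1]
                if p1.shape == p2.shape:
                    score = ncc(p1, p2)
                    if score > best:
                        best, best_pt = score, (y2, x2)
            if best_pt is not None:
                pairs.append(((y1, x1), best_pt))
        return pairs
    ```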

  11. Parallel image logical operations using cross correlation

    NASA Technical Reports Server (NTRS)

    Strong, J. P., III

    1972-01-01

    Methods are presented for counting areas in an image in a parallel manner using noncoherent optical techniques. The techniques presented include the Levialdi algorithm for counting, optical techniques for binary operations, and cross-correlation.

  12. Image stitching and image reconstruction of intestines captured using radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Wu, Yin-Yi; Dung, Lan-Rong; Wu, Hsien-Ming; Weng, Ping-Kuo; Huang, Ker-Jer; Chiu, Luan-Jiau

    2012-05-01

    This study investigates image processing using the radial imaging capsule endoscope (RICE) system. First, an experimental environment is established in which a simulated object has a shape that is similar to a cylinder, such that a triaxial platform can be used to push the RICE into the sample and capture radial images. Then four algorithms (mean absolute error, mean square error, Pearson correlation coefficient, and deformation processing) are used to stitch the images together. The Pearson correlation coefficient method is the most effective algorithm because it yields the highest peak signal-to-noise ratio, higher than 80.69 compared to the original image. Furthermore, a living animal experiment is carried out. Finally, the Pearson correlation coefficient method and vector deformation processing are used to stitch the images that were captured in the living animal experiment. This method is very attractive because unlike the other methods, in which two lenses are required to reconstruct the geometrical image, RICE uses only one lens and one mirror.

  13. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  14. Microscopic image analysis for reticulocyte based on watershed algorithm

    NASA Astrophysics Data System (ADS)

    Wang, J. Q.; Liu, G. F.; Liu, J. G.; Wang, G.

    2007-12-01

    We present a watershed-based algorithm for the analysis of light microscopic images of reticulocytes (RETs), which will be used in an automated recognition system for RETs in peripheral blood. The original images, obtained by micrography, are segmented by a modified watershed algorithm and are recognized in terms of gray entropy and the area of connected regions. In the watershed algorithm, judgment conditions are controlled according to the character of the image, and the segmentation is performed by morphological subtraction. The algorithm was simulated with MATLAB software. Automated and manual scoring give similar results, with good correlation (r=0.956) between the methods over 50 RET images. The results indicate that the algorithm for peripheral blood RETs is comparable to conventional manual scoring and is superior in objectivity. The algorithm avoids time-consuming calculations such as ultra-erosion and region growth, which speeds up the computation.

  15. Optimized atom position and coefficient coding for matching pursuit-based image compression.

    PubMed

    Shoa, Alireza; Shirani, Shahram

    2009-12-01

    In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.

  16. A convergence algorithm for correlation of breech face images based on the congruent matching cells (CMC) method.

    PubMed

    Chen, Zhe; Song, John; Chu, Wei; Soons, Johannes A; Zhao, Xuezeng

    2017-11-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for accurate firearm evidence identification and error rate estimation. The CMC method is based on the principle of discretization. The toolmark image of the reference sample is divided into correlation cells. Each cell is registered to the cell-sized area of the compared image that has maximum surface topography similarity. For each resulting cell pair, one parameter quantifies the similarity of the cell surface topography and three parameters quantify the pattern congruency of the registration position and orientation. An identification (declared match) requires a significant number of CMCs, that is, cell pairs that meet both similarity and pattern congruency requirements. The use of cell correlations reduces the effects of "invalid regions" in the compared image pairs and increases the correlation accuracy. The identification accuracy of the CMC method can be further improved by considering a feature named "convergence," that is, the tendency of the x-y registration positions of the correlated cell pairs to converge at the correct registration angle when comparing same-source samples at different relative orientations. In this paper, the difference of the convergence feature between known matching (KM) and known non-matching (KNM) image pairs is characterized, based on which an improved algorithm is developed for breech face image correlations using the CMC method. Its advantage is demonstrated by comparison with three existing CMC algorithms using four datasets. The datasets address three different brands of consecutively manufactured pistol slides, with significant differences in the distribution overlap of cell pair topography similarity for KM and KNM image pairs. For the same CMC threshold values, the convergence algorithm demonstrates noticeably improved results by reducing the number of false-positive or false-negative CMCs in a comparison. Published by Elsevier B.V.

  17. Investigation of undersampling and reconstruction algorithm dependence on respiratory correlated 4D-MRI for online MR-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Mickevicius, Nikolai J.; Paulson, Eric S.

    2017-04-01

    The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MRI-gRT. A 3D stack-of-stars, self-navigated, 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence, parallelized reconstruction algorithms, and computing hardware more advanced than that typically found on product MRI scanners can result in the acquisition and reconstruction of high quality respiratory correlated 4D-MRI images in less than five minutes.

  18. Measurement of fluid rotation, dilation, and displacement in particle image velocimetry using a Fourier–Mellin cross-correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giarra, Matthew N.; Charonko, John J.; Vlachos, Pavlos P.

    Traditional particle image velocimetry (PIV) uses discrete Cartesian cross correlations (CCs) to estimate the displacements of groups of tracer particles within small subregions of sequentially captured images. However, these CCs fail in regions with large velocity gradients or high rates of rotation. In this paper, we propose a new PIV correlation method based on the Fourier–Mellin transformation (FMT) that enables direct measurement of the rotation and dilation of particle image patterns. In previously unresolvable regions of large rotation, our algorithm significantly improves the velocity estimates compared to traditional correlations by aligning the rotated and stretched particle patterns prior to performing Cartesian correlations to estimate their displacements. Furthermore, our algorithm, which we term Fourier–Mellin correlation (FMC), reliably measures particle pattern displacement between pairs of interrogation regions with up to ±180° of angular misalignment, compared to 6–8° for traditional correlations, and dilation/compression factors of 0.5–2.0, compared to 0.9–1.1 for a single iteration of traditional correlations.

  19. Measurement of fluid rotation, dilation, and displacement in particle image velocimetry using a Fourier–Mellin cross-correlation

    DOE PAGES

    Giarra, Matthew N.; Charonko, John J.; Vlachos, Pavlos P.

    2015-02-05

    Traditional particle image velocimetry (PIV) uses discrete Cartesian cross correlations (CCs) to estimate the displacements of groups of tracer particles within small subregions of sequentially captured images. However, these CCs fail in regions with large velocity gradients or high rates of rotation. In this paper, we propose a new PIV correlation method based on the Fourier–Mellin transformation (FMT) that enables direct measurement of the rotation and dilation of particle image patterns. In previously unresolvable regions of large rotation, our algorithm significantly improves the velocity estimates compared to traditional correlations by aligning the rotated and stretched particle patterns prior to performing Cartesian correlations to estimate their displacements. Furthermore, our algorithm, which we term Fourier–Mellin correlation (FMC), reliably measures particle pattern displacement between pairs of interrogation regions with up to ±180° of angular misalignment, compared to 6–8° for traditional correlations, and dilation/compression factors of 0.5–2.0, compared to 0.9–1.1 for a single iteration of traditional correlations.

  20. A Novel Optical/digital Processing System for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Boone, Bradley G.; Shukla, Oodaye B.

    1993-01-01

    This paper describes two processing algorithms that can be implemented optically: the Radon transform and angular correlation. These two algorithms can be combined in one optical processor to extract all the basic geometric and amplitude features from objects embedded in video imagery. We show that the internal amplitude structure of objects is recovered by the Radon transform, which is a well-known result, but, in addition, we show simulation results that calculate angular correlation, a simple but unique algorithm that extracts object boundaries from suitably thresholded images, from which length, width, area, aspect ratio, and orientation can be derived. In addition to circumventing scale and rotation distortions, these simulations indicate that the features derived from the angular correlation algorithm are relatively insensitive to tracking shifts and image noise. Some optical architecture concepts, including one based on micro-optical lenslet arrays, have been developed to implement these algorithms. Simulation testing and evaluation using simple synthetic object data will be described, including results of a study that uses object boundaries (derivable from angular correlation) to classify simple objects using a neural network.

  1. Model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.

  2. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications. PMID:24320536
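
    For readers unfamiliar with the FCM step used above, the sketch below implements a plain two-class fuzzy c-means on pixel intensities; it does not include the CLIC bias-field estimation, and the parameter values and function name are assumptions for illustration.

    ```python
    import numpy as np

    def fcm_intensity(pixels, n_clusters=2, m=2.0, n_iter=50, seed=0):
        """Minimal fuzzy c-means on a 1-D array of pixel intensities. Returns the
        fuzzy membership matrix (n_pixels x n_clusters) and the cluster centres.
        No bias-field correction is performed here."""
        rng = np.random.default_rng(seed)
        x = pixels.astype(np.float64).reshape(-1, 1)
        u = rng.random((x.shape[0], n_clusters))
        u /= u.sum(axis=1, keepdims=True)               # memberships sum to 1
        for _ in range(n_iter):
            um = u ** m
            centers = (um.T @ x) / um.sum(axis=0).reshape(-1, 1)
            dist = np.abs(x - centers.T) + 1e-12        # (n_pixels, n_clusters)
            inv = dist ** (-2.0 / (m - 1.0))
            u = inv / inv.sum(axis=1, keepdims=True)    # standard FCM update
        return u, centers.ravel()
    ```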

  3. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    PubMed

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. The investigated CLIC method significantly increased the precision and accuracy of breast density quantification using breast MRI images by effectively correcting the bias field. It is expected that a fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.

  4. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.

  5. Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.

    PubMed

    Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A

    2011-04-01

    Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
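
    The pooled-covariance Mahalanobis distance step can be sketched as below; the feature vectors shown are random stand-ins rather than real CE-MRA data, and the function names are illustrative.

    ```python
    import numpy as np

    def pooled_covariance(a, b):
        """Pooled sample covariance of two feature sets (rows = samples)."""
        na, nb = len(a), len(b)
        ca, cb = np.cov(a, rowvar=False), np.cov(b, rowvar=False)
        return ((na - 1) * ca + (nb - 1) * cb) / (na + nb - 2)

    def mahalanobis(x, mean, cov_inv):
        d = x - mean
        return float(np.sqrt(d @ cov_inv @ d))

    # Stand-in artery/vein ROI feature samples (e.g. temporal-correlation features).
    artery = np.random.randn(50, 3) + [1.0, 0.0, 0.0]
    vein = np.random.randn(50, 3) + [0.0, 1.0, 0.0]
    cov_inv = np.linalg.inv(pooled_covariance(artery, vein))

    # Classify a voxel by its Mahalanobis distance to each class.
    voxel = np.array([0.9, 0.1, 0.0])
    d_art = mahalanobis(voxel, artery.mean(axis=0), cov_inv)
    d_vein = mahalanobis(voxel, vein.mean(axis=0), cov_inv)
    label = "artery" if d_art < d_vein else "vein"
    ```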

  6. Improved target detection algorithm using Fukunaga-Koontz transform and distance classifier correlation filter

    NASA Astrophysics Data System (ADS)

    Bal, A.; Alam, M. S.; Aslan, M. S.

    2006-05-01

    Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field-of-view, leading to the reappearing-target detection problem in target tracking applications. Since the target leaves the current frame and reenters at a later frame, the reentering location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, the DCCF-based clutter rejection module; once the target coordinates are determined, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.

  7. Score-Level Fusion of Phase-Based and Feature-Based Fingerprint Matching Algorithms

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Morita, Ayumi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper proposes an efficient fingerprint recognition algorithm combining phase-based image matching and feature-based matching. In our previous work, we have already proposed an efficient fingerprint recognition algorithm using Phase-Only Correlation (POC), and developed commercial fingerprint verification units for access control applications. The use of Fourier phase information of fingerprint images makes it possible to achieve robust recognition for weakly impressed, low-quality fingerprint images. This paper presents an idea of improving the performance of POC-based fingerprint matching by combining it with feature-based matching, where feature-based matching is introduced in order to improve recognition efficiency for images with nonlinear distortion. Experimental evaluation using two different types of fingerprint image databases demonstrates efficient recognition performance of the combination of the POC-based algorithm and the feature-based algorithm.
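
    A minimal NumPy sketch of the Phase-Only Correlation function underlying the phase-based matcher is shown below; practical fingerprint matchers typically add band-limiting and windowing, which are omitted here.

    ```python
    import numpy as np

    def phase_only_correlation(f, g, eps=1e-12):
        """Phase-Only Correlation of two same-sized grayscale images: the inverse
        FFT of the normalized cross-power spectrum. For two translated copies of
        the same image, the POC surface shows a sharp peak at the displacement,
        and the peak height serves as a similarity score."""
        F = np.fft.fft2(f.astype(np.float64))
        G = np.fft.fft2(g.astype(np.float64))
        R = F * np.conj(G)
        R /= np.abs(R) + eps                 # keep phase, discard magnitude
        poc = np.real(np.fft.ifft2(R))
        peak = np.unravel_index(np.argmax(poc), poc.shape)
        return poc, peak, float(poc[peak])
    ```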

  8. A kind of graded sub-pixel motion estimation algorithm combining time-domain characteristics with frequency-domain phase correlation

    NASA Astrophysics Data System (ADS)

    Xie, Bing; Duan, Zhemin; Chen, Yu

    2017-11-01

    The navigation mode based on scene matching can assist a UAV in achieving autonomous navigation and other missions. However, aerial multi-frame images from a UAV in a complex flight environment are easily affected by jitter, noise and exposure, which lead to image blur, deformation and other issues and reduce the detection rate of targets in the region of interest. To address this problem, we propose a graded sub-pixel motion estimation algorithm combining time-domain characteristics with frequency-domain phase correlation. Experimental results prove the validity and accuracy of the proposed algorithm.

  9. Digital SAR processing using a fast polynomial transform

    NASA Technical Reports Server (NTRS)

    Butman, S.; Lipes, R.; Rubin, A.; Truong, T. K.

    1981-01-01

    A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network.

  10. Multimodality imaging of ovarian cystic lesions: Review with an imaging based algorithmic approach

    PubMed Central

    Wasnik, Ashish P; Menias, Christine O; Platt, Joel F; Lalchandani, Usha R; Bedi, Deepak G; Elsayes, Khaled M

    2013-01-01

    Ovarian cystic masses include a spectrum of benign, borderline and high grade malignant neoplasms. Imaging plays a crucial role in characterization and pretreatment planning of incidentally detected or suspected adnexal masses, as diagnosis of ovarian malignancy at an early stage is correlated with a better prognosis. Knowledge of differential diagnosis, imaging features, management trends and an algorithmic approach of such lesions is important for optimal clinical management. This article illustrates a multi-modality approach in the diagnosis of a spectrum of ovarian cystic masses and also proposes an algorithmic approach for the diagnosis of these lesions. PMID:23671748

  11. Development of CD3 cell quantitation algorithms for renal allograft biopsy rejection assessment utilizing open source image analysis software.

    PubMed

    Moon, Andres; Smith, Geoffrey H; Kong, Jun; Rogers, Thomas E; Ellis, Carla L; Farris, Alton B Brad

    2018-02-01

    Renal allograft rejection diagnosis depends on assessment of parameters such as interstitial inflammation; however, studies have shown interobserver variability regarding interstitial inflammation assessment. Since automated image analysis quantitation can be reproducible, we devised customized analysis methods for CD3+ T-cell staining density as a measure of rejection severity and compared them with established commercial methods along with visual assessment. Renal biopsy CD3 immunohistochemistry slides (n = 45), including renal allografts with various degrees of acute cellular rejection (ACR), were scanned for whole slide images (WSIs). Inflammation was quantitated in the WSIs using pathologist visual assessment, commercial algorithms (Aperio nuclear algorithm for CD3+ cells/mm² and Aperio positive pixel count algorithm), and customized open source algorithms developed in ImageJ with thresholding/positive pixel counting (custom CD3+%) and identification of pixels fulfilling "maxima" criteria for CD3 expression (custom CD3+ cells/mm²). Based on visual inspections of "markup" images, CD3 quantitation algorithms produced adequate accuracy. Additionally, CD3 quantitation algorithms correlated with each other and also with visual assessment in a statistically significant manner (r = 0.44 to 0.94, p = 0.003 to < 0.0001). Methods for assessing inflammation suggested a progression through the tubulointerstitial ACR grades, with statistically different results in borderline versus other ACR types, in all but the custom methods. Assessment of CD3-stained slides using various open source image analysis algorithms presents salient correlations with established methods of CD3 quantitation. These analysis techniques are promising and highly customizable, providing a form of on-slide "flow cytometry" that can facilitate additional diagnostic accuracy in tissue-based assessments.
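
    As a rough analogue of the custom thresholding/positive-pixel-counting metric, the sketch below computes the fraction of pixels above a fixed staining-intensity threshold and converts a cell count to cells/mm²; the threshold, pixel size and function names are hypothetical and do not reproduce the authors' ImageJ macros.

    ```python
    import numpy as np

    def cd3_positive_fraction(stain_channel, threshold=100):
        """Fraction of pixels whose CD3 staining intensity exceeds a fixed
        threshold -- a simplified analogue of a 'CD3+%' style metric."""
        positive = stain_channel > threshold
        return float(positive.sum()) / positive.size

    def cells_per_mm2(n_cells, pixel_size_um, image_shape):
        """Convert a detected cell count to cells per square millimetre,
        given the pixel size in micrometres."""
        area_mm2 = image_shape[0] * image_shape[1] * (pixel_size_um / 1000.0) ** 2
        return n_cells / area_mm2
    ```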

  12. Evaluation of an Area-Based matching algorithm with advanced shape models

    NASA Astrophysics Data System (ADS)

    Re, C.; Roncella, R.; Forlani, G.; Cremonese, G.; Naletto, G.

    2014-04-01

    Nowadays, the scientific institutions involved in planetary mapping are working on new strategies to produce accurate high-resolution DTMs from space images at planetary scale, usually dealing with extremely large data volumes. From a methodological point of view, despite the introduction of a series of new image matching algorithms (e.g. Semi-Global Matching) that yield superior results with lower processing times, especially because they usually produce smooth and continuous surfaces, the preference in this field still goes to well-established area-based matching techniques. Many efforts are consequently directed at improving each phase of the photogrammetric process, from image pre-processing to DTM interpolation. In this context, the Dense Matcher software (DM) developed at the University of Parma has recently been optimized to cope with the very high resolution images provided by the most recent missions (LROC NAC and HiRISE), focusing mainly on improving the correlation phase and automating the process. Important changes have been made to the correlation algorithm, while maintaining its high performance in terms of precision and accuracy, by implementing an advanced version of the Least Squares Matching (LSM) algorithm. In particular, an iterative algorithm has been developed to adapt the geometric transformation in image resampling using different shape functions, as originally proposed by other authors in different applications.

  13. Chaotic CDMA watermarking algorithm for digital image in FRFT domain

    NASA Astrophysics Data System (ADS)

    Liu, Weizhong; Yang, Wentao; Feng, Zhuoming; Zou, Xuecheng

    2007-11-01

    In this paper, a digital image watermarking algorithm in the fractional Fourier transform (FRFT) domain that utilizes a chaotic CDMA technique is presented. As a popular and typical transmission technique, CDMA has many advantages such as privacy, anti-jamming and low power spectral density, which can provide robustness against image distortions and malicious attempts to remove or tamper with the watermark. A super-hybrid chaotic map, with good auto-correlation and cross-correlation characteristics, is adopted to produce many quasi-orthogonal codes (QOC) that can replace the periodic PN-code used in a traditional CDMA system. The watermarking data is divided into many segments, each corresponding to a different chaotic QOC; these are modulated into CDMA watermarking data and embedded into the low-frequency amplitude coefficients of the FRFT domain of the cover image. During watermark detection, each chaotic QOC extracts its corresponding watermarking segment by calculating correlation coefficients between the chaotic QOC and the watermarked data of the detected image. The CDMA technique not only enhances the robustness of the watermark but also compresses the data of the modulated watermark. Experimental results show that the watermarking algorithm performs well in three aspects: imperceptibility, anti-attack robustness and security.

  14. An efficient dictionary learning algorithm and its application to 3-D medical image denoising.

    PubMed

    Li, Shutao; Fang, Leyuan; Yin, Haitao

    2012-02-01

    In this paper, we propose an efficient dictionary learning algorithm for sparse representation of given data and suggest a way to apply this algorithm to 3-D medical image denoising. Our learning approach is composed of two main parts: sparse coding and dictionary updating. On the sparse coding stage, an efficient algorithm named multiple clusters pursuit (MCP) is proposed. The MCP first applies a dictionary structuring strategy to cluster the atoms with high coherence together, and then employs a multiple-selection strategy to select several competitive atoms at each iteration. These two strategies can greatly reduce the computation complexity of the MCP and assist it to obtain better sparse solution. On the dictionary updating stage, the alternating optimization that efficiently approximates the singular value decomposition is introduced. Furthermore, in the 3-D medical image denoising application, a joint 3-D operation is proposed for taking the learning capabilities of the presented algorithm to simultaneously capture the correlations within each slice and correlations across the nearby slices, thereby obtaining better denoising results. The experiments on both synthetically generated data and real 3-D medical images demonstrate that the proposed approach has superior performance compared to some well-known methods. © 2011 IEEE

  15. Self-recovery fragile watermarking algorithm based on SPIHT

    NASA Astrophysics Data System (ADS)

    Xin, Li Ping

    2015-12-01

    A fragile watermarking algorithm based on SPIHT coding is proposed that can recover the original image itself. The novelty of the algorithm is that it provides both tamper localization and self-restoration, and the recovery is very effective. First, utilizing the zero-tree structure, the algorithm compresses and encodes the image itself to obtain self-correlated watermark data, which greatly reduces the quantity of embedded watermark data. The watermark data is then encoded with an error correcting code, and the check bits and watermark bits are scrambled and embedded to enhance the recovery ability. At the same time, by embedding the watermark into the last two bits of the gray-level image's bit-plane code, the watermarked image retains a good visual quality. The experimental results show that the proposed algorithm can not only detect various processing operations such as noise addition, cropping, and filtering, but can also recover the tampered image and realize blind detection. The peak signal-to-noise ratio of the watermarked image was higher than that of other similar algorithms, and the algorithm's resistance to attacks was enhanced.

  16. Autofocus and fusion using nonlinear correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabazos-Marín, Alma Rocío; Álvarez-Borrego, Josué, E-mail: josue@cicese.mx; Coronel-Beltrán, Ángel

    2014-10-06

    In this work a new algorithm is proposed for autofocus and fusion of images captured by a microscope's CCD. The proposed autofocus algorithm implements a spiral scan of each image in the stack f(x, y)_w to define the vector V_w. The spectrum FV_w of each vector is calculated by the fast Fourier transform. The best in-focus image is determined by a focus measure obtained from the nonlinear correlation of FV_1, the vector of the reference image, with each of the other vectors FV_w in the stack. In addition, fusion is performed with a subset of selected images f(x, y)_SBF, namely the images with the best focus measurement. Fusion creates a new, improved image f(x, y)_F by selecting the pixels of higher intensity.

  17. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    An image block matching algorithm based on the motion vectors of correlated pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique that obtains the relative motion among frames of dynamic image sequences by digital image processing. In this method, the matching parameters are calculated from the vectors projected in the oblique direction. These parameters simultaneously contain the information of the vectors in the transverse and vertical directions of the image blocks, so better matching information can be obtained after the correlation operation in the oblique direction. An iterative weighted least squares method is used to eliminate block matching errors, with weights related to the pixels' rotational angle. The center of rotation and the global motion estimate of the shaking image are obtained by weighted least squares from the estimates of blocks chosen evenly from the image; the shaking image can then be stabilized using the center of rotation and the global motion estimate. The algorithm can also run in real time by using a simulated annealing strategy in the block matching search. An image processing system based on a DSP was used to examine this algorithm. The core processor in the DSP system is a TI TMS320C6416, and a CCD camera with a resolution of 720×576 pixels was chosen as the input video signal. Experimental results show that the algorithm can be performed on the real-time processing system with accurate matching precision.

  18. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm.

    PubMed

    Tehrani, Joubin Nasehi; O'Brien, Ricky T; Poulsen, Per Rugaard; Keall, Paul

    2013-12-07

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to a lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three dimensional coordinates of three fiducial markers inside the prostate were calculated. The three dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error was improved for real-time calculation of tumor displacement from a mean of 0.97 mm with the stand alone translation to a mean of 0.16 mm by adding real-time rotation and translation displacement with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real-time measurement and adaptation to tumor rotation.
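
    With the three fiducial-marker correspondences known, each ICP-style update reduces to a least-squares rigid alignment, which can be written compactly with an SVD (the Kabsch solution). The sketch below is a generic illustration of that step, not the authors' clinical implementation.

    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping 3-D marker
        positions `src` onto `dst` (rows = markers), via the SVD-based
        Kabsch solution: dst_i ~= R @ src_i + t."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t
    ```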

  19. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, Joubin; O'Brien, Ricky T.; Rugaard Poulsen, Per; Keall, Paul

    2013-12-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to a lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated. The three-dimensional coordinates were used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error of the real-time tumor displacement calculation was improved from a mean of 0.97 mm with stand-alone translation to a mean of 0.16 mm when real-time rotation and translation were included with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° for rotation around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) directions, respectively. The correlation between all six degrees of freedom showed that the highest correlation belonged to the AP and SI translation with a correlation of 0.67. The second highest correlation in our study was between the rotation around RL and rotation around AP, with a correlation of -0.33. Our real-time algorithm for calculation of rotation also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring real-time measurement and adaptation to tumor rotation.

  20. Image stack alignment in full-field X-ray absorption spectroscopy using SIFT_PyOCL.

    PubMed

    Paleo, Pierre; Pouyet, Emeline; Kieffer, Jérôme

    2014-03-01

    Full-field X-ray absorption spectroscopy experiments allow the acquisition of millions of spectra within minutes. However, the construction of the hyperspectral image requires an image alignment procedure with sub-pixel precision. While image correlation algorithms were originally used for re-alignment by translation, the Scale Invariant Feature Transform (SIFT) algorithm (which is by design robust against rotation, illumination change, translation and scaling) offers an additional advantage: the alignment can be limited to a region of interest of any arbitrary shape. In this context, a Python module, named SIFT_PyOCL, has been developed. It implements a parallel version of the SIFT algorithm in OpenCL, providing high-speed image registration and alignment both on processors and graphics cards. The performance of the algorithm allows online processing of large datasets.
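
    SIFT_PyOCL itself is an OpenCL module, so the sketch below uses OpenCV's CPU SIFT instead to show the same registration idea (feature matching followed by a robust transform fit). The function names are OpenCV's, not SIFT_PyOCL's, and the ratio-test threshold and similarity-transform model are assumptions of this illustration.

        import cv2
        import numpy as np

        def align_to_reference(ref, img):
            """Estimate a similarity transform (translation, rotation, scale)
            from SIFT matches and warp img onto ref (both 8-bit grayscale)."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(ref, None)
            kp2, des2 = sift.detectAndCompute(img, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = matcher.knnMatch(des2, des1, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
            src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
            return cv2.warpAffine(img, M, (ref.shape[1], ref.shape[0]))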

  1. Image defog algorithm based on open close filter and gradient domain recursive bilateral filter

    NASA Astrophysics Data System (ADS)

    Liu, Daqian; Liu, Wanjun; Zhao, Qingguo; Fei, Bowen

    2017-11-01

    To address the fuzzy details, color distortion and low brightness of images produced by the dark channel prior defog algorithm, an image defog algorithm based on an open-close filter and a gradient domain recursive bilateral filter, referred to as OCRBF, is put forward. The OCRBF algorithm first uses a weighted quadtree to obtain a more accurate global atmospheric value, then applies multiple-structure-element morphological open and close filters to the minimum channel map to obtain a rough scattering map from the dark channel prior, uses a variogram to correct the transmittance map, applies a gradient domain recursive bilateral filter for smoothing, and finally recovers the image through the image degradation model, with a contrast adjustment yielding a bright, clear, fog-free result. A large number of experimental results show that the proposed defog method removes fog well and recovers the color and definition of foggy images containing close-range content, wide perspectives and bright areas; compared with other image defog algorithms it obtains clearer, more natural fog-free images with more visible detail. Moreover, the time complexity of the proposed algorithm is linearly correlated with the number of image pixels.
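
    A minimal sketch of the dark channel prior step that OCRBF builds on is given below. The patch size, the atmospheric-light heuristic, and the omega/t0 constants are assumptions of the illustration; the quadtree refinement, morphological open-close filtering, variogram correction, and gradient domain recursive bilateral filtering of the full OCRBF pipeline are omitted.

        import numpy as np
        from scipy.ndimage import minimum_filter

        def dark_channel(img, patch=15):
            """img: HxWx3 float array in [0, 1]; minimum over channels, then over a patch."""
            return minimum_filter(img.min(axis=2), size=patch)

        def defog(img, patch=15, omega=0.95, t0=0.1):
            dark = dark_channel(img, patch)
            # crude atmospheric light: mean color of the brightest 0.1% dark-channel pixels
            idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
            A = img.reshape(-1, 3)[idx].mean(axis=0)
            trans = 1.0 - omega * dark_channel(img / A, patch)   # transmission estimate
            trans = np.clip(trans, t0, 1.0)[..., None]
            return np.clip((img - A) / trans + A, 0.0, 1.0)      # invert the degradation model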

  2. Effect of anisoplanatism on the measurement accuracy of an extended-source Hartmann-Shack wavefront sensor

    NASA Astrophysics Data System (ADS)

    Woeger, Friedrich; Rimmele, Thomas

    2009-10-01

    We analyze the effect of anisoplanatic atmospheric turbulence on the measurement accuracy of an extended-source Hartmann-Shack wavefront sensor (HSWFS). We have numerically simulated an extended-source HSWFS, using a scenery of the solar surface that is imaged through anisoplanatic atmospheric turbulence and imaging optics. Solar extended-source HSWFSs often use cross-correlation algorithms in combination with subpixel shift finding algorithms to estimate the wavefront gradient, two of which were tested for their effect on the measurement accuracy. We find that the measurement error of an extended-source HSWFS is governed mainly by the optical geometry of the HSWFS, employed subpixel finding algorithm, and phase anisoplanatism. Our results show that effects of scintillation anisoplanatism are negligible when cross-correlation algorithms are used.
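
    The per-subaperture shift estimation such sensors rely on can be sketched as a cross-correlation peak search with parabolic subpixel refinement. This is one common subpixel finding method, not necessarily either of the two variants tested in the paper; windowing and normalization details are left out for brevity.

        import numpy as np
        from scipy.signal import fftconvolve

        def subpixel_shift(ref, img):
            """Estimate the (dy, dx) shift of img relative to ref from the
            cross-correlation peak, refined by a 1D parabolic fit in each axis."""
            ref = ref - ref.mean()
            img = img - img.mean()
            cc = fftconvolve(img, ref[::-1, ::-1], mode='same')
            py, px = np.unravel_index(np.argmax(cc), cc.shape)
            shift = []
            for p, axis_len, line in ((py, cc.shape[0], cc[:, px]), (px, cc.shape[1], cc[py, :])):
                if 0 < p < axis_len - 1:
                    num = line[p - 1] - line[p + 1]
                    den = 2.0 * (line[p - 1] - 2.0 * line[p] + line[p + 1])
                    p = p + (num / den if den != 0 else 0.0)   # parabolic vertex offset
                shift.append(p - axis_len // 2)
            return tuple(shift)   # (dy, dx)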

  3. High-definition computed tomography for coronary artery stents imaging: Initial evaluation of the optimal reconstruction algorithm.

    PubMed

    Cui, Xiaoming; Li, Tao; Li, Xin; Zhou, Weihua

    2015-05-01

    The aim of this study was to evaluate the in vivo performance of four image reconstruction algorithms in a high-definition CT (HDCT) scanner with improved spatial resolution for the evaluation of coronary artery stents and intrastent lumina. Thirty-nine consecutive patients with a total of 71 implanted coronary stents underwent coronary CT angiography (CCTA) on an HDCT (Discovery CT 750 HD; GE Healthcare) with the high-resolution scanning mode. Four different reconstruction algorithms (HD-stand, HD-detail, HD-stand-plus, HD-detail-plus) were applied to reconstruct the stented coronary arteries. Image quality for stent characterization was assessed. Image noise and intrastent luminal diameter were measured. The relationships between the measured inner stent diameter (ISD), the true stent diameter (TSD), and the stent type were analysed. The stent-dedicated kernel (HD-detail) offered the highest percentage (53.5%) of good image quality for stent characterization and the highest ratio (68.0±8.4%) of visible stent lumen/true stent lumen for luminal diameter measurement, at the expense of increased overall image noise. The Pearson correlation coefficient between the ISD and TSD measurements and the Spearman correlation coefficient between the ISD measurement and stent type were 0.83 and 0.48, respectively. Compared with standard reconstruction algorithms, the high-definition CT imaging technique with a dedicated high-resolution reconstruction algorithm provides more accurate stent characterization and intrastent luminal diameter measurement. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. Automatic detection and classification of damage zone(s) for incorporating in digital image correlation technique

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Sudipta; Deb, Debasis

    2016-07-01

    Digital image correlation (DIC) is a technique developed for monitoring surface deformation/displacement of an object under loading conditions. This method is further refined to make it capable of handling discontinuities on the surface of the sample. A damage zone refers to a surface area that fractures and opens in the course of loading. In this study, an algorithm is presented to automatically detect multiple damage zones in the deformed image. The algorithm identifies the pixels located inside these zones and eliminates them from the FEM-DIC process. The proposed algorithm is successfully implemented on several damaged samples to estimate the displacement fields of an object under loading conditions. This study shows that the resulting displacement fields represent the damage conditions reasonably well compared to the regular FEM-DIC technique that does not consider the damage zones.

  5. Correlation coefficient based supervised locally linear embedding for pulmonary nodule recognition.

    PubMed

    Wu, Panpan; Xia, Kewen; Yu, Hengyong

    2016-11-01

    Dimensionality reduction techniques are developed to suppress the negative effects of the high dimensional feature space of lung CT images on classification performance in computer aided detection (CAD) systems for pulmonary nodule detection. An improved supervised locally linear embedding (SLLE) algorithm is proposed based on the concept of correlation coefficient. The Spearman's rank correlation coefficient is introduced to adjust the distance metric in the SLLE algorithm to ensure that more suitable neighborhood points can be identified, and thus to enhance the discriminating power of embedded data. The proposed Spearman's rank correlation coefficient based SLLE (SC(2)SLLE) is implemented and validated in our pilot CAD system using a clinical dataset collected from the publicly available lung image database consortium and image database resource initiative (LIDC-IDRI). Particularly, a representative CAD system for solitary pulmonary nodule detection is designed and implemented. After a sequence of medical image processing steps, 64 nodules and 140 non-nodules are extracted, and 34 representative features are calculated. The SC(2)SLLE, SLLE, and LLE algorithms are applied to reduce the dimensionality. Several quantitative measurements are also used to evaluate and compare the performances. Using a 5-fold cross-validation methodology, the proposed algorithm achieves 87.65% accuracy, 79.23% sensitivity, 91.43% specificity, and 8.57% false positive rate, on average. Experimental results indicate that the proposed algorithm outperforms the original locally linear embedding and SLLE coupled with the support vector machine (SVM) classifier. Based on the preliminary results from a limited number of nodules in our dataset, this study demonstrates the great potential to improve the performance of a CAD system for nodule detection using the proposed SC(2)SLLE. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Digital SAR processing using a fast polynomial transform

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Lipes, R. G.; Butman, S. A.; Reed, I. S.; Rubin, A. L.

    1984-01-01

    A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network. Previously announced in STAR as N82-11295

  7. Sensitivity study of voxel-based PET image comparison to image registration algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Chen, Aileen B.; Berbeco, Ross

    2014-11-01

    Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of treatment plan and treatment response assessment. The comparison may be sensitive to the method of deformable registration as the optimal algorithm is unknown. This study investigated the impact of registration algorithm choice on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent pre- and post-treatment computed tomography (CT) and 4D FDG-PET scans before and after chemoradiotherapy. All CT images were coregistered using a rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. Moreover, the tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes. Particularly, voxels with standardized uptake value (SUV) decreases >30% were classified as responder, while voxels with SUV increases >30% were progressor. All other voxels were considered stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by the Dice similarity index (DSI). The coefficient of variation (CV) was computed to assess the variability of DSI between individual tumors. The root mean square difference (RMS_rigid) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS_rigid and DSI were correlated by the Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI_rigid was found to be 72%, 66%, and 80% for progressor, stable-disease, and responder, respectively. Median DSI_deformable was 63%–84%, 65%–81%, and 82%–89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms with CV > 10% for all subvolumes. Tumor deformation had moderate to significant impact on DSI for the progressor subvolume with R_rigid = −0.60 (p = 0.01) and R_deformable = −0.46 (p = 0.01–0.20) averaging over all deformable algorithms. For stable-disease subvolumes, the correlations were significant (p < 0.001) for all registration algorithms with R_rigid = −0.71 and R_deformable = −0.72. Progressor and stable-disease subvolumes resulting from rigid registration were in excellent agreement (DSI > 70%) for RMS_rigid < 150 HU. However, tumor deformation was observed to have negligible effect on DSI for responder subvolumes with insignificant |R| < 0.26, p > 0.27. Conclusions: This study demonstrated that deformable algorithms cannot be arbitrarily chosen; different deformable algorithms can result in large differences in voxel-based PET image comparison. For low tumor deformation (RMS_rigid < 150 HU), rigid and deformable algorithms yield similar results, suggesting deformable registration is not required for these cases.
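
    The subvolume agreement metric used throughout this study is the standard Dice similarity index; a small sketch follows, assuming the subvolumes are available as binary NumPy masks (an assumption of the illustration).

        import numpy as np

        def dice_similarity(mask_a, mask_b):
            """Dice similarity index between two binary masks: 2|A∩B| / (|A| + |B|)."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            denom = a.sum() + b.sum()
            return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

        # e.g. agreement of the responder subvolumes from two registration algorithms:
        # dsi = dice_similarity(responder_rigid, responder_deformable)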

  8. Development of a hybrid image processing algorithm for automatic evaluation of intramuscular fat content in beef M. longissimus dorsi.

    PubMed

    Du, Cheng-Jin; Sun, Da-Wen; Jackman, Patrick; Allen, Paul

    2008-12-01

    An automatic method for estimating the content of intramuscular fat (IMF) in beef M. longissimus dorsi (LD) was developed using a sequence of image processing algorithms. To extract IMF particles within the LD muscle from the structural features of the intermuscular fat surrounding the muscle, a three-step image processing algorithm was developed, i.e. bilateral filtering for noise removal, kernel fuzzy c-means clustering (KFCM) for segmentation, and vector confidence connected and flood fill for IMF extraction. Bilateral filtering was first applied to reduce noise and enhance the contrast of the beef image. KFCM was then used to segment the filtered beef image into lean, fat, and background. The IMF was finally extracted from the original beef image using the techniques of vector confidence connected and flood filling. The performance of the algorithm was verified by correlation analysis between the IMF characteristics and the percentage of chemically extractable IMF content (P<0.05). Five IMF features are very significantly correlated with the fat content (P<0.001), including the count densities of middle (CDMiddle) and large (CDLarge) fat particles, the area densities of middle and large fat particles, and the total fat area per unit LD area. The highest coefficient is 0.852 for CDLarge.

  9. A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei

    2018-01-01

    Image signals acquired by a wireless visual sensor network can be used for specific event capture, which is realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement and joint reconstruction algorithm for these image signals is proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used to evaluate the reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compressive rate than the independent reconstruction algorithm.

  10. Experimental results for correlation-based wavefront sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poyneer, L A; Palmer, D W; LaFortune, K N

    2005-07-01

    Correlation wave-front sensing can improve Adaptive Optics (AO) system performance in two key areas. For point-source-based AO systems, correlation is more accurate, more robust to changing conditions, and provides lower noise than a centroiding algorithm. Experimental results from the Lick AO system and the SSHCL laser AO system confirm this. For remote imaging, correlation enables the use of extended objects for wave-front sensing. Results from short horizontal-path experiments show the algorithm's properties and requirements.

  11. The correlation study of parallel feature extractor and noise reduction approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dewi, Deshinta Arrova; Sundararajan, Elankovan; Prabuwono, Anton Satria

    2015-05-15

    This paper presents literature reviews that show a variety of techniques for developing a parallel feature extractor and for finding its correlation with noise reduction approaches for low-light-intensity images. Low-light-intensity images are normally darker and have low contrast. Without proper handling techniques, such images regularly lead to misperception of objects and textures and the inability to segment them. The resulting visual illusions regularly lead to disorientation, user fatigue, and poor detection and classification performance of humans and computer algorithms. Noise reduction (NR) is therefore an essential step preceding other image processing steps such as edge detection, image segmentation, image compression, etc. A parallel feature extractor (PFE), meant to capture the visual content of images, involves partitioning images into segments, detecting image overlaps if any, and controlling distributed and redistributed segments to extract the features. Working on low-light-intensity images makes the PFE face challenges and depend closely on the quality of its pre-processing steps. Some papers have suggested many well-established NR as well as PFE strategies, but only a few resources have suggested or mentioned the correlation between them. This paper reviews the best approaches to NR and PFE with a detailed explanation of the suggested correlation. This finding may suggest relevant strategies for PFE development. With the help of knowledge-based reasoning, computational approaches and algorithms, we present a correlation study between NR and PFE that can be useful for the development and enhancement of other existing PFEs.

  12. A difference tracking algorithm based on discrete sine transform

    NASA Astrophysics Data System (ADS)

    Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun

    2018-04-01

    Target tracking is an important field of computer vision. Template matching tracking algorithms based on squared difference (SSD) matching and normalized correlation coefficient (NCC) matching are very sensitive to changes in image gray level. When the brightness or gray level changes, the tracking algorithm is affected by high-frequency information, tracking accuracy is reduced, and the tracking target may be lost. In this paper, a difference tracking algorithm based on the discrete sine transform is proposed to reduce the influence of image gray or brightness changes. The algorithm combines the discrete sine transform with a difference operation to map the target image into a digital sequence. A Kalman filter predicts the target position, and the Hamming distance determines the degree of similarity between the target and the template. The window closest to the template is taken as the target to be tracked, and the template is then updated with the tracked target, completing the tracking loop. The algorithm is tested in this paper. Compared with SSD and NCC template matching algorithms, it tracks the target stably when the image gray level or brightness changes, and its tracking speed meets the real-time requirement.

  13. Advanced synthetic image generation models and their application to multi/hyperspectral algorithm development

    NASA Astrophysics Data System (ADS)

    Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary

    1999-01-01

    The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with `actual' truth measurements of the entire image area that are not subject to measurement error thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.

  14. Introduction of Total Variation Regularization into Filtered Backprojection Algorithm

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    In this paper we extend the state-of-the-art filtered backprojection (FBP) method with the application of the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularizing in FBP image reconstruction via apodizing functions. The methods are validated in terms of the cross-correlation coefficient between the reconstructed and real images of the radioactive tracer distribution using a standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.

  15. An adaptive clustering algorithm for image matching based on corner feature

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-04-01

    Traditional image matching algorithms cannot balance real-time performance and accuracy well; to solve this problem, an adaptive clustering algorithm for image matching based on corner features is proposed in this paper. The method is based on the similarity of the matching vector pairs, and adaptive clustering is performed on the matching point pairs. Harris corner detection is carried out first, the feature points of the reference image and the perceived image are extracted, and the feature points of the two images are initially matched with a normalized cross-correlation (NCC) function. Then, using the improved algorithm proposed in this paper, the matching results are clustered to reduce ineffective operations and improve the matching speed and robustness. Finally, the Random Sample Consensus (RANSAC) algorithm is applied to the matching points after clustering. The experimental results show that the proposed algorithm can effectively eliminate most wrong matching points while the correct matching points are retained, improving the accuracy of RANSAC matching and reducing the computation load of the whole matching process at the same time.
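
    The initial NCC matching of corner patches can be sketched as below. The patch half-width and acceptance threshold are assumptions of this illustration, and the Harris detection, adaptive clustering, and RANSAC stages are left to standard tooling.

        import numpy as np

        def ncc(patch_a, patch_b):
            """Normalized cross-correlation between two equally sized patches."""
            a = patch_a.astype(np.float64) - patch_a.mean()
            b = patch_b.astype(np.float64) - patch_b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return 0.0 if denom == 0 else float((a * b).sum() / denom)

        def match_corners(img_a, corners_a, img_b, corners_b, half=7, thresh=0.8):
            """Greedy NCC matching of corner points (y, x); returns index pairs (i, j).
            Corners closer than `half` pixels to an image border are skipped."""
            def patch(img, y, x):
                if half <= y < img.shape[0] - half and half <= x < img.shape[1] - half:
                    return img[y - half:y + half + 1, x - half:x + half + 1]
                return None
            pairs = []
            for i, (ya, xa) in enumerate(corners_a):
                pa = patch(img_a, ya, xa)
                if pa is None:
                    continue
                best_j, best_s = -1, thresh
                for j, (yb, xb) in enumerate(corners_b):
                    pb = patch(img_b, yb, xb)
                    if pb is None:
                        continue
                    s = ncc(pa, pb)
                    if s > best_s:
                        best_j, best_s = j, s
                if best_j >= 0:
                    pairs.append((i, best_j))
            return pairs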

  16. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization

    NASA Astrophysics Data System (ADS)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means (FCM) clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization; λ adjusts the weights of the pixels' local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two different experimental results show that the novel fuzzy C-means approach achieves efficient performance and computational time while segmenting images corrupted by different types of noise.

  17. Evaluation of a deformable registration algorithm for subsequent lung computed tomography imaging during radiochemotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stützer, Kristin; Haase, Robert; Exner, Florian

    2016-09-15

    Purpose: Rating both a lung segmentation algorithm and a deformable image registration (DIR) algorithm for subsequent lung computed tomography (CT) images by different evaluation techniques. Furthermore, investigating the relative performance and the correlation of the different evaluation techniques to address their potential value in a clinical setting. Methods: Two to seven subsequent CT images (69 in total) of 15 lung cancer patients were acquired prior to, during, and after radiochemotherapy. Automated lung segmentations were compared to manually adapted contours. DIR between the first and all following CT images was performed with a fast algorithm specialized for lung tissue registration, requiring the lung segmentation as input. DIR results were evaluated based on landmark distances, lung contour metrics, and vector field inconsistencies in different subvolumes defined by eroding the lung contour. Correlations between the results from the three methods were evaluated. Results: Automated lung contour segmentation was satisfactory in 18 cases (26%), failed in 6 cases (9%), and required manual correction in 45 cases (66%). Initial and corrected contours had large overlap but showed strong local deviations. Landmark-based DIR evaluation revealed high accuracy compared to CT resolution with an average error of 2.9 mm. Contour metrics of deformed contours were largely satisfactory. The median vector length of inconsistency vector fields was 0.9 mm in the lung volume and slightly smaller for the eroded volumes. There was no clear correlation between the three evaluation approaches. Conclusions: Automatic lung segmentation remains challenging but can assist the manual delineation process. Proven by three techniques, the inspected DIR algorithm delivers reliable results for the lung CT data sets acquired at different time points. Clinical application of DIR demands a fast DIR evaluation to identify unacceptable results, for instance, by combining different automated DIR evaluation methods.

  18. Comparison of distribution of lung aeration measured with EIT and CT in spontaneously breathing, awake patients.

    PubMed

    Radke, Oliver C; Schneider, Thomas; Braune, Anja; Pirracchio, Romain; Fischer, Felix; Koch, Thea

    2016-09-28

    Both Electrical Impedance Tomography (EIT) and Computed Tomography (CT) allow the estimation of the lung area. We compared two algorithms for the detection of the lung area per quadrant from the EIT images with the lung areas derived from the CT images. 39 outpatients who were scheduled for an elective CT scan of the thorax were included in the study. For each patient we recorded EIT images immediately before the CT scan. The lung area per quadrant was estimated from both CT and EIT data, using two different algorithms for the EIT data. Data showed considerable variation during spontaneous breathing of the patients. Overall correlation between EIT and CT was poor (0.58-0.77), while the correlation between the two EIT algorithms was better (0.90-0.92). Bland-Altman analysis revealed absence of bias, but wide limits of agreement. Lung area estimation from CT and EIT differs significantly, most probably because of the fundamental difference in image generation.

  19. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality delivered by two popular JPEG2000 programs. Two medical image compression algorithms are both coded using JPEG2000, but they differ regarding the interface, convenience, speed of computation, and the characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric for the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.

  20. Cest Analysis: Automated Change Detection from Very-High Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Ehlers, M.; Klonus, S.; Jarmer, T.; Sofina, N.; Michel, U.; Reinartz, P.; Sirmacek, B.

    2012-08-01

    A fast detection, visualization and assessment of change in areas of crisis or catastrophe are important requirements for the coordination and planning of help. Through the availability of new satellites and/or airborne sensors with very high spatial resolutions (e.g., WorldView, GeoEye), new remote sensing data are available for a better detection, delineation and visualization of change. For automated change detection, a large number of algorithms has been proposed and developed. From previous studies, however, it is evident that to date no single algorithm has the potential for being a reliable change detector for all possible scenarios. This paper introduces the Combined Edge Segment Texture (CEST) analysis, a decision-tree based cooperative suite of algorithms for automated change detection that is especially designed for the new generation of satellites with very high spatial resolution. The method incorporates frequency based filtering, texture analysis, and image segmentation techniques. For the frequency analysis, different band pass filters can be applied to identify the relevant frequency information for change detection. After transforming the multitemporal images via a fast Fourier transform (FFT) and applying the most suitable band pass filter, different methods are available to extract changed structures: differencing and correlation in the frequency domain and correlation and edge detection in the spatial domain. Best results are obtained using edge extraction. For the texture analysis, different 'Haralick' parameters can be calculated (e.g., energy, correlation, contrast, inverse distance moment), with 'energy' so far providing the most accurate results. These algorithms are combined with a prior segmentation of the image data as well as with morphological operations for a final binary change result. A rule-based combination (CEST) of the change algorithms is applied to calculate the probability of change for a particular location. CEST was tested with high-resolution satellite images of the crisis areas of Darfur (Sudan). CEST results are compared with a number of standard algorithms for automated change detection such as image difference, image ratio, principal component analysis, the delta cue technique and post-classification change detection. The new combined method shows superior results, averaging between 15% and 45% improvement in accuracy.

  1. Improved Resolution and Reduced Clutter in Ultra-Wideband Microwave Imaging Using Cross-Correlated Back Projection: Experimental and Numerical Results

    PubMed Central

    Jacobsen, S.; Birkelund, Y.

    2010-01-01

    Microwave breast cancer detection is based on the dielectric contrast between healthy and malignant tissue. This radar-based imaging method involves illumination of the breast with an ultra-wideband pulse. Detection of tumors within the breast is achieved by some selected focusing technique. Image formation algorithms are tailored to enhance tumor responses and reduce early-time and late-time clutter associated with skin reflections and heterogeneity of breast tissue. In this contribution, we evaluate the performance of the so-called cross-correlated back projection imaging scheme by using a scanning system in phantom experiments. Supplementary numerical modeling based on commercial software is also presented. The phantom is synthetically scanned with a broadband elliptical antenna in a mono-static configuration. The respective signals are pre-processed by a data-adaptive RLS algorithm in order to remove artifacts caused by antenna reverberations and signal clutter. Successful detection of a 7 mm diameter cylindrical tumor immersed in a low permittivity medium was achieved in all cases. Selecting the widely used delay-and-sum (DAS) beamforming algorithm as a benchmark, we show that correlation based imaging methods improve the signal-to-clutter ratio by at least 10 dB and improve spatial resolution through a reduction of the imaged peak full-width half maximum (FWHM) of about 40–50%. PMID:21331362
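
    For reference, the delay-and-sum benchmark mentioned above can be sketched in a few lines. A monostatic geometry, a constant propagation speed, and simple nearest-sample delays are assumptions of this illustration; apodization and the cross-correlation weighting of the proposed method are not included.

        import numpy as np

        def das_image(signals, antenna_pos, grid_pts, fs, c):
            """Delay-and-sum focusing for a monostatic radar setup.
            signals:     (N_antennas, N_samples) time-domain traces
            antenna_pos: (N_antennas, 2) antenna coordinates [m]
            grid_pts:    (N_pixels, 2) focal point coordinates [m]
            fs:          sampling frequency [Hz];  c: propagation speed [m/s]"""
            n_ant, n_samp = signals.shape
            image = np.zeros(len(grid_pts))
            for k, p in enumerate(grid_pts):
                dist = np.linalg.norm(antenna_pos - p, axis=1)                      # one-way distance
                delays = np.clip((2.0 * dist / c * fs).astype(int), 0, n_samp - 1)  # round-trip samples
                image[k] = np.sum(signals[np.arange(n_ant), delays]) ** 2
            return image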

  2. Improved resolution and reduced clutter in ultra-wideband microwave imaging using cross-correlated back projection: experimental and numerical results.

    PubMed

    Jacobsen, S; Birkelund, Y

    2010-01-01

    Microwave breast cancer detection is based on the dielectric contrast between healthy and malignant tissue. This radar-based imaging method involves illumination of the breast with an ultra-wideband pulse. Detection of tumors within the breast is achieved by some selected focusing technique. Image formation algorithms are tailored to enhance tumor responses and reduce early-time and late-time clutter associated with skin reflections and heterogeneity of breast tissue. In this contribution, we evaluate the performance of the so-called cross-correlated back projection imaging scheme by using a scanning system in phantom experiments. Supplementary numerical modeling based on commercial software is also presented. The phantom is synthetically scanned with a broadband elliptical antenna in a mono-static configuration. The respective signals are pre-processed by a data-adaptive RLS algorithm in order to remove artifacts caused by antenna reverberations and signal clutter. Successful detection of a 7 mm diameter cylindrical tumor immersed in a low permittivity medium was achieved in all cases. Selecting the widely used delay-and-sum (DAS) beamforming algorithm as a benchmark, we show that correlation based imaging methods improve the signal-to-clutter ratio by at least 10 dB and improve spatial resolution through a reduction of the imaged peak full-width half maximum (FWHM) of about 40-50%.

  3. [Application of elastic registration based on Demons algorithm in cone beam CT].

    PubMed

    Pang, Haowen; Sun, Xiaoyang

    2014-02-01

    We applied the Demons and accelerated Demons elastic registration algorithms to radiotherapy cone beam CT (CBCT) images, providing software support for real-time understanding of organ changes during radiotherapy. We wrote a 3D CBCT image elastic registration program using Matlab software and tested it on the 3D CBCT images of two patients with cervical cancer. With the classic Demons algorithm, the mean square error (MSE) decreased by 59.7% and the correlation coefficient (CC) increased by 11.0%, while with the accelerated Demons algorithm the MSE decreased by 40.1% and the CC increased by 7.2%. The experimental verification of the two Demons algorithm variants gave the desired results, but the small remaining differences indicate limited precision, and the total registration time was somewhat long; both accuracy and running time need further improvement.
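
    The two evaluation metrics quoted above are straightforward to compute; a minimal sketch (plain NumPy, assuming the registered and target images are already resampled to the same grid) follows.

        import numpy as np

        def mse(img_a, img_b):
            """Mean squared intensity error between a registered image and its target."""
            a = img_a.astype(np.float64)
            b = img_b.astype(np.float64)
            return float(np.mean((a - b) ** 2))

        def correlation_coefficient(img_a, img_b):
            """Pearson correlation coefficient between the two images' intensities."""
            a = img_a.astype(np.float64).ravel()
            b = img_b.astype(np.float64).ravel()
            return float(np.corrcoef(a, b)[0, 1])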

  4. A fast global fitting algorithm for fluorescence lifetime imaging microscopy based on image segmentation.

    PubMed

    Pelet, S; Previte, M J R; Laiho, L H; So, P T C

    2004-10-01

    Global fitting algorithms have been shown to improve effectively the accuracy and precision of the analysis of fluorescence lifetime imaging microscopy data. Global analysis performs better than unconstrained data fitting when prior information exists, such as the spatial invariance of the lifetimes of individual fluorescent species. The highly coupled nature of global analysis often results in a significantly slower convergence of the data fitting algorithm as compared with unconstrained analysis. Convergence speed can be greatly accelerated by providing appropriate initial guesses. Realizing that the image morphology often correlates with fluorophore distribution, a global fitting algorithm has been developed to assign initial guesses throughout an image based on a segmentation analysis. This algorithm was tested on both simulated data sets and time-domain lifetime measurements. We have successfully measured fluorophore distribution in fibroblasts stained with Hoechst and calcein. This method further allows second harmonic generation from collagen and elastin autofluorescence to be differentiated in fluorescence lifetime imaging microscopy images of ex vivo human skin. On our experimental measurement, this algorithm increased convergence speed by over two orders of magnitude and achieved significantly better fits. Copyright 2004 Biophysical Society

  5. Block-Based Connected-Component Labeling Algorithm Using Binary Decision Trees

    PubMed Central

    Chang, Wan-Yu; Chiu, Chung-Cheng; Yang, Jia-Horng

    2015-01-01

    In this paper, we propose a fast labeling algorithm based on block-based concepts. Because the number of memory access points directly affects the time consumption of the labeling algorithms, the aim of the proposed algorithm is to minimize neighborhood operations. Our algorithm utilizes a block-based view and correlates a raster scan to select the necessary pixels generated by a block-based scan mask. We analyze the advantages of a sequential raster scan for the block-based scan mask, and integrate the block-connected relationships using two different procedures with binary decision trees to reduce unnecessary memory access. This greatly simplifies the pixel locations of the block-based scan mask. Furthermore, our algorithm significantly reduces the number of leaf nodes and depth levels required in the binary decision tree. We analyze the labeling performance of the proposed algorithm alongside that of other labeling algorithms using high-resolution images and foreground images. The experimental results from synthetic and real image datasets demonstrate that the proposed algorithm is faster than other methods. PMID:26393597
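
    For comparison, a label image can be obtained with standard tooling; the sketch below uses SciPy's reference connected-component labeling rather than the block-based decision-tree method proposed in the paper, and the 8-connectivity choice is an assumption.

        import numpy as np
        from scipy import ndimage

        def label_foreground(binary_img):
            """Label 8-connected foreground components of a binary image.
            Returns the label image and the number of components."""
            structure = np.ones((3, 3), dtype=int)          # 8-connectivity
            labels, n = ndimage.label(binary_img, structure=structure)
            return labels, n

        # e.g. labels, n = label_foreground(img > 128)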

  6. Sensor-based auto-focusing system using multi-scale feature extraction and phase correlation matching.

    PubMed

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-03-10

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems.
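
    The phase correlation step at the heart of this system can be sketched as follows (integer-pixel peak only; the sensor-specific left/right phase-pixel handling, multi-scale feature extraction, and subpixel refinement are not modeled).

        import numpy as np

        def phase_correlation_shift(left, right):
            """Estimate the translation between two images from the phase-only
            cross-power spectrum; returns (dy, dx) at integer-pixel resolution."""
            F1 = np.fft.fft2(left)
            F2 = np.fft.fft2(right)
            cross_power = F1 * np.conj(F2)
            cross_power /= np.maximum(np.abs(cross_power), 1e-12)   # keep phase only
            corr = np.fft.ifft2(cross_power).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # map wrapped indices to signed shifts
            if dy > left.shape[0] // 2:
                dy -= left.shape[0]
            if dx > left.shape[1] // 2:
                dx -= left.shape[1]
            return dy, dx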

  7. Sensor-Based Auto-Focusing System Using Multi-Scale Feature Extraction and Phase Correlation Matching

    PubMed Central

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-01-01

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems. PMID:25763645

  8. Developing and evaluating a target-background similarity metric for camouflage detection.

    PubMed

    Lin, Chiuhsiang Joe; Chang, Chi-Chan; Liu, Bor-Shong

    2014-01-01

    Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures and could be a potential camouflage assessment tool. In this study, we quantify the relationship between the camouflage similarity index and the psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze the strengths and weaknesses of these algorithms. The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. It also shows that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results.
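
    The UIQI referred to above has a closed form. A sketch of the single-window version is given below; in practice the index is usually averaged over sliding windows, which is omitted here.

        import numpy as np

        def uiqi(x, y):
            """Universal Image Quality Index of Wang & Bovik for two image windows:
            Q = 4*sigma_xy*mean_x*mean_y / ((sigma_x^2 + sigma_y^2) * (mean_x^2 + mean_y^2))."""
            x = x.astype(np.float64).ravel()
            y = y.astype(np.float64).ravel()
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            cov = ((x - mx) * (y - my)).mean()
            denom = (vx + vy) * (mx * mx + my * my)
            return 1.0 if denom == 0 else 4.0 * cov * mx * my / denom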

  9. Statistical image reconstruction from correlated data with applications to PET

    PubMed Central

    Alessio, Adam; Sauer, Ken; Kinahan, Paul

    2008-01-01

    Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576

  10. Parallel algorithm of VLBI software correlator under multiprocessor environment

    NASA Astrophysics Data System (ADS)

    Zheng, Weimin; Zhang, Dong

    2007-11-01

    The correlator is the key signal processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the mass of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is a data-intensive and computation-intensive task. This paper presents the algorithms of two parallel software correlators for multiprocessor environments. A near real-time correlator for spacecraft tracking adopts pipelining and thread-parallel technology, and runs on SMP (Symmetric Multiprocessor) servers. Another high-speed prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm is realized on a small Beowulf cluster platform. Both correlators are characterized by a flexible structure, scalability, and the ability to correlate 10-station data.

  11. Noise-cancellation-based nonuniformity correction algorithm for infrared focal-plane arrays.

    PubMed

    Godoy, Sebastián E; Pezoa, Jorge E; Torres, Sergio N

    2008-10-10

    The spatial fixed-pattern noise (FPN) inherently generated in infrared (IR) imaging systems compromises severely the quality of the acquired imagery, even making such images inappropriate for some applications. The FPN refers to the inability of the photodetectors in the focal-plane array to render a uniform output image when a uniform-intensity scene is being imaged. We present a noise-cancellation-based algorithm that compensates for the additive component of the FPN. The proposed method relies on the assumption that a source of noise correlated to the additive FPN is available to the IR camera. An important feature of the algorithm is that all the calculations are reduced to a simple equation, which allows for the bias compensation of the raw imagery. The algorithm performance is tested using real IR image sequences and is compared to some classical methodologies. (c) 2008 Optical Society of America

  12. A novel iris localization algorithm using correlation filtering

    NASA Astrophysics Data System (ADS)

    Pohit, Mausumi; Sharma, Jitu

    2015-06-01

    Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. In this paper we present a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. The pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less expensive than existing algorithms such as the Hough transform.

  13. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data is often contaminated by noise signals of unknown intensity. To better preserve the edge features while suppressing the aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm solves the image reconstruction as a standard optimization problem including an ℓ2 data fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
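
    The ℓ1-regularized reconstruction described here is typically solved by iterative soft-thresholding. The sketch below is a generic data-consistency plus thresholding loop for an undersampled Fourier measurement model; the wavelet transform, the edge-correlation prior matrix, and the adaptive threshold of the proposed algorithm are omitted, and the lam and n_iter values are assumptions.

        import numpy as np

        def soft_threshold(x, lam):
            """Complex soft-thresholding operator for the l1 term."""
            mag = np.abs(x)
            return np.where(mag > lam, (1.0 - lam / np.maximum(mag, 1e-12)) * x, 0.0)

        def ista_recon(kspace, mask, lam=0.01, n_iter=50):
            """Reconstruct an image from undersampled k-space (mask = sampled locations)
            by alternating data consistency and image-domain soft-thresholding."""
            img = np.fft.ifft2(kspace * mask)
            for _ in range(n_iter):
                k = np.fft.fft2(img)
                k = np.where(mask, kspace, k)       # enforce consistency with measured samples
                img = np.fft.ifft2(k)
                # sparsity step (applied directly to the image here; the paper works in a wavelet domain)
                img = soft_threshold(img, lam * np.abs(img).max())
            return np.abs(img)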

  14. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy

    NASA Astrophysics Data System (ADS)

    Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.

    2016-02-01

    We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.

  15. The effect of 18F-FDG-PET image reconstruction algorithms on the expression of characteristic metabolic brain network in Parkinson's disease.

    PubMed

    Tomše, Petra; Jensterle, Luka; Rep, Sebastijan; Grmek, Marko; Zaletel, Katja; Eidelberg, David; Dhawan, Vijay; Ma, Yilong; Trošt, Maja

    2017-09-01

    To evaluate the reproducibility of the expression of Parkinson's Disease Related Pattern (PDRP) across multiple sets of 18F-FDG-PET brain images reconstructed with different reconstruction algorithms. 18F-FDG-PET brain imaging was performed in two independent cohorts of Parkinson's disease (PD) patients and normal controls (NC). Slovenian cohort (20 PD patients, 20 NC) was scanned with Siemens Biograph mCT camera and reconstructed using FBP, FBP+TOF, OSEM, OSEM+TOF, OSEM+PSF and OSEM+PSF+TOF. American Cohort (20 PD patients, 7 NC) was scanned with GE Advance camera and reconstructed using 3DRP, FORE-FBP and FORE-Iterative. Expressions of two previously-validated PDRP patterns (PDRP-Slovenia and PDRP-USA) were calculated. We compared the ability of PDRP to discriminate PD patients from NC, differences and correlation between the corresponding subject scores and ROC analysis results across the different reconstruction algorithms. The expression of PDRP-Slovenia and PDRP-USA networks was significantly elevated in PD patients compared to NC (p<0.0001), regardless of reconstruction algorithms. PDRP expression strongly correlated between all studied algorithms and the reference algorithm (r⩾0.993, p<0.0001). Average differences in the PDRP expression among different algorithms varied within 0.73 and 0.08 of the reference value for PDRP-Slovenia and PDRP-USA, respectively. ROC analysis confirmed high similarity in sensitivity, specificity and AUC among all studied reconstruction algorithms. These results show that the expression of PDRP is reproducible across a variety of reconstruction algorithms of 18F-FDG-PET brain images. PDRP is capable of providing a robust metabolic biomarker of PD for multicenter 18F-FDG-PET images acquired in the context of differential diagnosis or clinical trials. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  16. Image processing via VLSI: A concept paper

    NASA Technical Reports Server (NTRS)

    Nathan, R.

    1982-01-01

    Implementing specific image processing algorithms via very large scale integrated systems offers a potent solution to the problem of handling high data rates. Two algorithms stand out as being particularly critical -- geometric map transformation and filtering or correlation. These two functions form the basis for data calibration, registration and mosaicking. VLSI presents itself as an inexpensive ancillary function to be added to almost any general purpose computer, and if the geometry and filter algorithms are implemented in VLSI, the processing rate bottleneck would be significantly relieved. A plan is developed that identifies the image processing functions limiting present systems in meeting future throughput needs, translates these functions into algorithms, implements them via VLSI technology, and interfaces the hardware to a general purpose digital computer.

  17. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between the recorded in-focus and out-of-focus images used by the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates, for the first time, the theoretical relationship between the image misalignment and the tip-tilt terms in the Zernike polynomials of the wavefront phase, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. The algorithm first performs a spatial 2D cross-correlation of the misaligned images, reducing the offset to within 1 to 2 pixels and narrowing the search range for alignment. It then eliminates the need for subpixel fine alignment by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel, achieving adaptive correction. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm in improving the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
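
    The first step of the correction, coarse alignment by 2D cross-correlation, can be sketched in a few lines of Python. This is an illustrative implementation, not the authors' code; the function name and the FFT-based formulation are assumptions.

    ```python
    import numpy as np

    def coarse_align_offset(image_a, image_b):
        """Estimate the integer-pixel offset between two images from the peak
        of their FFT-based 2D cross-correlation (coarse alignment sketch)."""
        f_a = np.fft.fft2(image_a - image_a.mean())
        f_b = np.fft.fft2(image_b - image_b.mean())
        xcorr = np.fft.ifft2(f_a * np.conj(f_b)).real
        peak = np.array(np.unravel_index(np.argmax(xcorr), xcorr.shape))
        size = np.array(xcorr.shape)
        peak[peak > size // 2] -= size[peak > size // 2]  # wrap to signed shifts
        return tuple(int(p) for p in peak)
    ```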

  18. TU-H-206-01: An Automated Approach for Identifying Geometric Distortions in Gamma Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, S; Nelson, J; Samei, E

    2016-06-15

    Purpose: To develop a clinically-deployable, automated process for detecting artifacts in routine nuclear medicine (NM) quality assurance (QA) bar phantom images. Methods: An artifact detection algorithm was created to analyze bar phantom images as part of an ongoing QA program. A low noise, high resolution reference image was acquired from an x-ray of the bar phantom with a Philips Digital Diagnost system utilizing image stitching. NM bar images, acquired for 5 million counts over a 512×512 matrix, were registered to the template image by maximizing mutual information (MI). The MI index was used as an initial test for artifacts; low values indicate an overall presence of distortions regardless of their spatial location. Images with low MI scores were further analyzed for bar linearity, periodicity, alignment, and compression to locate differences with respect to the template. Findings from each test were spatially correlated and locations failing multiple tests were flagged as potential artifacts requiring additional visual analysis. The algorithm was initially deployed for GE Discovery 670 and Infinia Hawkeye gamma cameras. Results: The algorithm successfully identified clinically relevant artifacts from both systems previously unnoticed by technologists performing the QA. Average MI indices for artifact-free images are 0.55. Images with MI indices < 0.50 have shown 100% sensitivity and specificity for artifact detection when compared with a thorough visual analysis. Correlation of geometric tests confirms the ability to spatially locate the most likely image regions containing an artifact regardless of initial phantom orientation. Conclusion: The algorithm shows the potential to detect gamma camera artifacts that may be missed by routine technologist inspections. Detection and subsequent correction of artifacts ensures maximum image quality and may help to identify failing hardware before it impacts clinical workflow. Going forward, the algorithm is being deployed to monitor data from all gamma cameras within our health system.
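
    The MI index used as the first-pass artifact test can be computed from a joint histogram of the registered NM image and the x-ray template. The sketch below is a generic mutual-information estimate in Python, not the deployed algorithm; the bin count is an arbitrary choice.

    ```python
    import numpy as np

    def mutual_information(image_a, image_b, bins=64):
        """Mutual information between two registered images via a joint histogram."""
        hist_2d, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
        pxy = hist_2d / hist_2d.sum()      # joint probability
        px = pxy.sum(axis=1)               # marginal for image_a
        py = pxy.sum(axis=0)               # marginal for image_b
        px_py = np.outer(px, py)
        nonzero = pxy > 0                  # avoid log(0)
        return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / px_py[nonzero])))
    ```

    A score well below the artifact-free average reported in the study (about 0.55) would send the image on to the geometric tests.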

  19. Alpha trimmed correlation for touchless finger image mosaicing

    NASA Astrophysics Data System (ADS)

    Rao, Shishir P.; Rajendran, Rahul; Agaian, Sos S.; Mulawka, Marzena Mary Ann

    2016-05-01

    In this paper, a novel technique to mosaic multiview contactless finger images is presented. The technique makes use of different correlation methods, such as Alpha-trimmed correlation, Pearson's correlation [1], Kendall's correlation [2], and Spearman's correlation [2], to combine multiple views of the finger. The key contributions of the algorithm are that it: 1) stitches images more accurately, 2) provides better image fusion effects, 3) produces a better visual effect in the overall image, and 4) is more reliable. Extensive computer simulations show that the proposed method produces better or comparable stitched images than several state-of-the-art methods, such as those presented by Feng Liu [3], K Choi [4], H Choi [5], and G Parziale [6]. In addition, we compare various correlation techniques with the correlation method mentioned in [3] and analyze the output. In the future, this method can be extended to obtain a 3D model of the finger from multiple views, and to help in generating scenic panoramic images and underwater 360-degree panoramas.

  20. Visible Light Image-Based Method for Sugar Content Classification of Citrus

    PubMed Central

    Wang, Xuefeng; Wu, Chunyan; Hirafuji, Masayuki

    2016-01-01

    Visible light imaging of citrus fruit from Mie Prefecture of Japan was performed to determine whether an algorithm could be developed to predict sugar content. This nondestructive classification showed that accurate segmentation of different images can be achieved by a correlation analysis based on a threshold value of the coefficient of determination. There is a clear correlation between the sugar content of citrus fruit and certain parameters of the color images. The selected image parameters were combined by an addition algorithm, and the sugar content of citrus fruit was predicted using the dummy variable method. The results showed that small, orange-colored citrus fruits often have a high sugar content. The study shows that it is possible to predict the sugar content of citrus fruit and to classify fruit by sugar content using light in the visible spectrum, without the need for an additional light source. PMID:26811935

  1. Infrared dim-small target tracking via singular value decomposition and improved Kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong

    2017-05-01

    Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. First, the strength of the SVD-based step is that it exploits the image's global information to obtain a background estimate of the infrared frame; a dim target is enhanced by subtracting the continuously updated background estimate from the original image. Second, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the excursion (drift) problem: the GCF preserves edges and suppresses noise in the base sample of the KCF algorithm, helping to calculate the classifier parameters for a small target. Finally, the target position is estimated from a response map obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy compared with several state-of-the-art algorithms.
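
    The SVD-based background estimation step amounts to a low-rank approximation of the infrared frame. A minimal sketch follows, assuming the background is captured by the largest few singular values and leaving out the update scheme described in the paper; the function name and rank are illustrative.

    ```python
    import numpy as np

    def enhance_small_target(image, rank=3):
        """Enhance a dim small target by subtracting a low-rank (SVD)
        background estimate from the original infrared frame."""
        u, s, vt = np.linalg.svd(image.astype(float), full_matrices=False)
        # The largest singular values capture the slowly varying background.
        background = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
        residual = image - background        # small targets stand out here
        return np.clip(residual, 0, None), background
    ```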

  2. Real-time micro-vibration multi-spot synchronous measurement within a region based on heterodyne interference

    NASA Astrophysics Data System (ADS)

    Lan, Ma; Xiao, Wen; Chen, Zonghui; Hao, Hongliang; Pan, Feng

    2018-01-01

    Real-time micro-vibration measurement is widely used in engineering applications. Traditional optical detection methods find it very difficult to achieve real-time, multi-spot synchronous measurement over a region at relatively high frequencies, especially at the nanoscale. Based on heterodyne interference, an experimental system for real-time measurement of micro-vibration is constructed to satisfy this demand in engineering applications. The vibration response signal is measured by combining optical heterodyne interferometry with a high-speed CMOS-DVR image acquisition system. Then, by extracting and processing multiple pixels at the same time, four digital demodulation techniques are implemented to simultaneously acquire the vibration velocity of the target from the recorded sequences of images. The different demodulation algorithms are analyzed, and the results show that the four algorithms are suited to different interference signals. Both the autocorrelation algorithm and the cross-correlation algorithm meet the needs of real-time measurement: the autocorrelation algorithm demodulates the frequency more accurately, while the cross-correlation algorithm is more accurate in recovering the amplitude.

  3. Analysis of Orientations of Collagen Fibers by Novel Fiber-Tracking Software

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Rajwa, Bartlomiej; Filmer, David L.; Hoffmann, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennie; Robinson, J. Paul

    2003-12-01

    Recent evidence supports the notion that biological functions of extracellular matrix (ECM) are highly correlated to not only its composition but also its structure. This article integrates confocal microscopy imaging and image-processing techniques to analyze the microstructural properties of ECM. This report describes a two- and three-dimensional fiber middle-line tracing algorithm that may be used to quantify collagen fibril organization. We utilized computer simulation and statistical analysis to validate the developed algorithm. These algorithms were applied to confocal images of collagen gels made with reconstituted bovine collagen type I, to demonstrate the computation of orientations of individual fibers.

  4. SU-F-I-09: Improvement of Image Registration Using Total-Variation Based Noise Reduction Algorithms for Low-Dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, S; Farr, J; Merchant, T

    Purpose: To study the effect of total-variation based noise reduction algorithms on improving the image registration of low-dose CBCT for patient positioning in radiation therapy. Methods: In low-dose CBCT, the reconstructed image is degraded by excessive quantum noise. In this study, we developed a total-variation based noise reduction algorithm and studied its effect on noise reduction and image registration accuracy. To study the effect of noise reduction, we calculated the peak signal-to-noise ratio (PSNR). To study the improvement of image registration, we performed image registration between volumetric CT and MV-CBCT images of different head-and-neck patients and calculated the mutual information (MI) and Pearson correlation coefficient (PCC) as similarity metrics. The PSNR, MI and PCC were calculated for both the noisy and noise-reduced CBCT images. Results: The algorithms were shown to be effective in reducing the noise level and improving the MI and PCC for the low-dose CBCT images tested. For the different head-and-neck patients, a maximum improvement in PSNR of 10 dB with respect to the noisy image was calculated. The improvements in MI and PCC were 9% and 2%, respectively. Conclusion: A total-variation based noise reduction algorithm was studied to improve the image registration between CT and low-dose CBCT. The algorithm showed promising results in reducing the noise in low-dose CBCT images and improving the similarity metrics in terms of MI and PCC.
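
    Of the metrics reported, PSNR and PCC are straightforward to compute. The sketch below uses the generic formulas, not the authors' implementation, and omits the total-variation denoising step itself.

    ```python
    import numpy as np

    def psnr(reference, test, data_range=None):
        """Peak signal-to-noise ratio in dB between a reference and a test image."""
        reference = reference.astype(float)
        test = test.astype(float)
        if data_range is None:
            data_range = reference.max() - reference.min()
        mse = np.mean((reference - test) ** 2)
        return 10.0 * np.log10(data_range ** 2 / mse)

    def pearson_cc(image_a, image_b):
        """Pearson correlation coefficient between two registered images."""
        return float(np.corrcoef(image_a.ravel(), image_b.ravel())[0, 1])
    ```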

  5. DAVIS: A direct algorithm for velocity-map imaging system

    NASA Astrophysics Data System (ADS)

    Harrison, G. R.; Vaughan, J. C.; Hidle, B.; Laurent, G. M.

    2018-05-01

    In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image in a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then retrieved directly from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.

  6. In-vivo study of blood flow in capillaries using μPIV method

    NASA Astrophysics Data System (ADS)

    Kurochkin, Maxim A.; Fedosov, Ivan V.; Tuchin, Valery V.

    2014-01-01

    A digital optical system for intravital capillaroscopy has been developed. It implements a particle image velocimetry (PIV) based approach for measuring red blood cell velocity in individual capillaries of the human nailfold. We propose a digital real-time stabilization technique to compensate for the impact of involuntary finger movements on the measurements. The image stabilization algorithm is based on correlation-based feature tracking. The efficiency of the designed image stabilization algorithm was demonstrated experimentally.

  7. Detection of illicit substances in fingerprints by infrared spectral imaging.

    PubMed

    Ng, Ping Hei Ronnie; Walker, Sarah; Tahtouh, Mark; Reedy, Brian

    2009-08-01

    FTIR and Raman spectral imaging can be used to simultaneously image a latent fingerprint and detect exogenous substances deposited within it. These substances might include drugs of abuse or traces of explosives or gunshot residue. In this work, spectral searching algorithms were tested for their efficacy in finding targeted substances deposited within fingerprints. "Reverse" library searching, where a large number of possibly poor-quality spectra from a spectral image are searched against a small number of high-quality reference spectra, poses problems for common search algorithms as they are usually implemented. Out of a range of algorithms which included conventional Euclidean distance searching, the spectral angle mapper (SAM) and correlation algorithms gave the best results when used with second-derivative image and reference spectra. All methods tested gave poorer performances with first derivative and undifferentiated spectra. In a search against a caffeine reference, the SAM and correlation methods were able to correctly rank a set of 40 confirmed but poor-quality caffeine spectra at the top of a dataset which also contained 4,096 spectra from an image of an uncontaminated latent fingerprint. These methods also successfully and individually detected aspirin, diazepam and caffeine that had been deposited together in another fingerprint, and they did not indicate any of these substances as a match in a search for another substance which was known not to be present. The SAM was used to successfully locate explosive components in fingerprints deposited on silicon windows. The potential of other spectral searching algorithms used in the field of remote sensing is considered, and the applicability of the methods tested in this work to other modes of spectral imaging is discussed.
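
    The spectral angle mapper score is simply the angle between a pixel spectrum and a reference spectrum. A generic sketch in Python is shown below; the second-derivative preprocessing reported to work best is omitted, and the function names are illustrative.

    ```python
    import numpy as np

    def spectral_angle(spectrum, reference):
        """Spectral angle (radians) between a pixel spectrum and a reference;
        smaller angles indicate closer matches."""
        s = np.asarray(spectrum, dtype=float)
        r = np.asarray(reference, dtype=float)
        cos_angle = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
        return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    def sam_map(image_cube, reference):
        """Apply the spectral angle to every pixel of a (rows, cols, bands) cube."""
        flat = image_cube.reshape(-1, image_cube.shape[-1])
        angles = np.array([spectral_angle(px, reference) for px in flat])
        return angles.reshape(image_cube.shape[:2])
    ```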

  8. OPTICAL correlation identification technology applied in underwater laser imaging target identification

    NASA Astrophysics Data System (ADS)

    Yao, Guang-tao; Zhang, Xiao-hui; Ge, Wei-long

    2012-01-01

    Underwater laser imaging is an effective method for detecting short-range targets underwater and an important complement to sonar detection. With the development of underwater laser imaging and underwater vehicle technology, automatic underwater target identification has attracted increasing attention and remains a research challenge in underwater optical imaging information processing. Today, automatic underwater target identification based on optical imaging is usually realized with digital circuits and software programming, an approach whose algorithm realization and control are very flexible. However, optical imaging information consists of 2D or even 3D images, so the amount of information to process is large; purely digital electronic hardware therefore requires long identification times and struggles to meet the demands of real-time identification. Parallel computer processing can improve identification speed, but it increases complexity, size and power consumption. This paper applies optical correlation identification technology to realize automatic underwater target identification. Optical correlation identification exploits the Fourier-transform property of a Fourier lens, which can perform the Fourier transform of image information on the nanosecond scale; optical spatial interconnection computation is parallel, high-speed, high-capacity and high-resolution, and it can be combined with the flexible computation and control of digital circuits to realize a hybrid optoelectronic identification mode. We derive the theoretical formulation of correlation identification, analyze the principle of optical correlation identification, and write a MATLAB simulation program. Using single-frame images obtained by underwater range-gated laser imaging, and by identifying and locating the target at different positions, we improve the speed and localization efficiency of target identification and provide a preliminary validation of the feasibility of this method.

  9. An algorithm for spatial hierarchy clustering

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Velasco, F. R. D.

    1981-01-01

    A method for utilizing both spectral and spatial redundancy in compacting and preclassifying images is presented. In multispectral satellite images, a high correlation exists between neighboring image points which tend to occupy dense and restricted regions of the feature space. The image is divided into windows of the same size where the clustering is made. The classes obtained in several neighboring windows are clustered, and then again successively clustered until only one region corresponding to the whole image is obtained. By employing this algorithm only a few points are considered in each clustering, thus reducing computational effort. The method is illustrated as applied to LANDSAT images.

  10. Novel Near-Lossless Compression Algorithm for Medical Sequence Images with Adaptive Block-Based Spatial Prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2016-12-01

    To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, this paper proposes a novel near-lossless compression algorithm based on adaptive spatial prediction for medical sequence images intended for possible diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and applies a lossless Hadamard transform before quantization to improve the quality of the reconstructed images. The block-based prediction breaks the pixel-neighborhood constraint and takes full advantage of the local spatial correlations found in medical images, while the adaptive block size guarantees a more rational division of images and better use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.

  11. Algorithm guided outlining of 105 pancreatic cancer liver metastases in Ultrasound.

    PubMed

    Hann, Alexander; Bettac, Lucas; Haenle, Mark M; Graeter, Tilmann; Berger, Andreas W; Dreyhaupt, Jens; Schmalstieg, Dieter; Zoller, Wolfram G; Egger, Jan

    2017-10-06

    Manual segmentation of hepatic metastases in ultrasound images acquired from patients suffering from pancreatic cancer is common practice. Semiautomatic measurements promising assistance in this process are often assessed on a small number of lesions by examiners who already know the algorithm. In this work, we present the application of an algorithm for the segmentation of liver metastases due to pancreatic cancer using a set of 105 different images of metastases; neither the algorithm nor the two examiners had assessed the images before. The examiners first performed a manual segmentation and, after five weeks, a semiautomatic segmentation using the algorithm. They were satisfied with the semiautomatic segmentation results in up to 90% of the cases. Using the algorithm was significantly faster and resulted in a median Dice similarity score of over 80%. Inter-operator variability, estimated using the intraclass correlation coefficient, was good at 0.8. In conclusion, the algorithm facilitates fast and accurate segmentation of liver metastases, comparable to the current gold standard of manual segmentation.
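
    The Dice similarity score reported above is a standard overlap measure between two binary masks; a minimal sketch:

    ```python
    import numpy as np

    def dice_score(segmentation, ground_truth):
        """Dice similarity coefficient between two binary segmentation masks."""
        seg = np.asarray(segmentation, dtype=bool)
        gt = np.asarray(ground_truth, dtype=bool)
        intersection = np.logical_and(seg, gt).sum()
        denominator = seg.sum() + gt.sum()
        return 2.0 * intersection / denominator if denominator else 1.0
    ```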

  12. Cross contrast multi-channel image registration using image synthesis for MR brain images.

    PubMed

    Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L

    2017-02-01

    Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Estimation of glacier surface motion by robust phase correlation and point like features of SAR intensity images

    NASA Astrophysics Data System (ADS)

    Fang, Li; Xu, Yusheng; Yao, Wei; Stilla, Uwe

    2016-11-01

    For monitoring glacier surface motion in polar and alpine areas, radar remote sensing is becoming a popular technology owing to its specific advantages of being independent of weather conditions and sunlight. In this paper we propose a method for glacier surface motion monitoring using phase correlation (PC) based on point-like features (PLF). We carry out experiments using repeat-pass TerraSAR X-band (TSX) and Sentinel-1 C-band (S1C) intensity images of the Taku glacier in the Juneau icefield located in southeast Alaska. The intensity imagery is first filtered by an improved adaptive refined Lee filter, while the effect of topographic relief is removed via the SRTM-X DEM. Then, a robust phase correlation algorithm based on singular value decomposition (SVD) and an improved random sample consensus (RANSAC) algorithm is applied to sequential PLF pairs generated by correlation using a 2D sinc function template. The approach for glacier monitoring is validated on both simulated SAR data and real SAR data from the two satellites. The results obtained from these three test datasets confirm the superiority of the proposed approach compared to standard correlation-like methods. With the proposed adaptive refined Lee filter, we achieve a good balance between the suppression of noise and the preservation of local image textures. The presented phase correlation algorithm achieves an accuracy of better than 0.25 pixels in matching tests using simulated SAR intensity images with strong noise. Quantitative 3D motions and velocities of the investigated Taku glacier during a repeat-pass period are obtained, which allows a comprehensive and reliable analysis of large-scale glacier surface dynamics.
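
    Basic phase correlation estimates the displacement between two patches from the peak of the normalized cross-power spectrum. The sketch below gives only integer-pixel shifts; the subpixel accuracy quoted in the paper (better than 0.25 pixels) requires peak refinement, such as the SVD-based scheme the authors use, which is not shown here.

    ```python
    import numpy as np

    def phase_correlation_shift(image_a, image_b):
        """Estimate the integer-pixel displacement between two patches from the
        peak of the normalized cross-power spectrum (basic phase correlation)."""
        f_a = np.fft.fft2(image_a)
        f_b = np.fft.fft2(image_b)
        cross_power = f_a * np.conj(f_b)
        cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
        correlation = np.fft.ifft2(cross_power).real
        peak = np.array(np.unravel_index(np.argmax(correlation), correlation.shape))
        size = np.array(correlation.shape)
        peak[peak > size // 2] -= size[peak > size // 2]  # wrap to signed shifts
        return tuple(int(p) for p in peak)
    ```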

  14. Aircraft target detection algorithm based on high resolution spaceborne SAR imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing

    2018-03-01

    In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model with a Markov random field (MRF): the Gamma mixture model provides the initial classification result, which is then optimized by the MRF technique using the spatial correlation of pixels. Additionally, morphological methods are employed to extract the airport region of interest (ROI), within which suspected aircraft target samples are classified to reduce false alarms and increase detection performance. Finally, aircraft target detection results are presented and verified by simulation tests.

  15. Forming positive-negative images using conditioned partial measurements from reference arm in ghost imaging.

    PubMed

    Wen, Jianming

    2012-09-01

    A recent thermal ghost imaging experiment implemented in Wu's group [Chin. Phys. Lett. 279, 074216 (2012)] showed that both positive and negative images can be constructed by applying a novel algorithm. This algorithm allows the images to be formed using only partial measurements from the reference arm (which never even passes through the object), conditioned on the object arm. In this paper, we present a simple theory that explains the experimental observation and provides an in-depth understanding of conventional ghost imaging. In particular, we show theoretically that the visibility of images formed through such an algorithm is not bounded by the standard value of 1/3; ideally, it can grow up to unity (with reduced imaging quality). Thus, the algorithm described here not only offers an alternative way to decode the spatial correlation of thermal light, but also acts as a "bandpass filter" that removes the constant background so that the visibility, or imaging contrast, is improved. We further show that, conditioned on one still object present in the test arm, it is possible to construct the object's image by sampling the available reference data.

  16. Bilateral filtering using the full noise covariance matrix applied to x-ray phase-contrast computed tomography.

    PubMed

    Allner, S; Koehler, T; Fehringer, A; Birnbacher, L; Willner, M; Pfeiffer, F; Noël, P B

    2016-05-21

    The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
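
    For reference, the classical single-image bilateral filter that the paper generalizes can be sketched as follows; the generalization to multiple aligned images with a full noise covariance is not included here, and the parameter values are arbitrary.

    ```python
    import numpy as np

    def bilateral_filter(image, sigma_spatial=2.0, sigma_range=0.1, radius=4):
        """Classical edge-preserving bilateral filter: each pixel is replaced by a
        weighted average of its neighbors, with weights combining spatial and
        intensity closeness."""
        padded = np.pad(image.astype(float), radius, mode='reflect')
        out = np.zeros_like(image, dtype=float)
        weights = np.zeros_like(image, dtype=float)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = padded[radius + dy:radius + dy + image.shape[0],
                                 radius + dx:radius + dx + image.shape[1]]
                w = np.exp(-(dy ** 2 + dx ** 2) / (2 * sigma_spatial ** 2)
                           - (shifted - image) ** 2 / (2 * sigma_range ** 2))
                out += w * shifted
                weights += w
        return out / weights
    ```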

  17. Estimation of saturated pixel values in digital color imaging

    PubMed Central

    Zhang, Xuemei; Brainard, David H.

    2007-01-01

    Pixel saturation, where the incident light at a pixel causes one of the color channels of the camera sensor to respond at its maximum value, can produce undesirable artifacts in digital color images. We present a Bayesian algorithm that estimates what the saturated channel's value would have been in the absence of saturation. The algorithm uses the non-saturated responses from the other color channels, together with a multivariate Normal prior that captures the correlation in response across color channels. The appropriate parameters for the prior may be estimated directly from the image data, since most image pixels are not saturated. Given the prior, the responses of the non-saturated channels, and the fact that the true response of the saturated channel is known to be greater than the saturation level, the algorithm returns the optimal expected mean square estimate for the true response. Extensions of the algorithm to the case where more than one channel is saturated are also discussed. Both simulations and examples with real images are presented to show that the algorithm is effective. PMID:15603065
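
    Ignoring the truncation constraint (the fact that the true response is known to exceed the saturation level), the core of the estimate is the conditional mean of a multivariate Normal given the observed channels. A simplified sketch follows, with the prior mean and covariance assumed to have been fitted to the image's non-saturated pixels; it is not the authors' full algorithm.

    ```python
    import numpy as np

    def conditional_channel_estimate(pixel, saturated_idx, mean, cov):
        """Conditional-mean estimate of a saturated color channel given the
        non-saturated channels, under a multivariate Normal prior (the
        truncation constraint is omitted in this simplified sketch)."""
        pixel = np.asarray(pixel, dtype=float)
        mean = np.asarray(mean, dtype=float)
        obs = np.arange(len(pixel)) != saturated_idx    # observed channels
        cov_so = cov[np.ix_([saturated_idx], obs)]      # cross-covariance (1 x k)
        cov_oo = cov[np.ix_(obs, obs)]                  # covariance of observed (k x k)
        delta = pixel[obs] - mean[obs]
        correction = cov_so @ np.linalg.solve(cov_oo, delta)
        return float(mean[saturated_idx] + correction[0])
    ```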

  18. Developing and Evaluating a Target-Background Similarity Metric for Camouflage Detection

    PubMed Central

    Lin, Chiuhsiang Joe; Chang, Chi-Chan; Liu, Bor-Shong

    2014-01-01

    Background: Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures, suggesting it could be a potential camouflage assessment tool. Methodology: In this study, we quantify the relationship between camouflage similarity indexes and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze their strengths and weaknesses. Significance: The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. It also shows that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results. PMID:24498310

  19. An effective and efficient compression algorithm for ECG signals with irregular periods.

    PubMed

    Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son

    2006-06-01

    This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing arts in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, this algorithm is insensitive to irregular ECG periods. Thus either the irregular ECG signals or the QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, this algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.

  20. Self-recovery reversible image watermarking algorithm

    PubMed Central

    Sun, He; Gao, Shangbing; Jin, Shenghua

    2018-01-01

    The integrity of image content is essential, yet most watermarking algorithms can achieve image authentication but cannot automatically repair damaged areas or restore the original image. In this paper, a self-recovery reversible image watermarking algorithm is proposed to recover tampered areas effectively. First of all, the original image is divided into homogeneous blocks and non-homogeneous blocks through multi-scale decomposition, and the feature information of each block is calculated as the recovery watermark. Then, the original image is divided into 4×4 non-overlapping blocks classified into smooth blocks and texture blocks according to image textures. Finally, the recovery watermark generated by homogeneous blocks and error-correcting codes is embedded into the corresponding smooth block by mapping; watermark information generated by non-homogeneous blocks and error-correcting codes is embedded into the corresponding non-embedded smooth block and the texture block via mapping. When the watermarked image is attacked, the correlation attack is detected by means of invariant moments. To determine whether a sub-block has been tampered with, its feature is calculated and the recovery watermark is extracted from the corresponding block. If the image has been tampered with, it can be recovered. The experimental results show that the proposed algorithm can effectively recover the tampered areas with high accuracy and high quality. The algorithm is characterized by sound visual quality and excellent image restoration. PMID:29920528

  1. A 3D reconstruction algorithm for magneto-acoustic tomography with magnetic induction based on ultrasound transducer characteristics.

    PubMed

    Ma, Ren; Zhou, Xiaoqing; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-12-21

    In this study we present a three-dimensional (3D) reconstruction algorithm for magneto-acoustic tomography with magnetic induction (MAT-MI) based on the characteristics of the ultrasound transducer. The algorithm is investigated to solve the blur problem of the MAT-MI acoustic source image, which is caused by the ultrasound transducer and the scanning geometry. First, we established a transducer model matrix using measured data from the real transducer. With reference to the S-L model used in the computed tomography algorithm, a 3D phantom model of electrical conductivity is set up. Both sphere scanning and cylinder scanning geometries are adopted in the computer simulation. Then, using finite element analysis, the distribution of the eddy current and the acoustic source as well as the acoustic pressure can be obtained with the transducer model matrix. Next, using singular value decomposition, the inverse transducer model matrix together with the reconstruction algorithm are worked out. The acoustic source and the conductivity images are reconstructed using the proposed algorithm. Comparisons between an ideal point transducer and the realistic transducer are made to evaluate the algorithms. Finally, an experiment is performed using a graphite phantom. We found that images of the acoustic source reconstructed using the proposed algorithm are a better match than those using the previous one, the correlation coefficient of sphere scanning geometry is 98.49% and that of cylinder scanning geometry is 94.96%. Comparison between the ideal point transducer and the realistic transducer shows that the correlation coefficients are 90.2% in sphere scanning geometry and 86.35% in cylinder scanning geometry. The reconstruction of the graphite phantom experiment also shows a higher resolution using the proposed algorithm. We conclude that the proposed reconstruction algorithm, which considers the characteristics of the transducer, can obviously improve the resolution of the reconstructed image. This study can be applied to analyse the effect of the position of the transducer and the scanning geometry on imaging. It may provide a more precise method to reconstruct the conductivity distribution in MAT-MI.

  2. Reducing 4D CT artifacts using optimized sorting based on anatomic similarity.

    PubMed

    Johnston, Eric; Diehn, Maximilian; Murphy, James D; Loo, Billy W; Maxim, Peter G

    2011-05-01

    Four-dimensional (4D) computed tomography (CT) has been widely used as a tool to characterize respiratory motion in radiotherapy. The two most commonly used 4D CT algorithms sort images by the associated respiratory phase or displacement into a predefined number of bins, and are prone to image artifacts at transitions between bed positions. The purpose of this work is to demonstrate a method of reducing motion artifacts in 4D CT by incorporating anatomic similarity into phase or displacement based sorting protocols. Ten patient datasets were retrospectively sorted using both the displacement and phase based sorting algorithms. Conventional sorting methods allow selection of only the nearest-neighbor image in time or displacement within each bin. In our method, for each bed position either the displacement or the phase defines the center of a bin range about which several candidate images are selected. The two-dimensional correlation coefficients between slices bordering the interface between adjacent couch positions are then calculated for all candidate pairings. Two slices have a high correlation if they are anatomically similar. Candidates from each bin are then selected to maximize the slice correlation over the entire data set using Dijkstra's shortest-path algorithm. To assess the reduction of artifacts, two thoracic radiation oncologists independently compared the resorted 4D datasets pairwise with conventionally sorted datasets, blinded to the sorting method, to choose which had the least motion artifacts. Agreement between reviewers was evaluated using the weighted kappa score. Anatomically based image selection resulted in 4D CT datasets with significantly reduced motion artifacts with both displacement (P = 0.0063) and phase sorting (P = 0.00022). There was good agreement between the two reviewers, with complete agreement 34 times and complete disagreement 6 times. Optimized sorting using anatomic similarity significantly reduces 4D CT motion artifacts compared to conventional phase or displacement based sorting. This improved sorting algorithm is a straightforward extension of the two most common 4D CT sorting algorithms.
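
    The anatomic similarity score between slices bordering adjacent couch positions is a 2D correlation coefficient. A Python equivalent is sketched below; the subsequent global candidate selection via Dijkstra's shortest-path search over these pairwise scores is not shown.

    ```python
    import numpy as np

    def slice_correlation(slice_a, slice_b):
        """2D correlation coefficient between two boundary slices; high values
        indicate anatomically similar slices at the couch-position interface."""
        a = slice_a.astype(float) - slice_a.mean()
        b = slice_b.astype(float) - slice_b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
    ```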

  3. Lossless compression algorithm for multispectral imagers

    NASA Astrophysics Data System (ADS)

    Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth

    2008-08-01

    Multispectral imaging is becoming an increasingly important tool for monitoring the earth and its environment from spaceborne and airborne platforms. Multispectral imaging data consist of visible and IR measurements from a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at determining, for the characteristics of satellite atmospheric Earth-science imager sensor data, what lossless compression ratios can be obtained, as well as the appropriate types of mathematics and approaches that can bring compression close to the data's entropy level. Conventional lossless methods do not achieve the theoretical limits for lossless compression on imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth science multispectral imagers such as GOES-R. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed look-up table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which differs fundamentally from our prior work: instead of having a single predictor for each pair of bands, we introduce a piecewise spatially varying predictor which significantly improves the compression results. Our new algorithm also optimizes the sequence of channels used for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager, and also show results of the algorithm on NOAA AVHRR data and data from SEVIRI. The algorithm is designed to be adapted to a wide range of multispectral imagers and should facilitate distribution of data globally. This compression research is managed by Roger Heymann, PE of OSD NOAA NESDIS Engineering, in collaboration with the NOAA NESDIS STAR Research Office through Mitch Goldberg, Tim Schmit, Walter Wolf.

  4. Fully Automated Quantitative Estimation of Volumetric Breast Density from Digital Breast Tomosynthesis Images: Preliminary Results and Comparison with Digital Mammography and MR Imaging.

    PubMed

    Pertuz, Said; McDonald, Elizabeth S; Weinstein, Susan P; Conant, Emily F; Kontos, Despina

    2016-04-01

    To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board-approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration-cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging-based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment.

  5. High-precision tracking of brownian boomerang colloidal particles confined in quasi two dimensions.

    PubMed

    Chakrabarty, Ayan; Wang, Feng; Fan, Chun-Zhen; Sun, Kai; Wei, Qi-Huo

    2013-11-26

    In this article, we present a high-precision image-processing algorithm for tracking the translational and rotational Brownian motion of boomerang-shaped colloidal particles confined in quasi-two-dimensional geometry. By measuring mean square displacements of an immobilized particle, we demonstrate that the positional and angular precision of our imaging and image-processing system can achieve 13 nm and 0.004 rad, respectively. By analyzing computer-simulated images, we demonstrate that the positional and angular accuracies of our image-processing algorithm can achieve 32 nm and 0.006 rad. Because of zero correlations between the displacements in neighboring time intervals, trajectories of different videos of the same particle can be merged into a very long time trajectory, allowing for long-time averaging of different physical variables. We apply this image-processing algorithm to measure the diffusion coefficients of boomerang particles of three different apex angles and discuss the angle dependence of these diffusion coefficients.
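
    The localization precision quoted above is obtained from the mean square displacement (MSD) of an immobilized particle's tracked positions. A generic MSD sketch, not the authors' code, is given below.

    ```python
    import numpy as np

    def mean_square_displacement(positions, max_lag=None):
        """Mean square displacement of a tracked trajectory; for an immobilized
        particle the MSD plateau estimates the localization precision squared."""
        positions = np.asarray(positions, dtype=float)   # shape (n_frames, 2)
        n = len(positions)
        max_lag = max_lag or n // 4
        msd = np.empty(max_lag)
        for lag in range(1, max_lag + 1):
            displacements = positions[lag:] - positions[:-lag]
            msd[lag - 1] = np.mean(np.sum(displacements ** 2, axis=1))
        return msd
    ```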

  6. Incoherent dictionary learning for reducing crosstalk noise in least-squares reverse time migration

    NASA Astrophysics Data System (ADS)

    Wu, Juan; Bai, Min

    2018-05-01

    We propose to apply a novel incoherent dictionary learning (IDL) algorithm to regularize the least-squares inversion in seismic imaging. IDL is proposed to overcome the drawback of the traditional dictionary learning algorithm, which loses partial texture information. First, the noisy image is divided into overlapping image patches, and some random patches are extracted for dictionary learning. Then, we apply the IDL technique to minimize the coherency between atoms during dictionary learning. Finally, the sparse representation problem is solved by a sparse coding algorithm, and the image is restored from the sparse coefficients. By reducing the correlation among atoms, it is possible to preserve most of the small-scale features in the image while removing much of the long-wavelength noise. The application of the IDL method to the regularization of seismic images from least-squares reverse time migration shows successful performance.

  7. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used across different scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is to use the cross-correlation, taking the location of the highest value in the correlation image. The shift resolution is then given in whole pixels, which may not be sufficient for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is significantly higher. To avoid this memory consumption, we implement a subpixel shifting method based on the FFT. Starting from the original images, a subpixel shift can be achieved by multiplying the discrete Fourier transform by linear phases with different slopes. This method is time-consuming because every candidate shift requires new calculations; however, the algorithm is highly parallelizable and very suitable for high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images with a first approach based on FFT correlation, followed by the subpixel approach using the technique described above; we consider this a 'brute force' method. We present a benchmark of the algorithm consisting of a first pixel-resolution approach followed by subpixel refinement, decreasing the shift step in every loop to achieve high resolution in a few steps. The program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
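
    The subpixel shift that is repeatedly evaluated in the 'brute force' search corresponds to multiplying the image's DFT by a linear phase ramp. An illustrative CPU-only sketch follows; the GPU implementation benchmarked in the paper is not shown.

    ```python
    import numpy as np

    def subpixel_shift(image, dy, dx):
        """Shift an image by a (possibly fractional) number of pixels by
        multiplying its DFT with a linear phase ramp (Fourier shift theorem)."""
        rows, cols = image.shape
        ky = np.fft.fftfreq(rows).reshape(-1, 1)
        kx = np.fft.fftfreq(cols).reshape(1, -1)
        phase_ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
        return np.fft.ifft2(np.fft.fft2(image) * phase_ramp).real
    ```

    In the benchmark, each candidate shift produced this way is scored by correlation against the second image, and the shift step is reduced in every loop until the desired resolution is reached.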

  8. Fully automated, deep learning segmentation of oxygen-induced retinopathy images

    PubMed Central

    Xiao, Sa; Bucher, Felicitas; Wu, Yue; Rokem, Ariel; Lee, Cecilia S.; Marra, Kyle V.; Fallon, Regis; Diaz-Aguilar, Sophia; Aguilar, Edith; Friedlander, Martin; Lee, Aaron Y.

    2017-01-01

    Oxygen-induced retinopathy (OIR) is a widely used model to study ischemia-driven neovascularization (NV) in the retina and to serve in proof-of-concept studies in evaluating antiangiogenic drugs for ocular, as well as nonocular, diseases. The primary parameters that are analyzed in this mouse model include the percentage of retina with vaso-obliteration (VO) and NV areas. However, quantification of these two key variables comes with a great challenge due to the requirement of human experts to read the images. Human readers are costly, time-consuming, and subject to bias. Using recent advances in machine learning and computer vision, we trained deep learning neural networks using over a thousand segmentations to fully automate segmentation in OIR images. While determining the percentage area of VO, our algorithm achieved a similar range of correlation coefficients to that of expert inter-human correlation coefficients. In addition, our algorithm achieved a higher range of correlation coefficients compared with inter-expert correlation coefficients for quantification of the percentage area of neovascular tufts. In summary, we have created an open-source, fully automated pipeline for the quantification of key values of OIR images using deep learning neural networks. PMID:29263301

  9. Partial removal of correlated noise in thermal imagery

    NASA Astrophysics Data System (ADS)

    Borel, Christoph C.; Cooke, Bradly J.; Laubscher, Bryan E.

    1996-05-01

    Correlated noise occurs in many imaging systems, such as scanners and push-broom imagers. The sources of correlated noise can be the detectors, pre-amplifiers and sampling circuits. Correlated noise appears as streaking along the scan direction of a scanner or the along-track direction of a push-broom imager. We have developed algorithms to simulate correlated noise and a pre-filter to reduce the amount of streaking without destroying the scene content. The pre-filter in the Fourier domain consists of the product of two filters: one models the correlated noise spectrum, and the other is a windowing function, e.g. a Gaussian or Hanning window of variable width, that blocks high-frequency noise away from the origin of the Fourier transform of the image data. We have optimized the filter parameters for various scenes and find improvements in the RMS error between the original and the pre-filtered noisy image.
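
    The sketch below is one possible interpretation of such a pre-filter, assuming streaks running along image rows: a Gaussian notch attenuates the corresponding frequency line while a Gaussian window protects the spectrum's origin (the scene content). The exact filter used in the paper may differ, and the parameter values are arbitrary.

    ```python
    import numpy as np

    def destreak(image, sigma=3.0, strength=0.9):
        """Illustrative Fourier-domain pre-filter for streaks along image rows:
        attenuate the narrow band of frequencies carrying the streak energy
        while leaving the region near the spectrum's origin untouched."""
        rows, cols = image.shape
        spectrum = np.fft.fftshift(np.fft.fft2(image))
        v = np.arange(rows).reshape(-1, 1) - rows // 2   # row-frequency index
        u = np.arange(cols).reshape(1, -1) - cols // 2   # column-frequency index
        # Row-to-row streak noise concentrates along the u = 0 line (away from the origin).
        notch = 1.0 - strength * np.exp(-(u ** 2) / (2 * sigma ** 2)) \
                               * (1.0 - np.exp(-(v ** 2) / (2 * sigma ** 2)))
        return np.fft.ifft2(np.fft.ifftshift(spectrum * notch)).real
    ```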

  10. Graph theory for feature extraction and classification: a migraine pathology case study.

    PubMed

    Jorge-Hernandez, Fernando; Garcia Chimeno, Yolanda; Garcia-Zapirain, Begonya; Cabrera Zubizarreta, Alberto; Gomez Beldarrain, Maria Angeles; Fernandez-Ruanova, Begonya

    2014-01-01

    Graph theory is widely used as a representational form and characterization of brain connectivity networks, as is machine learning for classifying groups based on the features extracted from images. Many of these studies use different techniques, such as preprocessing, correlations, features or algorithms. This paper proposes an automatic tool to perform a standard process using images from a Magnetic Resonance Imaging (MRI) scanner. The process includes pre-processing, building a graph per subject with different correlation measures and atlases, extracting relevant features according to the literature, and finally providing a set of machine learning algorithms which can produce analyzable results for physicians or specialists. To verify the process, a set of images from prescription drug abusers and patients with migraine was used. In this way, the proper functioning of the tool has been demonstrated, providing success rates of 87% and 92% depending on the classifier used.

  11. Penalized Weighted Least-Squares Approach to Sinogram Noise Reduction and Image Reconstruction for Low-Dose X-Ray Computed Tomography

    PubMed Central

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-01-01

    Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by iterative Gauss-Seidel algorithm. Another employs Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively to each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third one models the spatial correlations among image pixels in image domain also by a MRF Gibbs functional and minimizes the PWLS by iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in low contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831

  12. Effectiveness of a Staged US and Unenhanced MR Imaging Algorithm in the Diagnosis of Pediatric Appendicitis.

    PubMed

    Dibble, Elizabeth H; Swenson, David W; Cartagena, Claudia; Baird, Grayson L; Herliczek, Thaddeus W

    2018-03-01

    Purpose To establish, in a large cohort, the diagnostic performance of a staged algorithm involving ultrasonography (US) followed by conditional unenhanced magnetic resonance (MR) imaging for the imaging work-up of pediatric appendicitis. Materials and Methods A staged imaging algorithm in which US and unenhanced MR imaging were performed in pediatric patients suspected of having appendicitis was implemented at the authors' institution on January 1, 2011, with US as the initial modality followed by unenhanced MR imaging when US findings were equivocal. A search of the radiology database revealed 2180 pediatric patients who had undergone imaging for suspected appendicitis from January 1, 2011, through December 31, 2012. Of the 2180 patients, 1982 (90.9%) were evaluated according to the algorithm. The authors reviewed the electronic medical records and imaging reports for all patients. Imaging reports were reviewed and classified as positive, negative, or equivocal for appendicitis and correlated with surgical and pathology reports. Results The frequency of appendicitis was 20.5% (407 of 1982 patients). US alone was performed in 1905 of the 1982 patients (96.1%), yielding a sensitivity of 98.7% (386 of 391 patients) and specificity of 97.1% (1470 of 1514 patients) for appendicitis. Seventy-seven patients underwent unenhanced MR imaging after equivocal US findings, yielding an overall algorithm sensitivity of 98.2% (400 of 407 patients) and specificity of 97.1% (1530 of 1575 patients). Seven of the 1982 patients (0.4%) had false-negative results with the staged algorithm. The negative predictive value of the staged algorithm was 99.5% (1530 of 1537 patients). Conclusion A staged algorithm of US and unenhanced MR imaging for pediatric appendicitis appears to be effective. The results of this study demonstrate that this staged algorithm is 98.2% sensitive and 97.1% specific for the diagnosis of appendicitis in pediatric patients. © RSNA, 2017.

  13. A Double Perturbation Method for Reducing Dynamical Degradation of the Digital Baker Map

    NASA Astrophysics Data System (ADS)

    Liu, Lingfeng; Lin, Jun; Miao, Suoxia; Liu, Bocheng

    2017-06-01

    The digital Baker map is widely used in different kinds of cryptosystems, especially for image encryption. However, any chaotic map realized on a finite-precision device (e.g., a computer) will suffer from dynamical degradation, which refers to short cycle lengths, low complexity and strong correlations. In this paper, a novel double perturbation method is proposed for reducing the dynamical degradation of the digital Baker map. Both state variables and system parameters are perturbed by the digital logistic map. Numerical experiments show that the perturbed Baker map can achieve good statistical and cryptographic properties. Furthermore, a new image encryption algorithm is provided as a simple application. With a rather simple algorithm, the encrypted image can achieve high security, which is competitive with recently proposed image encryption algorithms.
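
    A minimal Python sketch of the double-perturbation idea is given below; it iterates a generalized Baker map with partition parameter p and perturbs both the state and p with a logistic-map sequence. The perturbation period, magnitude, and parameter mapping are illustrative assumptions, not the paper's exact scheme.

      import numpy as np

      def perturbed_baker(x, y, z0=0.123, p=0.37, n_iter=1000, period=50, eps=1e-6):
          """Iterate a generalised Baker map in finite precision and, every
          `period` steps, perturb the state (x, y) and the partition parameter p
          with values from a digital logistic map, to lengthen cycles and break
          correlations (toy sketch only)."""
          z = z0
          traj = np.empty((n_iter, 2))
          for k in range(n_iter):
              if x < p:                                   # generalised Baker map
                  x, y = x / p, p * y
              else:
                  x, y = (x - p) / (1.0 - p), p + (1.0 - p) * y
              z = 3.99 * z * (1.0 - z)                    # driving logistic map
              if k % period == 0:
                  x = (x + eps * z) % 1.0                 # state perturbation
                  y = (y + eps * (1.0 - z)) % 1.0
                  p = 0.25 + 0.5 * z                      # parameter perturbation, kept in (0.25, 0.75)
              traj[k] = x, y
          return traj

      orbit = perturbed_baker(0.2718, 0.3141)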

  14. Improved space object detection using short-exposure image data with daylight background.

    PubMed

    Becker, David; Cain, Stephen

    2018-05-10

    Space object detection is of great importance in the highly dependent yet competitive and congested space domain. The detection algorithms employed play a crucial role in fulfilling the detection component in the space situational awareness mission to detect, track, characterize, and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator on long-exposure data to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follow a Gaussian distribution. Long-exposure imaging is critical to detection performance in these algorithms; however, for imaging under daylight conditions, it becomes necessary to create a long-exposure image as the sum of many short-exposure images. This paper explores the potential for increasing detection capabilities for small and dim space objects in a stack of short-exposure images dominated by a bright background. The algorithm proposed in this paper improves the traditional stack and average method of forming a long-exposure image by selectively removing short-exposure frames of data that do not positively contribute to the overall signal-to-noise ratio of the averaged image. The performance of the algorithm is compared to a traditional matched filter detector using data generated in MATLAB as well as laboratory-collected data. The results are illustrated on a receiver operating characteristic curve to highlight the increased probability of detection associated with the proposed algorithm.
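
    The frame-selection idea can be sketched in Python as follows: maintain a running average and keep a short-exposure frame only if it does not lower a simple SNR estimate of the averaged image. The SNR estimator and the acceptance rule here are assumptions for illustration, not the authors' exact criterion.

      import numpy as np

      def frame_snr(img, box):
          """Crude SNR estimate: mean signal inside a target box relative to a
          robust background level, divided by the frame's standard deviation."""
          r0, r1, c0, c1 = box
          signal = img[r0:r1, c0:c1].mean()
          return (signal - np.median(img)) / (img.std() + 1e-12)

      def selective_stack(frames, box):
          """Accumulate a running average, keeping a short-exposure frame only
          if adding it does not lower the SNR of the averaged image."""
          avg, kept = None, 0
          for f in frames:
              candidate = f if avg is None else (avg * kept + f) / (kept + 1)
              if avg is None or frame_snr(candidate, box) >= frame_snr(avg, box):
                  avg, kept = candidate, kept + 1
          return avg, kept

      # toy usage: 20 noisy frames, half of them dominated by bright background noise
      rng = np.random.default_rng(1)
      truth = np.zeros((32, 32)); truth[14:18, 14:18] = 5.0
      frames = [truth + rng.normal(0, 1 + 4 * (i % 2), truth.shape) for i in range(20)]
      stacked, n_used = selective_stack(frames, box=(14, 18, 14, 18))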

  15. Image registration under translation and rotation in two-dimensional planes using Fourier slice theorem.

    PubMed

    Pohit, M; Sharma, J

    2015-05-10

    Image recognition in the presence of both rotation and translation is a longstanding problem in correlation pattern recognition. Use of the log-polar transform offers a solution to this problem, but at the cost of losing the vital phase information from the image. The main objective of this paper is to develop an algorithm based on the Fourier slice theorem for measuring the simultaneous rotation and translation of an object in a 2D plane. The algorithm is applicable for any arbitrary object shift for full 180° rotation.
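
    A hedged Python sketch of one way to exploit the projection-slice relation is shown below: rotation is estimated from translation-invariant projection spectra of the Radon transform, and the residual translation is then recovered by phase correlation. Sign conventions and the detector-axis flip at the 180° wrap-around are handled only crudely here, and this is not necessarily the authors' exact formulation.

      import numpy as np
      from skimage.transform import radon, rotate
      from skimage.registration import phase_cross_correlation

      def estimate_rotation_translation(ref, mov):
          """The 1-D Fourier magnitude of each Radon projection is invariant to
          translation, so rotation appears as a circular shift of the sinogram
          along the angle axis; translation is then found by phase correlation."""
          theta = np.arange(180.0)
          s_ref = np.abs(np.fft.rfft(radon(ref, theta=theta, circle=False), axis=0))
          s_mov = np.abs(np.fft.rfft(radon(mov, theta=theta, circle=False), axis=0))
          scores = [np.sum(s_mov * np.roll(s_ref, k, axis=1)) for k in range(180)]
          angle = float(np.argmax(scores))
          # resolve the sign ambiguity of the angle-axis shift by testing both de-rotations
          cands = {a: rotate(mov, a, preserve_range=True) for a in (angle, -angle)}
          best = max(cands, key=lambda a: np.sum(ref * cands[a]))
          shift, _, _ = phase_cross_correlation(ref, cands[best])
          return best, shift   # de-rotation angle (degrees) and residual (row, col) shift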

  16. Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map

    PubMed Central

    2014-01-01

    We present a novel image encryption algorithm using Chebyshev polynomial based on permutation and substitution and Duffing map based on substitution. Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity test, and speed test. The study demonstrates that the proposed image encryption algorithm offers a key space larger than 10^113 and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970

  17. Displacement measurement with nanoscale resolution using a coded micro-mark and digital image correlation

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Ma, Chengfu; Chen, Yuhang

    2014-12-01

    A method for simple and reliable displacement measurement with nanoscale resolution is proposed. The measurement is realized by combining a common optical microscopy imaging of a specially coded nonperiodic microstructure, namely two-dimensional zero-reference mark (2-D ZRM), and subsequent correlation analysis of the obtained image sequence. The autocorrelation peak contrast of the ZRM code is maximized with well-developed artificial intelligence algorithms, which enables robust and accurate displacement determination. To improve the resolution, subpixel image correlation analysis is employed. Finally, we experimentally demonstrate the quasi-static and dynamic displacement characterization ability of a micro 2-D ZRM.

  18. Image quality evaluation of full reference algorithm

    NASA Astrophysics Data System (ADS)

    He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan

    2018-03-01

    Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human judgments. This paper mainly introduces several typical full-reference objective evaluation methods: mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM). The different evaluation methods are tested in MATLAB, and their advantages and disadvantages are obtained by analysis and comparison. MSE and PSNR are simple, but they do not incorporate human visual system (HVS) characteristics into image quality evaluation, so their results are not ideal. SSIM correlates well with subjective scores and is simple to compute, because it incorporates human visual effects into image quality evaluation; however, the SSIM method rests on assumptions that limit its results. The FSIM method can be used for both grayscale and color images, and its results are better. Experimental results show that the new image quality evaluation algorithm based on FSIM is more accurate.
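
    The abstract's experiments were run in MATLAB; an equivalent minimal Python sketch using scikit-image is shown below for MSE, PSNR, and SSIM (FSIM is not available in scikit-image and is therefore omitted).

      import numpy as np
      from skimage import data, img_as_float
      from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

      # simple full-reference comparison of a clean image against a noisy copy
      ref = img_as_float(data.camera())
      rng = np.random.default_rng(0)
      test = np.clip(ref + rng.normal(scale=0.05, size=ref.shape), 0, 1)

      print("MSE :", mean_squared_error(ref, test))
      print("PSNR:", peak_signal_noise_ratio(ref, test, data_range=1.0))
      print("SSIM:", structural_similarity(ref, test, data_range=1.0))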

  19. Development of advanced acreage estimation methods

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1980-01-01

    The use of the AMOEBA clustering/classification algorithm was investigated as a basis for both a color display generation technique and maximum likelihood proportion estimation procedure. An approach to analyzing large data reduction systems was formulated and an exploratory empirical study of spatial correlation in LANDSAT data was also carried out. Topics addressed include: (1) development of multiimage color images; (2) spectral spatial classification algorithm development; (3) spatial correlation studies; and (4) evaluation of data systems.

  20. Sequential Dictionary Learning From Correlated Data: Application to fMRI Data Analysis.

    PubMed

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-03-22

    Sequential dictionary learning via the K-SVD algorithm has been revealed as a successful alternative to conventional data driven methods such as independent component analysis (ICA) for functional magnetic resonance imaging (fMRI) data analysis. fMRI datasets are, however, structured data matrices with notions of spatio-temporal correlation and temporal smoothness. This prior information has not been included in the K-SVD algorithm when applied to fMRI data analysis. In this paper we propose three variants of the K-SVD algorithm dedicated to fMRI data analysis by accounting for this prior information. The proposed algorithms differ from the K-SVD in their sparse coding and dictionary update stages. The first two algorithms account for the known correlation structure in the fMRI data by using the squared Q, R-norm instead of the Frobenius norm for matrix approximation. The third and last algorithm accounts for both the known correlation structure in the fMRI data and the temporal smoothness. The temporal smoothness is incorporated in the dictionary update stage via regularization of the dictionary atoms obtained with penalization. The performance of the proposed dictionary learning algorithms is illustrated through simulations and applications on real fMRI data.

  1. Retinal vessel enhancement based on the Gaussian function and image fusion

    NASA Astrophysics Data System (ADS)

    Moraru, Luminita; Obreja, Cristian Dragoş

    2017-01-01

    The Gaussian function is essential in the construction of the Frangi and COSFIRE (combination of shifted filter responses) filters. Connecting broken vessels and accurately extracting the vascular structure are the main goals of this study. Thus, the outcomes of the Frangi and COSFIRE edge detection algorithms are fused using the Dempster-Shafer algorithm with the aim of improving detection and enhancing the retinal vascular structure. For objective results, the average diameters of the retinal vessels provided by the Frangi, COSFIRE and Dempster-Shafer fusion algorithms are measured. These experimental values are compared to the ground truth values provided by manually segmented retinal images. We prove the superiority of the fusion algorithm in terms of image quality by using the figure of merit objective metric that correlates the effects of all post-processing techniques.
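
    A minimal Python sketch of the vessel-enhancement stage is shown below using scikit-image's Frangi filter on the green channel of a sample retina image. COSFIRE and the Dempster-Shafer combination are not available in scikit-image, so a second Frangi scale range and a naive max-rule fusion stand in for them purely as placeholders.

      import numpy as np
      from skimage import data, img_as_float
      from skimage.filters import frangi

      rgb = img_as_float(data.retina())
      green = rgb[::2, ::2, 1]                       # green channel, downsampled for speed
      v1 = frangi(green, sigmas=range(1, 6))         # dark vessels are the default target
      v2 = frangi(green, sigmas=range(2, 10, 2))     # coarser scales, stand-in for a 2nd detector
      fused = np.maximum(v1, v2)                     # naive max-rule fusion (not Dempster-Shafer)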

  2. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method by combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structural similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
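
    The region-adaptive least-squares prediction step can be sketched as follows; the causal three-neighbor predictor, the region-mask interface, and the toy image are assumptions for illustration, not the paper's exact design.

      import numpy as np

      def ls_predict_region(img, region_mask):
          """For pixels in one (regular or irregular) region, fit a linear
          predictor on the three causal neighbours (west, north, north-west)
          and return the residuals a lossless coder would entropy-code."""
          rows, cols = img.shape
          feats, targets = [], []
          for i in range(1, rows):
              for j in range(1, cols):
                  if region_mask[i, j]:
                      feats.append([img[i, j - 1], img[i - 1, j], img[i - 1, j - 1], 1.0])
                      targets.append(img[i, j])
          A, b = np.asarray(feats, float), np.asarray(targets, float)
          w, *_ = np.linalg.lstsq(A, b, rcond=None)     # region-specific predictor weights
          residuals = b - A @ w
          return w, residuals

      # toy usage: a smooth synthetic image with a single region covering everything
      x, y = np.meshgrid(np.arange(64), np.arange(64))
      img = (100 + 0.5 * x + 0.3 * y).astype(float)
      w, res = ls_predict_region(img, np.ones_like(img, bool))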

  3. Quantitative assessment of tumor angiogenesis using real-time motion-compensated contrast-enhanced ultrasound imaging

    PubMed Central

    Pysz, Marybeth A.; Guracar, Ismayil; Foygel, Kira; Tian, Lu; Willmann, Jürgen K.

    2015-01-01

    Purpose To develop and test a real-time motion compensation algorithm for contrast-enhanced ultrasound imaging of tumor angiogenesis on a clinical ultrasound system. Materials and methods The Administrative Institutional Panel on Laboratory Animal Care approved all experiments. A new motion correction algorithm measuring the sum of absolute differences in pixel displacements within a designated tracking box was implemented in a clinical ultrasound machine. In vivo angiogenesis measurements (expressed as percent contrast area) with and without motion compensated maximum intensity persistence (MIP) ultrasound imaging were analyzed in human colon cancer xenografts (n = 64) in mice. Differences in MIP ultrasound imaging signal with and without motion compensation were compared and correlated with displacements in x- and y-directions. The algorithm was tested in an additional twelve colon cancer xenograft-bearing mice with (n = 6) and without (n = 6) anti-vascular therapy (ASA-404). In vivo MIP percent contrast area measurements were quantitatively correlated with ex vivo microvessel density (MVD) analysis. Results MIP percent contrast area was significantly different (P < 0.001) with and without motion compensation. Differences in percent contrast area correlated significantly (P < 0.001) with x- and y-displacements. MIP percent contrast area measurements were more reproducible with motion compensation (ICC = 0.69) than without (ICC = 0.51) on two consecutive ultrasound scans. Following anti-vascular therapy, motion-compensated MIP percent contrast area significantly (P = 0.03) decreased by 39.4 ± 14.6 % compared to non-treated mice and correlated well with ex vivo MVD analysis (Rho = 0.70; P = 0.05). Conclusion Real-time motion-compensated MIP ultrasound imaging allows reliable and accurate quantification and monitoring of angiogenesis in tumors exposed to breathing-induced motion artifacts. PMID:22535383
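
    A toy Python sketch of the underlying tracking idea, minimizing the sum of absolute differences of a tracking box over a small search window, is given below; the box interface and search range are assumptions, not the implementation on the clinical scanner.

      import numpy as np

      def sad_track(prev, curr, box, search=8):
          """Find the displacement of a tracking box between consecutive frames
          by minimising the sum of absolute differences (SAD) over a window."""
          r0, r1, c0, c1 = box
          template = prev[r0:r1, c0:c1]
          best, best_sad = (0, 0), np.inf
          for dr in range(-search, search + 1):
              for dc in range(-search, search + 1):
                  rr0, cc0 = r0 + dr, c0 + dc
                  if rr0 < 0 or cc0 < 0 or rr0 + template.shape[0] > curr.shape[0] \
                          or cc0 + template.shape[1] > curr.shape[1]:
                      continue
                  patch = curr[rr0:rr0 + template.shape[0], cc0:cc0 + template.shape[1]]
                  sad = np.abs(patch - template).sum()
                  if sad < best_sad:
                      best, best_sad = (dr, dc), sad
          return best   # (row, col) displacement to compensate before MIP accumulation

      # toy usage: a bright blob shifted by (2, -3) pixels between frames
      rng = np.random.default_rng(0)
      f0 = rng.normal(size=(64, 64)); f0[30:36, 30:36] += 10
      f1 = np.roll(f0, (2, -3), axis=(0, 1))
      print(sad_track(f0, f1, box=(28, 38, 28, 38)))   # expected: (2, -3)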

  4. Quantitative assessment of tumor angiogenesis using real-time motion-compensated contrast-enhanced ultrasound imaging.

    PubMed

    Pysz, Marybeth A; Guracar, Ismayil; Foygel, Kira; Tian, Lu; Willmann, Jürgen K

    2012-09-01

    To develop and test a real-time motion compensation algorithm for contrast-enhanced ultrasound imaging of tumor angiogenesis on a clinical ultrasound system. The Administrative Institutional Panel on Laboratory Animal Care approved all experiments. A new motion correction algorithm measuring the sum of absolute differences in pixel displacements within a designated tracking box was implemented in a clinical ultrasound machine. In vivo angiogenesis measurements (expressed as percent contrast area) with and without motion compensated maximum intensity persistence (MIP) ultrasound imaging were analyzed in human colon cancer xenografts (n = 64) in mice. Differences in MIP ultrasound imaging signal with and without motion compensation were compared and correlated with displacements in x- and y-directions. The algorithm was tested in an additional twelve colon cancer xenograft-bearing mice with (n = 6) and without (n = 6) anti-vascular therapy (ASA-404). In vivo MIP percent contrast area measurements were quantitatively correlated with ex vivo microvessel density (MVD) analysis. MIP percent contrast area was significantly different (P < 0.001) with and without motion compensation. Differences in percent contrast area correlated significantly (P < 0.001) with x- and y-displacements. MIP percent contrast area measurements were more reproducible with motion compensation (ICC = 0.69) than without (ICC = 0.51) on two consecutive ultrasound scans. Following anti-vascular therapy, motion-compensated MIP percent contrast area significantly (P = 0.03) decreased by 39.4 ± 14.6 % compared to non-treated mice and correlated well with ex vivo MVD analysis (Rho = 0.70; P = 0.05). Real-time motion-compensated MIP ultrasound imaging allows reliable and accurate quantification and monitoring of angiogenesis in tumors exposed to breathing-induced motion artifacts.

  5. Restoration of motion blurred image with Lucy-Richardson algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jing; Liu, Zhao Hui; Zhou, Liang

    2015-10-01

    Images will be blurred by relative motion between the camera and the object of interest. In this paper, we analyzed the process of motion-blurred imaging and demonstrated a restoration method based on the Lucy-Richardson algorithm. The blur extent and angle can be estimated by a Radon transform algorithm and the auto-correlation function, respectively, and then the point spread function (PSF) of the motion-blurred image can be obtained. Thus, with the help of the obtained PSF, the Lucy-Richardson restoration algorithm is used for experimental analysis on motion-blurred images that have different blur extents, spatial resolutions and signal-to-noise ratios (SNRs). Further, its effectiveness is also evaluated by structural similarity (SSIM). Further studies show that, first, for an image with a spatial frequency of 0.2 per pixel, the modulation transfer function (MTF) of the restored images can remain above 0.7 when the blur extent is no bigger than 13 pixels. That means the method compensates for low-frequency information of the image while attenuating high-frequency information. Second, we found that the method is more effective on the condition that the product of the blur extent and spatial frequency is smaller than 3.75. Finally, the Lucy-Richardson algorithm is found to be insensitive to Gaussian noise (with variance not bigger than 0.1) by calculating the MTF of the restored image.
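
    A minimal Python sketch of the restoration step is shown below using scikit-image's Richardson-Lucy deconvolution with a linear-motion PSF; the simple PSF constructor here merely stands in for the Radon/autocorrelation blur estimation described in the abstract.

      import numpy as np
      from skimage import data, img_as_float
      from skimage.restoration import richardson_lucy
      from scipy.ndimage import rotate
      from scipy.signal import convolve2d

      def motion_psf(length, angle_deg):
          """Linear-motion PSF: a normalised line of `length` pixels at `angle_deg`."""
          psf = np.zeros((length, length))
          psf[length // 2, :] = 1.0
          psf = rotate(psf, angle_deg, reshape=False, order=1)
          return psf / psf.sum()

      img = img_as_float(data.camera())
      psf = motion_psf(13, 30.0)
      blurred = convolve2d(img, psf, mode='same', boundary='symm')
      restored = richardson_lucy(blurred, psf)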

  6. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.

  7. A memory-efficient staining algorithm in 3D seismic modelling and imaging

    NASA Astrophysics Data System (ADS)

    Jia, Xiaofeng; Yang, Lu

    2017-08-01

    The staining algorithm has been proven to generate high signal-to-noise ratio (S/N) images in poorly illuminated areas in two-dimensional cases. In the staining algorithm, the stained wavefield relevant to the target area and the regular source wavefield forward propagate synchronously. Cross-correlating these two wavefields with the backward propagated receiver wavefield separately, we obtain two images: the local image of the target area and the conventional reverse time migration (RTM) image. This imaging process costs massive computer memory for wavefield storage, especially in large scale three-dimensional cases. To make the staining algorithm applicable to three-dimensional RTM, we develop a method to implement the staining algorithm in three-dimensional acoustic modelling in a standard staggered grid finite difference (FD) scheme. The implementation is adaptive to the order of spatial accuracy of the FD operator. The method can be applied to elastic, electromagnetic, and other wave equations. Taking the memory requirement into account, we adopt a random boundary condition (RBC) to backward extrapolate the receiver wavefield and reconstruct it by reverse propagation using the final wavefield snapshot only. Meanwhile, we forward simulate the stained wavefield and source wavefield simultaneously using the nearly perfectly matched layer (NPML) boundary condition. Experiments on a complex geologic model indicate that the RBC-NPML collaborative strategy not only minimizes the memory consumption but also guarantees high quality imaging results. We apply the staining algorithm to three-dimensional RTM via the proposed strategy. Numerical results show that our staining algorithm can produce high S/N images in the target areas with other structures effectively muted.

  8. Computational Ghost Imaging for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.

    2012-01-01

    This work relates to the generic problem of remote active imaging; that is, a source illuminates a target of interest and a receiver collects the scattered light off the target to obtain an image. Conventional imaging systems consist of an imaging lens and a high-resolution detector array [e.g., a CCD (charge coupled device) array] to register the image. However, conventional imaging systems for remote sensing require high-quality optics and need to support large detector arrays and associated electronics. This results in suboptimal size, weight, and power consumption. Computational ghost imaging (CGI) is a computational alternative to this traditional imaging concept that has a very simple receiver structure. In CGI, the transmitter illuminates the target with a modulated light source. A single-pixel (bucket) detector collects the scattered light. Then, via computation (i.e., postprocessing), the receiver can reconstruct the image using the knowledge of the modulation that was projected onto the target by the transmitter. This way, one can construct a very simple receiver that, in principle, requires no lens to image a target. Ghost imaging is a transverse imaging modality that has been receiving much attention owing to a rich interconnection of novel physical characteristics and novel signal processing algorithms suitable for active computational imaging. The original ghost imaging experiments consisted of two correlated optical beams traversing distinct paths and impinging on two spatially-separated photodetectors: one beam interacts with the target and then illuminates on a single-pixel (bucket) detector that provides no spatial resolution, whereas the other beam traverses an independent path and impinges on a high-resolution camera without any interaction with the target. The term ghost imaging was coined soon after the initial experiments were reported, to emphasize the fact that by cross-correlating two photocurrents, one generates an image of the target. In CGI, the measurement obtained from the reference arm (with the high-resolution detector) is replaced by a computational derivation of the measurement-plane intensity profile of the reference-arm beam. The algorithms applied to computational ghost imaging have diversified beyond simple correlation measurements, and now include modern reconstruction algorithms based on compressive sensing.
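
    A toy Python sketch of the CGI reconstruction by correlation is given below: known random patterns illuminate a hypothetical scene, a bucket detector records one value per pattern, and the image is recovered as the fluctuation-weighted average of the patterns. The compressive-sensing reconstructions mentioned at the end of the abstract are not shown.

      import numpy as np

      rng = np.random.default_rng(0)
      target = np.zeros((32, 32)); target[8:24, 12:20] = 1.0      # hypothetical scene
      n_patterns = 4000
      patterns = rng.random((n_patterns, 32, 32))                 # known modulation patterns
      bucket = (patterns * target).sum(axis=(1, 2))               # single-pixel measurements

      # correlation reconstruction: ensemble average of fluctuation-weighted patterns
      recon = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)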

  9. An Adaptive Cross-Correlation Algorithm for Extended-Scene Shack-Hartmann Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.

    2007-01-01

    This viewgraph presentation reviews the Adaptive Cross-Correlation (ACC) Algorithm for extended-scene Shack-Hartmann wavefront (WF) sensing. A Shack-Hartmann sensor places a lenslet array at a plane conjugate to the WF error source. Each sub-aperture lenslet samples the WF in the corresponding patch of the WF. A description of the ACC algorithm is included. The ACC has several benefits, among them: ACC requires only about four image-shifting iterations to achieve 0.01-pixel accuracy, and ACC is insensitive to both background light and noise, making it much more robust than centroiding.
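
    A hedged Python sketch of the iterative image-shifting idea (not JPL's ACC implementation) is shown below: the residual sub-pixel shift is estimated by upsampled cross-correlation and applied repeatedly until it falls below roughly 0.01 pixel.

      import numpy as np
      from scipy.ndimage import shift as nd_shift
      from skimage.registration import phase_cross_correlation

      def iterative_shift_registration(ref, img, n_iter=4):
          """Repeatedly estimate the residual sub-pixel shift between a reference
          and a sub-aperture image and resample until the estimate converges."""
          total = np.zeros(2)
          work = img.astype(float).copy()
          for _ in range(n_iter):
              delta, _, _ = phase_cross_correlation(ref, work, upsample_factor=100)
              total += delta
              work = nd_shift(work, delta, order=3, mode='nearest')
              if np.all(np.abs(delta) < 0.01):      # ~0.01-pixel accuracy target
                  break
          return total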

  10. 3D reconstruction from multi-view VHR-satellite images in MicMac

    NASA Astrophysics Data System (ADS)

    Rupnik, Ewelina; Pierrot-Deseilligny, Marc; Delorme, Arthur

    2018-05-01

    This work addresses the generation of high quality digital surface models by fusing multiple depth maps calculated with the dense image matching method. The algorithm is adapted to very high resolution multi-view satellite images, and the main contributions of this work are in the multi-view fusion. The algorithm is insensitive to outliers, takes into account the matching quality indicators, handles non-correlated zones (e.g. occlusions), and is solved with a multi-directional dynamic programming approach. No geometric constraints (e.g. surface planarity) or auxiliary data in form of ground control points are required for its operation. Prior to the fusion procedures, the RPC geolocation parameters of all images are improved in a bundle block adjustment routine. The performance of the algorithm is evaluated on two VHR (Very High Resolution)-satellite image datasets (Pléiades, WorldView-3) revealing its good performance in reconstructing non-textured areas, repetitive patterns, and surface discontinuities.

  11. Initial Navigation Alignment of Optical Instruments on GOES-R

    NASA Technical Reports Server (NTRS)

    Isaacson, Peter J.; DeLuccia, Frank J.; Reth, Alan D.; Igli, David A.; Carter, Delano R.

    2016-01-01

    Post-launch alignment errors for the Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) on GOES-R may be too large for the image navigation and registration (INR) processing algorithms to function without an initial adjustment to calibration parameters. We present an approach that leverages a combination of user-selected image-to-image tie points and image correlation algorithms to estimate this initial launch-induced offset and calculate adjustments to the Line of Sight Motion Compensation (LMC) parameters. We also present an approach to generate synthetic test images, to which shifts and rotations of known magnitude are applied. Results of applying the initial alignment tools to a subset of these synthetic test images are presented. The results for both ABI and GLM are within the specifications established for these tools, and indicate that application of these tools during the post-launch test (PLT) phase of GOES-R operations will enable the automated INR algorithms for both instruments to function as intended.

  12. Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions.

    PubMed

    Grootjans, Willem; Meeuwis, Antoi P W; Slump, Cornelis H; de Geus-Oei, Lioe-Fee; Gotthardt, Martin; Visser, Eric P

    2016-12-01

    Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion was characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms in reconstructing clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered-back-projection (FBP), OSEM and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7, respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4.2, respectively. Regularization with smoothing priors could suppress these noise patterns at the cost of reduced image contrast. The mean N% was 6.4% and 6.8% for low count QSP and MRP MAP reconstructed images. Alternatively, regularization with an anatomical Bowsher prior resulted in sharp images with high contrast, limited image distortion, and low N% of 8.3% in low count images, although some image artifacts did occur. Analysis of clinical images suggested that the same effects occur in clinical imaging. Image quality of low count SPECT acquisitions reconstructed with modern 3DOSEM algorithms is deteriorated by the occurrence of correlated noise patterns and image distortions. The artifacts observed in the phantom experiments can also occur in clinical imaging. Copyright © 2015. Published by Elsevier GmbH.

  13. Wideband RELAX and wideband CLEAN for aeroacoustic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.

  14. Wideband RELAX and wideband CLEAN for aeroacoustic imaging.

    PubMed

    Wang, Yanwei; Li, Jian; Stoica, Petre; Sheplak, Mark; Nishida, Toshikazu

    2004-02-01

    Microphone arrays can be used for acoustic source localization and characterization in wind tunnel testing. In this paper, the wideband RELAX (WB-RELAX) and the wideband CLEAN (WB-CLEAN) algorithms are presented for aeroacoustic imaging using an acoustic array. WB-RELAX is a parametric approach that can be used efficiently for point source imaging without the sidelobe problems suffered by the delay-and-sum beamforming approaches. WB-CLEAN does not have sidelobe problems either, but it behaves more like a nonparametric approach and can be used for both point source and distributed source imaging. Moreover, neither of the algorithms suffers from the severe performance degradations encountered by the adaptive beamforming methods when the number of snapshots is small and/or the sources are highly correlated or coherent with each other. A two-step optimization procedure is used to implement the WB-RELAX and WB-CLEAN algorithms efficiently. The performance of WB-RELAX and WB-CLEAN is demonstrated by applying them to measured data obtained at the NASA Langley Quiet Flow Facility using a small aperture directional array (SADA). Somewhat surprisingly, using these approaches, not only were the parameters of the dominant source accurately determined, but a highly correlated multipath of the dominant source was also discovered.

  15. Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.

    PubMed

    Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban

    2015-07-20

    In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.

  16. Maximum-likelihood-based extended-source spatial acquisition and tracking for planetary optical communications

    NASA Astrophysics Data System (ADS)

    Tsou, Haiping; Yan, Tsun-Yee

    1999-04-01

    This paper describes an extended-source spatial acquisition and tracking scheme for planetary optical communications. This scheme uses the Sun-lit Earth image as the beacon signal, which can be computed according to the current Sun-Earth-Probe angle from a pre-stored Earth image or a received snapshot taken by another Earth-orbiting satellite. Onboard the spacecraft, the reference image is correlated in the transform domain with the received image obtained from a detector array, which is assumed to have each of its pixels corrupted by independent additive white Gaussian noise. The coordinate of the ground station is acquired and tracked, respectively, by an open-loop acquisition algorithm and a closed-loop tracking algorithm derived from the maximum likelihood criterion. As shown in the paper, the optimal spatial acquisition requires solving two nonlinear equations, or iteratively solving their linearized variants, to estimate the coordinate when translation in the relative positions of onboard and ground transceivers is considered. A similar linearization assumption leads to the closed-loop spatial tracking algorithm, in which the loop feedback signals can be derived from the weighted transform-domain correlation. Numerical results using a sample Sun-lit Earth image demonstrate that sub-pixel resolutions can be achieved by this scheme in a high disturbance environment.

  17. Constraints as a destriping tool for Hires images

    NASA Technical Reports Server (NTRS)

    Cao, YU; Prince, Thomas A.

    1994-01-01

    Images produced from the Maximum Correlation Method sometimes suffer from visible striping artifacts, especially for areas of extended sources. Possible causes are different baseline levels and calibration errors in the detectors. We incorporated these factors into the MCM algorithm, and tested the effects of different constraints on the output image. The result shows significant visual improvement over the standard MCM Method. In some areas the new images show intelligible structures that are otherwise corrupted by striping artifacts, and the removal of these artifacts could enhance performance of object classification algorithms. The constraints were also tested on low surface brightness areas, and were found to be effective in reducing the noise level.

  18. Internal respiratory surrogate in multislice 4D CT using a combination of Fourier transform and anatomical features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hui, Cheukkai; Suh, Yelin; Robertson, Daniel

    Purpose: The purpose of this study was to develop a novel algorithm to create a robust internal respiratory signal (IRS) for retrospective sorting of four-dimensional (4D) computed tomography (CT) images. Methods: The proposed algorithm combines information from the Fourier transform of the CT images and from internal anatomical features to form the IRS. The algorithm first extracts potential respiratory signals from low-frequency components in the Fourier space and selected anatomical features in the image space. A clustering algorithm then constructs groups of potential respiratory signals with similar temporal oscillation patterns. The clustered group with the largest number of similar signals is chosen to form the final IRS. To evaluate the performance of the proposed algorithm, the IRS was computed and compared with the external respiratory signal from the real-time position management (RPM) system on 80 patients. Results: In 72 (90%) of the 4D CT data sets tested, the IRS computed by the authors’ proposed algorithm matched with the RPM signal based on their normalized cross correlation. For these data sets with matching respiratory signals, the average difference between the end inspiration times (Δt_ins) in the IRS and RPM signal was 0.11 s, and only 2.1% of Δt_ins were more than 0.5 s apart. In the eight (10%) 4D CT data sets in which the IRS and the RPM signal did not match, the average Δt_ins was 0.73 s in the nonmatching couch positions, and 35.4% of them had a Δt_ins greater than 0.5 s. At couch positions in which IRS did not match the RPM signal, a correlation-based metric indicated poorer matching of neighboring couch positions in the RPM-sorted images. This implied that, when IRS did not match the RPM signal, the images sorted using the IRS showed fewer artifacts than the clinical images sorted using the RPM signal. Conclusions: The authors’ proposed algorithm can generate robust IRSs that can be used for retrospective sorting of 4D CT data. The algorithm is completely automatic and requires very little processing time. The algorithm is cost efficient and can be easily adopted for everyday clinical use.

  19. A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Becker, D.; Cain, S.

    Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms employed play a crucial role in fulfilling the detection component in the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follows a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long exposure images of small and/or dim space objects from ground based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier transformed images to make the determination if an object is present based on the criteria threshold found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.

  20. Assessment of global longitudinal strain using standardized myocardial deformation imaging: a modality independent software approach.

    PubMed

    Riffel, Johannes H; Keller, Marius G P; Aurich, Matthias; Sander, Yannick; Andre, Florian; Giusca, Sorin; Aus dem Siepen, Fabian; Seitz, Sebastian; Galuschky, Christian; Korosoglou, Grigorios; Mereles, Derliz; Katus, Hugo A; Buss, Sebastian J

    2015-07-01

    Myocardial deformation measurement is superior to left ventricular ejection fraction in identifying early changes in myocardial contractility and prediction of cardiovascular outcome. The lack of standardization hinders its clinical implementation. The aim of the study is to investigate a novel standardized deformation imaging approach based on the feature tracking algorithm for the assessment of global longitudinal (GLS) and global circumferential strain (GCS) in echocardiography and cardiac magnetic resonance imaging (CMR). 70 subjects undergoing CMR were consecutively investigated with echocardiography within a median time of 30 min. GLS and GCS were analyzed with a post-processing software incorporating the same standardized algorithm for both modalities. Global strain was defined as the relative shortening of the whole endocardial contour length and calculated according to the strain formula. Mean GLS values were -16.2 ± 5.3 and -17.3 ± 5.3 % for echocardiography and CMR, respectively. GLS did not differ significantly between the two imaging modalities, which showed strong correlation (r = 0.86), a small bias (-1.1 %) and narrow 95 % limits of agreement (LOA ± 5.4 %). Mean GCS values were -17.9 ± 6.3 and -24.4 ± 7.8 % for echocardiography and CMR, respectively. GCS was significantly underestimated by echocardiography (p < 0.001). A weaker correlation (r = 0.73), a higher bias (-6.5 %) and wider LOA (± 10.5 %) were observed for GCS. GLS showed a strong correlation (r = 0.92) when image quality was good, while correlation dropped to r = 0.82 with poor acoustic windows in echocardiography. GCS assessment revealed only a strong correlation (r = 0.87) when echocardiographic image quality was good. No significant differences for GLS between two different echocardiographic vendors could be detected. Quantitative assessment of GLS using a standardized software algorithm allows the direct comparison of values acquired irrespective of the imaging modality. GLS may, therefore, serve as a reliable parameter for the assessment of global left ventricular function in clinical routine besides standard evaluation of the ejection fraction.

  1. TH-AB-202-01: Daily Lung Tumor Motion Characterization On EPIDs Using a Markerless Tiling Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozario, T; University of Texas at Dallas, Richardson, TX; Chiu, T

    Purpose: Tracking lung tumor motion in real time allows for target dose escalation while simultaneously reducing dose to sensitive structures, thus increasing local control without increasing toxicity. We present a novel intra-fractional markerless lung tumor tracking algorithm using MV treatment beam images acquired during treatment delivery. Strong signals superimposed on the tumor significantly reduce the soft tissue resolution, while the different imaging modalities involved introduce global imaging discrepancies; both reduce the comparison accuracy. A simple yet elegant Tiling algorithm is reported to overcome the aforementioned issues. Methods: MV treatment beam images were acquired continuously in beam’s eye view (BEV) by an electronic portal imaging device (EPID) during treatment and analyzed to obtain tumor positions on every frame. Every frame of the MV image was simulated by a composite of two components with separate digitally reconstructed radiographs (DRRs): all non-moving structures and the tumor. This Tiling algorithm divides the global composite DRR and the corresponding MV projection into sub-images called tiles. Rigid registration is performed independently on tile-pairs in order to improve local soft tissue resolution. This enables the composite DRR to be transformed accurately to match the MV projection and attain a high correlation value through a pixel-based linear transformation. The highest cumulative correlation for all tile-pairs achieved over a user-defined search range indicates the 2-D coordinates of the tumor location on the MV projection. Results: This algorithm was successfully applied to cine-mode BEV images acquired during two SBRT plans delivered five times with different motion patterns to each of two phantoms. Approximately 15000 beam’s eye view images were analyzed and tumor locations were successfully identified on every projection with a maximum/average error of 1.8 mm / 1.0 mm. Conclusion: Despite the presence of strong anatomical signal overlapping with tumor images, this markerless detection algorithm accurately tracks intrafractional lung tumor motions. This project is partially supported by an Elekta research grant.

  2. Intensity correlation imaging with sunlight-like source

    NASA Astrophysics Data System (ADS)

    Wang, Wentao; Tang, Zhiguo; Zheng, Huaibin; Chen, Hui; Yuan, Yuan; Liu, Jinbin; Liu, Yanyan; Xu, Zhuo

    2018-05-01

    We show a method of intensity correlation imaging of targets illuminated by a sunlight-like source both theoretically and experimentally. With a Faraday anomalous dispersion optical filter (FADOF), we have modulated the coherence time of a thermal source up to 0.167 ns. We carried out measurements of temporal and spatial correlations, respectively, with an intensity interferometer setup. By skillfully using even Fourier fitting on the very sparse sampling data, the images of targets are successfully reconstructed from the low signal-to-noise ratio (SNR) interference pattern by applying an iterative phase retrieval algorithm. The resulting image quality is as good as that obtained by the theoretical fitting. The realization of such a case will bring this technique closer to geostationary satellite imaging illuminated by sunlight.

  3. Peak-locking centroid bias in Shack-Hartmann wavefront sensing

    NASA Astrophysics Data System (ADS)

    Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.

    2018-05-01

    Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions proposed. But these solutions only allow partial bias corrections. To date, no systematic study of the bias error was conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, the centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ˜7 to values of ≲ 0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.

  4. Methods for investigating the local spatial anisotropy and the preferred orientation of cones in adaptive optics retinal images

    PubMed Central

    Cooper, Robert F.; Lombardo, Marco; Carroll, Joseph; Sloan, Kenneth R.; Lombardo, Giuseppe

    2016-01-01

    The ability to non-invasively image the cone photoreceptor mosaic holds significant potential as a diagnostic for retinal disease. Central to the realization of this potential is the development of sensitive metrics for characterizing the organization of the mosaic. Here we evaluated previously-described (Pum et al., 1990) and newly-developed (Fourier- and Radon-based) methods of measuring cone orientation in both simulated and real images of the parafoveal cone mosaic. The proposed algorithms correlated well across both simulated and real mosaics, suggesting that each algorithm would provide an accurate description of individual photoreceptor orientation. Despite the high agreement between algorithms, each performed differently in response to image intensity variation and cone coordinate jitter. The integration property of the Fourier transform allowed the Fourier-based method to be resistant to cone coordinate jitter and perform the most robustly of all three algorithms. Conversely, when there is good image quality but unreliable cone identification, the Radon algorithm performed best. Finally, in cases where both the image and cone coordinate reliability was excellent, the method of Pum et al. (1990) performed best. These descriptors are complementary to conventional descriptive metrics of the cone mosaic, such as cell density and spacing, and have the potential to aid in the detection of photoreceptor pathology. PMID:27484961

  5. Imaging of downward-looking linear array SAR using three-dimensional spatial smoothing MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Siqian; Kuang, Gangyao

    2014-10-01

    In this paper, a novel three-dimensional imaging algorithm for downward-looking linear array SAR is presented. To improve the resolution, the multiple signal classification (MUSIC) algorithm has been used. However, since the scattering centers are always correlated in real SAR systems, the estimated covariance matrix becomes singular. To address the problem, a three-dimensional spatial smoothing method is proposed in this paper to restore the singular covariance matrix to a full-rank one. The three-dimensional signal matrix can be divided into a set of orthogonal three-dimensional subspaces. The main idea of the method is based on extracting the array correlation matrix as the average of all correlation matrices from the subspaces. In addition, the spectral height of the peaks contains no information with regard to the scattering intensity of the different scattering centers, making it difficult to reconstruct the backscattering information. The least square strategy is used to estimate the amplitude of the scattering centers in this paper. The above results of the theoretical analysis are verified by 3-D scene simulations and experiments on real data.
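
    The covariance-restoration idea can be illustrated in one dimension with the short Python sketch below, which averages covariance matrices over all overlapping sub-arrays so that two fully coherent sources again produce two dominant eigenvalues; the 3-D extension in the paper follows the same averaging principle. The array geometry and source model here are assumptions for illustration only.

      import numpy as np

      def spatially_smoothed_covariance(snapshots, sub_len):
          """Average the sample covariance matrices of all overlapping sub-arrays
          so the covariance of correlated/coherent sources regains full rank."""
          n_elem, n_snap = snapshots.shape
          n_sub = n_elem - sub_len + 1
          R = np.zeros((sub_len, sub_len), dtype=complex)
          for k in range(n_sub):
              x = snapshots[k:k + sub_len, :]
              R += x @ x.conj().T / n_snap
          return R / n_sub

      # toy usage: two fully coherent sources on a 16-element uniform linear array
      n, snaps = 16, 200
      rng = np.random.default_rng(0)
      angles = np.deg2rad([10.0, 25.0])
      steer = np.exp(1j * np.pi * np.outer(np.arange(n), np.sin(angles)))
      s = np.exp(2j * np.pi * rng.random(snaps))
      noise = 0.05 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
      x = steer @ np.vstack([s, s]) + noise
      R_smooth = spatially_smoothed_covariance(x, sub_len=8)
      print(np.linalg.eigvalsh(R_smooth)[::-1][:3])   # two dominant eigenvalues emerge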

  6. Adaptive radiotherapy for NSCLC patients: utilizing the principle of energy conservation to evaluate dose mapping operations

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Chetty, Indrin J.

    2017-06-01

    Tumor regression during the course of fractionated radiotherapy confounds the ability to accurately estimate the total dose delivered to tumor targets. Here we present a new criterion to improve the accuracy of image intensity-based dose mapping operations for adaptive radiotherapy for patients with non-small cell lung cancer (NSCLC). Six NSCLC patients were retrospectively investigated in this study. An image intensity-based B-spline registration algorithm was used for deformable image registration (DIR) of weekly CBCT images to a reference image. The resultant displacement vector fields were employed to map the doses calculated on weekly images to the reference image. The concept of energy conservation was introduced as a criterion to evaluate the accuracy of the dose mapping operations. A finite element method (FEM)-based mechanical model was implemented to improve the performance of the B-Spline-based registration algorithm in regions involving tumor regression. For the six patients, deformed tumor volumes changed by 21.2  ±  15.0% and 4.1  ±  3.7% on average for the B-Spline and the FEM-based registrations performed from fraction 1 to fraction 21, respectively. The energy deposited in the gross tumor volume (GTV) was 0.66 Joules (J) per fraction on average. The energy derived from the fractional dose reconstructed by the B-spline and FEM-based DIR algorithms in the deformed GTV’s was 0.51 J and 0.64 J, respectively. Based on landmark comparisons for the 6 patients, mean error for the FEM-based DIR algorithm was 2.5  ±  1.9 mm. The cross-correlation coefficient between the landmark-measured displacement error and the loss of radiation energy was  -0.16 for the FEM-based algorithm. To avoid uncertainties in measuring distorted landmarks, the B-Spline-based registrations were compared to the FEM registrations, and their displacement differences equal 4.2  ±  4.7 mm on average. The displacement differences were correlated to their relative loss of radiation energy with a cross-correlation coefficient equal to 0.68. Based on the principle of energy conservation, the FEM-based mechanical model has a better performance than the B-Spline-based DIR algorithm. It is recommended that the principle of energy conservation be incorporated into a comprehensive QA protocol for adaptive radiotherapy.

  7. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    PubMed

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Recently, image fusion has played a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most applicable imaging modalities for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images based on a novel definition of vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for high and low frequency contents based on the coefficient characteristics of the wrapping-based second generation curvelet transform and a novel content selection strategy. Our proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In our proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high frequency coefficients. For low frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. Our proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.

  8. Three-dimensional lung tumor segmentation from x-ray computed tomography using sparse field active models.

    PubMed

    Awad, Joseph; Owrangi, Amir; Villemaire, Lauren; O'Riordan, Elaine; Parraga, Grace; Fenster, Aaron

    2012-02-01

    Manual segmentation of lung tumors is observer dependent and time-consuming but an important component of radiology and radiation oncology workflow. The objective of this study was to generate an automated lung tumor measurement tool for segmentation of pulmonary metastatic tumors from x-ray computed tomography (CT) images to improve reproducibility and decrease the time required to segment tumor boundaries. The authors developed an automated lung tumor segmentation algorithm for volumetric image analysis of chest CT images using shape constrained Otsu multithresholding (SCOMT) and sparse field active surface (SFAS) algorithms. The observer was required to select the tumor center, and the SCOMT algorithm subsequently created an initial surface that was deformed using level set SFAS to minimize the total energy consisting of mean separation, edge, partial volume, rolling, distribution, background, shape, volume, smoothness, and curvature energies. The proposed segmentation algorithm was compared to manual segmentation, whereby 21 tumors were evaluated using one-dimensional (1D) Response Evaluation Criteria in Solid Tumors (RECIST), two-dimensional (2D) World Health Organization (WHO), and 3D volume measurements. Linear regression goodness-of-fit measures (r² = 0.63, p < 0.0001; r² = 0.87, p < 0.0001; and r² = 0.96, p < 0.0001), and Pearson correlation coefficients (r = 0.79, p < 0.0001; r = 0.93, p < 0.0001; and r = 0.98, p < 0.0001) for 1D, 2D, and 3D measurements, respectively, showed significant correlations between manual and algorithm results. Intra-observer intraclass correlation coefficients (ICC) demonstrated high reproducibility for algorithm (0.989-0.995, 0.996-0.997, and 0.999-0.999) and manual measurements (0.975-0.993, 0.985-0.993, and 0.980-0.992) for 1D, 2D, and 3D measurements, respectively. The intra-observer coefficient of variation (CV%) was low for algorithm (3.09%-4.67%, 4.85%-5.84%, and 5.65%-5.88%) and manual observers (4.20%-6.61%, 8.14%-9.57%, and 14.57%-21.61%) for 1D, 2D, and 3D measurements, respectively. The authors developed an automated segmentation algorithm requiring only that the operator select the tumor to measure pulmonary metastatic tumors in 1D, 2D, and 3D. Algorithm and manual measurements were significantly correlated. Since the algorithm segmentation involves selection of a single seed point, it resulted in reduced intra-observer variability and decreased measurement time.

  9. Dehazed Image Quality Assessment by Haze-Line Theory

    NASA Astrophysics Data System (ADS)

    Song, Yingchao; Luo, Haibo; Lu, Rongrong; Ma, Junkai

    2017-06-01

    Images captured in bad weather suffer from low contrast and faint color. Recently, plenty of dehazing algorithms have been proposed to enhance visibility and restore color. However, there is a lack of evaluation metrics to assess the performance of these algorithms or rate them. In this paper, an indicator of contrast enhancement is proposed based on the recently introduced haze-line theory. The theory assumes that the colors of a haze-free image are well approximated by a few hundred distinct colors, which form tight clusters in RGB space. The presence of haze makes each color cluster form a line, which is named a haze-line. By using these haze-lines, we assess the performance of dehazing algorithms designed to enhance contrast by measuring the inter-cluster deviations between the different colors of the dehazed image. Experimental results demonstrate that the proposed Color Contrast (CC) index correlates well with human judgments of image contrast collected in a subjective test on various scenes of dehazed images and performs better than state-of-the-art metrics.

  10. Correlation-coefficient-based fast template matching through partial elimination.

    PubMed

    Mahmood, Arif; Khan, Sohaib

    2012-04-01

    Partial computation elimination techniques are often used for fast template matching. At a particular search location, computations are prematurely terminated as soon as it is found that this location cannot compete with an already known best match location. Due to the nonmonotonic growth pattern of the correlation-based similarity measures, partial computation elimination techniques have been traditionally considered inapplicable to speed up these measures. In this paper, we show that partial elimination techniques may be applied to a correlation coefficient by using a monotonic formulation, and we propose basic-mode and extended-mode partial correlation elimination algorithms for fast template matching. The basic-mode algorithm is more efficient on small template sizes, whereas the extended mode is faster on medium and larger templates. We also propose a strategy to decide which algorithm to use for a given data set. To achieve a high speedup, elimination algorithms require an initial guess of the peak correlation value. We propose two initialization schemes including a coarse-to-fine scheme for larger templates and a two-stage technique for small- and medium-sized templates. Our proposed algorithms are exact, i.e., having exhaustive equivalent accuracy, and are compared with the existing fast techniques using real image data sets on a wide variety of template sizes. While the actual speedups are data dependent, in most cases, our proposed algorithms have been found to be significantly faster than the other algorithms.
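
    A minimal sketch of the partial-elimination idea described above is given below (Python/NumPy), assuming a zero-mean correlation coefficient and a per-row Cauchy-Schwarz bound on the not-yet-accumulated part of the numerator. This is an illustrative reconstruction of the principle, not the authors' basic- or extended-mode algorithm; practical implementations would precompute running sums instead of normalising each window explicitly.

```python
import numpy as np

def ncc_with_partial_elimination(image, template):
    """Exhaustive-equivalent correlation-coefficient template matching with
    per-row early termination: a location is abandoned as soon as an upper
    bound on its achievable correlation falls below the current best."""
    th, tw = template.shape
    t = template.astype(np.float64) - template.mean()
    t_norm = np.linalg.norm(t)
    # norm of the template rows still to be processed after each row
    t_rem = np.sqrt(np.cumsum((t ** 2).sum(axis=1)[::-1])[::-1])

    best_val, best_loc = -1.0, (0, 0)
    H, W = image.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            w = image[y:y + th, x:x + tw].astype(np.float64)
            w = w - w.mean()
            w_norm = np.linalg.norm(w)
            if w_norm == 0:
                continue
            w_rem = np.sqrt(np.cumsum((w ** 2).sum(axis=1)[::-1])[::-1])
            num = 0.0
            for r in range(th):
                num += float(np.dot(t[r], w[r]))
                bound = t_rem[r + 1] * w_rem[r + 1] if r + 1 < th else 0.0
                if (num + bound) / (t_norm * w_norm) < best_val:
                    break   # cannot beat the current best match; skip location
            else:
                corr = num / (t_norm * w_norm)
                if corr > best_val:
                    best_val, best_loc = corr, (y, x)
    return best_loc, best_val
```

    Because the Cauchy-Schwarz bound never underestimates the remaining numerator, the early exits do not change the result, so the search remains exhaustive-equivalent in accuracy.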

  11. Semi-Automatic Extraction Algorithm for Images of the Ciliary Muscle

    PubMed Central

    Kao, Chiu-Yen; Richdale, Kathryn; Sinnott, Loraine T.; Ernst, Lauren E.; Bailey, Melissa D.

    2011-01-01

    Purpose To develop and evaluate a semi-automatic algorithm for segmentation and morphological assessment of the dimensions of the ciliary muscle in Visante™ Anterior Segment Optical Coherence Tomography images. Methods Geometric distortions in Visante images analyzed as binary files were assessed by imaging an optical flat and human donor tissue. The appropriate pixel/mm conversion factor to use for air (n = 1) was estimated by imaging calibration spheres. A semi-automatic algorithm was developed to extract the dimensions of the ciliary muscle from Visante images. Measurements were also made manually using Visante software calipers. Interclass correlation coefficients (ICC) and Bland-Altman analyses were used to compare the methods. A multilevel model was fitted to estimate the variance of algorithm measurements that was due to differences within- and between-examiners in scleral spur selection versus biological variability. Results The optical flat and the human donor tissue were imaged and appeared without geometric distortions in binary file format. Bland-Altman analyses revealed that caliper measurements tended to underestimate ciliary muscle thickness at 3 mm posterior to the scleral spur in subjects with the thickest ciliary muscles (t = 3.6, p < 0.001). The percent variance due to within- or between-examiner differences in scleral spur selection was found to be small (6%) when compared to the variance due to biological difference across subjects (80%). Using the mean of measurements from three images achieved an estimated ICC of 0.85. Conclusions The semi-automatic algorithm successfully segmented the ciliary muscle for further measurement. Using the algorithm to follow the scleral curvature to locate more posterior measurements is critical to avoid underestimating thickness measurements. This semi-automatic algorithm will allow for repeatable, efficient, and masked ciliary muscle measurements in large datasets. PMID:21169877

  12. Novel methods of imaging and analysis for the thermoregulatory sweat test.

    PubMed

    Carroll, Michael Sean; Reed, David W; Kuntz, Nancy L; Weese-Mayer, Debra Ellyn

    2018-06-07

    The thermoregulatory sweat test (TST) can be central to the identification and management of disorders affecting sudomotor function and small sensory and autonomic nerve fibers, but the cumbersome nature of the standard testing protocol has prevented its widespread adoption. A high resolution, quantitative, clean and simple assay of sweating could significantly improve identification and management of these disorders. Images from 89 clinical TSTs were analyzed retrospectively using two novel techniques. First, using the standard indicator powder, skin surface sweat distributions were determined algorithmically for each patient. Second, a fundamentally novel method using thermal imaging of forced evaporative cooling was evaluated through comparison with the standard technique. Correlation and receiver operating characteristic analyses were used to determine the degree of match between these methods, and the potential limits of thermal imaging were examined through cumulative analysis of all studied patients. Algorithmic encoding of sweating and non-sweating regions produces a more objective analysis for clinical decision making. Additionally, results from the forced cooling method correspond well with those from indicator powder imaging, with a correlation across spatial regions of -0.78 (CI: -0.84 to -0.71). The method works similarly across body regions, and frame-by-frame analysis suggests the ability to identify sweating regions within about 1 second of imaging. While algorithmic encoding can enhance the standard sweat testing protocol, thermal imaging with forced evaporative cooling can dramatically improve the TST by making it less time-consuming and more patient-friendly than the current approach.

  13. A novel algorithm for thermal image encryption.

    PubMed

    Hussain, Iqtadar; Anees, Amir; Algarni, Abdulmohsen

    2018-04-16

    Thermal images play a vital role at nuclear plants, power stations, forensic labs, biological research facilities, and petroleum product extraction sites, so the security of thermal images is very important. Image data has some unique features such as intensity, contrast, homogeneity, entropy and correlation among pixels, which is why image encryption is somewhat trickier than other encryption tasks; with conventional image encryption schemes it is normally hard to handle these features. Therefore, cryptographers have paid attention to some attractive properties of chaotic maps, such as randomness and sensitivity, to build novel cryptosystems, and recently proposed image encryption techniques increasingly depend on the application of chaotic maps. This paper proposes an image encryption algorithm based on the Chebyshev chaotic map and substitution boxes built from the S8 symmetric group of permutations. First, the parameters of the chaotic Chebyshev map are chosen as a secret key to confuse the original image. Then, the plaintext image is encrypted by the method generated from the substitution boxes and the Chebyshev map. By this process, we obtain a ciphertext image that is thoroughly confused and diffused. The outcomes of well-known experiments, key sensitivity tests and statistical analysis confirm that the proposed algorithm offers a safe and efficient approach for real-time image encryption.
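
    For illustration, a minimal sketch of the keystream/diffusion stage of a Chebyshev-map-based cipher is shown below (Python/NumPy, assuming an 8-bit grayscale image). The S8 substitution-box confusion stage of the published scheme is omitted, and the key values shown are placeholders.

```python
import numpy as np

def chebyshev_keystream(x0, k, n):
    """Iterate the Chebyshev map x_{i+1} = cos(k * arccos(x_i)) on [-1, 1]
    and quantise each state to a byte; (x0, k) act as the secret key."""
    stream = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = np.cos(k * np.arccos(x))
        stream[i] = int(((x + 1.0) / 2.0) * 255) & 0xFF
    return stream

def xor_diffuse(image, x0=0.3141, k=4):
    """Pixel diffusion by XOR with the chaotic keystream (uint8 image assumed).
    Applying the same function with the same key recovers the image."""
    flat = image.reshape(-1)
    ks = chebyshev_keystream(x0, k, flat.size)
    return np.bitwise_xor(flat, ks).reshape(image.shape)
```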

  14. Optimization of image processing algorithms on mobile platforms

    NASA Astrophysics Data System (ADS)

    Poudel, Pramod; Shirvaikar, Mukul

    2011-03-01

    This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, net-books and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND RAM and 256 MB SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application for various template matching tasks such as face-recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.

  15. Comparison of Unsupervised Vegetation Classification Methods from Vhr Images after Shadows Removal by Innovative Algorithms

    NASA Astrophysics Data System (ADS)

    Movia, A.; Beinat, A.; Crosilla, F.

    2015-04-01

    The recognition of vegetation by the analysis of very high resolution (VHR) aerial images provides meaningful information about environmental features; nevertheless, VHR images frequently contain shadows that generate significant problems for the classification of the image components and for the extraction of the needed information. The aim of this research is to classify, from VHR aerial images, vegetation involved in the balance process of the environmental biochemical cycle, and to discriminate it with respect to urban and agricultural features. Three classification algorithms have been tested in order to better recognize vegetation, and compared to the NDVI index; unfortunately, all these methods are affected by the presence of shadows in the images. The literature presents several algorithms to detect and remove shadows in the scene: most of them are based on RGB to HSI transformations. In this work some of them have been implemented and compared with one based on the RGB bands. Subsequently, in order to remove shadows and restore brightness in the images, some innovative algorithms based on Procrustes theory have been implemented and applied. Among these, we evaluate the capability of the so-called "not-centered oblique Procrustes" and "anisotropic Procrustes" methods to efficiently restore brightness with respect to a linear correlation correction based on the Cholesky decomposition. Some experimental results obtained by different classification methods after shadow removal carried out with the innovative algorithms are presented and discussed.

  16. Accuracy and Reliability Assessment of CT and MR Perfusion Analysis Software Using a Digital Phantom

    PubMed Central

    Christensen, Soren; Sasaki, Makoto; Østergaard, Leif; Shirato, Hiroki; Ogasawara, Kuniaki; Wintermark, Max; Warach, Steven

    2013-01-01

    Purpose: To design a digital phantom data set for computed tomography (CT) perfusion and perfusion-weighted imaging on the basis of the widely accepted tracer kinetic theory in which the true values of cerebral blood flow (CBF), cerebral blood volume (CBV), mean transit time (MTT), and tracer arrival delay are known and to evaluate the accuracy and reliability of postprocessing programs using this digital phantom. Materials and Methods: A phantom data set was created by generating concentration-time curves reflecting true values for CBF (2.5–87.5 mL/100 g per minute), CBV (1.0–5.0 mL/100 g), MTT (3.4–24 seconds), and tracer delays (0–3.0 seconds). These curves were embedded in human brain images. The data were analyzed by using 13 algorithms each for CT and magnetic resonance (MR), including five commercial vendors and five academic programs. Accuracy was assessed by using the Pearson correlation coefficient (r) for true values. Delay-, MTT-, or CBV-dependent errors and correlations between time to maximum of residue function (Tmax) were also evaluated. Results: In CT, CBV was generally well reproduced (r > 0.9 in 12 algorithms), but not CBF and MTT (r > 0.9 in seven and four algorithms, respectively). In MR, good correlation (r > 0.9) was observed in one-half of commercial programs, while all academic algorithms showed good correlations for all parameters. Most algorithms had delay-dependent errors, especially for commercial software, as well as CBV dependency for CBF or MTT calculation and MTT dependency for CBV calculation. Correlation was good in Tmax except for one algorithm. Conclusion: The digital phantom readily evaluated the accuracy and characteristics of the CT and MR perfusion analysis software. All commercial programs had delay-induced errors and/or insufficient correlations with true values, while academic programs for MR showed good correlations with true values. © RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12112618/-/DC1 PMID:23220899

  17. Respiratory motion compensation algorithm of ultrasound hepatic perfusion data acquired in free-breathing

    NASA Astrophysics Data System (ADS)

    Wu, Kaizhi; Zhang, Xuming; Chen, Guangxie; Weng, Fei; Ding, Mingyue

    2013-10-01

    Images acquired in free breathing using contrast-enhanced ultrasound exhibit a periodic motion that needs to be compensated for if accurate quantification of hepatic perfusion is to be performed. In this work, we present an algorithm that compensates for respiratory motion by effectively combining principal component analysis (PCA) and block matching. The respiratory kinetics of the ultrasound hepatic perfusion image sequences were first extracted using the PCA method. Then, the optimal phase of the obtained respiratory kinetics was detected after normalizing the motion amplitude and determining the image subsequences of the original image sequences. The image subsequences were registered by the block matching method using cross-correlation as the similarity measure. Finally, the motion-compensated contrast images were acquired by using the position mapping, and the algorithm was evaluated by comparing the TICs extracted from the original image sequences and the compensated image subsequences. Quantitative comparisons demonstrated that the average fitting error estimated over the ROIs (regions of interest) was reduced from 10.9278 ± 6.2756 to 5.1644 ± 3.3431 after compensation.
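
    A minimal block-matching sketch using normalized cross-correlation as the similarity measure, of the kind used to register the image subsequences above (Python/NumPy; the search radius and data layout are assumptions made for illustration):

```python
import numpy as np

def block_match(ref_block, frame, center, search_radius=10):
    """Find the displacement of ref_block inside `frame` around `center`
    that maximises the normalised cross-correlation."""
    bh, bw = ref_block.shape
    cy, cx = center
    t = ref_block - ref_block.mean()
    t_norm = np.linalg.norm(t) + 1e-12
    best_corr, best_shift = -1.0, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y0, x0 = cy + dy, cx + dx
            if y0 < 0 or x0 < 0:
                continue                      # window would leave the frame
            cand = frame[y0:y0 + bh, x0:x0 + bw]
            if cand.shape != ref_block.shape:
                continue
            c = cand - cand.mean()
            corr = float((t * c).sum()) / (t_norm * (np.linalg.norm(c) + 1e-12))
            if corr > best_corr:
                best_corr, best_shift = corr, (dy, dx)
    return best_shift, best_corr
```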

  18. Single image super resolution algorithm based on edge interpolation in NSCT domain

    NASA Astrophysics Data System (ADS)

    Zhang, Mengqun; Zhang, Wei; He, Xinyu

    2017-11-01

    In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by the NSCT, and the directional sub-band coefficients of the transform domain are obtained. According to the scale factor, the high frequency sub-band coefficients are amplified to the desired resolution by an interpolation method based on the edge direction. For high frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to calculate the threshold value. Coefficients below the threshold are examined using the correlation among sub-bands of the same scale to determine whether they represent noise, and are de-noised accordingly. An anisotropic diffusion filter is used to effectively enhance weak targets in low-contrast regions of the target and background. Finally, the low frequency sub-band is amplified to the desired resolution by bilinear interpolation and combined with the high frequency sub-band coefficients after de-noising and small target enhancement; the NSCT inverse transform is then used to obtain the desired resolution image. In order to verify the effectiveness of the proposed algorithm, the proposed algorithm and several common image reconstruction methods were used to test synthetic images, motion-blurred images and hyperspectral images. The experimental results show that, compared with traditional single-frame algorithms, the proposed algorithm obtains smooth edges and good texture features; the reconstructed image structure is well preserved and the noise is suppressed to some extent.

  19. A fully automated non-external marker 4D-CT sorting algorithm using a serial cine scanning protocol.

    PubMed

    Carnes, Greg; Gaede, Stewart; Yu, Edward; Van Dyk, Jake; Battista, Jerry; Lee, Ting-Yim

    2009-04-07

    Current 4D-CT methods require external marker data to retrospectively sort image data and generate CT volumes. In this work we develop an automated 4D-CT sorting algorithm that performs without the aid of data collected from an external respiratory surrogate. The sorting algorithm requires an overlapping cine scan protocol. The overlapping protocol provides a spatial link between couch positions. Beginning with a starting scan position, images from the adjacent scan position (which spatially overlap the starting scan position) are selected by maximizing the normalized cross correlation (NCC) of the images at the overlapping slice position. The process is continued by 'daisy chaining' all couch positions using the selected images until an entire 3D volume is produced. The algorithm produced 16 phase volumes to complete a 4D-CT dataset. Additional 4D-CT datasets were also produced using external marker amplitude and phase angle sorting methods. The image quality of the volumes produced by the different methods was quantified by calculating the mean difference of the sorted overlapping slices from adjacent couch positions. The NCC-sorted images showed a significant decrease in the mean difference (p < 0.01) for the five patients.
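
    The 'daisy chaining' step can be sketched as follows (Python/NumPy): for each couch position, the candidate cine image whose overlapping slice has the highest NCC with the previously selected image is chosen. The data layout (a list of candidate overlap slices per couch position) is an assumption made only for illustration.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized slices."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def daisy_chain(start_slice, cine_sets):
    """Walk through the couch positions, picking at each one the cine image
    whose overlapping slice best matches the previously selected image."""
    chain, current = [], start_slice
    for candidates in cine_sets:          # one list of overlap slices per couch position
        scores = [ncc(current, cand) for cand in candidates]
        best = int(np.argmax(scores))
        chain.append(best)
        current = candidates[best]
    return chain                          # indices of the selected cine images
```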

  20. Multiple Hypothesis Correlation for Space Situational Awareness

    DTIC Science & Technology

    2011-08-29

    formulations with anti-aliasing through hybrid approaches such as the Drizzle algorithm [43] all the way up through to image superresolution techniques. Most... superresolution techniques. Second, given a set of images, either directly from the sensor or preprocessed using the above techniques, we showed how

  1. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm that minimizes atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, the image contrast is initially examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably in time. A few examples using the proposed procedure are presented.

  2. PCA-LBG-based algorithms for VQ codebook generation

    NASA Astrophysics Data System (ADS)

    Tsai, Jinn-Tsong; Yang, Po-Yuan

    2015-04-01

    Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm then finds a codebook starting from the initial codebook of representative vectors supplied by the PCA. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithms are expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
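
    A minimal sketch of the PCA-based codebook initialization described above (Python/NumPy): training vectors are ordered by their projection onto the first principal component, split into groups, and one representative per group (median, centroid, or random, following the three variants named in the abstract) forms the initial codebook. The subsequent LBG refinement is omitted.

```python
import numpy as np

def pca_initial_codebook(train_vectors, n_codewords, mode="centroid"):
    """Build an initial VQ codebook by grouping training vectors along the
    first principal component and taking one representative per group."""
    X = train_vectors - train_vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)   # principal directions
    proj = X @ vt[0]                                   # first-PC projections
    order = np.argsort(proj)
    groups = np.array_split(order, n_codewords)
    codebook = []
    for g in groups:
        vecs = train_vectors[g]
        if mode == "centroid":
            codebook.append(vecs.mean(axis=0))
        elif mode == "median":
            codebook.append(vecs[len(g) // 2])          # median by projected value
        else:                                           # "random"
            codebook.append(vecs[np.random.randint(len(g))])
    return np.asarray(codebook)
```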

  3. A comparison of neural network and fuzzy clustering techniques in segmenting magnetic resonance images of the brain

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bensaid, Amine M.; Clarke, Laurence P.; Velthuizen, Robert P.; Silbiger, Martin S.; Bezdek, James C.

    1992-01-01

    Magnetic resonance (MR) brain section images are segmented and then synthetically colored to give visual representations of the original data with three approaches: the literal and approximate fuzzy c-means unsupervised clustering algorithms and a supervised computational neural network, a dynamic multilayered perceptron trained with the cascade correlation learning algorithm. Initial clinical results are presented on both normal volunteers and selected patients with brain tumors surrounded by edema. Supervised and unsupervised segmentation techniques provide broadly similar results. Unsupervised fuzzy algorithms were visually observed to show better segmentation when compared with raw image data for volunteer studies. However, for a more complex segmentation problem with tumor/edema or cerebrospinal fluid boundary, where the tissues have similar MR relaxation behavior, inconsistency in rating among experts was observed.

  4. Relation between brain architecture and mathematical ability in children: a DBM study.

    PubMed

    Han, Zhaoying; Davis, Nicole; Fuchs, Lynn; Anderson, Adam W; Gore, John C; Dawant, Benoit M

    2013-12-01

    Population-based studies indicate that between 5 and 9 percent of US children exhibit significant deficits in mathematical reasoning, yet little is understood about the brain morphological features related to mathematical performances. In this work, deformation-based morphometry (DBM) analyses have been performed on magnetic resonance images of the brains of 79 third graders to investigate whether there is a correlation between brain morphological features and mathematical proficiency. Group comparison was also performed between Math Difficulties (MD-worst math performers) and Normal Controls (NC), where each subgroup consists of 20 age and gender matched subjects. DBM analysis is based on the analysis of the deformation fields generated by non-rigid registration algorithms, which warp the individual volumes to a common space. To evaluate the effect of registration algorithms on DBM results, five nonrigid registration algorithms have been used: (1) the Adaptive Bases Algorithm (ABA); (2) the Image Registration Toolkit (IRTK); (3) the FSL Nonlinear Image Registration Tool; (4) the Automatic Registration Tool (ART); and (5) the normalization algorithm available in SPM8. The deformation field magnitude (DFM) was used to measure the displacement at each voxel, and the Jacobian determinant (JAC) was used to quantify local volumetric changes. Results show there are no statistically significant volumetric differences between the NC and the MD groups using JAC. However, DBM analysis using DFM found statistically significant anatomical variations between the two groups around the left occipital-temporal cortex, left orbital-frontal cortex, and right insular cortex. Regions of agreement between at least two algorithms based on voxel-wise analysis were used to define Regions of Interest (ROIs) to perform an ROI-based correlation analysis on all 79 volumes. Correlations between average DFM values and standard mathematical scores over these regions were found to be significant. We also found that the choice of registration algorithm has an impact on DBM-based results, so we recommend using more than one algorithm when conducting DBM studies. To the best of our knowledge, this is the first study that uses DBM to investigate brain anatomical features related to mathematical performance in a relatively large population of children. © 2013.

  5. Image correlation microscopy for uniform illumination.

    PubMed

    Gaborski, T R; Sealander, M N; Ehrenberg, M; Waugh, R E; McGrath, J L

    2010-01-01

    Image cross-correlation microscopy is a technique that quantifies the motion of fluorescent features in an image by measuring the temporal autocorrelation function decay in a time-lapse image sequence. Image cross-correlation microscopy has traditionally employed laser-scanning microscopes because the technique emerged as an extension of laser-based fluorescence correlation spectroscopy. In this work, we show that image correlation can also be used to measure fluorescence dynamics in uniform illumination or wide-field imaging systems and we call our new approach uniform illumination image correlation microscopy. Wide-field microscopy is not only a simpler, less expensive imaging modality, but it offers the capability of greater temporal resolution over laser-scanning systems. In traditional laser-scanning image cross-correlation microscopy, lateral mobility is calculated from the temporal de-correlation of an image, where the characteristic length is the illuminating laser beam width. In wide-field microscopy, the diffusion length is defined by the feature size using the spatial autocorrelation function. Correlation function decay in time occurs as an object diffuses from its original position. We show that theoretical and simulated comparisons between Gaussian and uniform features indicate the temporal autocorrelation function depends strongly on particle size and not particle shape. In this report, we establish the relationships between the spatial autocorrelation function feature size, temporal autocorrelation function characteristic time and the diffusion coefficient for uniform illumination image correlation microscopy using analytical, Monte Carlo and experimental validation with particle tracking algorithms. Additionally, we demonstrate uniform illumination image correlation microscopy analysis of adhesion molecule domain aggregation and diffusion on the surface of human neutrophils.
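
    A minimal form of the temporal autocorrelation function used in uniform illumination image correlation microscopy can be written as G(τ) = ⟨δI(t)·δI(t+τ)⟩ / ⟨I⟩². The sketch below (Python/NumPy, assuming a (time, y, x) image stack) computes this decay curve, whose characteristic time is related to the diffusion coefficient through the feature size discussed above.

```python
import numpy as np

def temporal_acf(stack):
    """Temporal autocorrelation of a time-lapse image stack:
    G(tau) = <dI(t) * dI(t + tau)> / <I>^2, averaged over pixels and frame
    pairs.  The decay of G with tau reflects how fast features move."""
    mean_img = stack.mean(axis=0)
    d = stack - mean_img                       # intensity fluctuations per frame
    norm = float((mean_img ** 2).mean())
    n = stack.shape[0]
    taus = np.arange(1, n)
    g = np.array([(d[:-tau] * d[tau:]).mean() / norm for tau in taus])
    return taus, g
```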

  6. On-line determination of pork color and intramuscular fat by computer vision

    NASA Astrophysics Data System (ADS)

    Liao, Yi-Tao; Fan, Yu-Xia; Wu, Xue-Qian; Xie, Li-juan; Cheng, Fang

    2010-04-01

    In this study, the potential of computer vision for on-line determination of CIE L*a*b* color and intramuscular fat (IMF) content of pork was evaluated. Images of pork chops from 211 pig carcasses were captured while the samples were on a conveyor belt moving at 0.25 m·s⁻¹ to simulate the on-line environment. CIE L*a*b* and IMF content were measured with a colorimeter and by chemical extraction as references. The KSW algorithm combined with region selection was employed to eliminate the fat surrounding the longissimus dorsi muscle (MLD). RGB values of the pork were counted, and five methods were applied for transforming RGB values to CIE L*a*b* values. The region growing algorithm with multiple seed points was applied to mask out the IMF pixels within the intensity-corrected images. The performance of the proposed algorithms was verified by comparing the measured reference values and the quality characteristics obtained by image processing. The MLD region of six samples could not be identified using the KSW algorithm. Intensity nonuniformity of the pork surface in the image could be eliminated efficiently, although the IMF region of three corrected images failed to be extracted. Given the considerable variety of color and complexity of the pork surface, the CIE L*, a* and b* color of the MLD could be predicted with correlation coefficients of 0.84, 0.54 and 0.47 respectively, and IMF content could be determined with a correlation coefficient of more than 0.70. The study demonstrated that it is feasible to evaluate CIE L*a*b* values and IMF content on-line using computer vision.

  7. Evaluating the portability of satellite derived chlorophyll-a algorithms for temperate inland lakes using airborne hyperspectral imagery and dense surface observations.

    PubMed

    Johansen, Richard; Beck, Richard; Nowosad, Jakub; Nietch, Christopher; Xu, Min; Shu, Song; Yang, Bo; Liu, Hongxing; Emery, Erich; Reif, Molly; Harwood, Joseph; Young, Jade; Macke, Dana; Martin, Mark; Stillings, Garrett; Stumpf, Richard; Su, Haibin

    2018-06-01

    This study evaluated the performances of twenty-nine algorithms that use satellite-based spectral imager data to derive estimates of chlorophyll-a concentrations that, in turn, can be used as an indicator of the general status of algal cell densities and the potential for a harmful algal bloom (HAB). The performance assessment was based on making relative comparisons between two temperate inland lakes: Harsha Lake (7.99 km²) in Southwest Ohio and Taylorsville Lake (11.88 km²) in central Kentucky. Of interest was identifying algorithm-imager combinations that had high correlation with coincident chlorophyll-a surface observations for both lakes, as this suggests portability for regional HAB monitoring. The spectral data utilized to estimate surface water chlorophyll-a concentrations were derived from the airborne Compact Airborne Spectral Imager (CASI) 1500 hyperspectral imager, which was then used to derive synthetic versions of currently operational satellite-based imagers using spatial resampling and spectral binning. The synthetic data mimic the configurations of spectral imagers on current satellites in earth's orbit, including WorldView-2/3, Sentinel-2, Landsat-8, Moderate-resolution Imaging Spectroradiometer (MODIS), and Medium Resolution Imaging Spectrometer (MERIS). High correlations were found between the direct measurement and the imagery-estimated chlorophyll-a concentrations at both lakes. The results determined that eleven out of the twenty-nine algorithms were considered portable, with r² values greater than 0.5 for both lakes. Even though the two lakes are different in terms of background water quality, size and shape, with Taylorsville being generally less impaired, larger, but much narrower throughout, the results support the portability of utilizing a suite of certain algorithms across multiple sensors to detect potential algal blooms through the use of chlorophyll-a as a proxy. Furthermore, the strong performance of the Sentinel-2 algorithms is exceptionally promising, due to the recent launch of the second satellite in the constellation, which will provide higher temporal resolution for temperate inland water bodies. Additionally, scripts were written for the open-source statistical software R that automate much of the spectral data processing steps. This allows for the simultaneous consideration of numerous algorithms across multiple imagers over an expedited time frame for the near real-time monitoring required for detecting algal blooms and mitigating their adverse impacts. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.

    PubMed

    Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q

    2010-10-01

    Due to the novel fluid optics, unique image processing challenges are presented by the fluidic lens camera system. Developed for surgical applications, unique properties, such as no moving parts while zooming and better miniaturization than traditional glass optics, are advantages of the fluid lens. Despite these abilities, sharp color planes and blurred color planes are created by the nonuniform reaction of the liquid lens to different color wavelengths. Severe axial color aberrations are caused by this reaction. In order to deblur color images without estimating a point spread function, a contourlet filter bank system is proposed. Information from sharp color planes is used by this multiband deblurring method to improve blurred color planes. Compared to traditional Lucy-Richardson and Wiener deconvolution algorithms, significantly improved sharpness and reduced ghosting artifacts are produced by a previous wavelet-based method. Directional filtering is used by the proposed contourlet-based system to adjust to the contours of the image. An image is produced by the proposed method which has a similar level of sharpness to the previous wavelet-based method and has fewer ghosting artifacts. Conditions for when this algorithm will reduce the mean squared error are analyzed. While improving the blue color plane by using information from the green color plane is the primary focus of this paper, these methods could be adjusted to improve the red color plane. Many multiband systems such as global mapping, infrared imaging, and computer assisted surgery are natural extensions of this work. This information sharing algorithm is beneficial to any image set with high edge correlation. Improved results in the areas of deblurring, noise reduction, and resolution enhancement can be produced by the proposed algorithm.

  9. Fully Automated Quantitative Estimation of Volumetric Breast Density from Digital Breast Tomosynthesis Images: Preliminary Results and Comparison with Digital Mammography and MR Imaging

    PubMed Central

    Pertuz, Said; McDonald, Elizabeth S.; Weinstein, Susan P.; Conant, Emily F.

    2016-01-01

    Purpose To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Materials and Methods Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board–approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration–cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Results Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging–based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Conclusion Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment. © RSNA, 2015 Online supplemental material is available for this article. PMID:26491909

  10. A novel image encryption algorithm based on chaos maps with Markov properties

    NASA Astrophysics Data System (ADS)

    Liu, Quan; Li, Pei-yue; Zhang, Ming-chao; Sui, Yong-xin; Yang, Huai-jiang

    2015-02-01

    In order to construct a high-complexity, secure and low-cost image encryption algorithm, a class of chaotic maps with Markov properties was studied and such an algorithm is proposed. This kind of chaos has higher complexity than the logistic map and the tent map, while retaining uniformity and low autocorrelation. An improved coupled map lattice based on the chaos with Markov properties is also employed to cover the phase space of the chaos and enlarge the key space, and has better performance than the original one. A novel image encryption algorithm is constructed on the new coupled map lattice, which is used as a key stream generator. A true random number is used to disturb the key, which can dynamically change the permutation matrix and the key stream. Experiments show that the key stream can pass the SP800-22 test. The novel image encryption can resist chosen-plaintext, chosen-ciphertext and differential attacks. The algorithm is sensitive to the initial key and can change the distribution of the pixel values of the image. The correlation between adjacent pixels can also be eliminated. When compared with an algorithm based on the logistic map, it has higher complexity and better uniformity, and is closer to a true random number generator. It is also efficient to implement, which shows its value for common use.

  11. Online hyperspectral imaging system for evaluating quality of agricultural products

    NASA Astrophysics Data System (ADS)

    Mo, Changyeun; Kim, Giyoung; Lim, Jongguk

    2017-06-01

    The consumption of fresh-cut agricultural produce in Korea has been growing. The browning of fresh-cut vegetables that occurs during storage, and foreign substances such as worms and slugs, are some of the main causes of consumers' concerns with respect to safety and hygiene. The purpose of this study is to develop an on-line system for evaluating the quality of agricultural products using hyperspectral imaging technology. An on-line evaluation system with a single visible-near-infrared hyperspectral camera covering the range of 400 nm to 1000 nm was designed that can assess the quality of both surfaces of agricultural products such as fresh-cut lettuce. Algorithms to detect surface browning were developed for this system. The optimal wavebands for discriminating between browning and sound lettuce, as well as between browning lettuce and the conveyor belt, were investigated using correlation analysis and the one-way analysis of variance method. Imaging algorithms to discriminate browning lettuce were developed using the optimal wavebands. The ratio image (RI) algorithm of the 533 nm and 697 nm images (RI533/697) for the abaxial surface, and the ratio image algorithm (RI533/697) and subtraction image (SI) algorithm (SI538-697) for the adaxial surface, had the highest classification accuracies. The classification accuracy for browning and sound lettuce was 100.0% and above 96.0%, respectively, for both surfaces. The overall results show that the on-line hyperspectral imaging system could potentially be used to assess the quality of agricultural products.
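
    The band-ratio rule described above can be sketched as follows (Python/NumPy); the threshold value and the hypercube layout are assumptions, chosen only to illustrate the RI533/697 idea:

```python
import numpy as np

def classify_browning(cube, wavelengths, thresh, band_a=533, band_b=697):
    """Ratio-image rule: R = I(533 nm) / I(697 nm), thresholded to separate
    browning from sound tissue.  `cube` is assumed to be (y, x, bands)."""
    ia = int(np.argmin(np.abs(np.asarray(wavelengths) - band_a)))
    ib = int(np.argmin(np.abs(np.asarray(wavelengths) - band_b)))
    ratio = cube[..., ia] / (cube[..., ib] + 1e-9)   # avoid divide-by-zero
    return ratio > thresh                            # boolean browning mask
```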

  12. Optimalisation of remote sensing algorithm in mapping of chlorophyl-a concentration at Pasuruan coastal based on surface reflectance images of Aqua Modis

    NASA Astrophysics Data System (ADS)

    Wibisana, H.; Zainab, S.; Dara K., A.

    2018-01-01

    Chlorophyll-a is one of the parameters used to detect the presence of fish populations, as well as one of the parameters used to characterize water quality. Chlorophyll concentrations have been investigated extensively, including chlorophyll-a mapping with remote sensing satellites. Mapping of chlorophyll concentration is used to obtain an optimal picture of the condition of waters that are often used as fishing grounds by fishermen. Remote sensing is a technological breakthrough for monitoring the condition of waters over broad areas, and to obtain a complete picture of the aquatic conditions an algorithm is needed that can estimate the chlorophyll concentration at points scattered across the capture-fisheries research area. Remote sensing algorithms have been widely used to detect chlorophyll content; the channels suited to mapping chlorophyll-a concentrations from Landsat 8 images are bands 4, 3 and 2. With several channels of Landsat-8 satellite imagery available for chlorophyll detection, an optimal algorithm can be formulated to obtain the best estimates of chlorophyll-a concentration in the research area. From these calculations, a suitable algorithm for the conditions at the Pasuruan coast was identified, in which the green channel gives a reasonably good correlation of R² = 0.853 for a linear chlorophyll-a algorithm (in mg/m³) with coefficients 0.093 and −3.7049. From this result it can be concluded that the green channel correlates well with the chlorophyll concentration distributed along the coast of Pasuruan.
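
    A sketch of how such a single-band linear chlorophyll-a algorithm and its R² might be fitted from matched reflectance and in-situ measurements (Python/NumPy; variable names are hypothetical and the generic least-squares fit shown is not the exact procedure of the paper):

```python
import numpy as np

def fit_band_algorithm(reflectance, chl_in_situ):
    """Least-squares fit of chl = a * R + b and its coefficient of
    determination R^2, the figure of merit quoted for the green channel."""
    A = np.vstack([reflectance, np.ones_like(reflectance)]).T
    (a, b), *_ = np.linalg.lstsq(A, chl_in_situ, rcond=None)
    pred = a * reflectance + b
    ss_res = np.sum((chl_in_situ - pred) ** 2)
    ss_tot = np.sum((chl_in_situ - chl_in_situ.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot
```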

  13. Evaluation of a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography of scaphoid fixation screws.

    PubMed

    Filli, Lukas; Marcon, Magda; Scholz, Bernhard; Calcagni, Maurizio; Finkenstädt, Tim; Andreisek, Gustav; Guggenberger, Roman

    2014-12-01

    The aim of this study was to evaluate a prototype correction algorithm to reduce metal artefacts in flat detector computed tomography (FDCT) of scaphoid fixation screws. FDCT has gained interest in imaging small anatomic structures of the appendicular skeleton. Angiographic C-arm systems with flat detectors allow fluoroscopy and FDCT imaging in a one-stop procedure emphasizing their role as an ideal intraoperative imaging tool. However, FDCT imaging can be significantly impaired by artefacts induced by fixation screws. Following ethical board approval, commercially available scaphoid fixation screws were inserted into six cadaveric specimens in order to fix artificially induced scaphoid fractures. FDCT images corrected with the algorithm were compared to uncorrected images both quantitatively and qualitatively by two independent radiologists in terms of artefacts, screw contour, fracture line visibility, bone visibility, and soft tissue definition. Normal distribution of variables was evaluated using the Kolmogorov-Smirnov test. In case of normal distribution, quantitative variables were compared using paired Student's t tests. The Wilcoxon signed-rank test was used for quantitative variables without normal distribution and all qualitative variables. A p value of < 0.05 was considered to indicate statistically significant differences. Metal artefacts were significantly reduced by the correction algorithm (p < 0.001), and the fracture line was more clearly defined (p < 0.01). The inter-observer reliability was "almost perfect" (intra-class correlation coefficient 0.85, p < 0.001). The prototype correction algorithm in FDCT for metal artefacts induced by scaphoid fixation screws may facilitate intra- and postoperative follow-up imaging. Flat detector computed tomography (FDCT) is a helpful imaging tool for scaphoid fixation. The correction algorithm significantly reduces artefacts in FDCT induced by scaphoid fixation screws. This may facilitate intra- and postoperative follow-up imaging.

  14. 78 FR 20667 - Government-Owned Inventions; Availability for Licensing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-05

    ..., et al. Visualization of biological texture using correlation coefficient images. J Biomed Opt. 2006.... Development Stage: Early-stage In vitro data available Inventors: Paolo Lusso and David J. Auerbach (NIAID... algorithms to visualize regions of statistical similarity in the image have been developed. Though the...

  15. Optical multiple-image authentication based on cascaded phase filtering structure

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Alfalou, A.; Brosseau, C.

    2016-10-01

    In this study, we report on recent developments in optical image authentication algorithms. Compared with conventional optical encryption, optical image authentication achieves greater security because such methods do not need to fully recover the plaintext during the decryption period. Several recently proposed authentication systems are briefly introduced. We also propose a novel multiple-image authentication system, in which multiple original images are encoded into a photon-limited encoded image by using a triple-plane based phase retrieval algorithm and the photon counting imaging (PCI) technique. One can only recover a noise-like image using the correct keys. To verify the authenticity of the multiple images, a nonlinear fractional correlation is employed to recognize the original information hidden in the decrypted results. The proposal can be implemented optically using a cascaded phase filtering configuration. Computer simulation results are presented to evaluate the performance of this proposal and its effectiveness.

  16. Improvement in Visual Target Tracking for a Mobile Robot

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Madison, Richard

    2006-01-01

    In an improvement of the visual-target-tracking software used aboard a mobile robot (rover) of the type used to explore the Martian surface, an affine-matching algorithm has been replaced by a combination of a normalized- cross-correlation (NCC) algorithm and a template-image-magnification algorithm. Although neither NCC nor template-image magnification is new, the use of both of them to increase the degree of reliability with which features can be matched is new. In operation, a template image of a target is obtained from a previous rover position, then the magnification of the template image is based on the estimated change in the target distance from the previous rover position to the current rover position (see figure). For this purpose, the target distance at the previous rover position is determined by stereoscopy, while the target distance at the current rover position is calculated from an estimate of the current pose of the rover. The template image is then magnified by an amount corresponding to the estimated target distance to obtain a best template image to match with the image acquired at the current rover position.
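
    A compact sketch of the combined template-magnification and NCC matching step is shown below (Python with OpenCV; the distance inputs are assumed to come from stereo ranging and pose estimation as described, and this is an illustration of the idea rather than the flight software):

```python
import cv2

def magnify_and_match(template, image, dist_prev, dist_now):
    """Scale the stored template by the ratio of the previous to the current
    target distance, then locate it by normalised cross-correlation."""
    scale = dist_prev / dist_now              # the target looks larger when closer
    t = cv2.resize(template, None, fx=scale, fy=scale,
                   interpolation=cv2.INTER_LINEAR)
    result = cv2.matchTemplate(image, t, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val                   # (x, y) of the best match and its score
```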

  17. A method for fast automated microscope image stitching.

    PubMed

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology to produce a panorama or larger image by combining several images with overlapped areas. In many biomedical research settings, image stitching is highly desirable to acquire a panoramic image representing large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal-light microscope image stitching algorithm based on feature extraction. First, a scale-space reconstruction of the speeded-up robust features (SURF) algorithm was proposed to extract features from the images to be stitched in a short time and with higher repeatability. Second, the histogram equalization (HE) method was employed to preprocess the images, enhancing their contrast so that more features could be extracted. Third, the rough overlapping zones of the preprocessed images were calculated by phase correlation, and the improved SURF was used to extract image features in the rough overlapping areas. Fourth, the features were matched, the transformation parameters were estimated, and the images were blended seamlessly. Finally, this procedure was applied to stitch normal-light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation and zoom of the images and that our method is able to stitch microscope images automatically with high precision and high speed. The method proposed in this paper is also applicable to registration and stitching of common images, as well as stitching microscope images in the field of virtual microscopy for the purposes of observing, exchanging, saving, and establishing a database of microscope images. Copyright © 2013 Elsevier Ltd. All rights reserved.
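
    The phase-correlation step used to find the rough overlapping zone can be sketched as follows (Python/NumPy; equal-sized input images are assumed):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the translation between two equally sized images from the
    peak of the inverse FFT of the normalised cross-power spectrum."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative offsets
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```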

  18. Application of 3D-MR image registration to monitor diseases around the knee joint.

    PubMed

    Takao, Masaki; Sugano, Nobuhiko; Nishii, Takashi; Miki, Hidenobu; Koyama, Tsuyoshi; Masumoto, Jun; Sato, Yoshinobu; Tamura, Shinichi; Yoshikawa, Hideki

    2005-11-01

    To estimate the accuracy and consistency of a method using a voxel-based MR image registration algorithm for precise monitoring of knee joint diseases. Rigid body transformation was calculated using a normalized cross-correlation (NCC) algorithm involving simple manual segmentation of the bone region based on its anatomical features. The accuracy of registration was evaluated using four phantoms, followed by a consistency test using MR data from 11 patients with knee joint disease. The registration accuracy in the phantom experiment was 0.49 ± 0.19 mm (SD) for the femur and 0.56 ± 0.21 mm (SD) for the tibia. The consistency value in the experiment using clinical data was 0.69 ± 0.25 mm (SD) for the femur and 0.77 ± 0.37 mm (SD) for the tibia. These values were all smaller than a voxel (1.25 x 1.25 x 1.5 mm). The present method based on an NCC algorithm can be used to register serial MR images of the knee joint with error on the order of a sub-voxel. This method would be useful for precisely assessing therapeutic response and monitoring knee joint diseases. J. Magn. Reson. Imaging 2005. (c) 2005 Wiley-Liss, Inc.

  19. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    NASA Astrophysics Data System (ADS)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on the synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and its security analysis is performed. In both the transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system, and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and the random bit stream are used in the encryption algorithm to change the positions and the values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and thus is an excellent candidate for secure image communication applications.

  20. Alignment of cryo-EM movies of individual particles by optimization of image translations.

    PubMed

    Rubinstein, John L; Brubaker, Marcus A

    2015-11-01

    Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM). In addition to an improved camera detective quantum efficiency, acquisition of DDD movies allows for correction of movement of the specimen, due to both instabilities in the microscope specimen stage and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement for large regions of frames and for >1 MDa protein particles. Another algorithm allows individual <1 MDa protein particle trajectories to be estimated, but requires rolling averages to be calculated from frames and fits linear trajectories for particles. Here we describe an algorithm that allows for individual <1 MDa particle images to be aligned without frame averaging or linear trajectories. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. To smooth estimates of particle trajectories, rapid changes in particle positions between frames are penalized in the objective function and weighted averaging of nearby trajectories ensures local correlation in trajectories. This individual particle motion correction, in combination with weighting of Fourier components to account for increasing radiation damage in later frames, can be used to improve 3-D maps from single particle cryo-EM. Copyright © 2015 Elsevier Inc. All rights reserved.
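
    A greatly simplified, integer-pixel sketch of the idea of aligning each frame against the sum of the shifted frames is shown below (Python/NumPy). The published algorithm instead optimizes a single smooth objective with analytically calculated derivatives, trajectory-smoothness penalties, and weighted averaging of nearby trajectories, none of which are reproduced here.

```python
import numpy as np

def align_frames(frames, n_iter=5):
    """Greedy movie-frame alignment: each frame is repeatedly shifted to
    maximise its circular cross-correlation with the sum of the other
    shifted frames (leave-one-out reference)."""
    shifts = [(0, 0)] * len(frames)
    shifted = [f.astype(np.float64).copy() for f in frames]
    for _ in range(n_iter):
        total = np.sum(shifted, axis=0)
        for i, f in enumerate(frames):
            ref = total - shifted[i]
            corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(f))).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            if dy > f.shape[0] // 2:
                dy -= f.shape[0]
            if dx > f.shape[1] // 2:
                dx -= f.shape[1]
            shifts[i] = (int(dy), int(dx))
            shifted[i] = np.roll(f, (dy, dx), axis=(0, 1))
    return shifts, np.sum(shifted, axis=0)
```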

  1. Penalized weighted least-squares approach for low-dose x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-03-01

    The noise of a low-dose computed tomography (CT) sinogram follows approximately a Gaussian distribution with a nonlinear dependence between the sample mean and variance. The noise is statistically uncorrelated among detector bins at any view angle. However, the correlation coefficient matrix of the data signal indicates a strong signal correlation among neighboring views. Based on these observations, the Karhunen-Loeve (KL) transform can be used to de-correlate the signal among neighboring views. In each KL component, a penalized weighted least-squares (PWLS) objective function can be constructed and an optimal sinogram can be estimated by minimizing the objective function, followed by filtered backprojection (FBP) for CT image reconstruction. In this work, we compared the KL-PWLS method with an iterative image reconstruction algorithm, which uses Gauss-Seidel iteration to minimize the PWLS objective function in the image domain. We also compared KL-PWLS with an iterative sinogram smoothing algorithm, which uses the iterated conditional mode calculation to minimize the PWLS objective function in sinogram space, followed by FBP for image reconstruction. Phantom experiments show comparable performance of these three PWLS methods in suppressing noise-induced artifacts and preserving resolution in reconstructed images. Computer simulation concurs with the phantom experiments in terms of the noise-resolution tradeoff and detectability in a low contrast environment. The KL-PWLS noise reduction may have a computational advantage for low-dose CT imaging, especially for dynamic high-resolution studies.
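
    The following sketch shows, under simplifying assumptions, what a PWLS smoothing step on a single (KL-transformed) signal might look like: a data-fidelity term weighted by inverse noise variance plus a quadratic roughness penalty, minimized by gradient descent. It is illustrative only and not the authors' implementation.

    ```python
    import numpy as np

    def pwls_smooth(y, variance, beta=0.1, iters=200, step=0.2):
        """y: noisy 1-D signal (one KL component); variance: per-sample noise variance."""
        w = 1.0 / np.maximum(variance, 1e-12)   # statistical weights
        p = y.copy()
        for _ in range(iters):
            grad = w * (p - y)                  # gradient of the weighted data-fidelity term
            rough = np.zeros_like(p)            # gradient of sum (p[i] - p[i+1])^2
            rough[:-1] += p[:-1] - p[1:]
            rough[1:] += p[1:] - p[:-1]
            p -= step * (grad + beta * rough)
        return p
    ```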

  2. Accelerating the reconstruction of magnetic resonance imaging by three-dimensional dual-dictionary learning using CUDA.

    PubMed

    Jiansen Li; Jianqi Sun; Ying Song; Yanran Xu; Jun Zhao

    2014-01-01

    An effective way to improve the data acquisition speed of magnetic resonance imaging (MRI) is to use under-sampled k-space data, and dictionary learning can be used to maintain reconstruction quality. A three-dimensional dictionary trains its atoms in the form of blocks, which exploits the spatial correlation among slices. The dual-dictionary learning method includes a low-resolution dictionary and a high-resolution dictionary, for sparse coding and image updating, respectively. However, the amount of data is huge for three-dimensional reconstruction, especially when the number of slices is large, so the procedure is time-consuming. In this paper, we first use the NVIDIA Corporation's compute unified device architecture (CUDA) programming model to design parallel algorithms on the graphics processing unit (GPU) to accelerate the reconstruction procedure. The main optimizations operate in the dictionary learning algorithm and the image updating part, namely the orthogonal matching pursuit (OMP) algorithm and the k-singular value decomposition (K-SVD) algorithm. We then develop another version of the CUDA code with algorithmic optimization. Experimental results show that a speedup of more than 324 times is achieved compared with the CPU-only code when the number of MRI slices is 24.
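
    The orthogonal matching pursuit step that dominates the sparse-coding cost (and is offloaded to the GPU in the paper) can be sketched in NumPy as follows; this serial version is only meant to show the logic that the CUDA kernels parallelize, under the assumption that the dictionary columns are unit-norm atoms.

    ```python
    import numpy as np

    def omp(D, x, n_nonzero):
        """Greedy sparse coding of signal x over dictionary D (columns = atoms)."""
        residual = x.copy()
        support, coeffs = [], None
        for _ in range(n_nonzero):
            k = int(np.argmax(np.abs(D.T @ residual)))    # atom most correlated with residual
            support.append(k)
            Ds = D[:, support]
            coeffs, *_ = np.linalg.lstsq(Ds, x, rcond=None)
            residual = x - Ds @ coeffs                    # orthogonal projection update
        sparse = np.zeros(D.shape[1])
        sparse[support] = coeffs
        return sparse
    ```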

  3. A Comprehensive Review of Pansharpening Algorithms for GÖKTÜRK-2 Satellite Images

    NASA Astrophysics Data System (ADS)

    Kahraman, S.; Ertürk, A.

    2017-11-01

    In this paper, a comprehensive review and performance evaluation of pansharpening algorithms for GÖKTÜRK-2 images is presented. GÖKTÜRK-2 is the first high resolution remote sensing satellite of Turkey that was designed and built in Turkey, by the Ministry of Defence, TUBITAK-UZAY and Turkish Aerospace Industry (TUSAŞ) collectively. GÖKTÜRK-2 was launched on 18 December 2012 from Jiuquan, China, and provides 2.5 meter panchromatic (PAN) and 5 meter multispectral (MS) spatial resolution satellite images. In this study, a large number of pansharpening algorithms are implemented and evaluated for performance on multiple GÖKTÜRK-2 satellite images. Quality assessments are conducted both qualitatively through visual results and quantitatively using Root Mean Square Error (RMSE), Correlation Coefficient (CC), Spectral Angle Mapper (SAM), Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM) and Universal Image Quality Index (UIQI).
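
    Three of the listed indices (CC, SAM and ERGAS) can be computed roughly as below for images stored as (bands, rows, cols) arrays; exact constants and conventions vary between papers, so this is a hedged sketch rather than the evaluation code used in the study.

    ```python
    import numpy as np

    def cc(ref, fused):
        return np.corrcoef(ref.ravel(), fused.ravel())[0, 1]

    def sam(ref, fused, eps=1e-12):
        dot = np.sum(ref * fused, axis=0)
        norms = np.linalg.norm(ref, axis=0) * np.linalg.norm(fused, axis=0) + eps
        return np.degrees(np.mean(np.arccos(np.clip(dot / norms, -1.0, 1.0))))

    def ergas(ref, fused, ratio=0.5):
        # ratio = PAN / MS ground sample distance (2.5 m / 5 m for GOKTURK-2)
        rmse2 = np.mean((ref - fused) ** 2, axis=(1, 2))
        mean2 = np.mean(ref, axis=(1, 2)) ** 2
        return 100.0 * ratio * np.sqrt(np.mean(rmse2 / mean2))
    ```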

  4. A new method of Quickbird own image fusion

    NASA Astrophysics Data System (ADS)

    Han, Ying; Jiang, Hong; Zhang, Xiuying

    2009-10-01

    With the rapid development of remote sensing technology, the means of accessing remote sensing data have become increasingly abundant, so that the same area can yield a large number of multi-temporal image sequences at different resolutions. At present, the main fusion methods are HPF, the IHS transform, PCA, Brovey, the Mallat algorithm, wavelet transforms and so on. The IHS transform introduces serious spectral distortion, while the Mallat algorithm omits the low-frequency information of the high spatial resolution image, so its fusion results show obvious blocking effects. Wavelet multi-scale decomposition can achieve very good results for details and edges of different sizes and directions, but different fusion rules and algorithms achieve different effects. This article takes Quickbird own-image fusion as an example, comparing fusion based on the wavelet transform and HVS with fusion based on the wavelet transform and IHS. The results show that the former performs better. This paper uses the correlation coefficient, the relative average spectral error index and other common indices to evaluate image quality.

  5. Assessing the effectiveness of Landsat 8 chlorophyll a retrieval algorithms for regional freshwater monitoring.

    PubMed

    Boucher, Jonah; Weathers, Kathleen C; Norouzi, Hamid; Steele, Bethel

    2018-06-01

    Predicting algal blooms has become a priority for scientists, municipalities, businesses, and citizens. Remote sensing offers solutions to the spatial and temporal challenges facing existing lake research and monitoring programs that rely primarily on high-investment, in situ measurements. Techniques to remotely measure chlorophyll a (chl a) as a proxy for algal biomass have been limited to specific large water bodies in particular seasons and narrow chl a ranges. Thus, a first step toward prediction of algal blooms is generating regionally robust algorithms using in situ and remote sensing data. This study explores the relationship between in-lake measured chl a data from Maine and New Hampshire, USA lakes and remotely sensed chl a retrieval algorithm outputs. Landsat 8 images were obtained and then processed after the required atmospheric and radiometric corrections. Six previously developed algorithms were tested on a regional scale on 11 scenes from 2013 to 2015 covering 192 lakes. The best performing algorithm across data from both states had a correlation coefficient (R2) of 0.16 with P ≤ 0.05 when Landsat 8 images were acquired within 5 d of sampling, and improved to an R2 of 0.25 when data from Maine only were used. The strength of the correlation varied with the specificity of the time window relative to the in situ sampling date, explaining up to 27% of the variation in the data across several scenes. Two previously published algorithms using Landsat 8's Bands 1-4 were best correlated with chl a, and for particular late-summer scenes they accounted for up to 69% of the variation in in situ measurements. A sensitivity analysis revealed that a longer time difference between in situ measurements and the satellite image increased uncertainty in the models, and an effect of the time of year on several indices was demonstrated. A regional model based on the best performing remote sensing algorithm was developed and validated using independent in situ measurements and satellite images. These results suggest that, despite challenges including seasonal effects and low chl a thresholds, remote sensing could be an effective and accessible regional-scale tool for chl a monitoring programs in lakes. © 2018 The Authors. Ecological Applications published by Wiley Periodicals, Inc. on behalf of Ecological Society of America.
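
    Regional chl a algorithms of this kind are typically simple regressions of in situ chl a against band ratios or band combinations. The sketch below fits a hypothetical log-linear band-ratio model; the band choice and functional form are assumptions for illustration, not the algorithms evaluated in the paper.

    ```python
    import numpy as np

    def fit_chla_model(blue, green, chla_insitu):
        """blue, green: per-lake mean reflectances; chla_insitu: measured chl a (ug/L)."""
        x = np.log10(blue / green)               # band-ratio predictor
        y = np.log10(chla_insitu)
        slope, intercept = np.polyfit(x, y, 1)   # log-linear regression
        fitted = slope * x + intercept
        r2 = np.corrcoef(y, fitted)[0, 1] ** 2
        return slope, intercept, 10 ** fitted, r2
    ```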

  6. A Novel Artificial Bee Colony Algorithm Based on Internal-Feedback Strategy for Image Template Matching

    PubMed Central

    Gong, Li-Gang

    2014-01-01

    Image template matching refers to the technique of locating a given reference image over a source image such that they are the most similar. It is a fundamental mission in the field of visual target recognition. In general, there are two critical aspects of a template matching scheme. One is similarity measurement and the other is best-match location search. In this work, we choose the well-known normalized cross correlation model as a similarity criterion. The searching procedure for the best-match location is carried out through an internal-feedback artificial bee colony (IF-ABC) algorithm. The IF-ABC algorithm is highlighted by its effort to fight against premature convergence. This purpose is achieved through discarding the conventional roulette selection procedure in the ABC algorithm so as to provide each employed bee an equal chance to be followed by the onlooker bees in the local search phase. Besides that, we also suggest efficiently utilizing the internal convergence states as feedback guidance for searching intensity in the subsequent cycles of iteration. We have investigated four ideal template matching cases as well as four actual cases using different searching algorithms. Our simulation results show that the IF-ABC algorithm is more effective and robust for this template matching mission than the conventional ABC and two state-of-the-art modified ABC algorithms. PMID:24892107
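
    The similarity criterion itself, normalized cross correlation between the template and a candidate window, is straightforward; a minimal sketch is given below (the IF-ABC search over candidate locations is omitted).

    ```python
    import numpy as np

    def ncc(template, window):
        """Normalized cross correlation between two equally sized patches."""
        t = template - template.mean()
        w = window - window.mean()
        denom = np.sqrt(np.sum(t * t) * np.sum(w * w))
        return float(np.sum(t * w) / denom) if denom > 0 else 0.0

    def score_candidate(source, template, top, left):
        h, w = template.shape
        return ncc(template, source[top:top + h, left:left + w])
    ```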

  7. Multispectral fluorescence imaging for detection of bovine feces on Romaine lettuce and baby spinach leaves

    USDA-ARS?s Scientific Manuscript database

    Hyperspectral fluorescence imaging with ultraviolet-A excitation was used to evaluate the feasibility of two-waveband fluorescence algorithms for the detection of bovine fecal contaminants on the abaxial and adaxial surfaces of Romaine lettuce and baby spinach leaves. Correlation analysis was used t...

  8. Super-resolution for imagery from integrated microgrid polarimeters.

    PubMed

    Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M

    2011-07-04

    Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.

  9. Slice-to-Volume Nonrigid Registration of Histological Sections to MR Images of the Human Brain

    PubMed Central

    Osechinskiy, Sergey; Kruggel, Frithjof

    2011-01-01

    Registration of histological images to three-dimensional imaging modalities is an important step in quantitative analysis of brain structure, in architectonic mapping of the brain, and in investigation of the pathology of a brain disease. Reconstruction of histology volume from serial sections is a well-established procedure, but it does not address registration of individual slices from sparse sections, which is the aim of the slice-to-volume approach. This study presents a flexible framework for intensity-based slice-to-volume nonrigid registration algorithms with a geometric transformation deformation field parametrized by various classes of spline functions: thin-plate splines (TPS), Gaussian elastic body splines (GEBS), or cubic B-splines. Algorithms are applied to cross-modality registration of histological and magnetic resonance images of the human brain. Registration performance is evaluated across a range of optimization algorithms and intensity-based cost functions. For a particular case of histological data, best results are obtained with a TPS three-dimensional (3D) warp, a new unconstrained optimization algorithm (NEWUOA), and a correlation-coefficient-based cost function. PMID:22567290

  10. Image matching for digital close-range stereo photogrammetry based on constraints of Delaunay triangulated network and epipolar-line

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Sheng, Y. H.; Li, Y. Q.; Han, B.; Liang, Ch.; Sha, W.

    2006-10-01

    In the field of digital photogrammetry and computer vision, the determination of conjugate points in a stereo image pair, referred to as "image matching," is the critical step to realize automatic surveying and recognition. Traditional matching methods encounter some problems in digital close-range stereo photogrammetry, because the change of gray-scale or texture is not obvious in close-range stereo images. The main shortcoming of traditional matching methods is that the geometric information of matching points is not fully used, which leads to wrong matching results in regions with poor texture. To fully use the geometric and gray-scale information, a new stereo image matching algorithm is proposed in this paper considering the characteristics of digital close-range photogrammetry. Compared with traditional matching methods, the new algorithm has three improvements on image matching. Firstly, shape factor, fuzzy mathematics and gray-scale projection are introduced into the design of a synthetic matching measure. Secondly, the topological connection relations of matching points in the Delaunay triangulated network and the epipolar line are used to decide the matching order and narrow the search scope for the conjugate point of the matching point. Lastly, the theory of parameter adjustment with constraints is introduced into least squares image matching to carry out subpixel-level matching under the epipolar-line constraint. The new algorithm is applied to actual stereo images of a building taken by a digital close-range photogrammetric system. The experimental results show that the algorithm has a higher matching speed and matching accuracy than a pyramid image matching algorithm based on gray-scale correlation.

  11. An Automatic Assessment System of Diabetic Foot Ulcers Based on Wound Area Determination, Color Segmentation, and Healing Score Evaluation.

    PubMed

    Wang, Lei; Pedersen, Peder C; Strong, Diane M; Tulu, Bengisu; Agu, Emmanuel; Ignotz, Ron; He, Qian

    2015-08-07

    For individuals with type 2 diabetes, foot ulcers represent a significant health issue. The aim of this study is to design and evaluate a wound assessment system to help wound clinics assess patients with foot ulcers in a way that complements their current visual examination and manual measurements of their foot ulcers. The physical components of the system consist of an image capture box, a smartphone for wound image capture and a laptop for analyzing the wound image. The wound image assessment algorithms calculate the overall wound area, color segmented wound areas, and a healing score, to provide a quantitative assessment of the wound healing status both for a single wound image and comparisons of subsequent images to an initial wound image. The system was evaluated by assessing foot ulcers for 12 patients in the Wound Clinic at University of Massachusetts Medical School. As performance measures, the Matthews correlation coefficient (MCC) value for the wound area determination algorithm tested on 32 foot ulcer images was 0.68. The clinical validity of our healing score algorithm relative to the experienced clinicians was measured by Krippendorff's alpha coefficient (KAC) and ranged from 0.42 to 0.81. Our system provides a promising real-time method for wound assessment based on image analysis. Clinical comparisons indicate that the optimized mean-shift-based algorithm is well suited for wound area determination. Clinical evaluation of our healing score algorithm shows its potential to provide clinicians with a quantitative method for evaluating wound healing status. © 2015 Diabetes Technology Society.
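
    The Matthews correlation coefficient used as the performance measure for wound-area determination compares a predicted binary wound mask with a reference mask; a simple NumPy sketch is shown below.

    ```python
    import numpy as np

    def mcc(pred_mask, true_mask):
        """Matthews correlation coefficient between two binary masks."""
        p = pred_mask.astype(bool).ravel()
        t = true_mask.astype(bool).ravel()
        tp = float(np.sum(p & t)); tn = float(np.sum(~p & ~t))
        fp = float(np.sum(p & ~t)); fn = float(np.sum(~p & t))
        denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    ```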

  12. Quantitative 3D high resolution transmission ultrasound tomography: creating clinically relevant images (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wiskin, James; Klock, John; Iuanow, Elaine; Borup, Dave T.; Terry, Robin; Malik, Bilal H.; Lenox, Mark

    2017-03-01

    There has been a great deal of research into ultrasound tomography for breast imaging over the past 35 years. Few successful attempts have been made to reconstruct high-resolution images using transmission ultrasound. To this end, advances have been made in 2D and 3D algorithms that utilize either time of arrival or full wave data to reconstruct images with high spatial and contrast resolution suitable for clinical interpretation. The highest resolution and quantitative accuracy result from inverse scattering applied to full wave data in 3D. However, this has been prohibitively computationally expensive, meaning that full inverse scattering ultrasound tomography has not been considered clinically viable. Here we show the results of applying a nonlinear inverse scattering algorithm to 3D data in a clinically useful time frame. This method yields Quantitative Transmission (QT) ultrasound images with high spatial and contrast resolution. We reconstruct sound speeds for various 2D and 3D phantoms and verify these values with independent measurements. The data are fully 3D as is the reconstruction algorithm, with no 2D approximations. We show that 2D reconstruction algorithms can introduce artifacts into the QT breast image which are avoided by using a full 3D algorithm and data. We show high resolution gross and microscopic anatomic correlations comparing cadaveric breast QT images with MRI to establish imaging capability and accuracy. Finally, we show reconstructions of data from volunteers, as well as an objective visual grading analysis to confirm clinical imaging capability and accuracy.

  13. Acceptance test of a commercially available software for automatic image registration of computed tomography (CT), magnetic resonance imaging (MRI) and 99mTc-methoxyisobutylisonitrile (MIBI) single-photon emission computed tomography (SPECT) brain images.

    PubMed

    Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco

    2008-09-01

    This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two couples, each composed of one radiotherapist and one physicist, by means of superposition of anatomic landmarks. Each couple performed the registration jointly and saved it. The two solutions were averaged to obtain the gold-standard registration. A new set of estimators was defined to identify translation and rotation errors along the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI registration and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and the registration errors determined. The LC algorithm proved accurate in CT-MRI registrations in phantoms, but exceeded the limiting values in 3 of 10 patients. The MI algorithm proved accurate in CT-MRI and CT-SPECT registrations in phantoms; the limiting values were exceeded in one case in CT-MRI and never reached in CT-SPECT registrations. Thus, the evaluation of robustness was restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved to be robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees.

  14. FPGA implementation of Santos-Victor optical flow algorithm for real-time image processing: an useful attempt

    NASA Astrophysics Data System (ADS)

    Cobos Arribas, Pedro; Monasterio Huelin Macia, Felix

    2003-04-01

    An FPGA-based hardware implementation of the Santos-Victor optical flow algorithm, useful in robot guidance applications, is described in this paper. The system used to do this contains an ALTERA FPGA (20K100), an interface with a digital camera, three VRAM memories to hold the input data, and some output memories (a VRAM and an EDO) to hold the results. The system has been used previously to develop and test other vision algorithms, such as image compression and optical flow calculation with differential and correlation methods. The designed system lets the digital camera or the FPGA output (the results of the algorithms) be connected to a PC through its Firewire or USB port. The problems encountered on this occasion have motivated the adoption of another hardware structure for certain vision algorithms with special requirements that need very code-intensive processing.

  15. Speckle noise reduction in quantitative optical metrology techniques by application of the discrete wavelet transformation

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Effective suppression of speckle noise content in interferometric data images can help in improving accuracy and resolution of the results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for generation of quantitative speckle correlation interferometry data of fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features that would otherwise be degraded by other algorithms.
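
    Steps (a)-(e) can be illustrated with a minimal wavelet-thresholding denoiser using the PyWavelets package; the noise estimate and the universal soft threshold used here are common textbook choices and not necessarily the authors' exact settings.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(image, wavelet="db4", level=3):
        coeffs = pywt.wavedec2(image, wavelet, level=level)        # (c) forward DWT
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745         # (a) noise estimate
        thresh = sigma * np.sqrt(2 * np.log(image.size))           # universal threshold
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)   # (d) thresholding
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet)                    # (e) inverse DWT
    ```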

  16. Parallel-Processing Software for Correlating Stereo Images

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric

    2007-01-01

    A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.

  17. A Generic and Efficient E-field Parallel Imaging Correlator for Next-Generation Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Thyagarajan, Nithyanandan; Beardsley, Adam P.; Bowman, Judd D.; Morales, Miguel F.

    2017-05-01

    Modern radio telescopes are favouring densely packed array layouts with large numbers of antennas (N_A ≳ 1000). Since the complexity of traditional correlators scales as O(N_A^2), there will be a steep cost for realizing the full imaging potential of these powerful instruments. Through our generic and efficient E-field Parallel Imaging Correlator (epic), we present the first software demonstration of a generalized direct imaging algorithm, namely the Modular Optimal Frequency Fourier imager. Not only does it bring down the cost for dense layouts to O(N_A log2 N_A), but it can also image from irregular layouts and heterogeneous arrays of antennas. epic is highly modular, parallelizable, implemented in object-oriented Python, and publicly available. We have verified the images produced to be equivalent to those from traditional techniques to within a precision set by gridding coarseness. We have also validated our implementation on data observed with the Long Wavelength Array (LWA1). We provide a detailed framework for imaging with heterogeneous arrays and show that epic robustly estimates the input sky model for such arrays. Antenna layouts with dense filling factors consisting of a large number of antennas, such as the LWA, the Square Kilometre Array, the Hydrogen Epoch of Reionization Array, and the Canadian Hydrogen Intensity Mapping Experiment, will gain significant computational advantage by deploying an optimized version of epic. The algorithm is a strong candidate for instruments targeting transient searches of fast radio bursts as well as planetary and exoplanetary phenomena, due to the availability of high-speed calibrated time-domain images and low output bandwidth relative to visibility-based systems.

  18. A probabilistic approach to segmentation and classification of neoplasia in uterine cervix images using color and geometric features

    NASA Astrophysics Data System (ADS)

    Srinivasan, Yeshwanth; Hernes, Dana; Tulpule, Bhakti; Yang, Shuyu; Guo, Jiangling; Mitra, Sunanda; Yagneswaran, Sriraja; Nutter, Brian; Jeronimo, Jose; Phillips, Benny; Long, Rodney; Ferris, Daron

    2005-04-01

    Automated segmentation and classification of diagnostic markers in medical imagery are challenging tasks. Numerous algorithms for segmentation and classification based on statistical approaches of varying complexity are found in the literature. However, the design of an efficient and automated algorithm for precise classification of desired diagnostic markers is extremely image-specific. The National Library of Medicine (NLM), in collaboration with the National Cancer Institute (NCI), is creating an archive of 60,000 digitized color images of the uterine cervix. NLM is developing tools for the analysis and dissemination of these images over the Web for the study of visual features correlated with precancerous neoplasia and cancer. To enable indexing of images of the cervix, it is essential to develop algorithms for the segmentation of regions of interest, such as acetowhitened regions, and automatic identification and classification of regions exhibiting mosaicism and punctation. Success of such algorithms depends, primarily, on the selection of relevant features representing the region of interest. We present color- and geometric-feature-based statistical classification and segmentation algorithms yielding excellent identification of the regions of interest. The distinct classification of the mosaic regions from the non-mosaic ones has been obtained by clustering multiple geometric and color features of the segmented sections using various morphological and statistical approaches. Such automated classification methodologies will facilitate content-based image retrieval from the digital archive of the uterine cervix and have the potential of developing an image-based screening tool for cervical cancer.

  19. STARBLADE: STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission

    NASA Astrophysics Data System (ADS)

    Knollmüller, Jakob; Frank, Philipp; Ensslin, Torsten A.

    2018-05-01

    STARBLADE (STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission) separates superimposed point-like sources from a diffuse background by imposing physically motivated models as prior knowledge. The algorithm can also be used on noisy and convolved data, though performing a proper reconstruction including a deconvolution prior to the application of the algorithm is advised; the algorithm could also be used within a denoising imaging method. STARBLADE learns the correlation structure of the diffuse emission and takes it into account to determine the occurrence and strength of a superimposed point source.

  20. Automated measurement of pressure injury through image processing.

    PubMed

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injury is time-consuming, challenging and subject to intra/inter-reader variability given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from an RGB (i.e. red, green and blue) colour space to a YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using a Support Vector Machine classifier. Third, after segmentation, the reference ruler - included in each of the images - enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure injuries. © 2017 John Wiley & Sons Ltd.
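
    The colour-space step can be illustrated with a standard RGB-to-YCbCr conversion (ITU-R BT.601 coefficients are assumed here); separating luma (Y) from chroma (Cb, Cr) is what makes the subsequent skin-colour model less sensitive to lighting.

    ```python
    import numpy as np

    def rgb_to_ycbcr(img):
        """img: float array in [0, 1] with shape (H, W, 3); returns Y, Cb, Cr in [0, 1]."""
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
        cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
        return np.stack([y, cb, cr], axis=-1)
    ```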

  1. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch with the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, gripper and the bolts used to fix the drop switch. To solve it, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which can significantly improve the positioning precision of the manipulators and bolts. The algorithm performs the following three steps: firstly, the target points are marked in the right and left views, and the system judges whether the target point in the right view can satisfy the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views; this is a typical coarse-to-fine strategy. Secondly, the system calculates the epipolar line, and a sequence of regions likely to contain matching points is generated from the neighborhood of the epipolar line; the optimal matching image is confirmed by calculating the similarity between the template image in the left view and each region in the sequence using correlation matching. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels, the positioning accuracy in the world coordinate system is within 3 mm, and the positioning accuracy of the binocular vision satisfies the requirements for dismounting and assembling the drop switch.

  2. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    NASA Astrophysics Data System (ADS)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

    Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients, who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. We observed correlation between the RCM videos/video-mosaics and the corresponding histology in 32 lesions. Peri-operative RCM imaging shows promise for improved and faster detection of cancer margins and guiding MMS in the surgical setting.

  3. Translational-circular scanning for magneto-acoustic tomography with current injection.

    PubMed

    Wang, Shigang; Ma, Ren; Zhang, Shunqi; Yin, Tao; Liu, Zhipeng

    2016-01-27

    Magneto-acoustic tomography with current injection involves using electrical impedance imaging technology. To explore the potential applications in imaging biological tissue and enhance image quality, a new scan mode for the transducer is proposed that is based on translational and circular scanning to record acoustic signals from sources. An imaging algorithm to analyze these signals is developed with respect to this alternative scanning scheme. Numerical simulations and physical experiments were conducted to evaluate the effectiveness of this scheme. An experiment using a graphite sheet as a tissue-mimicking phantom medium was conducted to verify simulation results. A pulsed voltage signal was applied across the sample, and acoustic signals were recorded as the transducer performed stepped translational or circular scans. The imaging algorithm was used to obtain an acoustic-source image based on the signals. In simulations, the acoustic-source image is correlated with the conductivity at the boundaries of the sample, but image results change depending on the distance and angular aspect of the transducer. In general, as angle and distance decrease, the image quality improves. Moreover, experimental data confirmed the correlation. The acoustic-source images resulting from the alternative scanning mode have yielded the outline of a phantom medium. This scan mode enables improvements to be made in the sensitivity of the detecting unit and a change to a transducer array that would improve the efficiency and accuracy of acoustic-source images.

  4. A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners.

    PubMed

    Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh

    2013-01-01

    Digital foot scanners have been developed in recent years to provide anthropometrists with a digital image of the insole together with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining a gray level spatial correlation (GLSC) histogram and Shanbhag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting binary (thresholded) images then undergo anthropometric measurements, taking into account the scale factor converting pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, the experimental setup described in the paper. Running computation time and the effects of the GLSC parameters are investigated in the simulation results.

  5. Revised motion estimation algorithm for PROPELLER MRI.

    PubMed

    Pipe, James G; Gibbs, Wende N; Li, Zhiqiang; Karis, John P; Schar, Michael; Zwart, Nicholas R

    2014-08-01

    To introduce a new algorithm for estimating data shifts (used for both rotation and translation estimates) for motion-corrected PROPELLER MRI. The method estimates shifts for all blades jointly, emphasizing blade-pair correlations that are both strong and more robust to noise. The heads of three volunteers were scanned using a PROPELLER acquisition while they exhibited various amounts of motion. All data were reconstructed twice, using motion estimates from the original and new algorithm. Two radiologists independently and blindly compared 216 image pairs from these scans, ranking the left image as substantially better or worse than, slightly better or worse than, or equivalent to the right image. In the aggregate of 432 scores, the new method was judged substantially better than the old method 11 times, and was never judged substantially worse. The new algorithm compared favorably with the old in its ability to estimate bulk motion in a limited study of volunteer motion. A larger study of patients is planned for future work. Copyright © 2013 Wiley Periodicals, Inc.

  6. Methods and algorithms for optical coherence tomography-based angiography: a review and comparison

    NASA Astrophysics Data System (ADS)

    Zhang, Anqi; Zhang, Qinqin; Chen, Chieh-Li; Wang, Ruikang K.

    2015-10-01

    Optical coherence tomography (OCT)-based angiography is increasingly becoming a clinically useful and important imaging technique due to its ability to provide volumetric microvascular networks innervating tissue beds in vivo without a need for exogenous contrast agent. Numerous OCT angiography algorithms have recently been proposed for the purpose of contrasting microvascular networks. A general literature review is provided on the recent progress of OCT angiography methods and algorithms. The basic physics and mathematics behind each method together with its contrast mechanism are described. Potential directions for the future technical development of OCT-based angiography are then briefly discussed. Finally, by the use of clinical data captured from normal and pathological subjects, the imaging performance of vascular networks delivered by the most recently reported algorithms is evaluated and compared, including optical microangiography, speckle variance, phase variance, split-spectrum amplitude decorrelation angiography, and correlation mapping. It is found that the method that utilizes the complex OCT signal to contrast retinal blood flow delivers the best performance among all the algorithms in terms of image contrast and vessel connectivity. The purpose of this review is to help readers understand and select the appropriate OCT angiography algorithm for use in specific applications.
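
    Two of the reviewed contrast mechanisms, speckle variance and inter-frame decorrelation, can be sketched as below for a stack of repeated B-scans acquired at the same location; this is a schematic illustration, not any specific implementation from the review.

    ```python
    import numpy as np

    def speckle_variance(bscans):
        """bscans: (n_repeats, depth, width) OCT intensities; high variance indicates flow."""
        return np.var(bscans, axis=0)

    def decorrelation(bscans, eps=1e-12):
        """Mean decorrelation between consecutive repeats (1 = fully decorrelated)."""
        a, b = bscans[:-1], bscans[1:]
        num = np.sum(a * b, axis=0)
        den = np.sqrt(np.sum(a * a, axis=0) * np.sum(b * b, axis=0)) + eps
        return 1.0 - num / den
    ```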

  7. Comparison of 3-D Multi-Lag Cross-Correlation and Speckle Brightness Aberration Correction Algorithms on Static and Moving Targets

    PubMed Central

    Ivancevich, Nikolas M.; Dahl, Jeremy J.; Smith, Stephen W.

    2010-01-01

    Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively. PMID:19942503

  8. Comparison of 3-D multi-lag cross-correlation and speckle brightness aberration correction algorithms on static and moving targets.

    PubMed

    Ivancevich, Nikolas M; Dahl, Jeremy J; Smith, Stephen W

    2009-10-01

    Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively.

  9. Polymeric endovascular strut and lumen detection algorithm for intracoronary optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Amrute, Junedh M.; Athanasiou, Lambros S.; Rikhtegar, Farhad; de la Torre Hernández, José M.; Camarero, Tamara García; Edelman, Elazer R.

    2018-03-01

    Polymeric endovascular implants are the next step in minimally invasive vascular interventions. As an alternative to traditional metallic drug-eluting stents, these often-erodible scaffolds present opportunities and challenges for patients and clinicians. Theoretically, as they resorb and are absorbed over time, they obviate the long-term complications of permanent implants, but in the short-term visualization and therefore positioning is problematic. Polymeric scaffolds can only be fully imaged using optical coherence tomography (OCT) imaging-they are relatively invisible via angiography-and segmentation of polymeric struts in OCT images is performed manually, a laborious and intractable procedure for large datasets. Traditional lumen detection methods using implant struts as boundary limits fail in images with polymeric implants. Therefore, it is necessary to develop an automated method to detect polymeric struts and luminal borders in OCT images; we present such a fully automated algorithm. Accuracy was validated using expert annotations on 1140 OCT images with a positive predictive value of 0.93 for strut detection and an R2 correlation coefficient of 0.94 between detected and expert-annotated lumen areas. The proposed algorithm allows for rapid, accurate, and automated detection of polymeric struts and the luminal border in OCT images.

  10. Color image analysis technique for measuring of fat in meat: an application for the meat industry

    NASA Astrophysics Data System (ADS)

    Ballerini, Lucia; Hogberg, Anders; Lundstrom, Kerstin; Borgefors, Gunilla

    2001-04-01

    Intramuscular fat content in meat influences some important meat quality characteristics. The aim of the present study was to develop and apply image processing techniques to quantify intramuscular fat content in beef together with the visual appearance of fat in meat (marbling). Color images of M. longissimus dorsi meat samples with a variability of intramuscular fat content and marbling were captured. Image analysis software was specially developed for the interpretation of these images. In particular, a segmentation algorithm (i.e. classification of different substances: fat, muscle and connective tissue) was optimized in order to obtain a proper classification and perform subsequent analysis. Segmentation of muscle from fat was achieved based on their characteristics in the 3D color space, and on the intrinsic fuzzy nature of these structures. The method is fully automatic and combines a fuzzy clustering algorithm, the Fuzzy c-Means algorithm, with a Genetic Algorithm. The percentages of various colors (i.e. substances) within the sample are then determined; the number, size distribution, and spatial distribution of the extracted fat flecks are measured. Measurements are correlated with chemical and sensory properties. Results so far show that advanced image analysis is useful for quantifying the visual appearance of meat.

  11. Robust visual tracking based on deep convolutional neural networks and kernelized correlation filters

    NASA Astrophysics Data System (ADS)

    Yang, Hua; Zhong, Donghong; Liu, Chenyi; Song, Kaiyou; Yin, Zhouping

    2018-03-01

    Object tracking is still a challenging problem in computer vision, as it entails learning an effective model to account for appearance changes caused by occlusion, out of view, plane rotation, scale change, and background clutter. This paper proposes a robust visual tracking algorithm, termed DCNNCT, based on a deep convolutional neural network (DCNN) and correlation filters to simultaneously address these challenges. The proposed DCNNCT algorithm utilizes a DCNN to extract the image features of the tracked target, and the full range of information from each convolutional layer is used to express the image features. Subsequently, the kernelized correlation filters (CFs) in each convolutional layer are adaptively learned, and their correlation response maps are combined to estimate the location of the tracked target. To handle tracking failure, an online random ferns classifier is employed to redetect the tracked target, and a dual-threshold scheme is used to obtain the final target location by comparing the tracking result with the detection result. Finally, the change in scale of the target is determined by building scale pyramids and training a CF. Extensive experiments demonstrate that the proposed algorithm is effective at tracking, especially when evaluated using an index called the overlap rate. The DCNNCT algorithm is also highly competitive in terms of robustness with respect to state-of-the-art trackers in various challenging scenarios.

  12. Novel multimodality segmentation using level sets and Jensen-Rényi divergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markel, Daniel, E-mail: daniel.markel@mail.mcgill.ca; Zaidi, Habib; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva

    2013-12-15

    Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. Methods: A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. Results: The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R2 value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. Conclusions: The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.

  13. Novel multimodality segmentation using level sets and Jensen-Rényi divergence.

    PubMed

    Markel, Daniel; Zaidi, Habib; El Naqa, Issam

    2013-12-01

    Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy (IGART). This work presents a novel multimodality segmentation algorithm using the Jensen-Rényi divergence (JRD) to evolve the geometric level set contour. The algorithm offers improved noise tolerance which is particularly applicable to segmentation of regions found in PET and cone-beam computed tomography. A steepest gradient ascent optimization method is used in conjunction with the JRD and a level set active contour to iteratively evolve a contour to partition an image based on statistical divergence of the intensity histograms. The algorithm is evaluated using PET scans of pharyngolaryngeal squamous cell carcinoma with the corresponding histological reference. The multimodality extension of the algorithm is evaluated using 22 PET/CT scans of patients with lung carcinoma and a physical phantom scanned under varying image quality conditions. The average concordance index (CI) of the JRD segmentation of the PET images was 0.56 with an average classification error of 65%. The segmentation of the lung carcinoma images had a maximum diameter relative error of 63%, 19.5%, and 14.8% when using CT, PET, and combined PET/CT images, respectively. The estimated maximal diameters of the gross tumor volume (GTV) showed a high correlation with the macroscopically determined maximal diameters, with an R2 value of 0.85 and 0.88 using the PET and PET/CT images, respectively. Results from the physical phantom show that the JRD is more robust to image noise compared to mutual information and region growing. The JRD has shown improved noise tolerance compared to mutual information for the purpose of PET image segmentation. Presented is a flexible framework for multimodal image segmentation that can incorporate a large number of inputs efficiently for IGART.

  14. Integrating concept ontology and multitask learning to achieve more effective classifier training for multilevel image annotation.

    PubMed

    Fan, Jianping; Gao, Yuli; Luo, Hangzai

    2008-03-01

    In this paper, we have developed a new scheme for achieving multilevel annotations of large-scale images automatically. To achieve more sufficient representation of various visual properties of the images, both the global visual features and the local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users on selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypotheses assessment. Our experiments on large-scale image collections have also obtained very positive results.

  15. SU-E-J-104: Evaluation of Accuracy for Various Deformable Image Registrations with Virtual Deformation QA Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, S; Kim, K; Kim, M

    Purpose: The accuracy of deformable image registration (DIR) has a significant dosimetric impact in radiation treatment planning. We evaluated the accuracy of various DIR algorithms using virtual deformation QA software (ImSimQA, Oncology System Limited, UK). Methods: The reference image (Iref) and volume (Vref) were first generated with the ImSimQA software. We deformed Iref by axial movement of a deformation point, and Vref according to the type of deformation: deformation 1 increases Vref (relaxation) and deformation 2 decreases Vref (contraction). The deformed image (Idef) and volume (Vdef) were then inversely deformed back toward Iref and Vref using the DIR algorithms, yielding the deformed image (Iid) and volume (Vid). The DIR algorithms were the optical flow (HS, IOF) and demons (MD, FD) algorithms of DIRART. The image similarity between Iref and Iid was calculated by Normalized Mutual Information (NMI) and Normalized Cross Correlation (NCC). The Dice Similarity Coefficient (DSC) was used for evaluation of volume similarity. Results: When the moving distance of the deformation point was 4 mm, the NMI was above 1.81 and the NCC was above 0.99 for all DIR algorithms. As the degree of deformation increased, the image similarity decreased. When Vref increased or decreased by about 12%, the difference between Vref and Vid was within ±5% regardless of the type of deformation. The DSC was above 0.95 for deformation 1 for all algorithms except MD; for deformation 2, the DSC was above 0.95 for all DIR algorithms. Conclusion: Idef and Vdef were not completely restored to Iref and Vref, and the accuracy of the DIR algorithms differed depending on the degree of deformation. Hence, the performance of DIR algorithms should be verified for the desired applications.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, W; Miften, M; Jones, B

    Purpose: Pancreatic SBRT relies on extremely accurate delivery of ablative radiation doses to the target, and intra-fractional tracking of fiducial markers can facilitate improvements in dose delivery. However, this requires algorithms that are able to find fiducial markers with high speed and accuracy. The purpose of this study was to develop a novel marker tracking algorithm that is robust against many of the common errors seen with traditional template matching techniques. Methods: Using CBCT projection images, a method was developed to create detailed template images of fiducial marker clusters without prior knowledge of the number of markers, their positions, or their orientations. Briefly, the method (i) enhances markers in projection images, (ii) stabilizes the cluster's position, (iii) reconstructs the cluster in 3D, and (iv) precomputes a set of static template images dependent on gantry angle. Furthermore, breathing data were used to produce 4D reconstructions of clusters, yielding dynamic template images dependent on gantry angle and breathing amplitude. To test these two approaches, static and dynamic templates were used to track the motion of marker clusters in more than 66,000 projection images from 75 CBCT scans of 15 pancreatic SBRT patients. Results: For both static and dynamic templates, the new technique was able to locate marker clusters present in projection images 100% of the time. The algorithm was also able to correctly locate markers in several instances where only some of the markers were visible due to insufficient field-of-view. In cases where clusters exhibited deformation and/or rotation during breathing, dynamic templates resulted in cross-correlation scores up to 70% higher than static templates. Conclusion: Patient-specific templates provided complete tracking of fiducial marker clusters in CBCT scans, and dynamic templates helped to provide higher cross-correlation scores for deforming/rotating clusters. This novel algorithm provides an extremely accurate method to detect fiducial markers during treatment. Research funding provided by Varian Medical Systems to Miften and Jones.
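
    Template tracking of this kind ultimately reduces to scoring a normalized cross-correlation between a template and candidate patches of each projection image and keeping the best-scoring location. The sketch below illustrates only that matching step; the search window size and the template itself are placeholders, and it does not reproduce the authors' reconstruction-based template generation.

    ```python
    import numpy as np

    def zncc(patch, template):
        # zero-mean normalized cross-correlation between two equally sized arrays
        a = patch.astype(float) - patch.mean()
        b = template.astype(float) - template.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def match_template(image, template, center, search=20):
        # exhaustively score every position in a (2*search+1)^2 window around `center`
        th, tw = template.shape
        best_score, best_pos = -np.inf, center
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                r, c = center[0] + dr, center[1] + dc
                patch = image[r:r + th, c:c + tw]
                if patch.shape != template.shape:
                    continue  # candidate window falls outside the image
                score = zncc(patch, template)
                if score > best_score:
                    best_score, best_pos = score, (r, c)
        return best_pos, best_score
    ```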

  17. Imaging through turbulence using a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.

    2015-09-01

    Atmospheric turbulence can significantly affect imaging through paths near the ground. Atmospheric turbulence is generally treated as a time-varying inhomogeneity of the refractive index of the air, which disrupts the propagation of optical signals from the object to the viewer. Under conditions of deep or strong turbulence, the object is hard to recognize through direct imaging, and conventional imaging methods cannot handle these problems efficiently: the time required for lucky imaging can increase significantly, and image-processing approaches require much more complex, iterative de-blurring algorithms. We propose an alternative approach using a plenoptic sensor to resample and analyze the image distortions. The plenoptic sensor uses a shared objective lens and a microlens array to form a mini Keplerian telescope array. Therefore, the image obtained by a conventional method is separated into an array of images that contain multiple copies of the object's image with less correlated turbulence disturbances. A high-dimensional lucky imaging algorithm can then be performed on the video collected by the plenoptic sensor. The corresponding algorithm selects the most stable pixels from the various image cells and reconstructs the object's image as if only a weak turbulence effect were present. Then, by comparing the reconstructed image with the recorded images in each MLA cell, the difference can be regarded as the turbulence effect. As a result, retrieval of the object's image and extraction of the turbulence effect can be performed simultaneously.

  18. New clinical grading scales and objective measurement for conjunctival injection.

    PubMed

    Park, In Ki; Chun, Yeoun Sook; Kim, Kwang Gi; Yang, Hee Kyung; Hwang, Jeong-Min

    2013-08-05

    To establish a new clinical grading scale and objective measurement method to evaluate conjunctival injection. Photographs of conjunctival injection with variable ocular diseases in 429 eyes were reviewed. Seventy-three images with concordance by three ophthalmologists were classified into 4-step and 10-step subjective grading scales and used as standard photographs. Each image was quantified in four ways: the relative magnitude of the redness component of each red-green-blue (RGB) pixel; two different algorithms based on the area occupied by blood vessels (K-means clustering with the LAB color model and the contrast-limited adaptive histogram equalization [CLAHE] algorithm); and the presence of blood vessel edges, based on the Canny edge-detection algorithm. Areas under the receiver operating characteristic curves (AUCs) were calculated to summarize the diagnostic accuracies of the four algorithms. The RGB color model, K-means clustering with the LAB color model, and the CLAHE algorithm showed good correlation with the clinical 10-step grading scale (R = 0.741, 0.784, 0.919, respectively) and with the clinical 4-step grading scale (R = 0.645, 0.702, 0.838, respectively). The CLAHE method showed the largest AUC, the best distinction power (P < 0.001, ANOVA, Bonferroni multiple comparison test), and high reproducibility (R = 0.996). The CLAHE algorithm showed the best correlation with the 10-step and 4-step subjective clinical grading scales, together with high distinction power and reproducibility, and can be a useful method for assessing conjunctival injection.
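
    The first of the four quantification approaches, the relative magnitude of the redness component per RGB pixel, is straightforward to compute. The sketch below shows one plausible form of it (the red fraction averaged over a conjunctival region of interest); the exact normalization used in the paper is not reproduced, so treat the formula as an assumption.

    ```python
    import numpy as np

    def redness_score(rgb, roi_mask):
        """Mean relative redness over a region of interest.

        rgb      : H x W x 3 array of R, G, B values
        roi_mask : H x W boolean mask of the conjunctival region
        """
        rgb = rgb.astype(float)
        total = rgb.sum(axis=2) + 1e-9          # avoid division by zero
        red_fraction = rgb[..., 0] / total      # share of red in each pixel
        return float(red_fraction[roi_mask].mean())
    ```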

  19. Field test comparison of an autocorrelation technique for determining grain size using a digital 'beachball' camera versus traditional methods

    USGS Publications Warehouse

    Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.

    2007-01-01

    This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r2 = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r2 ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r2 ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ≈96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.
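
    The autocorrelation technique rests on the observation that images of coarse sediment stay correlated over larger pixel offsets than images of fine sediment. The sketch below computes such a normalized autocorrelation curve for a grayscale bed photograph; converting the curve to a grain size in millimetres still requires calibration against sieved samples, and the maximum lag and single-direction offsets used here are simplifying assumptions rather than Rubin's published procedure.

    ```python
    import numpy as np

    def autocorrelation_curve(gray, max_lag=30):
        """Normalized autocorrelation of an image versus horizontal pixel offset."""
        g = gray.astype(float)
        g = (g - g.mean()) / (g.std() + 1e-12)
        curve = []
        for lag in range(1, max_lag + 1):
            # correlation of the image with itself shifted by `lag` pixels
            curve.append(float(np.mean(g[:, :-lag] * g[:, lag:])))
        return np.array(curve)

    # Coarser sediment -> the curve decays more slowly with lag; a calibration set of
    # images of sieved sediment maps the curve (or a summary of it) to grain size.
    ```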

  20. Normalized gradient fields cross-correlation for automated detection of prostate in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Fotin, Sergei V.; Yin, Yin; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter L.

    2012-02-01

    Fully automated prostate segmentation helps to address several problems in prostate cancer diagnosis and treatment: it can assist in objective evaluation of multiparametric MR imagery, provides a prostate contour for MR-ultrasound (or CT) image fusion for computer-assisted image-guided biopsy or therapy planning, may facilitate reporting and enables direct prostate volume calculation. Among the challenges in automated analysis of MR images of the prostate are the variations of overall image intensities across scanners, the presence of a nonuniform multiplicative bias field within scans and differences in acquisition setup. Furthermore, images acquired with the presence of an endorectal coil suffer from localized high-intensity artifacts at the posterior part of the prostate. In this work, a three-dimensional method for fast automated prostate detection based on normalized gradient fields cross-correlation, insensitive to intensity variations and coil-induced artifacts, is presented and evaluated. The components of the method, offline template learning and the localization algorithm, are described in detail. The method was validated on a dataset of 522 T2-weighted MR images acquired at the National Cancer Institute, USA, which was split into two halves for development and testing. In addition, a second dataset of 29 MR exams from Centre d'Imagerie Médicale Tourville, France, was used to test the algorithm. The 95% confidence intervals for the mean Euclidean distance between automatically and manually identified prostate centroids were 4.06 ± 0.33 mm and 3.10 ± 0.43 mm for the first and second test datasets, respectively. Moreover, the algorithm provided the centroid within the true prostate volume in 100% of images from both datasets. The obtained results demonstrate the high utility of the detection method for a fully automated prostate segmentation.
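
    A normalized gradient field replaces each intensity gradient by a unit (or near-unit) direction vector, so the similarity measure responds to edge orientation rather than absolute intensity, which is what makes it robust to bias fields and coil-induced artifacts. Below is a small 2D sketch of the field computation and a pointwise similarity score; the edge parameter value and the 2D (rather than 3D) setting are assumptions made for brevity.

    ```python
    import numpy as np

    def normalized_gradient_field(img, eps=1e-2):
        """Gradient direction field; `eps` suppresses gradients at the noise level."""
        gy, gx = np.gradient(img.astype(float))
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        return gx / mag, gy / mag

    def ngf_similarity(a, b, eps=1e-2):
        """Mean squared dot product of the two normalized gradient fields."""
        ax, ay = normalized_gradient_field(a, eps)
        bx, by = normalized_gradient_field(b, eps)
        dot = ax * bx + ay * by
        return float(np.mean(dot ** 2))   # large when edges align, regardless of contrast
    ```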

  1. Diagnosing cysts with correlation coefficient images from 2-dimensional freehand elastography.

    PubMed

    Booi, Rebecca C; Carson, Paul L; O'Donnell, Matthew; Richards, Michael S; Rubin, Jonathan M

    2007-09-01

    We compared the diagnostic potential of using correlation coefficient images versus elastograms from 2-dimensional (2D) freehand elastography to characterize breast cysts. In this preliminary study, which was approved by the Institutional Review Board and compliant with the Health Insurance Portability and Accountability Act, we imaged 4 consecutive human subjects (4 cysts, 1 biopsy-verified benign breast parenchyma) with freehand 2D elastography. Data were processed offline with conventional 2D phase-sensitive speckle-tracking algorithms. The correlation coefficient in the cyst and surrounding tissue was calculated, and appearances of the cysts in the correlation coefficient images and elastograms were compared. The correlation coefficient in the cysts was considerably lower (14%-37%) than in the surrounding tissue because of the lack of sufficient speckle in the cysts, as well as the prominence of random noise, reverberations, and clutter, which decorrelated quickly. Thus, the cysts were visible in all correlation coefficient images. In contrast, the elastograms associated with these cysts each had different elastographic patterns. The solid mass in this study did not have the same high decorrelation rate as the cysts, having a correlation coefficient only 2.1% lower than that of surrounding tissue. Correlation coefficient images may produce a more direct, reliable, and consistent method for characterizing cysts than elastograms.
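
    A correlation coefficient image can be formed by computing, for every pixel, the normalized correlation between small windows of the pre- and post-deformation frames; regions that decorrelate quickly, such as cysts, then appear as low-valued areas. The sketch below illustrates that windowed computation; the window size and the use of simple intensity frames (rather than the phase-sensitive speckle-tracking output used in the study) are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def correlation_coefficient_image(pre, post, win=9):
        """Local (windowed) correlation coefficient between two frames."""
        f, g = pre.astype(float), post.astype(float)
        mf, mg = uniform_filter(f, win), uniform_filter(g, win)
        cov = uniform_filter(f * g, win) - mf * mg
        var_f = uniform_filter(f * f, win) - mf ** 2
        var_g = uniform_filter(g * g, win) - mg ** 2
        denom = np.sqrt(np.clip(var_f * var_g, 1e-12, None))
        # values near 1 = well-correlated tissue; low values = decorrelated regions (e.g., cysts)
        return cov / denom
    ```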

  2. Progress in SPECT/CT imaging of prostate cancer.

    PubMed

    Seo, Youngho; Franc, Benjamin L; Hawkins, Randall A; Wong, Kenneth H; Hasegawa, Bruce H

    2006-08-01

    Prostate cancer is the most common type of cancer (other than skin cancer) among men in the United States. Although prostate cancer is one of the few cancers that grow so slowly that it may never threaten the lives of some patients, it can be lethal once metastasized. Indium-111 capromab pendetide (ProstaScint, Cytogen Corporation, Princeton, NJ) imaging is indicated for staging and recurrence detection of the disease, and is particularly useful to determine whether or not the disease has spread to distant metastatic sites. However, the interpretation of 111In-capromab pendetide is challenging without correlated structural information mostly because the radiopharmaceutical demonstrates nonspecific uptake in the normal vasculature, bowel, bone marrow, and the prostate gland. We developed an improved method of imaging and localizing 111In-Capromab pendetide using a SPECT/CT imaging system. The specific goals included: i) development and application of a novel iterative SPECT reconstruction algorithm that utilizes a priori information from coregistered CT; and ii) assessment of clinical impact of adding SPECT/CT for prostate cancer imaging with capromab pendetide utilizing the standard and novel reconstruction techniques. Patient imaging studies with capromab pendetide were performed from 1999 to 2004 using two different SPECT/CT scanners, a prototype SPECT/CT system and a commercial SPECT/CT system (Discovery VH, GE Healthcare, Waukesha, WI). SPECT projection data from both systems were reconstructed using an experimental iterative algorithm that compensates for both photon attenuation and collimator blurring. In addition, the data obtained from the commercial system were reconstructed with attenuation correction using an OSEM reconstruction supplied by the camera manufacturer for routine clinical interpretation. For 12 sets of patient data, SPECT images reconstructed using the experimental algorithm were interpreted separately and compared with interpretation of images obtained using the standard reconstruction technique. The experimental reconstruction algorithm improved spatial resolution, reduced streak artifacts, and yielded a better correlation with anatomic details of CT in comparison to conventional reconstruction methods (e.g., filtered back-projection or OSEM with attenuation correction only). Images produced with the experimental algorithm produced a subjective improvement in the confidence of interpretation for 11 of 12 studies. There were also changes in interpretations for 4 of 12 studies although the changes were not sufficient to alter prognosis or the patient treatment plan.

  3. Experimental confirmation of long-memory correlations in star-wander data.

    PubMed

    Zunino, Luciano; Gulich, Damián; Funes, Gustavo; Ziad, Aziz

    2014-07-01

    In this Letter we have analyzed the temporal correlations of the angle-of-arrival fluctuations of stellar images. Experimentally measured data were carefully examined by implementing multifractal detrended fluctuation analysis. This algorithm is able to discriminate the presence of fractal and multifractal structures in recorded time sequences. We have confirmed that turbulence-degraded stellar wavefronts are compatible with a long-memory correlated monofractal process. This experimental result is quite significant for the accurate comprehension and modeling of the atmospheric turbulence effects on the stellar images. It can also be of great utility within the adaptive optics field.
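
    Detrended fluctuation analysis, the monofractal core of the multifractal version used in the Letter, quantifies long-memory correlations by how the detrended fluctuation F(s) scales with segment length s: F(s) ∝ s^α, with α > 0.5 indicating persistent, long-memory behavior. Below is a compact, generic DFA sketch (first-order detrending and a small hand-picked set of scales); the full MFDFA additionally varies a moment order q, which is omitted here.

    ```python
    import numpy as np

    def dfa(x, scales=(16, 32, 64, 128, 256), order=1):
        """Return the scales and fluctuation function F(s) of time series `x`."""
        y = np.cumsum(x - np.mean(x))                  # integrated (profile) series
        fluct = []
        for s in scales:
            n_seg = len(y) // s
            rms = []
            for i in range(n_seg):
                seg = y[i * s:(i + 1) * s]
                t = np.arange(s)
                coeffs = np.polyfit(t, seg, order)     # local polynomial trend
                rms.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
            fluct.append(np.sqrt(np.mean(rms)))
        return np.array(scales), np.array(fluct)

    # The scaling exponent alpha is the slope of log F(s) versus log s:
    # alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    ```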

  4. New Directions in the Digital Signal Processing of Image Data.

    DTIC Science & Technology

    1987-05-01

    Report keywords include object detection and identification, restoration of photon-noise-limited imagery, restoration of blurred images in additive and multiplicative noise, reconstruction from incomplete information, and motion analysis with fast hierarchical algorithms at different resolutions. As is well known, the solution to the matched-filter problem under additive white noise conditions is the correlation receiver.

  5. Accurate and robust brain image alignment using boundary-based registration.

    PubMed

    Greve, Douglas N; Fischl, Bruce

    2009-10-15

    The fine spatial scales of the structures in the human brain represent an enormous challenge to the successful integration of information from different images for both within- and between-subject analysis. While many algorithms to register image pairs from the same subject exist, visual inspection shows their accuracy and robustness to be suspect, particularly when there are strong intensity gradients and/or only part of the brain is imaged. This paper introduces a new algorithm called Boundary-Based Registration, or BBR. The novelty of BBR is that it treats the two images very differently. The reference image must be of sufficient resolution and quality to extract surfaces that separate tissue types. The input image is then aligned to the reference by maximizing the intensity gradient across tissue boundaries. Several lower quality images can be aligned through their alignment with the reference. Visual inspection and fMRI results show that BBR is more accurate than correlation ratio or normalized mutual information and is considerably more robust to even strong intensity inhomogeneities. BBR also excels at aligning partial-brain images to whole-brain images, a domain in which existing registration algorithms frequently fail. Even in the limit of registering a single slice, we show the BBR results to be robust and accurate.

  6. Guaranteeing Failsafe Operation of Extended-Scene Shack-Hartmann Wavefront Sensor Algorithm

    NASA Technical Reports Server (NTRS)

    Sidick, Erikin

    2009-01-01

    A Shack-Hartmann sensor (SHS) is an optical instrument consisting of a lenslet array and a camera. It is widely used for wavefront sensing in optical testing and astronomical adaptive optics. The camera is placed at the focal point of the lenslet array and points at a star or any other point source. The image captured is an array of spot images. When the wavefront error at the lenslet array changes, the position of each spot measurably shifts from its original position. Determining the shifts of the spot images from their reference points shows the extent of the wavefront error. An adaptive cross-correlation (ACC) algorithm has been developed to use scenes as well as point sources for wavefront error detection. Qualifying an extended scene image is often not an easy task due to changing conditions in scene content, illumination level, background, Poisson noise, read-out noise, dark current, sampling format, and field of view. The proposed new technique, based on the ACC algorithm, analyzes the effects of these conditions on the performance of the ACC algorithm and determines the viability of an extended scene image. If it is viable, then it can be used for error correction; if it is not, the image fails and will not be further processed. By potentially testing for a wide variety of conditions, the algorithm's accuracy can be virtually guaranteed. In a typical application, the ACC algorithm finds image shifts of more than 500 Shack-Hartmann camera sub-images relative to a reference sub-image or cell when performing one wavefront sensing iteration. In the proposed new technique, a pair of test and reference cells is selected from the same frame, preferably from two well-separated locations. The test cell is shifted by an integer number of pixels, for example from m = -5 to 5 along the x-direction, by choosing a different area on the same sub-image, and the shifts are estimated using the ACC algorithm. The same is done in the y-direction. If the resulting shift estimate errors are less than a pre-determined threshold (e.g., 0.03 pixel), the image is accepted. Otherwise, it is rejected.
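
    The viability test amounts to a self-consistency check: shift a test cell by known integer amounts, re-estimate those shifts from the image data, and accept the frame only if the estimation errors stay below a threshold. The sketch below expresses that loop with a plain integer correlation-peak estimator standing in for the ACC algorithm, so the tolerance is relaxed to half a pixel; the cell location and size are placeholders, and in the real technique sub-pixel shift estimates allow the much tighter 0.03-pixel threshold quoted above.

    ```python
    import numpy as np

    def estimate_shift(ref, test, max_shift=6):
        """Integer shift (dy, dx) that best aligns `test` onto `ref` by correlation."""
        best_score, best_shift = -np.inf, (0, 0)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(test, dy, axis=0), dx, axis=1)
                score = float(np.sum(ref * shifted))
                if score > best_score:
                    best_score, best_shift = score, (dy, dx)
        return best_shift

    def frame_is_viable(frame, cell=(16, 16, 64), shifts=range(-5, 6), tol=0.5):
        """Accept the frame if known x-shifts of a test cell are recovered within `tol` pixels.

        `cell` = (row, col, size) and must lie at least max(|shift|) pixels from the
        frame border so that the displaced windows stay inside the image.
        """
        r, c, n = cell
        ref = frame[r:r + n, c:c + n].astype(float)
        for m in shifts:
            test = frame[r:r + n, c + m:c + m + n].astype(float)  # same area, displaced by m pixels
            dy, dx = estimate_shift(ref, test)
            if abs(dx - m) > tol or abs(dy) > tol:
                return False   # shift not recovered -> scene content unsuitable
        return True
    ```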

  7. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography.

    PubMed

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-09-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.

  8. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography

    PubMed Central

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E.; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-01-01

    Purpose Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography leading to under-estimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multi-resolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model which may introduce artefacts in regions where no significant correlation exists between anatomical and functional details. Methods A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Results Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present the new model outperformed the 2D global approach, avoiding artefacts and significantly improving quality of the corrected images and their quantitative accuracy. Conclusions A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multi-resolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information. PMID:21978037

  9. Real-time out-of-plane artifact subtraction tomosynthesis imaging using prior CT for scanning beam digital x-ray system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Meng, E-mail: mengwu@stanford.edu; Fahrig, Rebecca

    2014-11-01

    Purpose: The scanning beam digital x-ray system (SBDX) is an inverse geometry fluoroscopic system with high dose efficiency and the ability to perform continuous real-time tomosynthesis in multiple planes. This system could be used for image guidance during lung nodule biopsy. However, the reconstructed images suffer from strong out-of-plane artifact due to the small tomographic angle of the system. Methods: The authors propose an out-of-plane artifact subtraction tomosynthesis (OPAST) algorithm that utilizes a prior CT volume to augment the run-time image processing. A blur-and-add (BAA) analytical model, derived from the project-to-backproject physical model, permits the generation of tomosynthesis images that are a good approximation to the shift-and-add (SAA) reconstructed image. A computationally practical algorithm is proposed to simulate images and out-of-plane artifacts from patient-specific prior CT volumes using the BAA model. A 3D image registration algorithm to align the simulated and reconstructed images is described. The accuracy of the BAA analytical model and the OPAST algorithm was evaluated using three lung cancer patients' CT data. The OPAST and image registration algorithms were also tested with added nonrigid respiratory motions. Results: Image similarity measurements, including the correlation coefficient, mean squared error, and structural similarity index, indicated that the BAA model is very accurate in simulating the SAA images from the prior CT for the SBDX system. The shift-variant effect of the BAA model can be ignored when the shifts between SBDX images and CT volumes are within ±10 mm in the x and y directions. The nodule visibility and depth resolution are improved by subtracting simulated artifacts from the reconstructions. The image registration and OPAST are robust in the presence of added respiratory motions. The dominant artifacts in the subtraction images are caused by the mismatches between the real object and the prior CT volume. Conclusions: Their proposed prior CT-augmented OPAST reconstruction algorithm improves lung nodule visibility and depth resolution for the SBDX system.

  10. Modified fuzzy c-means applied to a Bragg grating-based spectral imager for material clustering

    NASA Astrophysics Data System (ADS)

    Rodríguez, Aida; Nieves, Juan Luis; Valero, Eva; Garrote, Estíbaliz; Hernández-Andrés, Javier; Romero, Javier

    2012-01-01

    We have modified the Fuzzy C-Means algorithm for an application related to segmentation of hyperspectral images. The classical fuzzy c-means algorithm uses Euclidean distance for computing sample membership to each cluster. We have introduced a different distance metric, the Spectral Similarity Value (SSV), in order to have a more suitable similarity measure for reflectance information. The SSV distance metric considers both magnitude differences (via Euclidean distance) and spectral shape (via Pearson correlation). Experiments confirmed that the introduction of this metric improves the quality of hyperspectral image segmentation, creating spectrally denser clusters and increasing the number of correctly classified pixels.
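
    A distance of this kind can be built by combining a normalized Euclidean term with a term derived from the Pearson correlation between the two spectra. The sketch below shows one plausible combination; the exact weighting and normalization of the paper's SSV are not reproduced here, so the formula should be read as an illustrative assumption. Inside fuzzy c-means, such a distance would simply replace the Euclidean distance in the membership and centroid-update formulas.

    ```python
    import numpy as np

    def spectral_similarity_value(x, y):
        """Distance between two reflectance spectra combining magnitude and shape.

        x, y : 1-D arrays of reflectance values over the same spectral bands.
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        d = np.linalg.norm(x - y) / np.sqrt(len(x))   # magnitude term (per-band RMS difference)
        r = np.corrcoef(x, y)[0, 1]                   # Pearson correlation (spectral shape)
        shape = 1.0 - r ** 2                          # 0 when the shapes match perfectly
        return float(np.sqrt(d ** 2 + shape ** 2))
    ```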

  11. Three-dimensional image authentication scheme using sparse phase information in double random phase encoded integral imaging.

    PubMed

    Yi, Faliu; Jeoung, Yousun; Moon, Inkyu

    2017-05-20

    In recent years, many studies have focused on authentication of two-dimensional (2D) images using double random phase encryption techniques. However, there has been little research on three-dimensional (3D) imaging systems, such as integral imaging, for 3D image authentication. We propose a 3D image authentication scheme based on a double random phase integral imaging method. All of the 2D elemental images captured through integral imaging are encrypted with a double random phase encoding algorithm and only partial phase information is reserved. All the amplitude and other miscellaneous phase information in the encrypted elemental images is discarded. Nevertheless, we demonstrate that 3D images from integral imaging can be authenticated at different depths using a nonlinear correlation method. The proposed 3D image authentication algorithm can provide enhanced information security because the decrypted 2D elemental images from the sparse phase cannot be easily observed by the naked eye. Additionally, using sparse phase images without any amplitude information can greatly reduce data storage costs and aid in image compression and data transmission.

  12. A novel structure-aware sparse learning algorithm for brain imaging genetics.

    PubMed

    Du, Lei; Jingwen, Yan; Kim, Sungeun; Risacher, Shannon L; Huang, Heng; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2014-01-01

    Brain imaging genetics is an emergent research field where the association between genetic variations such as single nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is evaluated. Sparse canonical correlation analysis (SCCA) is a bi-multivariate analysis method that has the potential to reveal complex multi-SNP-multi-QT associations. Most existing SCCA algorithms are designed using the soft threshold strategy, which assumes that the features in the data are independent from each other. This independence assumption usually does not hold in imaging genetic data, and thus inevitably limits the capability of yielding optimal solutions. We propose a novel structure-aware SCCA (denoted as S2CCA) algorithm to not only eliminate the independence assumption for the input data, but also incorporate group-like structure in the model. Empirical comparison with a widely used SCCA implementation, on both simulated and real imaging genetic data, demonstrated that S2CCA could yield improved prediction performance and biologically meaningful findings.

  13. Real-time PM10 concentration monitoring on Penang Bridge by using traffic monitoring CCTV

    NASA Astrophysics Data System (ADS)

    Low, K. L.; Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Wong, C. J.

    2007-04-01

    In this study, an algorithm was developed to determine the concentration of particles smaller than 10 μm (PM10) from still images captured by a CCTV camera on the Penang Bridge. The objective is to remotely monitor PM10 concentrations on the bridge through the internet. The algorithm is based on the relationship between atmospheric reflectance and the corresponding air quality. The still images were separated into three bands, namely red, green and blue, and their digital number values were determined; a special transformation was then applied to these data. Ground PM10 measurements were taken with a DustTrak meter, and the algorithm was calibrated using regression analysis. The proposed algorithm produced a high correlation coefficient (R) and a low root-mean-square error (RMS) between the measured and estimated PM10. A program was then written in Microsoft Visual Basic 6.0 to download still images from the camera over the internet and apply the newly developed algorithm. The program runs in real time, so the public can check the air pollution index at any time. This indicates that the technique using CCTV camera images can provide a useful tool for air quality studies.
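
    The calibration step is, at its core, a regression of co-located ground PM10 readings on the red, green, and blue digital numbers extracted from the still images. The least-squares sketch below illustrates that step; the use of a plain linear model in the three raw bands, rather than the paper's specific reflectance transformation, is an assumption.

    ```python
    import numpy as np

    def calibrate_pm10(dn_rgb, pm10_measured):
        """Fit PM10 = b0 + b1*R + b2*G + b3*B by least squares.

        dn_rgb        : N x 3 array of per-image digital numbers (R, G, B)
        pm10_measured : N co-located DustTrak readings
        """
        X = np.column_stack([np.ones(len(dn_rgb)), dn_rgb])
        coeffs, *_ = np.linalg.lstsq(X, pm10_measured, rcond=None)
        return coeffs

    def predict_pm10(coeffs, dn_rgb):
        X = np.column_stack([np.ones(len(dn_rgb)), dn_rgb])
        return X @ coeffs

    # The reported R and RMS values would then be computed between the measured
    # readings and predict_pm10(...) on independent images.
    ```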

  14. On the mode I fracture analysis of cracked Brazilian disc using a digital image correlation method

    NASA Astrophysics Data System (ADS)

    Abshirini, Mohammad; Soltani, Nasser; Marashizadeh, Parisa

    2016-03-01

    Mode I fracture of a centrally cracked Brazilian disc was investigated experimentally using a digital image correlation (DIC) method. Experiments were performed on PMMA specimens subjected to diametric compression loading. The displacement fields were determined by a correlation between the reference image and the deformed images captured before and during loading. The stress intensity factors were calculated from the displacement fields using Williams' series expansion and a least-squares algorithm. The parameters involved in the accuracy of the SIF calculation, such as the number of terms in Williams' expansion and the region of analysis around the crack, were discussed. The DIC results were compared with the numerical results available in the literature, and very good agreement was observed. By extending the tests up to the critical state, the mode I fracture toughness was determined by analyzing the image of the specimen captured just before fracture. The results showed that digital image correlation is a reliable technique for calculating the fracture toughness of brittle materials.

  15. An intermediate significant bit (ISB) watermarking technique using neural networks.

    PubMed

    Zeki, Akram; Abubakar, Adamu; Chiroma, Haruna

    2016-01-01

    Prior research has shown that the peak signal-to-noise ratio (PSNR) is the image quality metric most frequently used to determine the strength or weakness of watermarking algorithms, while normalised cross correlation (NCC) is the most common metric used after attacks are applied to a watermarked image to verify the strength of the algorithm. Many researchers have used these approaches to evaluate their algorithms, but these metrics have long been used without established threshold values, which unfortunately limits how well PSNR and NCC reflect the strength or weakness of a watermarking algorithm. This paper addresses this issue by determining threshold values of these two parameters that reflect the strength or weakness of watermarking algorithms. We used our novel watermarking technique to embed four watermarks, one by one, in the intermediate significant bits (ISB) of six image files, replacing image pixels with new pixels while keeping the new pixels very close to the original ones. This approach gains improved robustness according to the PSNR and NCC values gathered. A neural network model was built using the image quality metric (PSNR and NCC) values obtained from watermarking six grey-scale images with ISB embedding as the desired output, trained on each watermarked image's PSNR and NCC. The neural network predicts the watermarked image's PSNR and NCC after attacks, given a portion of the same or different image quality metrics. The results indicate that the NCC metric fluctuates before the PSNR values deteriorate.
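
    Both metrics can be stated concretely. The sketch below uses common definitions (PSNR from the mean squared error against a 255 peak value, and NCC computed directly between the embedded and extracted watermarks); minor variants, such as whether NCC subtracts the mean, differ between papers, so the exact formulas are assumptions rather than the ones used in this work.

    ```python
    import numpy as np

    def psnr(original, watermarked, peak=255.0):
        """Peak signal-to-noise ratio (dB) between original and watermarked images."""
        mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def ncc(original_wm, extracted_wm):
        """Normalized cross correlation between embedded and extracted watermarks."""
        a = original_wm.astype(float)
        b = extracted_wm.astype(float)
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
    ```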

  16. High speed stereovision setup for position and motion estimation of fertilizer particles leaving a centrifugal spreader.

    PubMed

    Hijazi, Bilal; Cool, Simon; Vangeyte, Jürgen; Mertens, Koen C; Cointault, Frédéric; Paindavoine, Michel; Pieters, Jan G

    2014-11-13

    A 3D imaging technique using a high speed binocular stereovision system was developed in combination with corresponding image processing algorithms for accurate determination of the parameters of particles leaving the spinning disks of centrifugal fertilizer spreaders. Validation of the stereo-matching algorithm using a virtual 3D stereovision simulator indicated an error of less than 2 pixels for 90% of the particles. The setup was validated using the cylindrical spread pattern of an experimental spreader. A 2D correlation coefficient of 90% and a Relative Error of 27% was found between the experimental results and the (simulated) spread pattern obtained with the developed setup. In combination with a ballistic flight model, the developed image acquisition and processing algorithms can enable fast determination and evaluation of the spread pattern which can be used as a tool for spreader design and precise machine calibration.

  17. Implementation theory of distortion-invariant pattern recognition for optical and digital signal processing systems

    NASA Astrophysics Data System (ADS)

    Lhamon, Michael Earl

    A pattern recognition system which uses complex correlation filter banks requires proportionally more computational effort than single-real valued filters. This introduces increased computation burden but also introduces a higher level of parallelism, that common computing platforms fail to identify. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as, vector inner product operators that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation and they map readily onto parallel digital architectures, which imply new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as with the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern. The orientation of the dot pattern is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulties implementing full complex filter structures. Typically, optical systems (like the 4f correlators) are limited to phase-only implementation with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible and they have the advantage of time averaging the entire filter bank at real time rates. Time-averaged optical filtering is computational comparable to billions of digital operations-per-second. For this reason, we believe future trends in high speed pattern recognition will involve hybrid architectures of both optical and DSP elements.

  18. Edge enhancement and noise suppression for infrared image based on feature analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Meng

    2018-06-01

    Infrared images often suffer from background noise, blurred edges, few details and low signal-to-noise ratios. To improve infrared image quality, it is essential to suppress noise and enhance edges simultaneously. To this end, we propose a novel algorithm based on feature analysis in the shearlet domain. First, we introduce the theory and advantages of the shearlet transform as a form of multi-scale geometric analysis (MGA). Second, after analyzing the shortcomings of traditional thresholding for noise suppression, we propose a novel feature extraction that distinguishes image structures from noise well and use it to improve the traditional thresholding technique. Third, by computing the correlations between neighboring shearlet coefficients, feature attribute maps identifying weak details and strong edges are constructed to improve generalized unsharp masking (GUM). Finally, experimental results with infrared images captured in different scenes demonstrate that the proposed algorithm suppresses noise efficiently and enhances image edges adaptively.

  19. Adaptive multiple super fast simulated annealing for stochastic microstructure reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Seun; Lin, Guang; Sun, Xin

    2013-01-01

    Fast image reconstruction from statistical information is critical in image fusion from multimodality chemical imaging instrumentation, where the goal is to create high-resolution images over a large domain. Stochastic methods have been widely used to reconstruct images from two-point correlation functions; the main challenge is to increase the efficiency of the reconstruction. A novel simulated annealing method is proposed for fast image reconstruction. By combining the advantages of very fast cooling schedules, dynamic adaptation, and parallelization, the new simulated annealing algorithm increases efficiency by several orders of magnitude, making large-domain image fusion feasible.
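
    The baseline stochastic reconstruction that such speed-ups build on is a pixel-swapping simulated annealing: start from a random binary image with the target volume fraction, swap solid/void pixel pairs, and accept or reject each swap according to how the image's two-point correlation moves toward the target. The sketch below shows that baseline with a simple geometric cooling schedule and a row-direction two-point function; it omits the very fast cooling, dynamic adaptation, and parallelization described in the abstract, and all parameter values are placeholders.

    ```python
    import numpy as np

    def row_two_point(img, max_r=20):
        """S2(r): probability that two pixels r apart along a row are both solid."""
        return np.array([np.mean(img[:, :-r] * img[:, r:]) for r in range(1, max_r + 1)])

    def anneal_reconstruct(target_s2, shape, phi, n_iter=20000, t0=1e-4, cooling=0.9995, seed=0):
        rng = np.random.default_rng(seed)
        img = (rng.random(shape) < phi).astype(float)    # random start at target volume fraction
        energy = np.sum((row_two_point(img) - target_s2) ** 2)
        temp = t0
        for _ in range(n_iter):
            ones, zeros = np.argwhere(img == 1), np.argwhere(img == 0)
            i = tuple(ones[rng.integers(len(ones))])     # one solid pixel...
            j = tuple(zeros[rng.integers(len(zeros))])   # ...and one void pixel to swap
            img[i], img[j] = 0.0, 1.0
            new_energy = np.sum((row_two_point(img) - target_s2) ** 2)
            if new_energy < energy or rng.random() < np.exp(-(new_energy - energy) / temp):
                energy = new_energy                      # accept the swap
            else:
                img[i], img[j] = 1.0, 0.0                # reject: undo the swap
            temp *= cooling                              # geometric cooling schedule
        return img

    # A practical implementation updates S2 incrementally after each swap instead of
    # recomputing it from scratch, which is where much of the speed-up comes from.
    ```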

  20. Correction of eddy current distortions in high angular resolution diffusion imaging.

    PubMed

    Zhuang, Jiancheng; Lu, Zhong-Lin; Vidal, Christine Bouteiller; Damasio, Hanna

    2013-06-01

    To correct distortions caused by eddy currents induced by large diffusion gradients during high angular resolution diffusion imaging without any auxiliary reference scans. Image distortion parameters were obtained by image coregistration, performed only between diffusion-weighted images with close diffusion gradient orientations. A linear model that describes distortion parameters (translation, scale, and shear) as a function of diffusion gradient directions was numerically computed to allow individualized distortion correction for every diffusion-weighted image. The assumptions of the algorithm were successfully verified in a series of experiments on phantom and human scans. Application of the proposed algorithm in high angular resolution diffusion images markedly reduced eddy current distortions when compared to results obtained with previously published methods. The method can correct eddy current artifacts in the high angular resolution diffusion images, and it avoids the problematic procedure of cross-correlating images with significantly different contrasts resulting from very different gradient orientations or strengths. Copyright © 2012 Wiley Periodicals, Inc.

  1. Automated detection of Martian water ice clouds: the Valles Marineris

    NASA Astrophysics Data System (ADS)

    Ogohara, Kazunori; Munetomo, Takafumi; Hatanaka, Yuji; Okumura, Susumu

    2016-10-01

    We need to extract water ice clouds from the large number of Mars images in order to reveal spatial and temporal variations of water ice cloud occurrence and to understand the climatology of water ice clouds meteorologically. However, the visible images acquired by Mars orbiters over several years are too numerous to inspect visually, even if the inspection is limited to one region. Therefore, an automated detection algorithm for Martian water ice clouds is necessary for collecting ice cloud images efficiently. In addition, it may reveal new aspects of the spatial and temporal variations of water ice clouds of which we have not previously been aware. We present a method for automatically evaluating the presence of Martian water ice clouds using difference images and cross-correlation distributions calculated from blue band images of the Valles Marineris obtained by the Mars Orbiter Camera onboard the Mars Global Surveyor (MGS/MOC). We derived one subtracted image and one cross-correlation distribution from two reflectance images. The difference between the maximum and the average, the variance, kurtosis, and skewness of the subtracted image were calculated, and the same statistics were calculated for the cross-correlation distribution. These eight statistics were used as feature vectors for training a Support Vector Machine, and its generalization ability was tested using 10-fold cross-validation. The F-measure and accuracy tended to be approximately 0.8 when the maximum of the normalized reflectance and the difference between the maximum and the average of the cross-correlation were chosen as features. In the process of developing the detection algorithm, we found many cases where the Valles Marineris became clearly brighter than adjacent areas in the blue band. It is at present unclear whether the bright Valles Marineris indicates the occurrence of water ice clouds inside the Valles Marineris. Therefore, subtracted images showing the bright Valles Marineris were excluded from the detection of water ice clouds.
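
    The classification stage reduces to training a support vector machine on a short feature vector per image pair and checking generalization with 10-fold cross-validation. The sketch below outlines that stage with scikit-learn; the feature computation follows the description only loosely (statistics of a subtracted image and of a cross-correlation map), and the kernel choice and feature ordering are assumptions.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def pair_features(diff_img, xcorr_map):
        """Eight summary statistics of the subtracted image and the cross-correlation map."""
        feats = []
        for a in (diff_img.ravel(), xcorr_map.ravel()):
            feats += [
                a.max() - a.mean(),                                   # peak-to-mean difference
                a.var(),                                              # variance
                float(((a - a.mean()) ** 4).mean() / a.std() ** 4),   # kurtosis
                float(((a - a.mean()) ** 3).mean() / a.std() ** 3),   # skewness
            ]
        return np.array(feats)

    # With X an N x 8 feature matrix and y the labels (1 = ice cloud, 0 = none):
    # scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10, scoring="f1")
    # print(scores.mean())
    ```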

  2. A novel color image encryption algorithm based on genetic recombination and the four-dimensional memristive hyperchaotic system

    NASA Astrophysics Data System (ADS)

    Chai, Xiu-Li; Gan, Zhi-Hua; Lu, Yang; Zhang, Miao-Hui; Chen, Yi-Ran

    2016-10-01

    Recently, many image encryption algorithms based on chaos have been proposed. Most of the previous algorithms encrypt components R, G, and B of color images independently and neglect the high correlation between them. In the paper, a novel color image encryption algorithm is introduced. The 24 bit planes of components R, G, and B of the color plain image are obtained and recombined into 4 compound bit planes, and this can make the three components affect each other. A four-dimensional (4D) memristive hyperchaotic system generates the pseudorandom key streams and its initial values come from the SHA 256 hash value of the color plain image. The compound bit planes and key streams are confused according to the principles of genetic recombination, then confusion and diffusion as a union are applied to the bit planes, and the color cipher image is obtained. Experimental results and security analyses demonstrate that the proposed algorithm is secure and effective so that it may be adopted for secure communication. Project supported by the National Natural Science Foundation of China (Grant Nos. 61203094 and 61305042), the Natural Science Foundation of the United States (Grant Nos. CNS-1253424 and ECCS-1202225), the Science and Technology Foundation of Henan Province, China (Grant No. 152102210048), the Foundation and Frontier Project of Henan Province, China (Grant No. 162300410196), the Natural Science Foundation of Educational Committee of Henan Province, China (Grant No. 14A413015), and the Research Foundation of Henan University, China (Grant No. xxjc20140006).

  3. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406

  4. An intelligent architecture based on Field Programmable Gate Arrays designed to detect moving objects by using Principal Component Analysis.

    PubMed

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices.

  5. A new approach to pre-processing digital image for wavelet-based watermark

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased digital piracy of multimedia objects such as software, images, video, audio and text. It is therefore strategic to identify and develop stable, computationally inexpensive methods and numerical algorithms that address these problems. We describe a digital watermarking algorithm for color image protection and authentication that is robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency characteristics and its good match with Human Visual System directives; these two elements combined are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that includes resizing techniques adapting the original image size to the wavelet transform. The watermark signal is computed in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked images according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the method to be resistant to geometric, filtering, and StirMark attacks with a low false-alarm rate.

  6. Comparative performance analysis of cervix ROI extraction and specular reflection removal algorithms for uterine cervix image analysis

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Jeronimo, Jose; Thoma, George R.

    2007-03-01

    Cervicography is a technique for visual screening of uterine cervix images for cervical cancer. One of our research goals is the automated detection in these images of acetowhite (AW) lesions, which are sometimes correlated with cervical cancer. These lesions are characterized by the whitening of regions along the squamocolumnar junction on the cervix when treated with 5% acetic acid. Image preprocessing is required prior to invoking AW detection algorithms on cervicographic images for two reasons: (1) to remove Specular Reflections (SR) caused by camera flash, and (2) to isolate the cervix region-of-interest (ROI) from image regions that are irrelevant to the analysis. These image regions may contain medical instruments, film markup, or other non-cervix anatomy or regions, such as vaginal walls. We have qualitatively and quantitatively evaluated the performance of alternative preprocessing algorithms on a test set of 120 images. For cervix ROI detection, all approaches use a common feature set, but with varying combinations of feature weights, normalization, and clustering methods. For SR detection, while one approach uses a Gaussian Mixture Model on an intensity/saturation feature set, a second approach uses Otsu thresholding on a top-hat transformed input image. Empirical results are analyzed to derive conclusions on the performance of each approach.
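
    The second specular-reflection approach mentioned above, Otsu thresholding of a top-hat transformed image, can be outlined in a few lines: a white top-hat transform isolates small bright structures such as flash glints, and Otsu's method then selects a global threshold on that result. The sketch below is a generic illustration of this idea (the structuring-element size and histogram bin count are arbitrary choices), not the evaluated implementation.

    ```python
    import numpy as np
    from scipy.ndimage import white_tophat

    def otsu_threshold(values, bins=256):
        """Threshold maximizing the between-class variance (Otsu's method)."""
        hist, edges = np.histogram(values.ravel(), bins=bins)
        p = hist.astype(float) / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                       # cumulative class probability
        mu = np.cumsum(p * centers)             # cumulative mean
        mu_t = mu[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
        return centers[np.nanargmax(sigma_b)]

    def specular_reflection_mask(gray, se_size=15):
        """Binary mask of small bright (specular) regions in a grayscale cervix image."""
        tophat = white_tophat(gray.astype(float), size=se_size)   # bright details smaller than se_size
        return tophat > otsu_threshold(tophat)
    ```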

  7. The study on the parallel processing based time series correlation analysis of RBC membrane flickering in quantitative phase imaging

    NASA Astrophysics Data System (ADS)

    Lee, Minsuk; Won, Youngjae; Park, Byungjun; Lee, Seungrag

    2017-02-01

    Not only the static but also the dynamic characteristics of the red blood cell (RBC) contain useful information for blood diagnosis. Quantitative phase imaging (QPI) can capture sample images with subnanometer-scale depth resolution and millisecond-scale temporal resolution. QPI has been used in various studies of RBC diagnosis, and recently parallel computing algorithms have been developed to decrease the processing time of RBC information extraction from QPI; however, previous studies have focused on static parameters, such as cell morphology, or simple dynamic parameters, such as the root mean square (RMS) of the membrane fluctuations. Previously, we presented a practical blood test method using time series correlation analysis of RBC membrane flickering with QPI, but its long computation time limited its clinical application. In this study, we present an accelerated time series correlation analysis of RBC membrane flickering using a parallel computing algorithm. This method produced fractal scaling exponents for the surrounding medium and normal RBCs that are consistent with our previous research.

  8. Validation of an image-based technique to assess the perceptual quality of clinical chest radiographs with an observer study

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Choudhury, Kingshuk R.; McAdams, H. Page; Foos, David H.; Samei, Ehsan

    2014-03-01

    We previously proposed a novel image-based quality assessment technique1 to assess the perceptual quality of clinical chest radiographs. In this paper, an observer study was designed and conducted to systematically validate this technique. Ten metrics were involved in the observer study, i.e., lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. For each metric, three tasks were successively presented to the observers. In each task, six ROI images were randomly presented in a row and observers were asked to rank the images based only on a designated quality and disregard the other qualities. A range slider at the top of the images was used for observers to indicate the acceptable range based on the corresponding perceptual attribute. Five board-certified radiologists from Duke participated in this observer study on a DICOM-calibrated diagnostic display workstation and under low ambient lighting conditions. The observer data were analyzed in terms of the correlations between the observer ranking orders and the algorithmic ranking orders. Based on the collected acceptable ranges, quality consistency ranges were statistically derived. The observer study showed that, for each metric, the averaged ranking orders of the participating observers were strongly correlated with the algorithmic orders. For the lung grey level, the observer ranking orders completely accorded with the algorithmic ranking orders. The quality consistency ranges derived from this observer study were close to those derived from our previous study. The observer study indicates that the proposed image-based quality assessment technique provides a robust reflection of the perceptual image quality of clinical chest radiographs. The derived quality consistency ranges can be used to automatically predict the acceptability of a clinical chest radiograph.

  9. High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.

    PubMed

    Ren, Ximing; Connolly, Peter W R; Halimi, Abderrahim; Altmann, Yoann; McLaughlin, Stephen; Gyongy, Istvan; Henderson, Robert K; Buller, Gerald S

    2018-03-05

    A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.
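    The "standard cross-correlation approach" mentioned for high-SNR data can be sketched as follows; the array layout, gate-delay grid, and use of a measured reference response are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def depth_from_gated_counts(counts, response, step_mm):
    """Per-pixel depth estimate from range-gated SPAD data by cross-correlating
    each pixel's photon-count profile over D gate-delay steps with a reference
    temporal response sampled on the same grid.  counts: (D, H, W) array,
    response: length-D array, step_mm: depth increment per delay step."""
    D, H, W = counts.shape
    resp = (response - response.mean()) / (response.std() + 1e-12)
    prof = counts.reshape(D, -1).astype(float)
    prof = (prof - prof.mean(axis=0)) / (prof.std(axis=0) + 1e-12)
    # full cross-correlation of every pixel profile with the response
    corr = np.stack([np.correlate(prof[:, k], resp, mode="same")
                     for k in range(prof.shape[1])], axis=1)
    lag = corr.argmax(axis=0) - D // 2          # peak lag relative to zero shift
    return lag.reshape(H, W) * step_mm          # relative depth map in mm
```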

  10. Nonequilibrium fluctuations in metaphase spindles: polarized light microscopy, image registration, and correlation functions

    NASA Astrophysics Data System (ADS)

    Brugués, Jan; Needleman, Daniel J.

    2010-02-01

    Metaphase spindles are highly dynamic, nonequilibrium, steady-state structures. We study the internal fluctuations of spindles by computing spatio-temporal correlation functions of movies obtained from quantitative polarized light microscopy. These correlation functions are only physically meaningful if corrections are made for the net motion of the spindle. We describe our image registration algorithm in detail and we explore its robustness. Finally, we discuss the expression used for the estimation of the correlation function in terms of the nematic order of the microtubules which make up the spindle. Ultimately, studying the form of these correlation functions will provide a quantitative test of the validity of coarse-grained models of spindle structure inspired from liquid crystal physics.

  11. Localization of the transverse processes in ultrasound for spinal curvature measurement

    NASA Astrophysics Data System (ADS)

    Kamali, Shahrokh; Ungi, Tamas; Lasso, Andras; Yan, Christina; Lougheed, Matthew; Fichtinger, Gabor

    2017-03-01

    PURPOSE: In scoliosis monitoring, tracked ultrasound has been explored as a safer imaging alternative to traditional radiography. The use of ultrasound in spinal curvature measurement requires identification of vertebral landmarks such as the transverse processes, but because bones have reduced visibility in ultrasound imaging, skeletal landmarks are typically segmented manually, which is an exceedingly laborious and long process. We propose an automatic algorithm to segment and localize the bony surface of the transverse processes for scoliosis measurement in ultrasound. METHODS: The algorithm uses a cascade of filters to remove low-intensity pixels, smooth the image, and detect bony edges. By applying first differentiation, candidate bony areas are classified. The average intensity under each area correlates with the likelihood of a shadow, and areas with a strong shadow are kept for bone segmentation. The segmented images are used to reconstruct a 3-D volume representing the whole spinal structure around the transverse processes. RESULTS: A comparison between the manual ground truth segmentation and the automatic algorithm on 50 images showed an average difference of 0.17 mm. The time to process all 1,938 images was about 37 s (0.019 s per image), including reading the original sequence file. CONCLUSION: Initial experiments showed the algorithm to be sufficiently accurate and fast for segmenting transverse processes in ultrasound for spinal curvature measurement. An extensive evaluation of the method is currently underway on images from a larger patient cohort and using multiple observers to produce the ground truth segmentation.
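    A minimal, column-wise sketch of the filter-cascade-plus-shadow idea (the Gaussian smoothing, the thresholds, and the per-column simplification are assumptions for illustration, not the paper's parameters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bone_surface_candidates(us_image, intensity_thresh=0.3, shadow_thresh=0.2):
    """Illustrative sketch: smooth the ultrasound frame, keep bright edge
    pixels, and accept a candidate row in each column only if the region
    below it is dark (acoustic shadow).  Returns one surface row per column
    (0 where no bone-like response is found)."""
    img = us_image.astype(float) / (us_image.max() + 1e-12)
    smooth = gaussian_filter(img, sigma=2.0)
    grad = np.diff(smooth, axis=0, prepend=smooth[:1])   # first derivative in depth
    surface = np.zeros(img.shape[1], dtype=int)
    for col in range(img.shape[1]):
        bright = np.where((smooth[:, col] > intensity_thresh) & (grad[:, col] > 0))[0]
        for r in bright[::-1]:                           # deepest bright edge first
            shadow = smooth[r + 5:, col]
            if shadow.size and shadow.mean() < shadow_thresh:
                surface[col] = r                         # bone surface row for this column
                break
    return surface
```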

  12. Interband coding extension of the new lossless JPEG standard

    NASA Astrophysics Data System (ADS)

    Memon, Nasir D.; Wu, Xiaolin; Sippy, V.; Miller, G.

    1997-01-01

    Due to the perceived inadequacy of current standards for lossless image compression, the JPEG committee of the International Standards Organization (ISO) has been developing a new standard. A baseline algorithm, called JPEG-LS, has already been completed and is awaiting approval by national bodies. The JPEG-LS baseline algorithm, despite being simple, is surprisingly efficient, and provides compression performance that is within a few percent of the best and more sophisticated techniques reported in the literature. Extensive experimentation performed by the authors indicates that an overall improvement of more than 10 percent in compression performance will be difficult to obtain, even at the cost of great complexity, at least with traditional approaches to lossless image compression. However, if we allow inter-band decorrelation and modeling in the baseline algorithm, nearly 30 percent improvement in compression gains for specific images in the test set becomes possible with a modest computational cost. In this paper we propose and investigate a few techniques for exploiting inter-band correlations in multi-band images. These techniques have been designed within the framework of the baseline algorithm, and require minimal changes to the basic architecture of the baseline, retaining its essential simplicity.
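    The paper's specific inter-band techniques are not described in the abstract; the toy sketch below only illustrates the general idea of predicting one band from another and coding the residual. The affine predictor and the function name are assumptions, not the JPEG-LS extension itself.

```python
import numpy as np

def interband_residuals(bands):
    """Toy inter-band decorrelation: each band is predicted from the previous
    one by a least-squares affine mapping, and only the prediction residual
    would be entropy-coded.  bands: list of 2-D integer arrays."""
    residuals = [bands[0].astype(np.int32)]              # first band coded as-is
    for prev, cur in zip(bands[:-1], bands[1:]):
        x = prev.astype(float).ravel()
        y = cur.astype(float).ravel()
        a, b = np.polyfit(x, y, 1)                       # affine predictor y ~ a*x + b
        pred = np.rint(a * prev + b)
        residuals.append((cur - pred).astype(np.int32))  # decorrelated residual band
    return residuals
```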

  13. Blind source separation of ex-vivo aorta tissue multispectral images

    PubMed Central

    Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson

    2015-01-01

    Blind Source Separation (BSS) methods aim to decompose a given signal into its main components or source signals. These techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study. The analysis of skin images for the extraction of melanin and hemoglobin is an example of the use of BSS. This paper presents a proof of concept of the use of source separation on ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system and processed by means of two algorithms: Independent Component Analysis and Non-negative Matrix Factorization. In both cases, it is possible to obtain maps that quantify the concentration of the main chromophores present in aortic tissue. The algorithms also recover the spectral absorbance of the main tissue components. These spectral signatures were compared against theoretical ones using correlation coefficients, which reach values close to 0.9, a good indicator of the method's performance. The correlation coefficients also allow the concentration maps to be identified according to the evaluated chromophore. The results suggest that multi/hyperspectral systems together with image processing techniques are a potential tool for the analysis of cardiovascular tissue. PMID:26137366
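    A minimal sketch of the NMF branch using scikit-learn (the component count, initialization, and reshaping of the spectral cube are assumptions; the paper's preprocessing is not reproduced):

```python
import numpy as np
from sklearn.decomposition import NMF

def unmix_chromophores(cube, n_sources=3):
    """Factor a non-negative absorbance cube (H, W, L) into per-pixel
    concentration maps and source spectra via NMF."""
    H, W, L = cube.shape
    X = cube.reshape(-1, L)
    model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
    conc = model.fit_transform(X)             # (pixels, sources) concentrations
    spectra = model.components_               # (sources, wavelengths) signatures
    return conc.reshape(H, W, n_sources), spectra

def spectral_match(estimated, theoretical):
    """Correlation coefficient between an estimated and a reference spectrum,
    the comparison used in the paper to identify chromophores (values ~0.9)."""
    return np.corrcoef(estimated, theoretical)[0, 1]
```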

  14. Automated characterization of perceptual quality of clinical chest radiographs: Validation and calibration to observer preference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samei, Ehsan, E-mail: samei@duke.edu; Lin, Yuan; Choudhury, Kingshuk R.

    Purpose: The authors previously proposed an image-based technique [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] to assess the perceptual quality of clinical chest radiographs. In this study, an observer study was designed and conducted to validate the output of the program against rankings by expert radiologists and to establish the ranges of the output values that reflect the acceptable image appearance so the program output can be used for image quality optimization and tracking. Methods: Using an IRB-approved protocol, 2500 clinical chest radiographs (PA/AP) were collected from our clinical operation. The images were processed through our perceptual quality assessment program to measure their appearance in terms of ten metrics of perceptual image quality: lung gray level, lung detail, lung noise, rib–lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm–lung contrast, and subdiaphragm area. From the results, for each targeted appearance attribute/metric, 18 images were selected such that the images presented a relatively constant appearance with respect to all metrics except the targeted one. The images were then incorporated into a graphical user interface, which displayed them into three panels of six in a random order. Using a DICOM calibrated diagnostic display workstation and under low ambient lighting conditions, each of five participating attending chest radiologists was tasked to spatially order the images based only on the targeted appearance attribute regardless of the other qualities. Once ordered, the observer also indicated the range of image appearances that he/she considered clinically acceptable. The observer data were analyzed in terms of the correlations between the observer and algorithmic rankings and interobserver variability. An observer-averaged acceptable image appearance was also statistically derived for each quality attribute based on the collected individual acceptable ranges. Results: The observer study indicated that, for each image quality attribute, the averaged observer ranking strongly correlated with the algorithmic ranking (linear correlation coefficient R > 0.92), with highest correlation (R = 1) for lung gray level and the lowest (R = 0.92) for mediastinum noise. There was a strong concordance between the observers in terms of their rankings (i.e., Kendall’s tau agreement > 0.84). The observers also generally indicated similar tolerance and preference levels in terms of acceptable ranges, as 85% of the values were close to the overall tolerance or preference levels and the differences were smaller than 0.15. Conclusions: The observer study indicates that the previously proposed technique provides a robust reflection of the perceptual image quality in clinical images. The results established the range of algorithmic outputs for each metric that can be used to quantitatively assess and qualify the appearance quality of clinical chest radiographs.

  15. Gradient-Modulated SWIFT

    PubMed Central

    Zhang, Jinjin; Idiyatullin, Djaudat; Corum, Curtis A.; Kobayashi, Naoharu; Garwood, Michael

    2017-01-01

    Purpose: Methods designed to image fast-relaxing spins, such as sweep imaging with Fourier transformation (SWIFT), often utilize high excitation bandwidth and duty cycle, and in some applications the optimal flip angle cannot be used without exceeding safe specific absorption rate (SAR) levels. The aim is to reduce SAR and increase the flexibility of SWIFT by applying time-varying gradient modulation (GM). The modified sequence is called GM-SWIFT. Theory and Methods: The method known as gradient-modulated offset independent adiabaticity was used to modulate the radiofrequency (RF) pulse and gradients. An expanded correlation algorithm was developed for GM-SWIFT to correct the phase and scale effects. Simulations and phantom and in vivo human experiments were performed to verify the correlation algorithm and to evaluate imaging performance. Results: GM-SWIFT reduces SAR, RF amplitude, and acquisition time by up to 90%, 70%, and 45%, respectively, while maintaining image quality. The choice of GM parameter influences the lower limit of short T2* sensitivity, which can be exploited to suppress unwanted image haze from unresolvable ultrashort T2* signals originating from plastic materials in the coil housing and fixatives. Conclusions: GM-SWIFT reduces peak and total RF power requirements and provides additional flexibility for optimizing SAR, RF amplitude, scan time, and image quality. PMID:25800547

  16. Spectral CT Reconstruction with Image Sparsity and Spectral Mean

    PubMed Central

    Zhang, Yi; Xi, Yan; Yang, Qingsong; Cong, Wenxiang; Zhou, Jiliu

    2017-01-01

    Photon-counting detectors can acquire x-ray intensity data in different energy bins. The signal-to-noise ratio of the resulting raw data in each energy bin is generally low due to the narrow bin width and quantum noise. To address this problem, here we propose an image reconstruction approach for spectral CT that simultaneously reconstructs the x-ray attenuation coefficients in all the energy bins. Because the measured spectral data are highly correlated among the x-ray energy bins, the intra-image sparsity and inter-image similarity are important prior knowledge for image reconstruction. Inspired by this observation, the total variation (TV) and spectral mean (SM) measures are combined to improve the quality of reconstructed images. For this purpose, a linear mapping function is used to minimize image differences between energy bins. The split Bregman technique is applied to perform image reconstruction. Our numerical and experimental results show that the proposed algorithm outperforms competing iterative algorithms in this context. PMID:29034267

  17. Automatic Tracking Of Remote Sensing Precipitation Data Using Genetic Algorithm Image Registration Based Automatic Morphing: September 1999 Storm Floyd Case Study

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.

    Due to the poor temporal sampling by satellites, data gaps exist in satellite-derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image processing technique of digital image morphing has been used. However, the digital morphing technique was applied manually, which is time consuming. In order to avoid human intervention in the process, an automatic procedure for image morphing is needed for real-time operations. For this purpose, the Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, the automatic morphing technique was integrated with a Genetic Algorithm and the Feature Based Image Metamorphosis technique to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data, which are generated from the network of NEXRAD radars. Time series of NOWRAD data from storm Floyd, which occurred over the US eastern region on September 16, 1999, at 00:00, 01:00, 02:00, 03:00, and 04:00 am were used. The GRAM technique was applied to data collected at 00:00 and 04:00 am. These images were also manually morphed. Images at 01:00, 02:00 and 03:00 am were interpolated from the GRAM and manual morphing and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing. The correlation coefficients between the images generated using manual morphing are 0.905, 0.900, and 0.905 for the images at 01:00, 02:00, and 03:00 am, while the corresponding correlation coefficients are 0.946, 0.911, and 0.913, respectively, based on the GRAM technique. Index terms - Remote Sensing, Image Registration, Hydrology, Genetic Algorithm, Morphing, NEXRAD

  18. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for improved performance. Generally, in optical flow computation, filtering is applied first to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade, we identified the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity values and the standard deviation of the Gaussian function was established. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
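    A minimal OpenCV sketch of Gaussian pre-smoothing followed by pyramidal Lucas-Kanade tracking (the fixed sigma and the sparse corner tracking are illustrative; the paper estimates sigma from the image intensities):

```python
import cv2
import numpy as np

def prefiltered_lk_flow(img0, img1, sigma=1.5, max_corners=400):
    """Gaussian pre-filtering followed by pyramidal Lucas-Kanade tracking.
    img0, img1: 8-bit single-channel frames.  Returns matched point pairs."""
    g0 = cv2.GaussianBlur(img0, (0, 0), sigma)
    g1 = cv2.GaussianBlur(img1, (0, 0), sigma)
    pts0 = cv2.goodFeaturesToTrack(g0, maxCorners=max_corners,
                                   qualityLevel=0.01, minDistance=7)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(
        g0, g1, pts0, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    return pts0[ok].reshape(-1, 2), pts1[ok].reshape(-1, 2)
```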

  19. Tractography Verified by Intraoperative Magnetic Resonance Imaging and Subcortical Stimulation During Tumor Resection Near the Corticospinal Tract.

    PubMed

    Münnich, Timo; Klein, Jan; Hattingen, Elke; Noack, Anika; Herrmann, Eva; Seifert, Volker; Senft, Christian; Forster, Marie-Therese

    2018-04-14

    Tractography is a popular tool for visualizing the corticospinal tract (CST). However, results may be influenced by numerous variables, e.g., the selection of seeding regions of interest (ROIs) or the chosen tracking algorithm. Our aim was to compare different variable sets by correlating tractography results with intraoperative subcortical stimulation of the CST, correcting for intraoperative brain shift by the use of intraoperative MRI. Seeding ROIs were created by means of motor cortex segmentation, functional MRI (fMRI), and navigated transcranial magnetic stimulation (nTMS). Based on these ROIs, tractography was run for each patient using a deterministic and a probabilistic algorithm. Tractographies were processed on pre- and postoperatively acquired data. Using a linear mixed effects statistical model, the best correlation between subcortical stimulation intensity and the distance between tractography and stimulation sites was achieved by using the segmented motor cortex as the seeding ROI and applying the probabilistic algorithm to preoperatively acquired imaging sequences. Tractographies based on fMRI or nTMS results differed very little, but with enlargement of positive nTMS sites the stimulation-distance correlation of nTMS-based tractography improved. Our results underline that the use of tractography demands careful interpretation of its virtual results, considering all influencing variables.

  20. Community-based benchmarking improves spike rate inference from two-photon calcium imaging data.

    PubMed

    Berens, Philipp; Freeman, Jeremy; Deneux, Thomas; Chenkov, Nikolay; McColgan, Thomas; Speiser, Artur; Macke, Jakob H; Turaga, Srinivas C; Mineault, Patrick; Rupprecht, Peter; Gerhard, Stephan; Friedrich, Rainer W; Friedrich, Johannes; Paninski, Liam; Pachitariu, Marius; Harris, Kenneth D; Bolte, Ben; Machado, Timothy A; Ringach, Dario; Stone, Jasmine; Rogerson, Luke E; Sofroniew, Nicolas J; Reimer, Jacob; Froudarakis, Emmanouil; Euler, Thomas; Román Rosón, Miroslav; Theis, Lucas; Tolias, Andreas S; Bethge, Matthias

    2018-05-01

    In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.

  1. Adaptive spectral filtering of PIV cross correlations

    NASA Astrophysics Data System (ADS)

    Giarra, Matthew; Vlachos, Pavlos; Aether Lab Team

    2016-11-01

    Using cross correlations (CCs) in particle image velocimetry (PIV) assumes that tracer particles in interrogation regions (IRs) move with the same velocity. But this assumption is nearly always violated because real flows exhibit velocity gradients, which degrade the signal-to-noise ratio (SNR) of the CC and are a major driver of error in PIV. Iterative methods help reduce these errors, but even they can fail when gradients are large within individual IRs. We present an algorithm to mitigate the effects of velocity gradients on PIV measurements. Our algorithm is based on a model of the CC, which predicts a relationship between the PDF of particle displacements and the variation of the correlation's SNR across the Fourier spectrum. We give an algorithm to measure this SNR from the CC, and use this insight to create a filter that suppresses the low-SNR portions of the spectrum. Our algorithm extends to the ensemble correlation, where it accelerates the convergence of the measurement and also reveals the PDF of displacements of the ensemble (and therefore of statistical metrics like diffusion coefficient). Finally, our model provides theoretical foundations for a number of "rules of thumb" in PIV, like the quarter-window rule.
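    As a minimal sketch of the underlying FFT-based window correlation (the adaptive SNR filter itself is only represented by an optional, user-supplied spectral weight; its construction in the paper is not reproduced here):

```python
import numpy as np

def piv_displacement(win_a, win_b, spectral_weight=None):
    """FFT-based cross-correlation of two PIV interrogation windows.  If
    spectral_weight (same shape as the 2-D FFT) is given, the cross-power
    spectrum is attenuated there before the inverse transform -- a stand-in
    for the adaptive SNR-based filter described in the abstract."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    R = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    if spectral_weight is not None:
        R *= spectral_weight
    corr = np.fft.fftshift(np.real(np.fft.ifft2(R)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center              # integer displacement (dy, dx)
```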

  2. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
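    A minimal sketch of the per-block Chebyshev fit and reconstruction, assuming a fixed polynomial degree (degree selection and coefficient quantization in the real algorithm are not covered here):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(samples, degree=8):
    """Fit one fitting interval of a time series with a Chebyshev series of a
    given degree; only the degree+1 coefficients would be transmitted."""
    x = np.linspace(-1.0, 1.0, len(samples))    # map the block onto [-1, 1]
    return C.chebfit(x, samples, degree)

def decompress_block(coeffs, n_samples):
    """Reconstruct the block from its Chebyshev coefficients."""
    x = np.linspace(-1.0, 1.0, n_samples)
    return C.chebval(x, coeffs)

# usage: a 256-sample block reduced to 9 coefficients (ratio ~28:1 before any
# further coefficient quantization)
block = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.01 * np.random.randn(256)
approx = decompress_block(compress_block(block, degree=8), 256)
```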

  3. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet, and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system using a 1000 frame/s optical parallel correlator designed and assembled by us. The operational speed for the 1:N identification experiment (i.e., matching a pair of images among N, where N refers to the number of images in the database; here 4000 face images) amounts to less than 1.5 seconds, including pre/post processing. From trial 1:N identification experiments using FARCO, we obtained low error rates: a 2.6% false reject rate and a 1.3% false accept rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved under various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal sequence of moving images. Applied to natural postures, this algorithm achieved a recognition rate twice as high as that of our conventional system. The system has high potential for future use in a variety of applications, such as searching for criminal suspects using street and airport video cameras, registering newborns at hospitals, or handling very large image databases.

  4. Determination of cloud liquid water content using the SSM/I

    NASA Technical Reports Server (NTRS)

    Alishouse, John C.; Snider, Jack B.; Westwater, Ed R.; Swift, Calvin T.; Ruf, Christopher S.

    1990-01-01

    As part of a calibration/validation effort for the special sensor microwave/imager (SSM/I), coincident observations of SSM/I brightness temperatures and surface-based observations of cloud liquid water were obtained. These observations were used to validate initial algorithms and to derive an improved algorithm. The initial algorithms were divided into latitudinal, seasonal, and surface-type zones. It was found that these initial algorithms, which were of the D-matrix type, did not yield sufficiently accurate results. The correlation of the surface-based measurements with the individual SSM/I channels was investigated; however, the 85V channel was excluded because of excessive noise. It was found that there is no significant correlation between the SSM/I brightness temperatures and the surface-based cloud liquid water determination when the background surface is land or snow. A high correlation was found between brightness temperatures and ground-based measurements over the ocean.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stassi, D.; Ma, H.; Schmidt, T. G., E-mail: taly.gilat-schmidt@marquette.edu

    Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five point Likert scale. Results: There was no statistically significant difference between inter-reader and reader-algorithm agreement for either MAD or CCC metrics (p > 0.1). The algorithm phase was within 2% of the consensus phase in 15/21 of cases. The average absolute difference between consensus and algorithm best phases was 2.29% ± 2.47%, with a maximum difference of 8%. Average image quality scores for the algorithm chosen best phase were 4.01 ± 0.65 overall, 3.33 ± 1.27 for right coronary artery (RCA), 4.50 ± 0.35 for left anterior descending (LAD) artery, and 4.50 ± 0.35 for left circumflex artery (LCX). Average image quality scores for the consensus best phase were 4.11 ± 0.54 overall, 3.44 ± 1.03 for RCA, 4.39 ± 0.39 for LAD, and 4.50 ± 0.18 for LCX. There was no statistically significant difference (p > 0.1) between the image quality scores of the algorithm phase and the consensus phase. Conclusions: The proposed algorithm was statistically equivalent to a reader in selecting an optimal cardiac phase for CCTA exams. When reader and algorithm phases differed by >2%, image quality as rated by blinded readers was statistically equivalent. By detecting the optimal phase for CCTA reconstruction, the proposed algorithm is expected to improve coronary artery visualization in CCTA exams.
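    The aggregation step can be sketched as follows; the per-slice circularity/edge-strength metric itself is not recomputed here, and the array names are illustrative assumptions:

```python
import numpy as np

def select_best_phase(iq, vessel_consistent):
    """iq[p, s]: vessel image-quality score for phase p and slice s;
    vessel_consistent[p, s]: True where the slice is judged to contain a
    through-plane vessel.  Slices without consistent vessels are ignored and
    the phase with the highest mean score is returned."""
    masked = np.where(vessel_consistent, iq, np.nan)
    phase_score = np.nanmean(masked, axis=1)     # aggregate across slices
    return int(np.nanargmax(phase_score)), phase_score
```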

  6. A secure image encryption method based on dynamic harmony search (DHS) combined with chaotic map

    NASA Astrophysics Data System (ADS)

    Mirzaei Talarposhti, Khadijeh; Khaki Jamei, Mehrzad

    2016-06-01

    In recent years, there has been increasing interest in the security of digital images. This study focuses on gray-scale image encryption using dynamic harmony search (DHS). In this research, a chaotic map is first used to create cipher images, and then the maximum entropy and minimum correlation coefficient are obtained by applying a harmony search algorithm to them. This process is divided into two steps. In the first step, the plain image is diffused using DHS with entropy maximization as the fitness function. In the second step, horizontal and vertical permutations are applied to the best cipher image obtained in the previous step, with DHS used to minimize the correlation coefficient as the fitness function. The simulation results show that, using the proposed method, a maximum entropy of approximately 7.9998 and a minimum correlation coefficient of approximately 0.0001 are obtained.
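    The two fitness measures can be written down directly; this is a generic implementation of the entropy and adjacent-pixel correlation the study optimizes, not the DHS search itself:

```python
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit image (an ideal cipher image is close to 8)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def adjacent_correlation(img, axis=1):
    """Correlation coefficient of horizontally (axis=1) or vertically (axis=0)
    adjacent pixel pairs; a well-permuted cipher image should be near 0."""
    a = img.astype(float)
    x = np.take(a, range(a.shape[axis] - 1), axis=axis).ravel()
    y = np.take(a, range(1, a.shape[axis]), axis=axis).ravel()
    return np.corrcoef(x, y)[0, 1]
```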

  7. Machine learning based detection of age-related macular degeneration (AMD) and diabetic macular edema (DME) from optical coherence tomography (OCT) images

    PubMed Central

    Wang, Yu; Zhang, Yaonan; Yao, Zhaomin; Zhao, Ruixue; Zhou, Fengfeng

    2016-01-01

    Non-lethal macular diseases greatly impact patients' quality of life and cause vision loss at late stages. Visual inspection of optical coherence tomography (OCT) images by experienced clinicians is the main diagnostic technique. We proposed a computer-aided diagnosis (CAD) model to discriminate age-related macular degeneration (AMD), diabetic macular edema (DME) and the healthy macula. The linear configuration pattern (LCP) based features of the OCT images were screened by the Correlation-based Feature Subset (CFS) selection algorithm, and the best model, based on the sequential minimal optimization (SMO) algorithm, achieved 99.3% overall accuracy for the three classes of samples. PMID:28018716

  8. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure.

    PubMed

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-01

    Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy was less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.

  9. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Joscha, E-mail: joscha.maier@dkfz.de; Sawall, Stefan; Kachelrieß, Marc

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels, which were simulated for the real mouse data sets, the HDTV algorithm shows the best performance. At 50 mGy, the deviation from the reference obtained at 500 mGy was less than 4%. The LDPC algorithm also provides reasonable results, with deviations of less than 10% at 50 mGy, while the PCF and MKB reconstructions show larger deviations even at higher dose levels. Conclusions: LDPC and HDTV increase CNR and allow for quantitative evaluations even at dose levels as low as 50 mGy. The left ventricular volumes illustrate that cardiac parameters can be accurately estimated at the lowest dose levels if sophisticated algorithms are used. This allows the dose to be reduced by a factor of 10 compared to today's gold standard and opens new options for longitudinal studies of the heart.

  10. A novel iris patterns matching algorithm of weighted polar frequency correlation

    NASA Astrophysics Data System (ADS)

    Zhao, Weijie; Jiang, Linhua

    2014-11-01

    Iris recognition is recognized as one of the most accurate techniques for biometric authentication. In this paper, we present a novel correlation method - Weighted Polar Frequency Correlation (WPFC) - to match and evaluate two iris images; in fact, it can also be used to evaluate the similarity of any two images. The WPFC method is a novel matching and evaluation method for iris image matching, which is completely different from conventional methods. For instance, the classical John Daugman method of iris recognition uses 2D Gabor wavelets to extract features of the iris image into a compact bit stream, and then matches two bit streams using the Hamming distance. Our new method is based on correlation in the polar coordinate system in the frequency domain with regulated weights. The new method is motivated by the observation that the iris pattern carrying far more information for recognition is the fine structure at high frequency, rather than the gross shape of the iris image. Therefore, we transform iris images into the frequency domain and assign different weights to frequencies. We then calculate the correlation of two iris images in the frequency domain and evaluate the iris images by summing the discrete correlation values with regulated weights, comparing the value with a preset threshold to tell whether the two iris images were captured from the same person or not. Experiments are carried out on both the CASIA database and self-obtained images. The results show that our method is functional and reliable. Our method provides a new prospect for iris recognition systems.
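    A minimal sketch of the weighted frequency-domain comparison on already polar-unwrapped iris images (the weight map, normalization, and threshold are illustrative assumptions; the paper's regulated weights are not reproduced):

```python
import numpy as np

def weighted_frequency_similarity(iris_a, iris_b, weights):
    """Compare two polar-unwrapped iris images (rows = radius, columns = angle)
    in the frequency domain with a frequency-dependent weight that emphasizes
    fine, high-frequency structure.  Returns a similarity in [0, 1] to be
    compared against a preset threshold."""
    Fa = np.fft.fft2(iris_a - iris_a.mean())
    Fb = np.fft.fft2(iris_b - iris_b.mean())
    num = np.abs(np.sum(weights * Fa * np.conj(Fb)))
    den = np.sqrt(np.sum(weights * np.abs(Fa) ** 2) *
                  np.sum(weights * np.abs(Fb) ** 2))
    return num / (den + 1e-12)

def highpass_weights(shape, cutoff=0.1):
    """Example weight map: suppress frequencies below a normalized cutoff."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return (np.sqrt(fy ** 2 + fx ** 2) > cutoff).astype(float)
```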

  11. Image correlation method for DNA sequence alignment.

    PubMed

    Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván

    2012-01-01

    The complexity of searches and the volume of genomic data make sequence alignment one of the most active research areas in bioinformatics. New alignment approaches have incorporated digital signal processing techniques. Among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a pixel of fixed gray intensity. Query and known database sequences are coded into their pixel representations, and sequence alignment is handled as an object-recognition-in-a-scene problem: query and database become object and scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper describes an initial research stage where results were obtained "digitally" by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (of variable length, from 50 to 4500 base pairs) and 100 scenes represented by 100 x 100 images each (in total, a one-million-base-pair database) were considered for the image correlation analysis. The results showed that the correlations reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST as the number of mutations increased. However, the digital correlation process was a hundred times slower than BLAST. We are currently starting an initiative to evaluate the speed of the correlation process on a real experimental optical correlator. By doing this, we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with the computer, the digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
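    A minimal digital simulation of the idea (the specific gray levels, the 100-pixel row width, and the FFT-based correlation are assumptions standing in for the optical correlator):

```python
import numpy as np
from scipy.signal import fftconvolve

GRAY = {"A": 0.2, "C": 0.4, "G": 0.6, "T": 0.8}   # illustrative gray levels

def seq_to_image(seq, width=100):
    """Code a nucleotide string into a fixed-width gray-level image, zero-padding
    the last row (the 100x100 'scene' layout follows the paper)."""
    vals = np.array([GRAY.get(b, 0.0) for b in seq.upper()])
    rows = int(np.ceil(len(vals) / width))
    img = np.zeros(rows * width)
    img[:len(vals)] = vals
    return img.reshape(rows, width)

def correlate_query(scene_img, query_img):
    """FFT-based correlation of the query 'object' against the database 'scene';
    the peak location indicates the best alignment."""
    q = query_img - query_img.mean()
    corr = fftconvolve(scene_img, q[::-1, ::-1], mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak, corr.max()
```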

  12. Design of a cathodoluminescence image generator using a Raspberry Pi coupled to a scanning electron microscope.

    PubMed

    Benítez, Alfredo; Santiago, Ulises; Sanchez, John E; Ponce, Arturo

    2018-01-01

    In this work, an innovative cathodoluminescence (CL) system is coupled to a scanning electron microscope and synchronized with a Raspberry Pi computer that performs the signal processing. The post-processing is based on a Python algorithm that correlates the CL and secondary electron (SE) images with a precise dwell-time correction. For CL imaging, the emission signal is collected through an optical fiber and transduced to an electrical signal via a photomultiplier tube (PMT). CL images are recorded in a panchromatic mode and can be filtered using a monochromator connected between the optical fiber and the PMT to produce monochromatic CL images. The designed system has been employed to study ZnO samples prepared by electrical arc discharge and microwave methods. CL images are compared with SE images and chemical elemental mapping images to correlate the emission regions of the sample.

  13. Design of a cathodoluminescence image generator using a Raspberry Pi coupled to a scanning electron microscope

    NASA Astrophysics Data System (ADS)

    Benítez, Alfredo; Santiago, Ulises; Sanchez, John E.; Ponce, Arturo

    2018-01-01

    In this work, an innovative cathodoluminescence (CL) system is coupled to a scanning electron microscope and synchronized with a Raspberry Pi computer that performs the signal processing. The post-processing is based on a Python algorithm that correlates the CL and secondary electron (SE) images with a precise dwell-time correction. For CL imaging, the emission signal is collected through an optical fiber and transduced to an electrical signal via a photomultiplier tube (PMT). CL images are recorded in a panchromatic mode and can be filtered using a monochromator connected between the optical fiber and the PMT to produce monochromatic CL images. The designed system has been employed to study ZnO samples prepared by electrical arc discharge and microwave methods. CL images are compared with SE images and chemical elemental mapping images to correlate the emission regions of the sample.

  14. Comprehensive evaluation of an image segmentation technique for measuring tumor volume from CT images

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun

    2008-03-01

    Comprehensive quantitative evaluation of tumor segmentation techniques on large-scale clinical data sets is crucial for routine clinical use of CT-based tumor volumetry for cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma and other types of cancer. The performance was evaluated using both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that can provide complementary information regarding the quality of the segmentation results. The reproducibility was measured by the variation of the volume measurements from 10 independent segmentations. The effect of disease type, lesion size and slice thickness of image data on the accuracy measures was also analyzed. Our results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma and other, respectively). The segmentation algorithm can produce relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and slice thickness of image data (p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and assist large-scale evaluation of segmentation techniques for other clinical applications.

  15. Research on the algorithm of infrared target detection based on the frame difference and background subtraction method

    NASA Astrophysics Data System (ADS)

    Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian

    2015-09-01

    As an important branch of infrared imaging technology, infrared target tracking and detection has great scientific value and a wide range of applications in both military and civilian areas. For infrared images, which are characterized by low SNR and strong background noise, an effective target detection algorithm is proposed in this paper, based on OpenCV and exploiting the frame-to-frame correlation of the moving target and the lack of correlation of the noise in sequential images. Firstly, since temporal differencing and background subtraction are highly complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, the infrared moving target can be detected more accurately. This paves the way for eventually realizing real-time infrared target detection and tracking once the OpenCV algorithms are ported to a DSP platform. Afterwards, we use optimal thresholding to segment the image, transforming the gray-level images into binary images to provide a better basis for detection in the image sequences. Finally, based on the correspondence of moving objects between frames and mathematical morphology processing, we can eliminate noise, reduce spurious areas, and smooth region boundaries. Experimental results prove that our algorithm achieves rapid detection of small infrared targets.
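    A minimal OpenCV sketch of the combined frame-difference and background-subtraction scheme (MOG2 and the fixed threshold are illustrative stand-ins for the paper's adaptive background update and optimal thresholding):

```python
import cv2

def detect_moving_targets(frames, diff_thresh=25, min_area=4):
    """frames: iterable of 8-bit grayscale frames.  The background-subtraction
    mask is ANDed with a simple frame difference, thresholded, cleaned by
    morphology, and small target candidates are returned as bounding boxes."""
    backsub = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    prev = None
    detections = []
    for frame in frames:
        fg = backsub.apply(frame)                         # adaptive background model
        if prev is not None:
            diff = cv2.absdiff(frame, prev)               # temporal differencing
            _, diff = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            mask = cv2.bitwise_and(fg, diff)              # combine the two cues
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            detections.append([cv2.boundingRect(c) for c in contours
                               if cv2.contourArea(c) >= min_area])
        prev = frame
    return detections
```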

  16. A Markov model for blind image separation by a mean-field EM algorithm.

    PubMed

    Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele

    2006-02-01

    This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited), and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even when the noise is space-variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.

  17. Enhanced truncated-correlation photothermal coherence tomography with application to deep subsurface defect imaging and 3-dimensional reconstructions

    NASA Astrophysics Data System (ADS)

    Tavakolian, Pantea; Sivagurunathan, Koneswaran; Mandelis, Andreas

    2017-07-01

    Photothermal diffusion-wave imaging is a promising technique for non-destructive evaluation and medical applications. Several diffusion-wave techniques have been developed to produce depth-resolved planar images of solids and to overcome the imaging depth and image blurring limitations imposed by the physics of parabolic diffusion waves. Truncated-Correlation Photothermal Coherence Tomography (TC-PCT) is the most successful class of these methodologies to date, providing 3-D subsurface visualization with maximum depth penetration and high axial and lateral resolution. To extend the depth range and the axial and lateral resolution, an in-depth analysis of TC-PCT, a novel imaging system with improved instrumentation, and a reconstruction algorithm optimized over the original TC-PCT technique are developed. Thermal waves produced by a laser chirped pulsed heat source in a finite-thickness solid and the image reconstruction algorithm are investigated from the theoretical point of view. 3-D visualization of subsurface defects utilizing the new TC-PCT system is reported. The results demonstrate that this method is able to detect subsurface defects at depths of ˜4 mm in a steel sample, which represents a dynamic range improvement by a factor of 2.6 compared to the original TC-PCT. This depth does not represent the upper limit of the enhanced TC-PCT. Lateral resolution in the steel sample was measured to be ˜31 μm.

  18. An improved algorithm to reduce noise in high-order thermal ghost imaging.

    PubMed

    Chen, Xi-Hao; Wu, Shuang-Shuang; Wu, Wei; Guo, Wang-Yuan; Meng, Shao-Ying; Sun, Zhi-Bin; Zhai, Guang-Jie; Li, Ming-Fei; Wu, Ling-An

    2014-09-01

    A modified Nth-order correlation function is derived that can effectively remove the noise background encountered in high-order thermal light ghost imaging (GI). Based on this, the quality of the reconstructed images in an Nth-order lensless GI setup has been greatly enhanced compared to former high-order schemes for the same sampling number. In addition, the dependence of the visibility and signal-to-noise ratio for different high-order images on the sampling number has been measured and compared.
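    For context, the conventional second-order reconstruction that the modified correlation function builds on can be sketched as follows (the Nth-order, background-removing form derived in the paper is not reproduced here):

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Conventional second-order ghost-image reconstruction: patterns is the
    (N, H, W) reference speckle stack, bucket is the N bucket-detector values.
    The image is the covariance of the bucket signal with the reference field."""
    b = bucket - bucket.mean()
    return np.tensordot(b, patterns - patterns.mean(axis=0), axes=(0, 0)) / len(b)
```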

  19. Text Extraction from Scene Images by Character Appearance and Structure Modeling

    PubMed Central

    Yi, Chucai; Tian, Yingli

    2012-01-01

    In this paper, we propose a novel algorithm to detect text information from natural scene images. Scene text classification and detection are still open research topics. Our proposed algorithm is able to model both character appearance and structure to generate representative and discriminative text descriptors. The contributions of this paper include three aspects: 1) a new character appearance model by a structure correlation algorithm which extracts discriminative appearance features from detected interest points of character samples; 2) a new text descriptor based on structons and correlatons, which model character structure by structure differences among character samples and structure component co-occurrence; and 3) a new text region localization method by combining color decomposition, character contour refinement, and string line alignment to localize character candidates and refine detected text regions. We perform three groups of experiments to evaluate the effectiveness of our proposed algorithm, including text classification, text detection, and character identification. The evaluation results on benchmark datasets demonstrate that our algorithm achieves the state-of-the-art performance on scene text classification and detection, and significantly outperforms the existing algorithms for character identification. PMID:23316111

  20. Nuclear IHC enumeration: A digital phantom to evaluate the performance of automated algorithms in digital pathology.

    PubMed

    Niazi, Muhammad Khalid Khan; Abas, Fazly Salleh; Senaras, Caglar; Pennell, Michael; Sahiner, Berkman; Chen, Weijie; Opfer, John; Hasserjian, Robert; Louissaint, Abner; Shana'ah, Arwa; Lozanski, Gerard; Gurcan, Metin N

    2018-01-01

    Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC positive cells. Our phantom development consists of two main steps, 1) extraction of the individual as well as nuclei clumps of both positive and negative nuclei from real WSI images, and 2) systematic placement of the extracted nuclei clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologist and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the area of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
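    A minimal implementation of the agreement statistic reported above (Lin's concordance correlation coefficient between estimated and true positive-nuclei ratios); the function name is illustrative:

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two sets of ratios."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```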

  1. The ANACONDA algorithm for deformable image registration in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weistrand, Ola; Svensson, Stina, E-mail: stina.svensson@raysearchlabs.com

    2015-01-15

    Purpose: The purpose of this work was to describe a versatile algorithm for deformable image registration with applications in radiotherapy and to validate it on thoracic 4DCT data as well as CT/cone beam CT (CBCT) data. Methods: ANAtomically CONstrained Deformation Algorithm (ANACONDA) combines image information (i.e., intensities) with anatomical information as provided by contoured image sets. The registration problem is formulated as a nonlinear optimization problem and solved with an in-house developed solver, tailored to this problem. The objective function, which is minimized during optimization, is a linear combination of four nonlinear terms: 1. an image similarity term; 2. a grid regularization term, which aims at keeping the deformed image grid smooth and invertible; 3. a shape-based regularization term which works to keep the deformation anatomically reasonable when regions of interest are present in the reference image; and 4. a penalty term which is added to the optimization problem when controlling structures are used, aimed at deforming the selected structure in the reference image to the corresponding structure in the target image. Results: To validate ANACONDA, the authors have used 16 publicly available thoracic 4DCT data sets for which target registration errors from several algorithms have been reported in the literature. On average for the 16 data sets, the target registration error is 1.17 ± 0.87 mm, Dice similarity coefficient is 0.98 for the two lungs, and image similarity, measured by the correlation coefficient, is 0.95. The authors have also validated ANACONDA using two pelvic cases and one head and neck case with planning CT and daily acquired CBCT. Each image has been contoured by a physician (radiation oncologist) or experienced radiation therapist. The results are an improvement with respect to rigid registration. However, for the head and neck case, the sample set is too small to show statistical significance. Conclusions: ANACONDA performs well in comparison with other algorithms. By including CT/CBCT data in the validation, the various aspects of the algorithm such as its ability to handle different modalities, large deformations, and air pockets are shown.
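
    The composite objective described above lends itself to a compact sketch. The Python fragment below is an illustration only, with simplified image-similarity and grid-regularization terms and placeholder weights; the shape-based and controlling-structure terms, and the vendor's actual implementation, are not reproduced:

      # Minimal sketch of a two-term registration objective (image similarity plus
      # grid regularization); weights and term definitions are illustrative assumptions.
      import numpy as np
      from scipy import ndimage

      def warp(moving, disp):
          """Warp a 2D image by a displacement field disp of shape (2, H, W)."""
          h, w = moving.shape
          yy, xx = np.mgrid[0:h, 0:w].astype(float)
          return ndimage.map_coordinates(moving, [yy + disp[0], xx + disp[1]], order=1)

      def image_similarity(fixed, warped):
          """Negative correlation coefficient (lower is better)."""
          return -np.corrcoef(fixed.ravel(), warped.ravel())[0, 1]

      def grid_regularization(disp):
          """Penalize non-smooth displacement fields via squared finite differences."""
          gy = np.diff(disp, axis=1)
          gx = np.diff(disp, axis=2)
          return (gy ** 2).sum() + (gx ** 2).sum()

      def objective(disp, fixed, moving, w=(1.0, 0.01)):
          # The shape-based term and the controlling-structure penalty would be
          # added analogously as further weighted terms.
          warped = warp(moving, disp)
          return w[0] * image_similarity(fixed, warped) + w[1] * grid_regularization(disp)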

  2. Adaptation of reference volumes for correlation-based digital holographic particle tracking

    NASA Astrophysics Data System (ADS)

    Hesseling, Christina; Peinke, Joachim; Gülker, Gerd

    2018-04-01

    Numerically reconstructed reference volumes tailored to particle images are used for particle position detection by means of three-dimensional correlation. After a first tracking of these positions, the experimentally recorded particle images are retrieved as a posteriori knowledge about the particle images in the system. This knowledge is used for a further refinement of the detected positions. A transparent description of the individual algorithm steps, including the results retrieved with experimental data, completes the paper. The work employs extraordinarily small particles, smaller than the pixel pitch of the camera sensor. It is the first approach known to the authors that combines numerical knowledge about particle images and particle images retrieved from the experimental system into an iterative particle tracking approach for digital holographic particle tracking velocimetry.

  3. Development and Demonstration of a Networked Telepathology 3-D Imaging, Databasing, and Communication System

    DTIC Science & Technology

    1996-10-01

    aligned using an octree search algorithm combined with cross correlation analysis. Successive 4x downsampling with optional and specifiable neighborhood...desired and the search engine embedded in the OODBMS will find the requested imagery and queue it to the user for further analysis. This application was...obtained during Hoffmann-La Roche production pathology imaging performed at UMICH. Versant works well and is easy to use; 3) Pathology Image Analysis

  4. Investigation of four-dimensional computed tomography-based pulmonary ventilation imaging in patients with emphysematous lung regions

    NASA Astrophysics Data System (ADS)

    Yamamoto, Tokihiro; Kabus, Sven; Klinder, Tobias; Lorenz, Cristian; von Berg, Jens; Blaffert, Thomas; Loo, Billy W., Jr.; Keall, Paul J.

    2011-04-01

    A pulmonary ventilation imaging technique based on four-dimensional (4D) computed tomography (CT) has advantages over existing techniques. However, physiologically accurate 4D-CT ventilation imaging has not been achieved in patients. The purpose of this study was to evaluate 4D-CT ventilation imaging by correlating ventilation with emphysema. Emphysematous lung regions are less ventilated and can be used as surrogates for low ventilation. We tested the hypothesis: 4D-CT ventilation in emphysematous lung regions is significantly lower than in non-emphysematous regions. Four-dimensional CT ventilation images were created for 12 patients with emphysematous lung regions as observed on CT, using a total of four combinations of two deformable image registration (DIR) algorithms: surface-based (DIRsur) and volumetric (DIRvol), and two metrics: Hounsfield unit (HU) change (VHU) and Jacobian determinant of deformation (VJac), yielding four ventilation image sets per patient. Emphysematous lung regions were detected by density masking. We tested our hypothesis using the one-tailed t-test. Visually, different DIR algorithms and metrics yielded spatially variant 4D-CT ventilation images. The mean ventilation values in emphysematous lung regions were consistently lower than in non-emphysematous regions for all the combinations of DIR algorithms and metrics. VHU resulted in statistically significant differences for both DIRsur (0.14 ± 0.14 versus 0.29 ± 0.16, p = 0.01) and DIRvol (0.13 ± 0.13 versus 0.27 ± 0.15, p < 0.01). However, VJac resulted in non-significant differences for both DIRsur (0.15 ± 0.07 versus 0.17 ± 0.08, p = 0.20) and DIRvol (0.17 ± 0.08 versus 0.19 ± 0.09, p = 0.30). This study demonstrated the strong correlation between the HU-based 4D-CT ventilation and emphysema, which indicates the potential for HU-based 4D-CT ventilation imaging to achieve high physiologic accuracy. A further study is needed to confirm these results.
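
    For concreteness, the two metric families can be sketched as below (Python). The HU-change form follows the commonly used Guerrero-type air-fraction formulation and the Jacobian form returns det(J) − 1; the exact definitions and normalizations used in the study may differ, so treat this purely as an illustration:

      # Hedged sketch of per-voxel 4D-CT ventilation metrics; not the study's exact code.
      import numpy as np

      def ventilation_hu(hu_exhale, hu_inhale_warped):
          """HU-change ventilation (fractional air-volume change); HU values must lie
          in the lung range (negative), with the inhale image warped to exhale space."""
          return 1000.0 * (hu_inhale_warped - hu_exhale) / (
              hu_exhale * (1000.0 + hu_inhale_warped))

      def ventilation_jacobian(disp):
          """Jacobian-based ventilation from a displacement field of shape (3, Z, Y, X)."""
          grads = [np.gradient(disp[i]) for i in range(3)]          # d(u_i)/d(axis_j)
          jac = np.eye(3)[:, :, None, None, None] + np.array(
              [[grads[i][j] for j in range(3)] for i in range(3)])
          det = (jac[0, 0] * (jac[1, 1] * jac[2, 2] - jac[1, 2] * jac[2, 1])
                 - jac[0, 1] * (jac[1, 0] * jac[2, 2] - jac[1, 2] * jac[2, 0])
                 + jac[0, 2] * (jac[1, 0] * jac[2, 1] - jac[1, 1] * jac[2, 0]))
          return det - 1.0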

  5. Speckle correlation resolution enhancement of wide-field fluorescence imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yilmaz, Hasan

    2016-03-01

    Structured illumination enables high-resolution fluorescence imaging of nanostructures [1]. We demonstrate a new high-resolution fluorescence imaging method that uses a scattering layer with a high-index substrate as a solid immersion lens [2]. Random scattering of coherent light generates a speckle pattern with a very fine structure that illuminates the fluorescent nanospheres on the back surface of the high-index substrate. The speckle pattern is raster-scanned over the fluorescent nanospheres using a speckle correlation effect known as the optical memory effect. A series of standard-resolution fluorescence images is recorded for each speckle pattern displacement by an electron-multiplying CCD camera using a commercial microscope objective. We have developed a new phase-retrieval algorithm to reconstruct a high-resolution, wide-field image from several standard-resolution wide-field images. We have introduced the phase information of the Fourier components of the standard-resolution images as a new constraint in our algorithm, which discards ambiguities and therefore ensures convergence to a unique solution. We demonstrate two-dimensional fluorescence images of a collection of nanospheres with a deconvolved Abbe resolution of 116 nm and a field of view of 10 µm × 10 µm. Our method is robust against optical aberrations and stage drifts, and is therefore excellent for imaging nanostructures under ambient conditions. [1] M. G. L. Gustafsson, J. Microsc. 198, 82-87 (2000). [2] H. Yilmaz, E. G. van Putten, J. Bertolotti, A. Lagendijk, W. L. Vos, and A. P. Mosk, Optica 2, 424-429 (2015).

  6. A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs.

    PubMed

    Zheng, Yu; Yang, Yang; Chen, Wu

    2017-06-25

    In this paper, a novel range compression algorithm for enhancing range resolutions of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, range compression is first carried out within each azimuth bin by correlating a reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results to suppress side lobes and obtain the final range-compressed signal. Both theoretical analysis and simulation results demonstrate that a significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm compared to the conventional range compression algorithm.
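
    A minimal numpy sketch of the two stages named above, correlation-based range compression followed by a simple spectrum-equalization step, is given below; the zero-padding length, the Wiener-style regularization constant eps, and the particular equalization choice are illustrative assumptions rather than the authors' processing chain:

      # Sketch of one azimuth bin: correlate the reflected signal with the direct
      # replica in the frequency domain, then flatten the spectrum to suppress side lobes.
      import numpy as np

      def range_compress(reflected, direct_baseband, eps=1e-3):
          n = len(reflected)
          R = np.fft.fft(reflected, 2 * n)
          D = np.fft.fft(direct_baseband, 2 * n)
          corr_spec = R * np.conj(D)                      # correlation via the FFT
          equalized = corr_spec / (np.abs(D) ** 2 + eps)  # Wiener-like spectrum equalization
          return np.fft.ifft(equalized)[:n]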

  7. Incoherent coincidence imaging of space objects

    NASA Astrophysics Data System (ADS)

    Mao, Tianyi; Chen, Qian; He, Weiji; Gu, Guohua

    2016-10-01

    Incoherent Coincidence Imaging (ICI), which is based on the second- or higher-order correlation of a fluctuating light field, offers great potential compared with standard conventional imaging. However, the deployment of a reference arm limits its practical application in the detection of space objects. In this article, an optical aperture synthesis with electronically connected single-pixel photo-detectors is proposed to remove the reference arm. The correlation in our proposed method is the second-order correlation between the intensity fluctuations observed by any two detectors. With appropriate locations of the single-pixel detectors, this second-order correlation simplifies to the absolute-square Fourier transform of the source and the unknown object. We demonstrate image recovery with Gerchberg-Saxton-like algorithms and investigate the reconstruction quality of our approach. Numerical experiments have been performed to show that both binary and gray-scale objects can be recovered. The proposed method provides an effective approach to promote the detection of space objects and perhaps even exoplanets.

  8. Extended Scene SH Wavefront Sensor Algorithm: Minimization of Scene Content Dependent Shift Estimation Errors

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    The Adaptive Periodic-Correlation (APC) algorithm was developed for use in extended-scene Shack-Hartmann wavefront sensors. It provides high accuracy even when the sub-images in a frame captured by a Shack-Hartmann camera are not only shifted but also distorted relative to each other. Recently we found that the shift-estimate error of the APC algorithm has a component that depends on the content of the extended scene. In this paper we assess the amount of that error and propose a method to minimize it.

  9. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    PubMed

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. A kernel generalization, namely kernel CCA, has been proposed to describe the nonlinear relationship between two sets of variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional feature selection problems, it also suffers from the so-called over-fitting phenomenon. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed, (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number, and (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results demonstrate the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Task-specific image partitioning.

    PubMed

    Kim, Sungwoong; Nowozin, Sebastian; Kohli, Pushmeet; Yoo, Chang D

    2013-02-01

    Image partitioning is an important preprocessing step for many of the state-of-the-art algorithms used for performing high-level computer vision tasks. Typically, partitioning is conducted without regard to the task at hand. We propose a task-specific image partitioning framework to produce a region-based image representation that leads to higher task performance than that reached using any task-oblivious partitioning framework or the existing, albeit few, supervised partitioning frameworks. The proposed method partitions the image by means of correlation clustering, maximizing a linear discriminant function defined over a superpixel graph. The parameters of the discriminant function that define task-specific similarity/dissimilarity among superpixels are estimated with a structured support vector machine (S-SVM) using task-specific training data. The S-SVM learning leads to better generalization ability, while the construction of the superpixel graph used to define the discriminant function allows a rich set of features to be incorporated to improve discriminability and robustness. We evaluate the learned task-aware partitioning algorithms on three benchmark datasets. Results show that task-aware partitioning leads to better labeling performance than the partitioning computed by the state-of-the-art general-purpose and supervised partitioning algorithms. We believe that the task-specific image partitioning paradigm is widely applicable to improving performance in high-level image understanding tasks.

  11. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are widely adopted in image super-resolution. In this paper, we propose a new learning-based approach using the contourlet transform and a Markov random field. The proposed algorithm employs the contourlet transform rather than the conventional wavelet to represent image features and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with experiments on facial, vehicle plate, and real scene images. Better visual quality is achieved in terms of peak signal-to-noise ratio and the image structural similarity measure.

  12. Estimation bias from using nonlinear Fourier plane correlators for sub-pixel image shift measurement and implications for the binary joint transform correlator

    NASA Astrophysics Data System (ADS)

    Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.

    2007-09-01

    When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
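
    For reference, the quadratic (parabolic) interpolation whose bias is analyzed here can be written in a few lines. The sketch below operates on a 1D slice through the correlation peak and reproduces the biased estimator under discussion, not the authors' corrected interpolation:

      # Parabolic sub-pixel peak interpolation on a 1D correlation curve (illustrative).
      import numpy as np

      def quadratic_subpixel_peak(corr):
          """Return the sub-pixel peak location of a 1D correlation curve."""
          k = int(np.argmax(corr))
          if k == 0 or k == len(corr) - 1:
              return float(k)                       # cannot interpolate at the boundary
          ym, y0, yp = corr[k - 1], corr[k], corr[k + 1]
          delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)   # vertex of the fitted parabola
          return k + delta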

  13. Correlation between polarization sensitive optical coherence tomography and SHG microscopy in articular cartilage

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Ju, Myeong Jin; Huang, Lin; Tang, Shuo

    2017-02-01

    Polarization-sensitive optical coherence tomography (PS-OCT) and second harmonic generation (SHG) microscopy are two imaging modalities with different resolutions, field-of-views (FOV), and contrasts, while they both have the capability of imaging collagen fibers in biological tissues. PS-OCT can measure the tissue birefringence which is induced by highly organized fibers while SHG can image the collagen fiber organization with high resolution. Articular cartilage, with abundant structural collagen fibers, is a suitable sample to study the correlation between PS-OCT and SHG microscopy. Qualitative conjecture has been made that the phase retardation measured by PS-OCT is affected by the relationship between the collagen fiber orientation and the illumination direction. Anatomical studies show that the multilayered architecture of articular cartilage can be divided into four zones from its natural surface to the subchondral bone: the superficial zone, the middle zone, the deep zone, and the calcified zone. The different zones have different collagen fiber orientations, which can be studied by the different slopes in the cumulative phase retardation in PS-OCT. An algorithm is developed based on the quantitative analysis of PS-OCT phase retardation images to analyze the microstructural features in swine articular cartilage tissues. This algorithm utilizes the depth-dependent slope changing of phase retardation A-lines to segment structural layers. The results show good consistency with the knowledge of cartilage morphology and correlation with the SHG images measured at selected depth locations. The correlation between PS-OCT and SHG microscopy shows that PS-OCT has the potential to analyze both the macro and micro characteristics of biological tissues with abundant collagen fibers and other materials that may cause birefringence.

  14. Improving arrival time identification in transient elastography

    NASA Astrophysics Data System (ADS)

    Klein, Jens; McLaughlin, Joyce; Renzi, Daniel

    2012-04-01

    In this paper, we improve the first step in the arrival time algorithm used for shear wave speed recovery in transient elastography. In transient elastography, a shear wave is initiated at the boundary and the interior displacement of the propagating shear wave is imaged with an ultrasound ultra-fast imaging system. The first step in the arrival time algorithm finds the arrival times of the shear wave by cross correlating displacement time traces (the time history of the displacement at a single point) with a reference time trace located near the shear wave source. The second step finds the shear wave speed from the arrival times. In performing the first step, we observe that the wave pulse decorrelates as it travels through the medium, which leads to inaccurate estimates of the arrival times and ultimately to blurring and artifacts in the shear wave speed image. In particular, wave ‘spreading’ accounts for much of this decorrelation. Here we remove most of the decorrelation by allowing the reference wave pulse to spread during the cross correlation. This dramatically improves the images obtained from arrival time identification. We illustrate the improvement of this method on phantom and in vivo data obtained from the laboratory of Mathias Fink at ESPCI, Paris.
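
    The modified first step can be pictured with the following sketch: each displacement time trace is cross-correlated against stretched ("spread") copies of the reference pulse, and the best-matching lag is taken as the arrival time. The stretch factors, normalization, and sampling interval are assumptions for illustration, not values from the paper:

      # Illustrative arrival-time picker with a spreading reference pulse.
      import numpy as np

      def arrival_time(trace, reference, dt, stretches=(1.0, 1.25, 1.5, 2.0)):
          """Return the arrival time (s) maximizing correlation over lag and stretch."""
          t_ref = np.arange(len(reference)) * dt
          best_score, best_time = -np.inf, 0.0
          for s in stretches:
              # "Spread" the reference by resampling it onto a stretched time axis.
              t_new = np.arange(int(len(reference) * s)) * dt
              ref_s = np.interp(t_new / s, t_ref, reference)
              ref_s = ref_s / (np.linalg.norm(ref_s) + 1e-12)
              corr = np.correlate(trace, ref_s, mode="full")
              k = int(np.argmax(corr))
              if corr[k] > best_score:
                  best_score = corr[k]
                  best_time = (k - (len(ref_s) - 1)) * dt   # lag of best alignment
          return best_time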

  15. A virtual phantom library for the quantification of deformable image registration uncertainties in patients with cancers of the head and neck.

    PubMed

    Pukala, Jason; Meeks, Sanford L; Staton, Robert J; Bova, Frank J; Mañon, Rafael R; Langen, Katja M

    2013-11-01

    Deformable image registration (DIR) is being used increasingly in various clinical applications. However, the underlying uncertainties of DIR are not well-understood and a comprehensive methodology has not been developed for assessing a range of interfraction anatomic changes during head and neck cancer radiotherapy. This study describes the development of a library of clinically relevant virtual phantoms for the purpose of aiding clinicians in the QA of DIR software. These phantoms will also be available to the community for the independent study and comparison of other DIR algorithms and processes. Each phantom was derived from a pair of kVCT volumetric image sets. The first images were acquired of head and neck cancer patients prior to the start-of-treatment and the second were acquired near the end-of-treatment. A research algorithm was used to autosegment and deform the start-of-treatment (SOT) images according to a biomechanical model. This algorithm allowed the user to adjust the head position, mandible position, and weight loss in the neck region of the SOT images to resemble the end-of-treatment (EOT) images. A human-guided thin-plate splines algorithm was then used to iteratively apply further deformations to the images with the objective of matching the EOT anatomy as closely as possible. The deformations from each algorithm were combined into a single deformation vector field (DVF) and a simulated end-of-treatment (SEOT) image dataset was generated from that DVF. Artificial noise was added to the SEOT images and these images, along with the original SOT images, created a virtual phantom where the underlying "ground-truth" DVF is known. Images from ten patients were deformed in this fashion to create ten clinically relevant virtual phantoms. The virtual phantoms were evaluated to identify unrealistic DVFs using the normalized cross correlation (NCC) and the determinant of the Jacobian matrix. A commercial deformation algorithm was applied to the virtual phantoms to show how they may be used to generate estimates of DIR uncertainty. The NCC showed that the simulated phantom images had greater similarity to the actual EOT images than the images from which they were derived, supporting the clinical relevance of the synthetic deformation maps. Calculation of the Jacobian of the "ground-truth" DVFs resulted in only positive values. As an example, mean error statistics are presented for all phantoms for the brainstem, cord, mandible, left parotid, and right parotid. It is essential that DIR algorithms be evaluated using a range of possible clinical scenarios for each treatment site. This work introduces a library of virtual phantoms intended to resemble real cases for interfraction head and neck DIR that may be used to estimate and compare the uncertainty of any DIR algorithm.

  16. Total focusing method with correlation processing of antenna array signals

    NASA Astrophysics Data System (ADS)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of a complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors using and without correlation processing are presented in the article. Software ‘IDealSystem3D’ by IDeal-Technologies was used for experiments. Copper wires of different diameters located in a water bath were used as a reflector. The use of correlation processing makes it possible to obtain more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program. This program allows varying the parameters of the antenna array and sampling frequency.

  17. Bit-level plane image encryption based on coupled map lattice with time-varying delay

    NASA Astrophysics Data System (ADS)

    Lv, Xiupin; Liao, Xiaofeng; Yang, Bo

    2018-04-01

    Most existing image encryption algorithms have two basic properties, confusion and diffusion, applied in the pixel-level plane based on various chaotic systems. However, permutation in the pixel-level plane cannot change the statistical characteristics of an image, and many existing color image encryption schemes use the same method to encrypt the R, G and B components, meaning the three color components of a color image are processed three times independently. Additionally, the dynamical performance of a single chaotic system degrades greatly under the finite precision of computer simulations. In this paper, a novel coupled map lattice with time-varying delay is therefore applied to bit-level plane encryption of color images to address these issues. A spatiotemporal chaotic system, with a much longer period under digitization and excellent cryptographic performance, is adopted. The time-varying delay embedded in the coupled map lattice enhances the dynamical behavior of the system. The bit-level plane encryption greatly reduces the statistical characteristics of an image through the scrambling process. The R, G and B components cross and mix with one another, which reduces the correlation among the three components. Finally, simulations are carried out, and the experimental results illustrate that the proposed image encryption algorithm is highly secure while also demonstrating superior performance.
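
    As a toy illustration of the keystream side of such a scheme, the fragment below iterates a diffusively coupled logistic-map lattice in which the neighbour coupling uses a randomly chosen time-varying delay; the map parameter, coupling strength, delay rule, and lattice size are arbitrary choices and do not reproduce the cipher proposed in the paper:

      # Toy coupled-map-lattice keystream with a time-varying coupling delay.
      import numpy as np

      def logistic(x, mu=3.99):
          return mu * x * (1.0 - x)

      def cml_keystream(n_sites=64, n_steps=200, eps=0.3, max_delay=5, seed=1):
          rng = np.random.default_rng(seed)
          history = [rng.random(n_sites) for _ in range(max_delay + 1)]
          out = []
          for _ in range(n_steps):
              x = history[-1]
              tau = rng.integers(1, max_delay + 1)      # time-varying delay
              x_del = history[-1 - tau]
              left, right = np.roll(x_del, 1), np.roll(x_del, -1)
              new = (1 - eps) * logistic(x) + 0.5 * eps * (logistic(left) + logistic(right))
              history.append(new)
              history.pop(0)
              out.append(new)
          return np.concatenate(out)                    # values in (0, 1), to be quantized to bits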

  18. Smartphone-based quantitative measurements on holographic sensors.

    PubMed

    Khalili Moghaddam, Gita; Lowe, Christopher Robin

    2017-01-01

    The research reported herein integrates a generic holographic sensor platform and a smartphone-based colour quantification algorithm in order to standardise and improve the determination of the concentration of analytes of interest. The utility of this approach has been exemplified by analysing the replay colour of the captured image of a holographic pH sensor in near real-time. Personalised image encryption followed by a wavelet-based image compression method were applied to secure the image transfer across a bandwidth-limited network to the cloud. The decrypted and decompressed image was processed through four principal steps: Recognition of the hologram in the image with a complex background using a template-based approach, conversion of device-dependent RGB values to device-independent CIEXYZ values using a polynomial model of the camera and computation of the CIEL*a*b* values, use of the colour coordinates of the captured image to segment the image, select the appropriate colour descriptors and, ultimately, locate the region of interest (ROI), i.e. the hologram in this case, and finally, application of a machine learning-based algorithm to correlate the colour coordinates of the ROI to the analyte concentration. Integrating holographic sensors and the colour image processing algorithm potentially offers a cost-effective platform for the remote monitoring of analytes in real time in readily accessible body fluids by minimally trained individuals.
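
    To make the colour-space portion of the pipeline concrete, the sketch below converts linear RGB to CIEXYZ and then CIEL*a*b*. Note the assumption: a standard sRGB/D65 matrix is used in place of the polynomial camera model fitted in the work, so this is only a simplified stand-in for that step:

      # Colour conversion sketch: linear sRGB -> CIEXYZ (D65) -> CIEL*a*b*.
      import numpy as np

      M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                                [0.2126, 0.7152, 0.0722],
                                [0.0193, 0.1192, 0.9505]])
      WHITE_D65 = np.array([0.95047, 1.0, 1.08883])

      def rgb_to_lab(rgb):
          """rgb: array of linear RGB values in [0, 1], shape (..., 3)."""
          xyz = rgb @ M_SRGB_TO_XYZ.T
          r = xyz / WHITE_D65
          f = np.where(r > (6 / 29) ** 3, np.cbrt(r), r / (3 * (6 / 29) ** 2) + 4 / 29)
          L = 116 * f[..., 1] - 16
          a = 500 * (f[..., 0] - f[..., 1])
          b = 200 * (f[..., 1] - f[..., 2])
          return np.stack([L, a, b], axis=-1)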

  19. Smartphone-based quantitative measurements on holographic sensors

    PubMed Central

    Khalili Moghaddam, Gita

    2017-01-01

    The research reported herein integrates a generic holographic sensor platform and a smartphone-based colour quantification algorithm in order to standardise and improve the determination of the concentration of analytes of interest. The utility of this approach has been exemplified by analysing the replay colour of the captured image of a holographic pH sensor in near real-time. Personalised image encryption followed by a wavelet-based image compression method were applied to secure the image transfer across a bandwidth-limited network to the cloud. The decrypted and decompressed image was processed through four principal steps: Recognition of the hologram in the image with a complex background using a template-based approach, conversion of device-dependent RGB values to device-independent CIEXYZ values using a polynomial model of the camera and computation of the CIEL*a*b* values, use of the colour coordinates of the captured image to segment the image, select the appropriate colour descriptors and, ultimately, locate the region of interest (ROI), i.e. the hologram in this case, and finally, application of a machine learning-based algorithm to correlate the colour coordinates of the ROI to the analyte concentration. Integrating holographic sensors and the colour image processing algorithm potentially offers a cost-effective platform for the remote monitoring of analytes in real time in readily accessible body fluids by minimally trained individuals. PMID:29141008

  20. Imaging of cardiac perfusion of free-breathing small animals using dynamic phase-correlated micro-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawall, Stefan; Kuntz, Jan; Socher, Michaela

    Purpose: Mouse models of cardiac diseases have proven to be a valuable tool in preclinical research. The high cardiac and respiratory rates of free-breathing mice prohibit conventional in vivo cardiac perfusion studies using computed tomography even if gating methods are applied. This makes sacrifice of the animals unavoidable and only allows for the application of ex vivo methods. Methods: To overcome this issue the authors propose a low dose scan protocol and an associated reconstruction algorithm that allows for in vivo imaging of cardiac perfusion and associated processes that are retrospectively synchronized to the respiratory and cardiac motion of the animal. The scan protocol consists of repetitive injections of contrast media within several consecutive scans while the ECG, respiratory motion, and timestamp of contrast injection are recorded and synchronized to the acquired projections. The iterative reconstruction algorithm employs a six-dimensional edge-preserving filter to provide low-noise, motion artifact-free images of the animal examined using the authors' low dose scan protocol. Results: The reconstructions obtained show that the complete temporal bolus evolution can be visualized and quantified in any desired combination of cardiac and respiratory phase including reperfusion phases. The proposed reconstruction method thereby keeps the administered radiation dose at a minimum and thus reduces metabolic interference to the animal, allowing for longitudinal studies. Conclusions: The authors' low dose scan protocol and phase-correlated dynamic reconstruction algorithm allow for an easy and effective way to visualize phase-correlated perfusion processes in routine laboratory studies using free-breathing mice.

  1. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
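
    The matching criterion at the heart of correlation-based DVC can be sketched as below: a zero-normalized cross correlation between a reference subvolume and candidate subvolumes in the deformed volume, here searched by brute force at integer-voxel offsets. The subvolume half-size and search radius are arbitrary example values, and the IC-GN sub-voxel refinement and reliability-guided scheduling described above are not reproduced:

      # Sketch of integer-voxel DVC matching for one calculation point.
      import numpy as np

      def zncc(f, g):
          f = f - f.mean(); g = g - g.mean()
          return float((f * g).sum() / (np.linalg.norm(f) * np.linalg.norm(g) + 1e-12))

      def integer_voxel_search(ref_vol, def_vol, center, half=15, search=5):
          """Brute-force integer displacement; 'center' must lie far enough from the borders."""
          z, y, x = center
          ref_sub = ref_vol[z-half:z+half+1, y-half:y+half+1, x-half:x+half+1]
          best, best_d = -np.inf, (0, 0, 0)
          for dz in range(-search, search + 1):
              for dy in range(-search, search + 1):
                  for dx in range(-search, search + 1):
                      sub = def_vol[z+dz-half:z+dz+half+1,
                                    y+dy-half:y+dy+half+1,
                                    x+dx-half:x+dx+half+1]
                      c = zncc(ref_sub, sub)
                      if c > best:
                          best, best_d = c, (dz, dy, dx)
          return best_d, best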

  2. Automated Radiology-Pathology Module Correlation Using a Novel Report Matching Algorithm by Organ System.

    PubMed

    Dane, Bari; Doshi, Ankur; Gfytopoulos, Soterios; Bhattacharji, Priya; Recht, Michael; Moore, William

    2018-05-01

    Radiology-pathology correlation is time-consuming and is not feasible in most clinical settings, with the notable exception of breast imaging. The purpose of this study was to determine if an automated radiology-pathology report pairing system could accurately match radiology and pathology reports, thus creating a feedback loop allowing for more frequent and timely radiology-pathology correlation. An experienced radiologist created a matching matrix of radiology and pathology reports. These matching rules were then exported to a novel comprehensive radiology-pathology module. All distinct radiology-pathology pairings at our institution from January 1, 2016 to July 1, 2016 were included (n = 8999). The appropriateness of each radiology-pathology report pairing was scored as either "correlative" or "non-correlative." Pathology reports relating to anatomy imaged in the specific imaging study were deemed correlative, whereas pathology reports describing anatomy not imaged with the particular study were denoted non-correlative. Overall, there was 88.3% correlation (accuracy) of the radiology and pathology reports (n = 8999). Subset analysis demonstrated that computed tomography (CT) abdomen/pelvis, CT head/neck/face, CT chest, musculoskeletal CT (excluding spine), mammography, magnetic resonance imaging (MRI) abdomen/pelvis, MRI brain, musculoskeletal MRI (excluding spine), breast MRI, positron emission tomography (PET), breast ultrasound, and head/neck ultrasound all demonstrated greater than 91% correlation. When further stratified by imaging modality, CT, MRI, mammography, and PET demonstrated excellent correlation (greater than 96.3%). Ultrasound and non-PET nuclear medicine studies demonstrated poorer correlation (80%). There is excellent correlation of radiology imaging reports and appropriate pathology reports when matched by organ system. Rapid, appropriate radiology-pathology report pairings provide an excellent opportunity to close feedback loop to the interpreting radiologist. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  3. Modeling ECM fiber formation: structure information extracted by analysis of 2D and 3D image sets

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Voytik-Harbin, Sherry L.; Filmer, David L.; Hoffman, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennis; Robinson, Joseph P.

    2002-05-01

    Recent evidence supports the notion that biological functions of extracellular matrix (ECM) are highly correlated to its structure. Understanding this fibrous structure is very crucial in tissue engineering to develop the next generation of biomaterials for restoration of tissues and organs. In this paper, we integrate confocal microscopy imaging and image-processing techniques to analyze the structural properties of ECM. We describe a 2D fiber middle-line tracing algorithm and apply it via Euclidean distance maps (EDM) to extract accurate fibrous structure information, such as fiber diameter, length, orientation, and density, from single slices. Based on a 2D tracing algorithm, we extend our analysis to 3D tracing via Euclidean distance maps to extract 3D fibrous structure information. We use computer simulation to construct the 3D fibrous structure which is subsequently used to test our tracing algorithms. After further image processing, these models are then applied to a variety of ECM constructions from which results of 2D and 3D traces are statistically analyzed.

  4. Short-term change detection for UAV video

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2012-11-01

    In recent years, there has been an increased use of unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, based on time series of still images taken several days, weeks, or even years apart. Examples of relevant changes we are looking for are recently parked or moved vehicles. As a pre-requisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine-registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed length of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms are applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation-based approach and the multivariate alteration detection. The algorithms are adapted to be used in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer IOSB, see Heinze et al. (2010). In a further step we plan to incorporate more information from the video sequences into the change detection input images, e.g., by image enhancement or by along-track stereo, which are available in the ABUL system.
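
    The "local neighborhood search" extension of image differencing mentioned above can be sketched in a few lines: each pixel's change score is the minimum absolute difference over small shifts of the second image, so residual misregistration of a pixel or two does not trigger false changes. The search radius is an assumed parameter, and the snippet ignores the border wrap-around introduced by np.roll:

      # Illustrative "extended image differencing" with a local neighborhood search.
      import numpy as np

      def extended_difference(img1, img2, radius=2):
          img1 = img1.astype(float)
          img2 = img2.astype(float)
          best = np.full(img1.shape, np.inf)
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  shifted = np.roll(np.roll(img2, dy, axis=0), dx, axis=1)
                  best = np.minimum(best, np.abs(img1 - shifted))
          return best   # large values indicate candidate changes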

  5. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color image channels independently, thereby ignoring the interchannel correlation present in the color images. In view of this, a unified regularization scheme for image is developed to recover edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.

  6. Digital correlation of DDRS data

    NASA Technical Reports Server (NTRS)

    Griffin, C. R.; Estes, J. M.

    1981-01-01

    The reduction of digital SAR (synthetic aperture radar) data to radar images for use in remote sensing applications was investigated. The critical software operations are discussed in detail, and suggestions and recommendations are made for improving the algorithms currently being used.

  7. Partial scan artifact reduction (PSAR) for the assessment of cardiac perfusion in dynamic phase-correlated CT.

    PubMed

    Stenner, Philip; Schmidt, Bernhard; Bruder, Herbert; Allmendinger, Thomas; Haberland, Ulrike; Flohr, Thomas; Kachelriess, Marc

    2009-12-01

    Cardiac CT achieves its high temporal resolution by lowering the scan range from 2π to π plus fan angle (partial scan). This, however, introduces CT-value variations, depending on the angular position of the π range. These partial scan artifacts are of the order of a few HU and prevent the quantitative evaluation of perfusion measurements. The authors present the new algorithm partial scan artifact reduction (PSAR) that corrects a dynamic phase-correlated scan without a priori information. In general, a full scan does not suffer from partial scan artifacts since all projections in [0, 2π] contribute to the data. To maintain the optimum temporal resolution and the phase correlation, PSAR creates an artificial full scan p_n^AF by projection-wise averaging a set of neighboring partial scans p_n^P from the same perfusion examination (typically N ≈ 30 phase-correlated partial scans distributed over 20 s and n = 1, ..., N). Corresponding to the angular range of each partial scan, the authors extract virtual partial scans p_n^V from the artificial full scan p_n^AF. A standard reconstruction yields the corresponding images f_n^P, f_n^AF, and f_n^V. Subtracting the virtual partial scan image f_n^V from the artificial full scan image f_n^AF yields an artifact image that can be used to correct the original partial scan image: f_n^C = f_n^P - f_n^V + f_n^AF, where f_n^C is the corrected image. The authors evaluated the effects of scattered radiation on the partial scan artifacts using simulated and measured water phantoms and found a strong correlation. The PSAR algorithm has been validated with a simulated semianthropomorphic heart phantom and with measurements of a dynamic biological perfusion phantom. For the stationary phantoms, real full scans have been performed to provide theoretical reference values. The improvement in the root mean square errors between the full and the partial scans with respect to the errors between the full and the corrected scans is up to 54% for the simulations and 90% for the measurements. The phase-correlated data now appear accurate enough for a quantitative analysis of cardiac perfusion.
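
    The correction itself is a simple image-domain combination, which can be sketched directly from the definitions above (array shapes and the projection averaging below are illustrative; the actual averaging window and weighting in PSAR may differ):

      # Sketch of the PSAR combination f_C = f_P - f_V + f_AF.
      import numpy as np

      def psar_correct(f_partial, f_virtual, f_artificial_full):
          """Corrected image = partial-scan image - virtual partial image + artificial full image."""
          return f_partial - f_virtual + f_artificial_full

      def artificial_full_projections(partial_scans):
          """Average corresponding projections of neighboring phase-correlated partial scans
          (a list of arrays with identical shape) to build the artificial full scan."""
          return np.mean(np.stack(partial_scans, axis=0), axis=0)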

  8. Ion Structure Near a Core-Shell Dielectric Nanoparticle

    NASA Astrophysics Data System (ADS)

    Ma, Manman; Gan, Zecheng; Xu, Zhenli

    2017-02-01

    A generalized image charge formulation is proposed for the Green's function of a core-shell dielectric nanoparticle for which theoretical and simulation investigations are rarely reported due to the difficulty of resolving the dielectric heterogeneity. Based on the formulation, an efficient and accurate algorithm is developed for calculating electrostatic polarization charges of mobile ions, allowing us to study related physical systems using the Monte Carlo algorithm. The computer simulations show that a fine-tuning of the shell thickness or the ion-interface correlation strength can greatly alter electric double-layer structures and capacitances, owing to the complicated interplay between dielectric boundary effects and ion-interface correlations.

  9. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT," part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat," wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-counts regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.
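
    A very reduced sketch of the central correlation step, not of WAVDETECT itself, is shown below: the counts image is correlated with a "Mexican hat" (negative Laplacian-of-Gaussian) kernel at one scale, and coefficients are compared against a crude background-dependent threshold. The local-background estimate and the n-sigma test here are simplifications of the Poisson sampling-distribution test described above:

      # Single-scale Mexican-hat correlation with a crude significance threshold (illustrative).
      import numpy as np
      from scipy import ndimage

      def mexican_hat_detect(counts, scale=2.0, nsigma=5.0):
          coeffs = -ndimage.gaussian_laplace(counts.astype(float), sigma=scale)
          background = ndimage.uniform_filter(counts.astype(float), size=int(16 * scale))
          noise = np.sqrt(np.maximum(background, 1.0))   # Poisson-like fluctuation scale
          return coeffs > nsigma * noise                 # boolean map of candidate source pixels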

  10. Observer Evaluation of a Metal Artifact Reduction Algorithm Applied to Head and Neck Cone Beam Computed Tomographic Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim

    Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.
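
    The interpolation-substitution idea can be illustrated with the small sinogram-domain sketch below, in which detector samples flagged as belonging to the metal trace are replaced by linear interpolation from unaffected neighbours before reconstruction; the metal mask is assumed to be supplied (for example by thresholding a first-pass reconstruction), and this is not the authors' exact algorithm:

      # Sketch of projection-domain interpolation over a metal trace.
      import numpy as np

      def interpolate_metal_trace(sinogram, metal_mask):
          """sinogram, metal_mask: 2D arrays (views x detector channels)."""
          corrected = sinogram.astype(float).copy()
          channels = np.arange(sinogram.shape[1])
          for v in range(sinogram.shape[0]):
              bad = metal_mask[v]
              if bad.any() and (~bad).any():
                  corrected[v, bad] = np.interp(channels[bad], channels[~bad],
                                                sinogram[v, ~bad])
          return corrected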

  11. DECONV-TOOL: An IDL based deconvolution software package

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Landsman, W. B.

    1992-01-01

    There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.

  12. Correlation based efficient face recognition and color change detection

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Alam, M. S.; Qasmi, S.

    2013-01-01

    Identifying the human face via correlation is a topic attracting widespread interest. At the heart of this technique lies the comparison of an unknown target image to a known reference database of images. However, the color information in the target image remains notoriously difficult to interpret. In this paper, we report a new technique which: (i) is robust against illumination change, (ii) offers discrimination ability to detect color change between faces having similar shape, and (iii) is specifically designed to detect red colored stains (i.e. facial bleeding). We adopt the Vanderlugt correlator (VLC) architecture with a segmented phase filter and we decompose the color target image using normalized red, green, and blue (RGB), and hue, saturation, and value (HSV) scales. We propose a new strategy to effectively utilize color information in signatures for further increasing the discrimination ability. The proposed algorithm has been found to be very efficient for discriminating face subjects with different skin colors, and those having color stains in different areas of the facial image.

  13. Novel application of windowed beamforming function imaging for FLGPR

    NASA Astrophysics Data System (ADS)

    Xique, Ismael J.; Burns, Joseph W.; Thelen, Brian J.; LaRose, Ryan M.

    2018-04-01

    Backprojection of cross-correlated array data, using algorithms such as coherent interferometric imaging (Borcea, et al., 2006), has been advanced as a method to improve the statistical stability of images of targets in an inhomogeneous medium. Recently, the Windowed Beamforming Energy (WBE) function algorithm has been introduced as a functionally equivalent approach, which is significantly less computationally burdensome (Borcea, et al., 2011). WBE produces similar results through the use of a quadratic function summing signals after beamforming in transmission and reception, and windowing in the time domain. We investigate the application of WBE to improve the detection of buried targets with forward looking ground penetrating MIMO radar (FLGPR) data. The formulation of WBE as well the software implementation of WBE for the FLGPR data collection will be discussed. WBE imaging results are compared to standard backprojection and Coherence Factor imaging. Additionally, the effectiveness of WBE on field-collected data is demonstrated qualitatively through images and quantitatively through the use of a CFAR statistic on buried targets of a variety of contrast levels.

  14. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction.

    PubMed

    Stassi, D; Dutta, S; Ma, H; Soderman, A; Pazzani, D; Gros, E; Okerlund, D; Schmidt, T G

    2016-01-01

    Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches because the optimal phase is identified based on vessel image quality (IQ) directly, compared to previous approaches that included motion estimation and interphase processing. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases are available. An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three readers using a five point Likert scale. There was no statistically significant difference between inter-reader and reader-algorithm agreement for either MAD or CCC metrics (p > 0.1). The algorithm phase was within 2% of the consensus phase in 15/21 of cases. The average absolute difference between consensus and algorithm best phases was 2.29% ± 2.47%, with a maximum difference of 8%. Average image quality scores for the algorithm chosen best phase were 4.01 ± 0.65 overall, 3.33 ± 1.27 for right coronary artery (RCA), 4.50 ± 0.35 for left anterior descending (LAD) artery, and 4.50 ± 0.35 for left circumflex artery (LCX). Average image quality scores for the consensus best phase were 4.11 ± 0.54 overall, 3.44 ± 1.03 for RCA, 4.39 ± 0.39 for LAD, and 4.50 ± 0.18 for LCX. There was no statistically significant difference (p > 0.1) between the image quality scores of the algorithm phase and the consensus phase. The proposed algorithm was statistically equivalent to a reader in selecting an optimal cardiac phase for CCTA exams. When reader and algorithm phases differed by >2%, image quality as rated by blinded readers was statistically equivalent. By detecting the optimal phase for CCTA reconstruction, the proposed algorithm is expected to improve coronary artery visualization in CCTA exams.
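
    As a rough picture of the ingredients named above, the sketch below scores one segmented through-plane vessel by combining a circularity term with the mean gradient magnitude on the vessel boundary; the specific metric, its weights, and the slice- and phase-level aggregation used by the authors are not reproduced here:

      # Toy vessel image-quality score: circularity times boundary edge strength.
      import numpy as np
      from scipy import ndimage

      def vessel_quality(region_mask, image):
          """Score one through-plane vessel cross-section (higher is better)."""
          area = region_mask.sum()
          boundary = np.logical_xor(region_mask, ndimage.binary_erosion(region_mask))
          perimeter = boundary.sum()
          circularity = 4.0 * np.pi * area / (perimeter ** 2 + 1e-12)   # ~1 for a disc
          gx = ndimage.sobel(image.astype(float), axis=0)
          gy = ndimage.sobel(image.astype(float), axis=1)
          edge_strength = np.hypot(gx, gy)[boundary].mean() if boundary.any() else 0.0
          return circularity * edge_strength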

  15. Gray matter segmentation of the spinal cord with active contours in MR images.

    PubMed

    Datta, Esha; Papinutto, Nico; Schlaeger, Regina; Zhu, Alyssa; Carballido-Gamio, Julio; Henry, Roland G

    2017-02-15

    Fully or partially automated spinal cord gray matter segmentation techniques will allow for pivotal spinal cord gray matter measurements in the study of various neurological disorders. The objective of this work was multi-fold: (1) to develop a gray matter segmentation technique that uses registration methods with an existing delineation of the cord edge along with Morphological Geodesic Active Contour (MGAC) models; (2) to assess the accuracy and reproducibility of the newly developed technique on 2D PSIR T1 weighted images; (3) to test how the algorithm performs on different resolutions and other contrasts; (4) to demonstrate how the algorithm can be extended to 3D scans; and (5) to show the clinical potential for multiple sclerosis patients. The MGAC algorithm was developed using a publicly available implementation of a morphological geodesic active contour model and the spinal cord segmentation tool of the software Jim (Xinapse Systems) for the initial estimate of the cord boundary. The MGAC algorithm was demonstrated on 2D PSIR images of the C2/C3 level with two different resolutions, 2D T2* weighted images of the C2/C3 level, and a 3D PSIR image. These images were acquired from 45 healthy controls and 58 multiple sclerosis patients selected for the absence of evident lesions at the C2/C3 level. Accuracy was assessed through visual assessment, Hausdorff distances, and Dice similarity coefficients. Reproducibility was assessed through interclass correlation coefficients. Validity was assessed through comparison of segmented gray matter areas in images with different resolution for both manual and MGAC segmentations. Between MGAC and manual segmentations in healthy controls, the mean Dice similarity coefficient was 0.88 (0.82-0.93) and the mean Hausdorff distance was 0.61 (0.46-0.76) mm. The interclass correlation coefficient from test and retest scans of healthy controls was 0.88. The percent change between the manual segmentations from high- and low-resolution images was 25%, while the percent change between the MGAC segmentations from high- and low-resolution images was 13%. Between MGAC and manual segmentations in MS patients, the average Dice similarity coefficient was 0.86 (0.8-0.92) and the average Hausdorff distance was 0.83 (0.29-1.37) mm. We demonstrate that an automatic segmentation technique, based on a morphological geodesic active contour algorithm, can provide accurate and precise spinal cord gray matter segmentations on 2D PSIR images. We have also shown how this automated technique can potentially be extended to other imaging protocols. Copyright © 2016 Elsevier Inc. All rights reserved.
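
    A publicly available morphological geodesic active contour implementation of the kind referred to above exists in scikit-image. The sketch below shows that generic building block on a synthetic slice; it is not the authors' pipeline, which initializes the contour from an existing cord-boundary segmentation (Jim), whereas here a simple disk is used as the initial level set and all parameter values are illustrative.

```python
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

# Synthetic stand-in for a 2D axial slice: a bright disk on a dark background.
yy, xx = np.mgrid[:128, :128]
img = ((yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2).astype(float)
img += 0.1 * np.random.default_rng(0).standard_normal(img.shape)

# Edge indicator: values close to zero near strong gradients attract the contour.
gimage = inverse_gaussian_gradient(img, alpha=100, sigma=2)

# Initial level set: a disk larger than the object. In the paper the
# initialization comes from an existing cord-boundary segmentation instead.
init = ((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2).astype(np.int8)

# Evolve the contour; balloon=-1 shrinks it onto the object boundary.
seg = morphological_geodesic_active_contour(gimage, 200, init_level_set=init,
                                            smoothing=2, balloon=-1)
```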

  16. Full field study of strain distribution near the crack tip in the fracture of solid propellants via large strain digital image correlation and optical microscopy

    NASA Astrophysics Data System (ADS)

    Gonzalez, Javier

    A full-field method for visualizing deformation around the crack tip in a fracture process with large strains is developed. A digital image correlation (DIC) program is used to incrementally compute strains and displacements between two consecutive images of a deformation process. The strain and displacement values from consecutive increments are then summed, which avoids the convergence problems the DIC algorithm encounters when large deformations are investigated. The method is used to investigate the strain distribution within 1 mm of the crack tip in a particulate composite solid (propellant) using microscopic visualization of the deformation process.
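
    The incremental idea, accumulating small frame-to-frame displacements rather than correlating widely separated images directly, can be sketched as follows. This is not the author's DIC code: a single global shift per frame pair, estimated with scikit-image's phase cross-correlation, stands in for the full subset-based DIC, and the function name is hypothetical.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def accumulated_displacement(frames):
    """Accumulate frame-to-frame shifts so each correlation step stays small.

    frames: sequence of 2D images of a progressively deforming specimen.
    Returns the total (row, col) displacement of frames[-1] relative to
    frames[0]. Real DIC tracks a grid of subsets; here a single global shift
    illustrates the incremental summation that keeps each step well conditioned.
    """
    total = np.zeros(2)
    for prev, curr in zip(frames[:-1], frames[1:]):
        shift, _, _ = phase_cross_correlation(prev, curr, upsample_factor=20)
        total += shift  # add the small inter-frame displacement
    return total
```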

  17. Improving the image discontinuous problem by using color temperature mapping method

    NASA Astrophysics Data System (ADS)

    Jeng, Wei-De; Mang, Ou-Yang; Lai, Chien-Cheng; Wu, Hsien-Ming

    2011-09-01

    This article focuses on image processing for the radial imaging capsule endoscope (RICE). The RICE was used to capture images of porcine intestine in an experiment, but the captured images were blurred because the RICE suffers from aberration in the image center, and low illumination uniformity further degrades image quality. Image processing can be used to mitigate these problems. Images captured at different times are therefore connected using a Pearson correlation coefficient algorithm, and a color temperature mapping method is applied to reduce the discontinuity in the connection regions.
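
    One plausible reading of the connection step is to select, for each pair of consecutive strips, the overlap width that maximizes the Pearson correlation coefficient. The sketch below illustrates that idea in Python/NumPy; the function name and the column-wise overlap model are assumptions, not the authors' implementation.

```python
import numpy as np

def best_overlap_by_pearson(left, right, min_overlap=10):
    """Find the column overlap that maximises the Pearson correlation.

    left, right: 2D grayscale strips of equal height captured at different
    times. The right edge of `left` is compared with the left edge of `right`
    for every candidate overlap width, and the width with the highest Pearson
    correlation coefficient is returned together with that coefficient.
    """
    best_w, best_r = min_overlap, -1.0
    max_overlap = min(left.shape[1], right.shape[1])
    for w in range(min_overlap, max_overlap):
        a = left[:, -w:].ravel()
        b = right[:, :w].ravel()
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_w, best_r = w, r
    return best_w, best_r
```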

  18. An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E, and a Beowulf cluster of Pentium workstations.
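
    A minimal sketch of the idea, extracting strong coarse-level wavelet detail coefficients with PyWavelets and estimating translation by FFT-based correlation of the resulting feature maps, is given below. It is not the NASA implementation; the wavelet, decomposition level, and thresholding fraction are illustrative choices, and only pure translation is handled.

```python
import numpy as np
import pywt

def wavelet_maxima_shift(ref, mov, level=3, keep_frac=0.05):
    """Estimate the translation between two same-size images from the
    correlation of their coarse-level wavelet-maxima feature maps."""
    def feature_map(img):
        # Multi-level 2D DWT; coeffs[1] holds the coarsest detail sub-bands.
        coeffs = pywt.wavedec2(img, 'db2', level=level)
        cH, cV, cD = coeffs[1]
        mag = np.abs(cH) + np.abs(cV) + np.abs(cD)
        # Keep only the strongest coefficients as the registration features.
        thresh = np.quantile(mag, 1.0 - keep_frac)
        return np.where(mag >= thresh, mag, 0.0)

    f_ref, f_mov = feature_map(ref), feature_map(mov)
    # FFT-based cross-correlation of the sparse feature maps.
    xcorr = np.fft.ifft2(np.fft.fft2(f_ref) * np.conj(np.fft.fft2(f_mov))).real
    peak = np.array(np.unravel_index(np.argmax(xcorr), xcorr.shape))
    dims = np.array(xcorr.shape)
    peak = np.where(peak > dims // 2, peak - dims, peak)  # wrap to signed offsets
    # Offsets are measured at the coarse scale; scale back to full resolution.
    return peak * (2 ** level)
```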

  19. Non-sky polarization-based dehazing algorithm for non-specular objects using polarization difference and global scene feature.

    PubMed

    Qu, Yufu; Zou, Zhaofan

    2017-10-16

    Photographic images taken in foggy or hazy weather (hazy images) exhibit poor visibility and detail because of scattering and attenuation of light caused by suspended particles, and therefore, image dehazing has attracted considerable research attention. Current polarization-based dehazing algorithms rely strongly on the presence of a "sky area", and thus, the selection of model parameters is susceptible to external interference from high-brightness objects and strong light sources. In addition, the noise in the restored image is large. To solve these problems, we propose a polarization-based dehazing algorithm that does not rely on the sky area ("non-sky"). First, a linear polarizer is used to collect three polarized images. The maximum- and minimum-intensity images are then obtained by calculation, assuming the polarization of light emanating from objects is negligible in most scenarios involving non-specular objects. Subsequently, the polarization difference of the two images is used to determine a sky area and calculate the infinite atmospheric light value. Next, using the global features of the image, and based on the assumption that the airlight and object radiance are uncorrelated, the degree of polarization of the airlight (DPA) is calculated by solving for the optimal solution of the correlation coefficient equation between airlight and object radiance; the optimal solution is obtained by setting the right-hand side of the equation to zero. The hazy image is then dehazed. Subsequently, a filtering denoising algorithm, which combines the polarization difference information with block-matching and 3D (BM3D) filtering, is designed to smooth the image. Our experimental results show that the proposed polarization-based dehazing algorithm does not depend on whether the image includes a sky area and does not require complex models. Moreover, except in specular-object scenarios, the dehazed images are superior to those obtained by the methods of Tarel, Fattal, Ren, and Berman based on the criteria of no-reference quality assessment (NRQA), the blind/referenceless image spatial quality evaluator (BRISQUE), the blind anisotropic quality index (AQI), and e.
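
    The polarization-difference quantities underlying such methods can be written compactly. The sketch below shows the classical polarization-based dehazing inversion for given values of the airlight at infinity and the DPA; it is not the authors' non-sky estimation procedure (which derives the DPA from a correlation criterion), and the variable names and clipping choices are assumptions.

```python
import numpy as np

def polarization_dehaze(i_max, i_min, a_inf, p_airlight, eps=1e-6):
    """Classical polarization-difference dehazing inversion.

    i_max, i_min : images through the polarizer orientations giving the
                   maximum and minimum intensity (float arrays in [0, 1]).
    a_inf        : scalar airlight radiance at infinity.
    p_airlight   : degree of polarization of the airlight (DPA), assumed known
                   here; the paper estimates it from a correlation criterion.
    """
    i_total = i_max + i_min                                      # total intensity
    delta = i_max - i_min                                        # polarization difference
    airlight = np.clip(delta / max(p_airlight, eps), 0, None)    # estimated airlight A(x)
    transmission = np.clip(1.0 - airlight / a_inf, eps, 1.0)     # t(x) = 1 - A/A_inf
    radiance = (i_total - airlight) / transmission               # recovered object radiance
    return np.clip(radiance, 0, None)
```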

  20. Toward an Objective Enhanced-V Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Brunner, Jason; Feltz, Wayne; Moses, John; Rabin, Robert; Ackerman, Steven

    2007-01-01

    The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather in previous studies. This study describes an algorithmic approach to objectively detect enhanced-V features with observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit data. The methodology consists of cross-correlation statistics of pixels and thresholds of enhanced-V quantitative parameters. The effectiveness of the enhanced-V detection method will be examined using Geostationary Operational Environmental Satellite, MODerate-resolution Imaging Spectroradiometer, and Advanced Very High Resolution Radiometer image data from case studies in the 2003-2006 seasons. The main goal of this study is to develop an objective enhanced-V detection algorithm for future implementation into operations with future sensors, such as GOES-R.

  1. Road marking features extraction using the VIAPIX® system

    NASA Astrophysics Data System (ADS)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system allowing lane detection on marked urban roads and analysis of their features. The task is to georeference road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects on the road, the algorithm examines these images automatically and rapidly and extracts information on road markings, their surface condition, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate this algorithm and its robustness by applying it to a variety of relevant scenarios.
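
    Phase-only correlation, the matching tool named above, can be sketched directly with NumPy FFTs: only the spectral phase of the cross spectrum is kept, which produces a sharp peak at the template location. The function below is an illustrative stand-in, not the VIAPIX® implementation.

```python
import numpy as np

def phase_only_correlation(scene, template):
    """Return the POC surface between a scene and a zero-padded template.

    The template is padded to the scene size, both are Fourier transformed,
    and only the spectral phase is retained before the inverse transform,
    which yields a sharp correlation peak where the template matches.
    """
    padded = np.zeros_like(scene, dtype=float)
    padded[:template.shape[0], :template.shape[1]] = template
    F = np.fft.fft2(scene)
    G = np.fft.fft2(padded)
    cross = F * np.conj(G)
    poc = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    return np.fft.fftshift(poc)   # peak offset from the centre gives the shift
```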

  2. Parallel Implementation of a Frozen Flow Based Wavefront Reconstructor

    NASA Astrophysics Data System (ADS)

    Nagy, J.; Kelly, K.

    2013-09-01

    Obtaining high resolution images of space objects from ground based telescopes is challenging, often requiring the use of a multi-frame blind deconvolution (MFBD) algorithm to remove blur caused by atmospheric turbulence. In order for an MFBD algorithm to be effective, it is necessary to obtain a good initial estimate of the wavefront phase. Although wavefront sensors work well in low turbulence situations, they are less effective in high turbulence, such as when imaging in daylight, or when imaging objects that are close to the Earth's horizon. One promising approach, which has been shown to work very well in high turbulence settings, uses a frozen flow assumption on the atmosphere to capture the inherent temporal correlations present in consecutive frames of wavefront data. Exploiting these correlations can lead to more accurate estimation of the wavefront phase, and the associated PSF, which leads to more effective MFBD algorithms. However, with the current serial implementation, the approach can be prohibitively expensive in situations when it is necessary to use a large number of frames. In this poster we describe a parallel implementation that overcomes this constraint. The parallel implementation exploits sparse matrix computations, and uses the Trilinos package developed at Sandia National Laboratories. Trilinos provides a variety of core mathematical software for parallel architectures that has been designed using high-quality software engineering practices. The package is open source and portable to a variety of high-performance computing architectures.

  3. Automated segmentation of geographic atrophy in fundus autofluorescence images using supervised pixel classification.

    PubMed

    Hu, Zhihong; Medioni, Gerard G; Hernandez, Matthias; Sadda, Srinivas R

    2015-01-01

    Geographic atrophy (GA) is a manifestation of the advanced or late stage of age-related macular degeneration (AMD). AMD is the leading cause of blindness in people over the age of 65 in the western world. The purpose of this study is to develop a fully automated supervised pixel classification approach for segmenting GA, including uni- and multifocal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity measures, gray-level co-occurrence matrix measures, and Gaussian filter banks. A [Formula: see text]-nearest-neighbor pixel classifier is applied to obtain a GA probability map, representing the likelihood that the image pixel belongs to GA. Sixteen randomly chosen FAF images were obtained from 16 subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by a certified image reading center grader. Eight-fold cross-validation is applied to evaluate the algorithm performance. The mean overlap ratio (OR), area correlation (Pearson's [Formula: see text]), accuracy (ACC), true positive rate (TPR), specificity (SPC), positive predictive value (PPV), and false discovery rate (FDR) between the algorithm- and manually defined GA regions are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively.

  4. Symmetric encryption algorithms using chaotic and non-chaotic generators: A review

    PubMed Central

    Radwan, Ahmed G.; AbdElHaleem, Sherif H.; Abd-El-Hafiz, Salwa K.

    2015-01-01

    This paper summarizes the symmetric image encryption results of 27 different algorithms, which include substitution-only, permutation-only or both phases. The cores of these algorithms are based on several discrete chaotic maps (Arnold’s cat map and a combination of three generalized maps), one continuous chaotic system (Lorenz) and two non-chaotic generators (fractals and chess-based algorithms). Each algorithm has been analyzed by the correlation coefficients between pixels (horizontal, vertical and diagonal), differential attack measures, Mean Square Error (MSE), entropy, sensitivity analyses and the 15 standard tests of the National Institute of Standards and Technology (NIST) SP-800-22 statistical suite. The analyzed algorithms include a set of new image encryption algorithms based on non-chaotic generators, using either substitution only (fractals), permutation only (chess-based), or both. Moreover, two different permutation scenarios are presented where the permutation phase has or does not have a relationship with the input image through an ON/OFF switch. Different encryption-key lengths and complexities are provided, from short to long keys, to resist brute-force attacks. In addition, sensitivities of those different techniques to a one-bit change in the input parameters of the substitution key as well as the permutation key are assessed. Finally, a comparative discussion of this work versus much recent research with respect to the generators used, type of encryption, and analyses is presented to highlight the strengths and added contribution of this paper. PMID:26966561
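
    The adjacent-pixel correlation analysis listed among the evaluation metrics can be reproduced with a few lines of NumPy; a well-encrypted image should yield coefficients close to zero in all three directions. The sampling scheme and pair count below are illustrative assumptions.

```python
import numpy as np

def adjacent_pixel_correlations(img, n_pairs=5000, seed=0):
    """Correlation of horizontally, vertically and diagonally adjacent pixels."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    rows = rng.integers(0, h - 1, n_pairs)
    cols = rng.integers(0, w - 1, n_pairs)
    x = img[rows, cols].astype(float)
    out = {}
    for name, (dr, dc) in {'horizontal': (0, 1),
                           'vertical': (1, 0),
                           'diagonal': (1, 1)}.items():
        y = img[rows + dr, cols + dc].astype(float)
        out[name] = np.corrcoef(x, y)[0, 1]   # near 0 for a good cipher image
    return out
```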

  5. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1990-01-01

    A process is disclosed for x ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  6. Digital Data Registration and Differencing Compression System

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1996-01-01

    A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.

  7. Digital data registration and differencing compression system

    NASA Technical Reports Server (NTRS)

    Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)

    1992-01-01

    A process for x ray registration and differencing that results in more efficient compression is discussed. Differencing of registered modeled subject image with a modeled reference image forms a differential image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three dimensional model, which three dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x ray digital images.

  8. Improving image quality in laboratory x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.

    2017-03-01

    Grating-based X-ray phase-contrast (gbPC) is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≍ 5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes to the projections. Additionally, the focal spot size of the X-ray source limits resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution and improve visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: Firstly, a correction algorithm exploiting correlations of the artifacts and differential-phase data was developed and tested. Artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flatfield selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to an improved detection of very small sample features, thereby maximizing the setup's utility.
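
    Richardson-Lucy deconvolution of the kind applied to the projections is available in scikit-image. The minimal example below deconvolves a synthetic projection with an assumed Gaussian focal-spot PSF; the PSF width and iteration count are illustrative, not the values used for the gbPC setup.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

def gaussian_psf(size=9, sigma=1.5):
    """Assumed Gaussian focal-spot blur kernel (illustrative parameters)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

rng = np.random.default_rng(0)
projection = rng.random((128, 128))                      # stand-in projection
psf = gaussian_psf()
blurred = convolve2d(projection, psf, mode='same', boundary='symm')

# 30 Richardson-Lucy iterations; more iterations sharpen but amplify noise.
deblurred = richardson_lucy(blurred, psf, 30, clip=False)
```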

  9. Robust watermark technique using masking and Hermite transform.

    PubMed

    Coronel, Sandra L Gomez; Ramírez, Boris Escalante; Mosqueda, Marco A Acevedo

    2016-01-01

    The following paper evaluates a watermark algorithm designed for digital images by using a perceptive mask and a normalization process, thus preventing human eye detection, as well as ensuring its robustness against common processing and geometric attacks. The Hermite transform is employed because it allows a perfect reconstruction of the image while incorporating human visual system properties; moreover, it is based on derivatives of Gaussian functions. The applied watermark represents information of the digital image proprietor. The extraction process is blind, because it does not require the original image. The following techniques were utilized in the evaluation of the algorithm: peak signal-to-noise ratio, the average structural similarity index, normalized cross-correlation, and bit error rate. Several watermark extraction tests were performed against geometric and common processing attacks. This allowed us to identify how many bits in the watermark can be modified for its adequate extraction.
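
    Several of the evaluation metrics named above are simple to state in code. The sketch below gives plain NumPy versions of PSNR, normalized cross-correlation, and bit error rate; the function names are hypothetical, and the structural similarity index is omitted here (scikit-image provides an implementation).

```python
import numpy as np

def psnr(original, marked, peak=255.0):
    """Peak signal-to-noise ratio between the original and watermarked image."""
    mse = np.mean((original.astype(float) - marked.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def normalized_cross_correlation(w_ref, w_ext):
    """Zero-mean NCC between the embedded and extracted watermark images."""
    a = w_ref.astype(float).ravel() - w_ref.mean()
    b = w_ext.astype(float).ravel() - w_ext.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def bit_error_rate(bits_ref, bits_ext):
    """Fraction of watermark bits that differ after extraction."""
    bits_ref, bits_ext = np.asarray(bits_ref), np.asarray(bits_ext)
    return float(np.mean(bits_ref != bits_ext))
```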

  10. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

    Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a Cross Correlation Fourier transformation method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter
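
    The plane-fitting step described above, fitting the relative phase of the two transformed images to a plane whose slopes give the sub-pixel shift, can be sketched as follows. This is an illustrative NumPy version, not the Jitter_Correct.m code; the low-frequency mask radius is an assumption, and the sketch assumes shifts small enough that the phase does not wrap.

```python
import numpy as np

def subpixel_shift_phase_plane(ref, frame):
    """Estimate a (row, col) shift by fitting the cross-spectrum phase to a plane.

    A pure translation multiplies the spectrum by a linear-phase factor, so the
    phase of the cross-power spectrum is a plane whose slopes are proportional
    to the shift. Only low frequencies are used, where the phase is least noisy
    and unlikely to wrap.
    """
    F, G = np.fft.fft2(ref), np.fft.fft2(frame)
    phase = np.angle(F * np.conj(G))
    fy = np.fft.fftfreq(ref.shape[0])
    fx = np.fft.fftfreq(ref.shape[1])
    FY, FX = np.meshgrid(fy, fx, indexing='ij')
    # Restrict the fit to a low-frequency disc to avoid phase wrapping.
    mask = np.sqrt(FY ** 2 + FX ** 2) < 0.05
    A = np.column_stack([FY[mask], FX[mask], np.ones(mask.sum())])
    coef, *_ = np.linalg.lstsq(A, phase[mask], rcond=None)
    dy, dx = coef[0] / (2 * np.pi), coef[1] / (2 * np.pi)
    return dy, dx
```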

  11. Fabric pilling measurement using three-dimensional image

    NASA Astrophysics Data System (ADS)

    Ouyang, Wenbin; Wang, Rongwu; Xu, Bugao

    2013-10-01

    We introduce a stereovision system and the three-dimensional (3-D) image analysis algorithms for fabric pilling measurement. Based on the depth information available in the 3-D image, the pilling detection process starts from the seed searching at local depth maxima to the region growing around the selected seeds using both depth and distance criteria. After the pilling detection, the density, height, and area of individual pills in the image can be extracted to describe the pilling appearance. According to the multivariate regression analysis on the 3-D images of 30 cotton fabrics treated by the random-tumble and home-laundering machines, the pilling grade is highly correlated with the pilling density (R=0.923) but does not consistently change with the pilling height and area. The pilling densities measured from the 3-D images also correlate well with those counted manually from the samples (R=0.985).

  12. Efficient content-based low-altitude images correlated network and strips reconstruction

    NASA Astrophysics Data System (ADS)

    He, Haiqing; You, Qi; Chen, Xiaoyong

    2017-01-01

    The manual intervention method is widely used to reconstruct strips for further aerial triangulation in low-altitude photogrammetry. Clearly, such a manual method is not suited to fully automatic photogrammetric data processing. In this paper, we explore a content-based approach without manual intervention or external information for strip reconstruction. Feature descriptors in the local spatial patterns are extracted by SIFT to construct a vocabulary tree, in which these features are encoded in terms of the TF-IDF numerical statistical algorithm to generate a new representation for each low-altitude image. Then an image correlation network is reconstructed by similarity measures, image matching, and geometric graph theory. Finally, strips are reconstructed automatically by tracing straight lines and growing adjacent images gradually. Experimental results show that the proposed approach is highly effective in automatically rearranging strips of low-altitude images and can provide a rough relative orientation for further aerial triangulation.

  13. FIVE YEARS OF SYNTHESIS OF SOLAR SPECTRAL IRRADIANCE FROM SDID/SISA AND SDO/AIA IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontenla, J. M.; Codrescu, M.; Fedrizzi, M.

    In this paper we describe the synthetic solar spectral irradiance (SSI) calculated from 2010 to 2015 using data from the Atmospheric Imaging Assembly (AIA) instrument on board the Solar Dynamics Observatory spacecraft. We used the algorithms for solar disk image decomposition (SDID) and the spectral irradiance synthesis algorithm (SISA) that we had developed over several years. The SDID algorithm decomposes the images of the solar disk into areas occupied by nine types of chromospheric and five types of coronal physical structures. With this decomposition and a set of pre-computed angle-dependent spectra for each of the features, the SISA algorithm is used to calculate the SSI. We discuss the application of the basic SDID/SISA algorithm to a subset of the AIA images and the observed variation occurring in the 2010–2015 period of the relative areas of the solar disk covered by the various solar surface features. Our results consist of the SSI and total solar irradiance variations over the 2010–2015 period. The SSI results include the soft X-ray, ultraviolet, visible, infrared, and far-infrared bands and can be used for studies of the solar radiative forcing of the Earth’s atmosphere. These SSI estimates were used to drive a thermosphere–ionosphere physical simulation model. Predictions of neutral mass density at low Earth orbit altitudes in the thermosphere and peak plasma densities at mid-latitudes are in reasonable agreement with the observations. The correlation between the simulation results and the observations was consistently better when fluxes computed by SDID/SISA procedures were used.

  14. Improved decryption quality and security of a joint transform correlator-based encryption system

    NASA Astrophysics Data System (ADS)

    Vilardy, Juan M.; Millán, María S.; Pérez-Cabré, Elisabet

    2013-02-01

    Some image encryption systems based on modified double random phase encoding and joint transform correlator architecture produce low quality decrypted images and are vulnerable to a variety of attacks. In this work, we analyse the algorithm of some reported methods that optically implement the double random phase encryption in a joint transform correlator. We show that it is possible to significantly improve the quality of the decrypted image by introducing a simple nonlinear operation in the encrypted function that contains the joint power spectrum. This nonlinearity also makes the system more resistant to chosen-plaintext attacks. We additionally explore the system resistance against this type of attack when a variety of probability density functions are used to generate the two random phase masks of the encryption-decryption process. Numerical results are presented and discussed.

  15. A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs

    PubMed Central

    Zheng, Yu; Yang, Yang; Chen, Wu

    2017-01-01

    In this paper, a novel range compression algorithm for enhancing range resolutions of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, firstly range compression is carried out by correlating a reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results for suppressing side lobes to obtain a final range-compressed signal. Both theoretical analysis and simulation results have demonstrated that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm. PMID:28672830
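
    The core correlation step can be sketched per azimuth bin as a frequency-domain matched filter of the reflected channel against the synchronized direct replica. The paper's spectrum-equalization stage is only approximated here by a simple spectral taper, and both signals are taken at base-band; the function name and window choice are assumptions.

```python
import numpy as np

def range_compress(reflected, direct_ref):
    """Range compression for one azimuth bin of a GNSS-SAR collection.

    reflected  : complex base-band samples of the surveillance (reflected) channel
    direct_ref : synchronized replica of the direct GNSS signal (same length)
    Correlation is performed in the frequency domain; a Hann taper centred on
    DC stands in for the spectrum-equalization step that suppresses side lobes.
    """
    n = len(reflected)
    spec = np.fft.fft(reflected, n) * np.conj(np.fft.fft(direct_ref, n))
    spec *= np.fft.ifftshift(np.hanning(n))   # taper peaked at DC in FFT order
    return np.fft.ifft(spec)                  # compressed range profile
```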

  16. The design of a fast Fourier filter for enhancing diagnostically relevant structures - endodontic files.

    PubMed

    Bruellmann, Dan; Sander, Steven; Schmidtmann, Irene

    2016-05-01

    The endodontic working length is commonly determined by electronic apex locators and intraoral periapical radiographs. No algorithms for the automatic detection of endodontic files in dental radiographs have been described in the recent literature. Teeth from the mandibles of pig cadavers were accessed, and digital radiographs of these specimens were obtained using an optical bench. The specimens were then recorded in identical positions and settings after the insertion of endodontic files of known sizes (ISO sizes 10-15). The frequency bands generated by the endodontic files were determined using fast Fourier transforms (FFTs) to convert the resulting images into frequency spectra. The detected frequencies were used to design a pre-segmentation filter, which was programmed using Delphi XE RAD Studio software (Embarcadero Technologies, San Francisco, USA) and tested on 20 radiographs. For performance evaluation purposes, the gauged lengths (measured with a caliper) of visible endodontic files were measured in the native and filtered images. The software was able to segment the endodontic files in both the samples and similar dental radiographs. We observed median length differences of 0.52 mm (SD: 2.76 mm) and 0.46 mm (SD: 2.33 mm) in the native and post-segmentation images, respectively. Pearson's correlation test revealed a significant correlation of 0.915 between the true length and the measured length in the native images; the corresponding correlation for the filtered images was 0.97 (p=0.0001). The algorithm can be used to automatically detect and measure the lengths of endodontic files in digital dental radiographs. Copyright © 2016 Elsevier Ltd. All rights reserved.
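
    A generic pre-segmentation filter in the spirit described above can be built by keeping an annular band of spatial frequencies in the 2D FFT of the radiograph and transforming back. The band limits in the sketch below are illustrative placeholders, not the frequency bands measured for the endodontic files in this study.

```python
import numpy as np

def fft_bandpass(image, low=0.05, high=0.25):
    """Keep a radial band of spatial frequencies (cycles/pixel) and invert."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    FY, FX = np.meshgrid(fy, fx, indexing='ij')
    radius = np.sqrt(FY ** 2 + FX ** 2)
    band = (radius >= low) & (radius <= high)    # annular band-pass mask
    return np.fft.ifft2(np.fft.ifftshift(F * band)).real
```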

  17. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method based on digital image correlation is proposed. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated. The projector can then be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.

  18. A new algorithm for ECG interference removal from single channel EMG recording.

    PubMed

    Yazdani, Shayan; Azghani, Mahmood Reza; Sedaaghi, Mohammad Hossein

    2017-09-01

    This paper presents a new method to remove electrocardiogram (ECG) interference from electromyogram (EMG) recordings. This interference occurs during EMG acquisition from trunk muscles. The proposed algorithm employs the progressive image denoising (PID) algorithm and ensemble empirical mode decomposition (EEMD) to remove this type of interference. PID is a very recent method for denoising digital images corrupted by white Gaussian noise; it detects white Gaussian noise by deterministic annealing. To the best of our knowledge, PID has never been used before for EMG and ECG separation or in other 1D signal denoising applications. We use it based on the fact that the EMG signal can be modeled as white Gaussian noise shaped by a filter with time-variant properties. The proposed algorithm has been compared to other well-known methods such as HPF, EEMD-ICA, Wavelet-ICA and PID. The results show that the proposed algorithm outperforms the others on the basis of the three evaluation criteria used in this paper: normalized mean square error, signal-to-noise ratio and Pearson correlation.

  19. WE-AB-207B-05: Correlation of Normal Lung Density Changes with Dose After Stereotactic Body Radiotherapy (SBRT) for Early Stage Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Q; Devpura, S; Feghali, K

    2016-06-15

    Purpose: To investigate the correlation of normal lung CT density changes with dose accuracy and outcome after SBRT for patients with early stage lung cancer. Methods: Dose distributions for patients originally planned and treated using a 1-D pencil beam-based (PB-1D) dose algorithm were retrospectively recomputed using the following algorithms: 3-D pencil beam (PB-3D) and the model-based methods AAA, Acuros XB (AXB), and Monte Carlo (MC). Prescription dose was 12 Gy × 4 fractions. Planning CT images were rigidly registered to the followup CT datasets at 6–9 months after treatment. Corresponding dose distributions were mapped from the planning to followup CT images. Following the method of Palma et al. (1–2), Hounsfield Unit (HU) changes in lung density in individual 5 Gy dose bins from 5–45 Gy were assessed in the peri-tumor region, defined as a uniform, 3 cm expansion around the ITV (1). Results: There is a 10–15% displacement of the high dose region (40–45 Gy) with the model-based algorithms, relative to the PB method, due to the electron scattering of dose away from the tumor into normal lung tissue (Fig. 1). Consequently, the high-dose lung region falls within the 40–45 Gy dose range, causing an increase in HU change in this region, as predicted by model-based algorithms (Fig. 2). The patient with the highest HU change (∼110) had mild radiation pneumonitis, and the patient with HU change of ∼80–90 had shortness of breath. No evidence of pneumonitis was observed for the 3 patients with smaller CT density changes (<50 HU). Changes in CT densities, and the dose-response correlation, as computed with model-based algorithms, are in excellent agreement with the findings of Palma et al. (1–2). Conclusion: Dose computed with PB (1D or 3D) algorithms was poorly correlated with clinically relevant CT density changes, as opposed to model-based algorithms. A larger cohort of patients is needed to confirm these results. This work was supported in part by a grant from Varian Medical Systems, Palo Alto, CA.

  20. Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang

    2016-10-01

    Considering the inevitable obstacles faced by the pixel-based clustering methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI to obtain meaningful objects. Second, a distance reweighted mass center learning model is presented to extract the representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, it is natural to apply the SSC algorithm to the HSI. Faced with the high correlation among the hyperspectral objects, a weighting scheme is adopted to ensure that the highly correlated objects are preferred in the procedure of sparse representation, to reduce the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, obtaining high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. The experimental results show that the proposed method clearly improves the clustering performance with respect to the other state-of-the-art clustering methods, and it significantly reduces the computational time.

  1. Atlas-based automatic measurements of the morphology of the tibiofemoral joint

    NASA Astrophysics Data System (ADS)

    Brehler, M.; Thawait, G.; Shyr, W.; Ramsay, J.; Siewerdsen, J. H.; Zbijewski, W.

    2017-03-01

    Purpose: Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. Methods: The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Results: Intra-reader variability as high as 10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. Conclusions: The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.

  2. Atlas-based automatic measurements of the morphology of the tibiofemoral joint.

    PubMed

    Brehler, M; Thawait, G; Shyr, W; Ramsay, J; Siewerdsen, J H; Zbijewski, W

    2017-02-11

    Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Intra-reader variability as high as ~10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with expert radiologist without the need for time consuming and error prone manual selection of landmarks.

  3. Quantification of atherosclerosis with MRI and image processing in spontaneously hyperlipidemic rabbits.

    PubMed

    Hänni, Mari; Edvardsson, H; Wågberg, M; Pettersson, K; Smedby, O

    2004-01-01

    The need for a quantitative method to assess atherosclerosis in vivo is well known. This study tested, in a familiar animal model of atherosclerosis, a combination of magnetic resonance imaging (MRI) and image processing. Six spontaneously hyperlipidemic (Watanabe) rabbits were examined with a knee coil in a 1.5-T clinical MRI scanner. Inflow angio (2DI) and proton density weighted (PDW) images were acquired to examine 10 cm of the aorta immediately cranial to the aortic bifurcation. Examination of the thoracic aorta was added in four animals. To identify the inner and outer boundary of the arterial wall, a dynamic contour algorithm (Gradient Vector Flow snakes) was applied to the 2DI and PDW images, respectively, after which the vessel wall area was calculated. The results were compared with histopathological measurements of intima and intima-media cross-sectional area. The correlation coefficient between wall area measurements with MRI snakes and intima-media area was 0.879 when computed individual-wise for abdominal aortas, 0.958 for thoracic aortas, and 0.834 when computed segment-wise. When the algorithm was applied to the PDW images only, somewhat lower correlations were obtained. The MRI yielded significantly higher values than histopathology, which excludes the adventitia. Magnetic resonance imaging, in combination with dynamic contours, may be a suitable technique for quantitative assessment of atherosclerosis in vivo. Using two sequences for the measurement seems to be superior to using a single sequence.

  4. Objective evaluation of linear and nonlinear tomosynthetic reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Webber, Richard L.; Hemler, Paul F.; Lavery, John E.

    2000-04-01

    This investigation objectively tests five different tomosynthetic reconstruction methods involving three different digital sensors, each used in a different radiologic application: chest, breast, and pelvis, respectively. The common task was to simulate a specific representative projection for each application by summation of appropriately shifted tomosynthetically generated slices produced by using the five algorithms. These algorithms were, respectively, (1) conventional back projection, (2) iteratively deconvoluted back projection, (3) a nonlinear algorithm similar to back projection, except that the minimum value from all of the component projections for each pixel is computed instead of the average value, (4) a similar algorithm wherein the maximum value was computed instead of the minimum value, and (5) the same type of algorithm except that the median value was computed. Using these five algorithms, we obtained data from each sensor-tissue combination, yielding three factorially distributed series of contiguous tomosynthetic slices. The respective slice stacks then were aligned orthogonally and averaged to yield an approximation of a single orthogonal projection radiograph of the complete (unsliced) tissue thickness. Resulting images were histogram equalized, and actual projection control images were subtracted from their tomosynthetically synthesized counterparts. Standard deviations of the resulting histograms were recorded as inverse figures of merit (FOMs). Visual rankings of image differences by five human observers of a subset (breast data only) also were performed to determine whether their subjective observations correlated with homologous FOMs. Nonparametric statistical analysis of these data demonstrated significant differences (P > 0.05) between reconstruction algorithms. The nonlinear minimization reconstruction method nearly always outperformed the other methods tested. Observer rankings were similar to those measured objectively.

  5. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture images at up to 1000 frames per second. In order to process the captured images on the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
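
    The speed-up comes from evaluating the normalized cross-correlation only in a small window around the previous match rather than over the whole frame. The sketch below illustrates that local-search idea in plain NumPy; it is not the authors' implementation, and the window radius and function names are assumptions.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = patch.astype(float) - patch.mean()
    b = template.astype(float) - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_local(frame, template, last_pos, search_radius=15):
    """Search only a small window around the previous match position."""
    th, tw = template.shape
    best_score, best_pos = -1.0, last_pos
    r0, c0 = last_pos
    r_lo, r_hi = max(0, r0 - search_radius), min(frame.shape[0] - th, r0 + search_radius)
    c_lo, c_hi = max(0, c0 - search_radius), min(frame.shape[1] - tw, c0 + search_radius)
    for r in range(r_lo, r_hi + 1):
        for c in range(c_lo, c_hi + 1):
            score = ncc(frame[r:r + th, c:c + tw], template)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```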

  6. Inferring functional connectivity in MRI using Bayesian network structure learning with a modified PC algorithm

    PubMed Central

    Iyer, Swathi; Shafran, Izhak; Grayson, David; Gates, Kathleen; Nigg, Joel; Fair, Damien

    2013-01-01

    Resting state functional connectivity MRI (rs-fcMRI) is a popular technique used to gauge the functional relatedness between regions in the brain for typical and special populations. Most of the work to date determines this relationship by using Pearson's correlation on BOLD fMRI timeseries. However, it has been recognized that there are at least two key limitations to this method. First, it is not possible to resolve the direct and indirect connections/influences. Second, the direction of information flow between the regions cannot be differentiated. In the current paper, we follow-up on recent work by Smith et al (2011), and apply a Bayesian approach called the PC algorithm to both simulated data and empirical data to determine whether these two factors can be discerned with group average, as opposed to single subject, functional connectivity data. When applied on simulated individual subjects, the algorithm performs well determining indirect and direct connection but fails in determining directionality. However, when applied at group level, PC algorithm gives strong results for both indirect and direct connections and the direction of information flow. Applying the algorithm on empirical data, using a diffusion-weighted imaging (DWI) structural connectivity matrix as the baseline, the PC algorithm outperformed the direct correlations. We conclude that, under certain conditions, the PC algorithm leads to an improved estimate of brain network structure compared to the traditional connectivity analysis based on correlations. PMID:23501054

  7. Image deblurring by motion estimation for remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Yueting; Wu, Jiagu; Xu, Zhihai; Li, Qi; Feng, Huajun

    2010-08-01

    The imagery resolution of imaging systems for remote sensing is often limited by image degradation resulting from unwanted motion disturbances of the platform during image exposures. Since the form of the platform vibration can be arbitrary, the lack of priori knowledge about the motion function (the PSF) suggests blind restoration approaches. A deblurring method which combines motion estimation and image deconvolution both for area-array and TDI remote sensing has been proposed in this paper. The image motion estimation is accomplished by an auxiliary high-speed detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from estimated image motion vectors. Eventually, the clear image can be recovered by the Richardson-Lucy (RL) iterative deconvolution algorithm from the blurred image of the prime camera with the constructed PSF. The image deconvolution for the area-array detector is direct. While for the TDICCD detector, an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that, the performance of the proposed concept is convincing. Blurred and distorted images can be properly recovered not only for visual observation, but also with significant objective evaluation increment.

  8. Correlation between external and internal respiratory motion: a validation study.

    PubMed

    Ernst, Floris; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2012-05-01

    In motion-compensated image-guided radiotherapy, accurate tracking of the target region is required. This tracking process includes building a correlation model between external surrogate motion and the motion of the target region. A novel correlation method is presented and compared with the commonly used polynomial model. The CyberKnife system (Accuray, Inc., Sunnyvale/CA) uses a polynomial correlation model to relate externally measured surrogate data (optical fibres on the patient's chest emitting red light) to infrequently acquired internal measurements (X-ray data). A new correlation algorithm based on ɛ -Support Vector Regression (SVR) was developed. Validation and comparison testing were done with human volunteers using live 3D ultrasound and externally measured infrared light-emitting diodes (IR LEDs). Seven data sets (5:03-6:27 min long) were recorded from six volunteers. Polynomial correlation algorithms were compared to the SVR-based algorithm demonstrating an average increase in root mean square (RMS) accuracy of 21.3% (0.4 mm). For three signals, the increase was more than 29% and for one signal as much as 45.6% (corresponding to more than 1.5 mm RMS). Further analysis showed the improvement to be statistically significant. The new SVR-based correlation method outperforms traditional polynomial correlation methods for motion tracking. This method is suitable for clinical implementation and may improve the overall accuracy of targeted radiotherapy.
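
    An ε-SVR correlation model of the general kind described can be fitted with scikit-learn by mapping the external surrogate samples (plus a short history) to the sparsely sampled internal positions. The sketch below is a minimal stand-in, not the authors' formulation; the lag count, kernel, and regularization values are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_correlation_model(surrogate, internal, lags=3, epsilon=0.1):
    """Fit an epsilon-SVR mapping surrogate samples (with a short history) to
    the internal target position.

    surrogate : (n,) external marker amplitude sampled continuously
    internal  : (n,) internal position acquired at the same time points
                (in practice these pairs are available only sparsely)
    Returns a fitted pipeline usable to predict from new surrogate data.
    """
    # Build lagged features: current sample plus the previous (lags-1) samples.
    X = np.column_stack([surrogate[lags - i:len(surrogate) - i] for i in range(lags)])
    y = internal[lags:]
    model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0, epsilon=epsilon))
    model.fit(X, y)
    return model
```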

  9. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points, the 3D motions of which are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field-of-view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of kinematics of the mast. If the motion between the consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window and the best correlation results indicate the appropriate match. The program could be a core for building application programs for systems that require coordination of vision and robotic motion.

  10. Development and evaluation of a vision based poultry debone line monitoring system

    NASA Astrophysics Data System (ADS)

    Usher, Colin T.; Daley, W. D. R.

    2013-05-01

    Efficient deboning is key to optimizing production yield (maximizing the amount of meat removed from a chicken frame while reducing the presence of bones). Many processors evaluate the efficiency of their deboning lines through manual yield measurements, which involves using a special knife to scrape the chicken frame for any remaining meat after it has been deboned. Researchers with the Georgia Tech Research Institute (GTRI) have developed an automated vision system for estimating this yield loss by correlating image characteristics with the amount of meat left on a skeleton. The yield loss estimation is accomplished by the system's image processing algorithms, which correlates image intensity with meat thickness and calculates the total volume of meat remaining. The team has established a correlation between transmitted light intensity and meat thickness with an R2 of 0.94. Employing a special illuminated cone and targeted software algorithms, the system can make measurements in under a second and has up to a 90-percent correlation with yield measurements performed manually. This same system is also able to determine the probability of bone chips remaining in the output product. The system is able to determine the presence/absence of clavicle bones with an accuracy of approximately 95 percent and fan bones with an accuracy of approximately 80%. This paper describes in detail the approach and design of the system, results from field testing, and highlights the potential benefits that such a system can provide to the poultry processing industry.

  11. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging.

    PubMed

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D; Joel, Suresh; Pekar, James J; Mostofsky, Stewart H; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.

  12. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging

    PubMed Central

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D.; Joel, Suresh; Pekar, James J.; Mostofsky, Stewart H.; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD. PMID:22969709

  13. Subarray Processing for Projection-based RFI Mitigation in Radio Astronomical Interferometers

    NASA Astrophysics Data System (ADS)

    Burnett, Mitchell C.; Jeffs, Brian D.; Black, Richard A.; Warnick, Karl F.

    2018-04-01

    Radio Frequency Interference (RFI) is a major problem for observations in Radio Astronomy (RA). Adaptive spatial filtering techniques such as subspace projection are promising candidates for RFI mitigation; however, for radio interferometric imaging arrays, these have primarily been used in engineering demonstration experiments rather than mainstream scientific observations. This paper considers one reason that adoption of such algorithms is limited: RFI decorrelates across the interferometric array because of long baseline lengths. This occurs when the relative RFI time delay along a baseline is large compared to the frequency channel inverse bandwidth used in the processing chain. Maximum achievable excision of the RFI is limited by covariance matrix estimation error when identifying interference subspace parameters, and decorrelation of the RFI introduces errors that corrupt the subspace estimate, rendering subspace projection ineffective over the entire array. In this work, we present an algorithm that overcomes this challenge of decorrelation by applying subspace projection via subarray processing (SP-SAP). Each subarray is designed to have a set of elements with high mutual correlation in the interferer for better estimation of subspace parameters. In an RFI simulation scenario for the proposed ngVLA interferometric imaging array with 15 kHz channel bandwidth for correlator processing, we show that compared to the former approach of applying subspace projection on the full array, SP-SAP improves mitigation of the RFI on the order of 9 dB. An example of improved image synthesis and reduced RFI artifacts for a simulated image “phantom” using the SP-SAP algorithm is presented.
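
    The core projection step can be illustrated with a short sketch: estimate the interference subspace from the strongest eigenvectors of a covariance matrix and project it out, applied block-wise to groups of mutually well-correlated elements. All shapes, the subarray grouping, and the single-interferer assumption are illustrative; the full SP-SAP algorithm described in the paper also has to handle the cross-subarray covariance terms, which this sketch omits.

      import numpy as np

      def project_out_rfi(R, n_rfi=1):
          # Remove the strongest n_rfi eigen-directions from a Hermitian covariance matrix R.
          w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
          U = V[:, -n_rfi:]                        # estimated interference subspace
          P = np.eye(R.shape[0]) - U @ U.conj().T  # orthogonal projection matrix
          return P @ R @ P.conj().T

      def subarray_projection(R, subarrays, n_rfi=1):
          # Apply the projection independently to each block of highly correlated elements.
          R_clean = R.copy()
          for idx in subarrays:                    # e.g. [[0, 1, 2], [3, 4, 5], ...]
              block = np.ix_(idx, idx)
              R_clean[block] = project_out_rfi(R[block], n_rfi)
          return R_clean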

  14. A refined methodology for modeling volume quantification performance in CT

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Wilson, Joshua; Samei, Ehsan

    2014-03-01

    The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the non-linearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.

  15. New development of the image matching algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqiang; Feng, Zhao

    2018-04-01

    To study image matching algorithms, the four elements of such algorithms are described, i.e., similarity measure, feature space, search space and search strategy. Four common indexes for evaluating image matching algorithms are also described, i.e., matching accuracy, matching efficiency, robustness and universality. The paper then describes the principles of image matching algorithms based on gray values, on features, on frequency-domain analysis, on neural networks and on semantic recognition, and analyzes their characteristics and latest research achievements. Finally, the development trend of image matching algorithms is discussed. This study is significant for algorithm improvement, new algorithm design and algorithm selection in practice.

  16. Audiovisual focus of attention and its application to Ultra High Definition video compression

    NASA Astrophysics Data System (ADS)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to regions of interest, is more efficient than conventional coding, in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have associated audio, and moreover, in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. Results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder, producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high definition and ultra-high definition audiovisual sequences is explored, and the amount of gain in compression efficiency is analyzed.
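
    The underlying audiovisual cue can be sketched roughly as the correlation between the short-term audio energy envelope and the visual motion energy of a candidate region. The frame alignment, window sizes, and region argument below are illustrative assumptions, not the authors' algorithm.

      import numpy as np

      def audio_energy(audio, samples_per_frame):
          # Short-term audio energy, one value per video frame.
          n = len(audio) // samples_per_frame
          frames = audio[:n * samples_per_frame].reshape(n, samples_per_frame)
          return (frames.astype(float) ** 2).mean(axis=1)

      def motion_energy(frames, region):
          # Frame-difference energy of a rectangular region (frames: n x H x W).
          r0, r1, c0, c1 = region
          diffs = np.diff(frames[:, r0:r1, c0:c1].astype(float), axis=0)
          return (diffs ** 2).mean(axis=(1, 2))

      def av_correlation(audio, frames, samples_per_frame, region):
          a = audio_energy(audio, samples_per_frame)[1:]   # align with the frame differences
          v = motion_energy(frames, region)
          n = min(len(a), len(v))
          return float(np.corrcoef(a[:n], v[:n])[0, 1])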

  17. 3D shape recovery from image focus using gray level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid

    2018-04-01

    Recovering a precise and accurate 3-D shape of a target object with a robust 3-D shape recovery algorithm is a central objective of the computer vision community. The focus measure algorithm plays an important role in this architecture; it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this paper, we propose the Gray Level Co-occurrence Matrix, along with its statistical features, for computing the focus information of the image dataset. The Gray Level Co-occurrence Matrix quantifies the texture present in the image using statistical features derived from the joint probability distribution of the gray-level pairs of the input image. Finally, we quantify the focus value of the input image using a Gaussian Mixture Model. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise sources and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we show that this approach, in spite of its simplicity, generates accurate results.
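
    A small self-contained sketch of a GLCM-based focus measure is given below: the gray-level co-occurrence matrix for one offset is normalised into a joint probability distribution and its contrast statistic is used as the focus value. The 32-level quantisation and the (0, 1) offset are arbitrary illustrative choices, and the Gaussian Mixture Model step of the paper is not reproduced.

      import numpy as np

      def glcm(image, levels=32, offset=(0, 1)):
          # Gray-level co-occurrence matrix for a single pixel offset.
          q = np.floor(image.astype(float) / (image.max() + 1e-12) * (levels - 1)).astype(int)
          dr, dc = offset
          a = q[:q.shape[0] - dr, :q.shape[1] - dc].ravel()
          b = q[dr:, dc:].ravel()
          P = np.zeros((levels, levels))
          np.add.at(P, (a, b), 1.0)
          return P / P.sum()                      # joint probability of gray-level pairs

      def glcm_contrast(image):
          P = glcm(image)
          i, j = np.indices(P.shape)
          return float(((i - j) ** 2 * P).sum())  # high contrast ~ sharp, in-focus texture

      # focus_values = [glcm_contrast(frame) for frame in image_stack]  # per-frame focus curve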

  18. Automatic SAR/optical cross-matching for GCP monograph generation

    NASA Astrophysics Data System (ADS)

    Nutricato, Raffaele; Morea, Alberto; Nitti, Davide Oscar; La Mantia, Claudio; Agrimano, Luigi; Samarelli, Sergio; Chiaradia, Maria Teresa

    2016-10-01

    Ground Control Points (GCP), automatically extracted from Synthetic Aperture Radar (SAR) images through 3D stereo analysis, can be effectively exploited for automatic orthorectification of optical imagery if they can be robustly located in the input optical images. The present study outlines a SAR/optical cross-matching procedure that allows a robust alignment of radar and optical images and, consequently, the automatic derivation of the corresponding sub-pixel position of the GCPs in the input optical image, expressed as fractional pixel/line image coordinates. The cross-matching is performed in two successive steps, in order to progressively achieve better precision. The first step is based on Mutual Information (MI) maximization between optical and SAR chips, while the second uses Normalized Cross-Correlation as the similarity metric. This work outlines the designed algorithmic solution and discusses the results derived over the urban area of Pisa (Italy), where more than ten COSMO-SkyMed Enhanced Spotlight stereo images with different beams and passes are available. The experimental analysis involves different satellite images, in order to evaluate the performance of the algorithm with respect to the optical spatial resolution. An assessment of the performance of the algorithm has been carried out, and errors are computed by measuring the distance between the GCP pixel/line position in the optical image, automatically estimated by the tool, and the "true" position of the GCP, visually identified by an expert user in the optical images.
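
    The coarse matching step can be sketched as a search over integer offsets that maximises the mutual information between the SAR chip and the corresponding optical patch; the refinement step would repeat the search with normalized cross-correlation. The bin count, the search radius, and the assumption that the optical search area is larger than the chip by the search radius on every side are illustrative.

      import numpy as np

      def mutual_information(a, b, bins=32):
          hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = hist / hist.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

      def coarse_match(sar_chip, optical_search, search_radius=20):
          # optical_search must exceed the chip by 2*search_radius pixels in each dimension.
          h, w = sar_chip.shape
          best = (-np.inf, (0, 0))
          for dr in range(-search_radius, search_radius + 1):
              for dc in range(-search_radius, search_radius + 1):
                  r0, c0 = search_radius + dr, search_radius + dc
                  patch = optical_search[r0:r0 + h, c0:c0 + w]
                  mi = mutual_information(sar_chip, patch)
                  if mi > best[0]:
                      best = (mi, (dr, dc))
          return best[1]   # integer offset with the highest mutual information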

  19. Tie Points Extraction for SAR Images Based on Differential Constraints

    NASA Astrophysics Data System (ADS)

    Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.

    2018-04-01

    Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TP extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then the corresponding layers of the pyramids are matched from the top to the bottom. In this process, the similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which applies strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with a local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.

  20. A semi-automated technique for labeling and counting of apoptosing retinal cells

    PubMed Central

    2014-01-01

    Background Retinal ganglion cell (RGC) loss is one of the earliest and most important cellular changes in glaucoma. The DARC (Detection of Apoptosing Retinal Cells) technology enables in vivo real-time non-invasive imaging of single apoptosing retinal cells in animal models of glaucoma and Alzheimer's disease. To date, apoptosing RGCs imaged using DARC have been counted manually. This is time-consuming, labour-intensive, vulnerable to bias, and has considerable inter- and intra-operator variability. Results A semi-automated algorithm was developed which enabled automated identification of apoptosing RGCs labeled with fluorescent Annexin-5 on DARC images. Automated analysis included a pre-processing stage involving local-luminance and local-contrast "gain control", a "blob analysis" step to differentiate between cells, vessels and noise, and a method to exclude non-cell structures using specific combined 'size' and 'aspect' ratio criteria. Apoptosing retinal cells were counted by 3 masked operators, generating 'Gold-standard' mean manual cell counts, and were also counted using the newly developed automated algorithm. Comparison between the automated cell counts and the mean manual cell counts on 66 DARC images showed significant correlation between the two methods (Pearson's correlation coefficient = 0.978, p < 0.001; R² = 0.956). The intraclass correlation coefficient was 0.986 (95% CI 0.977-0.991, p < 0.001), and Cronbach's alpha measure of consistency was 0.986, confirming excellent correlation and consistency. No significant difference (p = 0.922, 95% CI: −5.53 to 6.10) was detected between the cell counts of the two methods. Conclusions The novel automated algorithm enabled accurate quantification of apoptosing RGCs that is highly comparable to manual counting, and appears to minimise operator bias, whilst being both fast and reproducible. This may prove to be a valuable method of quantifying apoptosing retinal cells, with particular relevance to translation in the clinic, where a Phase I clinical trial of DARC in glaucoma patients is due to start shortly. PMID:24902592
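
    A simplified sketch of the blob-analysis step is shown below: connected bright regions of a thresholded frame are labeled and only blobs whose pixel count and bounding-box aspect ratio fall inside cell-like ranges are kept. All thresholds are illustrative assumptions, not the values used in the published algorithm.

      import numpy as np
      from scipy import ndimage

      def candidate_cells(image, intensity_thresh, min_area=10, max_area=200, max_aspect=2.5):
          binary = image > intensity_thresh
          labels, n = ndimage.label(binary)
          keep = []
          for i, sl in enumerate(ndimage.find_objects(labels), start=1):
              area = int((labels[sl] == i).sum())
              h = sl[0].stop - sl[0].start
              w = sl[1].stop - sl[1].start
              aspect = max(h, w) / max(min(h, w), 1)
              if min_area <= area <= max_area and aspect <= max_aspect:
                  keep.append(i)        # elongated blobs (vessels) and tiny blobs (noise) are rejected
          return keep, labels

      # cell_count = len(candidate_cells(frame, intensity_thresh=30)[0])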

  1. Direct biomechanical modeling of trabecular bone using a nonlinear manifold-based volumetric representation

    NASA Astrophysics Data System (ADS)

    Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.

    2017-03-01

    Osteoporosis is associated with increased fracture risk. Recent advances in in vivo imaging allow segmentation of trabecular bone (TB) microstructure, which is a known key determinant of bone strength and fracture risk. An accurate biomechanical model of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of the trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. The method also generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied to the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between the TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold model over a cubical volume of interest (VOI), and its correlation with the YM computed using micro-CT based conventional finite-element analysis over the same VOI was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.

  2. Iterative image-domain decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, Tianye; Dong, Xue; Petrongolo, Michael

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces the clinical value of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with a similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom. The proposed method achieves a lower electron density measurement error than direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of the decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
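
    A highly simplified sketch of image-domain two-material decomposition is given below: direct decomposition solves the 2 x 2 mixing system per pixel, and the iterative variant adds a quadratic smoothness penalty minimised by plain gradient descent. The mixing matrix, uniform weighting, and gradient-descent solver are illustrative stand-ins for the covariance-weighted, edge-aware conjugate gradient formulation used by the authors.

      import numpy as np

      A = np.array([[1.0, 0.6],      # hypothetical contributions of material 1 and material 2
                    [0.4, 1.0]])     # to the low- and high-energy CT images

      def direct_decomposition(low, high):
          y = np.stack([low, high], axis=-1)
          return np.einsum('ij,...j->...i', np.linalg.inv(A), y)   # per-pixel matrix inversion

      def laplacian(img):
          p = np.pad(img, 1, mode='edge')
          return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

      def regularized_decomposition(low, high, lam=0.2, step=0.1, n_iter=200):
          y = np.stack([low, high], axis=-1)
          x = direct_decomposition(low, high)                  # noisy starting point
          for _ in range(n_iter):
              residual = np.einsum('ij,...j->...i', A, x) - y
              grad = np.einsum('ji,...j->...i', A, residual)   # data-fidelity gradient A^T (Ax - y)
              grad -= lam * np.stack([laplacian(x[..., 0]), laplacian(x[..., 1])], axis=-1)
              x -= step * grad                                 # gradient-descent update
          return x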

  3. Development of a Calibration Strip for Immunochromatographic Assay Detection Systems.

    PubMed

    Gao, Yue-Ming; Wei, Jian-Chong; Mak, Peng-Un; Vai, Mang-I; Du, Min; Pun, Sio-Hang

    2016-06-29

    With many benefits and applications, immunochromatographic (ICG) assay detection systems have been widely reported. However, existing research mainly focuses on increasing the dynamic detection range or the application fields. Calibration of the detection system, which has a great influence on the detection accuracy, has not been addressed properly. In this context, this work develops a calibration strip for ICG assay photoelectric detection systems. An image of the test strip is captured by an image acquisition device, followed by a fuzzy c-means (FCM) clustering algorithm and a maximin-distance algorithm for image segmentation. Additionally, experiments are conducted to find the best characteristic quantity. By analyzing the linear coefficient, the average value of hue (H) at 14 min is chosen as the characteristic quantity and an empirical formula between H and the optical density (OD) value is established. H, saturation (S), and value (V) are then calculated for a number of selected OD values. The H, S, and V values are converted to the RGB color space and a high-resolution printer is used to print the strip images on cellulose nitrate membranes. Finally, verification of the printed calibration strips is conducted by analyzing the linear correlation between OD and the spectral reflectance, which shows a good linear correlation (R² = 98.78%).

  4. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
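
    The energy-compaction idea behind a 3D wavelet decomposition can be illustrated with the open-source PyWavelets package (this is not the ICER-3D code, and the wavelet, decomposition level, and keep fraction are arbitrary): transform the hyperspectral cube, discard the smallest coefficients, and reconstruct.

      import numpy as np
      import pywt

      def compress_cube(cube, wavelet='db2', level=3, keep_fraction=0.05):
          # 3D wavelet transform over (band, row, column) exploits correlation in all three dimensions.
          coeffs = pywt.wavedecn(cube, wavelet, level=level)
          arr, slices = pywt.coeffs_to_array(coeffs)
          thresh = np.quantile(np.abs(arr), 1.0 - keep_fraction)   # keep only the largest coefficients
          arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
          coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedecn')
          return pywt.waverecn(coeffs, wavelet)                    # lossy reconstruction of the cube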

  5. "Radio-oncomics" : The potential of radiomics in radiation oncology.

    PubMed

    Peeken, Jan Caspar; Nüsslin, Fridtjof; Combs, Stephanie E

    2017-10-01

    Radiomics, a recently introduced concept, describes quantitative computerized algorithm-based feature extraction from imaging data, including computed tomography (CT), magnetic resonance imaging (MRI), or positron-emission tomography (PET) images. For radiation oncology it offers the potential to significantly influence clinical decision-making and thus therapy planning and follow-up workflow. After image acquisition, image preprocessing, and definition of regions of interest by structure segmentation, algorithms are applied to calculate shape, intensity, texture, and multiscale filter features. By combining multiple features and correlating them with clinical outcome, prognostic models can be created. Retrospective studies have proposed radiomics classifiers predicting, e.g., overall survival, radiation treatment response, distant metastases, or radiation-related toxicity. In addition, radiomics features can be correlated with genomic information ("radiogenomics") and could be used for tumor characterization. Distinct patterns based on data-based as well as genomics-based features will influence radiation oncology in the future. Individualized treatments in terms of dose level adaptation and target volume definition, as well as other outcome-related parameters, will depend on radiomics and radiogenomics. By integration of various datasets, the prognostic power can be increased, making radiomics a valuable part of future precision medicine approaches. This perspective demonstrates the evidence for the radiomics concept in radiation oncology. The necessity of further studies to integrate radiomics classifiers into clinical decision-making and the radiation therapy workflow is emphasized.

  6. Approach for scene reconstruction from the analysis of a triplet of still images

    NASA Astrophysics Data System (ADS)

    Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle

    1997-03-01

    Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a big challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual vests, 3D teleconferencing and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built by using a fusion criterion taking into account depth coherency, visibility constraints and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the gray-level image is applied to extract the object masks. First, an edge detection step segments the luminance image into regions, and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the different depth class numbers by using a coherence test on depth values according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML1.0 tools.

  7. User-guided automated segmentation of time-series ultrasound images for measuring vasoreactivity of the brachial artery induced by flow mediation

    NASA Astrophysics Data System (ADS)

    Sehgal, Chandra M.; Kao, Yen H.; Cary, Ted W.; Arger, Peter H.; Mohler, Emile R.

    2005-04-01

    Endothelial dysfunction in response to vasoactive stimuli is closely associated with diseases such as atherosclerosis, hypertension and congestive heart failure. The current method of using ultrasound to image the brachial artery along the longitudinal axis is insensitive for measuring the small vasodilatation that occurs in response to flow mediation. The goal of this study is to overcome this limitation by using cross-sectional imaging of the brachial artery in conjunction with the User-Guided Automated Boundary Detection (UGABD) algorithm for extracting arterial boundaries. High-resolution ultrasound imaging was performed on rigid plastic tubing, on elastic rubber tubing phantoms with steady and pulsatile flow, and on the brachial artery of a healthy volunteer undergoing reactive hyperemia. The area of cross section of time-series images was analyzed by UGABD by propagating the boundary from one frame to the next. The UGABD results were compared by linear correlation with those obtained by manual tracing. UGABD measured the cross-sectional area of the phantom tubing to within 5% of the true area. The algorithm correctly detected pulsatile vasomotion in phantoms and in the brachial artery. A comparison of area measurements made using UGABD with those made by manual tracings yielded a correlation of 0.9 and 0.8 for phantoms and arteries, respectively. The peak vasodilatation due to reactive hyperemia was two orders of magnitude greater in pixel count than that measured by longitudinal imaging. Cross-sectional imaging is more sensitive than longitudinal imaging for measuring flow-mediated dilatation of brachial artery, and thus may be more suitable for evaluating endothelial dysfunction.

  8. Hyperspectral image compressing using wavelet-based method

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands. Therefore, each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution for these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross correlation between different bands and propose an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the inter-band correlation matrix of the hyperspectral images; then a wavelet-based algorithm is applied to each subspace; finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
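
    A rough sketch of the band-grouping and reduction idea follows: compute the inter-band correlation matrix, split the bands into contiguous groups wherever the correlation with the previous band drops below a threshold, and keep a few principal components per group. The threshold, component count, and contiguous-grouping rule are illustrative assumptions, and the wavelet stage of the proposed method is omitted.

      import numpy as np

      def group_bands(cube, corr_thresh=0.95):
          # cube: (n_bands, H, W). Split bands where the correlation with the previous band drops.
          bands = cube.reshape(cube.shape[0], -1)
          C = np.corrcoef(bands)
          groups, current = [], [0]
          for b in range(1, bands.shape[0]):
              if C[b, b - 1] >= corr_thresh:
                  current.append(b)
              else:
                  groups.append(current)
                  current = [b]
          groups.append(current)
          return groups

      def pca_reduce(cube, band_idx, n_components=3):
          # Keep a few principal-component images for one group of bands.
          X = cube[band_idx].reshape(len(band_idx), -1)
          X = X - X.mean(axis=1, keepdims=True)
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return (np.diag(s[:n_components]) @ Vt[:n_components]).reshape(-1, *cube.shape[1:])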

  9. Assessing the scale of tumor heterogeneity by complete hierarchical segmentation of MRI.

    PubMed

    Gensheimer, Michael F; Hawkins, Douglas S; Ermoian, Ralph P; Trister, Andrew D

    2015-02-07

    In many cancers, intratumoral heterogeneity has been found in histology, genetic variation and vascular structure. We developed an algorithm to interrogate different scales of heterogeneity using clinical imaging. We hypothesize that heterogeneity of perfusion at coarse scale may correlate with treatment resistance and propensity for disease recurrence. The algorithm recursively segments the tumor image into increasingly smaller regions. Each dividing line is chosen so as to maximize signal intensity difference between the two regions. This process continues until the tumor has been divided into single voxels, resulting in segments at multiple scales. For each scale, heterogeneity is measured by comparing each segmented region to the adjacent region and calculating the difference in signal intensity histograms. Using digital phantom images, we showed that the algorithm is robust to image artifacts and various tumor shapes. We then measured the primary tumor scales of contrast enhancement heterogeneity in MRI of 18 rhabdomyosarcoma patients. Using Cox proportional hazards regression, we explored the influence of heterogeneity parameters on relapse-free survival. Coarser scale of maximum signal intensity heterogeneity was prognostic of shorter survival (p = 0.05). By contrast, two fractal parameters and three Haralick texture features were not prognostic. In summary, our algorithm produces a biologically motivated segmentation of tumor regions and reports the amount of heterogeneity at various distance scales. If validated on a larger dataset, this prognostic imaging biomarker could be useful to identify patients at higher risk for recurrence and candidates for alternative treatment.
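
    A toy version of the recursive splitting step is sketched below: each rectangular region is divided by the horizontal or vertical cut that maximises the mean-intensity difference between the two parts, and the parts are split again until a minimum size is reached. Real tumour regions are not rectangular, so this only conveys the recursion and the per-scale heterogeneity bookkeeping.

      import numpy as np

      def best_split(region):
          # Find the axis-aligned cut that maximises the mean-intensity difference.
          best = (-1.0, None)
          for axis in (0, 1):
              for cut in range(1, region.shape[axis]):
                  a, b = np.split(region, [cut], axis=axis)
                  diff = abs(a.mean() - b.mean())
                  if diff > best[0]:
                      best = (diff, (axis, cut))
          return best

      def recursive_segment(region, min_size=2, depth=0, out=None):
          # Record (scale, heterogeneity) pairs while recursively splitting the region.
          out = [] if out is None else out
          if min(region.shape) < 2 * min_size:
              return out
          diff, (axis, cut) = best_split(region)
          out.append((depth, diff))
          a, b = np.split(region, [cut], axis=axis)
          recursive_segment(a, min_size, depth + 1, out)
          recursive_segment(b, min_size, depth + 1, out)
          return out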

  10. A Comparative Study of Random Patterns for Digital Image Correlation

    NASA Astrophysics Data System (ADS)

    Stoilov, G.; Kavardzhikov, V.; Pashkouleva, D.

    2012-06-01

    Digital Image Correlation (DIC) is a computer-based image analysis technique utilizing random patterns, which finds applications in the experimental mechanics of solids and structures. In this paper a comparative study of three simulated random patterns is presented. One of them is generated according to a new algorithm introduced by the authors. A criterion for quantitative evaluation of random patterns based on the calculation of their autocorrelation functions is introduced. The patterns' deformations are simulated numerically and realized experimentally. The displacements are measured using the DIC method. Tensile tests are performed after printing the generated random patterns on the surfaces of standard iron sheet specimens. It is found that the newly designed random pattern retains relatively good quality up to 20% deformation.
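
    The autocorrelation-based evaluation criterion mentioned above can be sketched with an FFT: compute the normalized autocorrelation of a speckle pattern and report the width of its central peak (a narrow peak is desirable for DIC). The half-height width along the central row is an illustrative choice of summary statistic.

      import numpy as np

      def autocorrelation(pattern):
          p = pattern.astype(float) - pattern.mean()
          F = np.fft.fft2(p)
          ac = np.fft.ifft2(F * np.conj(F)).real   # circular autocorrelation via the FFT
          ac = np.fft.fftshift(ac)
          return ac / ac.max()

      def peak_width(ac, level=0.5):
          # Number of pixels along the central row where the autocorrelation exceeds `level`.
          row = ac[ac.shape[0] // 2]
          return int((row >= level).sum())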

  11. Symmetric Phase-Only Filtering in Particle-Image Velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    2008-01-01

    Symmetrical phase-only filtering (SPOF) can be exploited to obtain substantial improvements in the results of data processing in particle-image velocimetry (PIV). In comparison with traditional PIV data processing, SPOF PIV data processing yields narrower and larger amplitude correlation peaks, thereby providing more-accurate velocity estimates. The higher signal-to-noise ratios associated with the higher amplitude correlation peaks afford greater robustness and reliability of processing. SPOF also affords superior performance in the presence of surface flare light and/or background light. SPOF algorithms can readily be incorporated into pre-existing algorithms used to process digitized image data in PIV, without significantly increasing processing times. A summary of PIV and traditional PIV data processing is prerequisite to a meaningful description of SPOF PIV processing. In PIV, a pulsed laser is used to illuminate a substantially planar region of a flowing fluid in which particles are entrained. An electronic camera records digital images of the particles at two instants of time. The components of velocity of the fluid in the illuminated plane can be obtained by determining the displacements of particles between the two illumination pulses. The objective in PIV data processing is to compute the particle displacements from the digital image data. In traditional PIV data processing, to which the present innovation applies, the two images are divided into a grid of subregions and the displacements determined from cross-correlations between the corresponding subregions in the first and second images. The cross-correlation process begins with the calculation of the Fourier transforms (or fast Fourier transforms) of the subregion portions of the images. The Fourier transforms from the corresponding subregions are multiplied, and this product is inverse Fourier transformed, yielding the cross-correlation intensity distribution. The average displacement of the particles across a subregion results in a displacement of the correlation peak from the center of the correlation plane. The velocity is then computed from the displacement of the correlation peak and the time between the recording of the two images. The process as described thus far is performed for all the subregions. The resulting set of velocities in grid cells amounts to a velocity vector map of the flow field recorded on the image plane. In traditional PIV processing, surface flare light and bright background light give rise to a large, broad correlation peak, at the center of the correlation plane, that can overwhelm the true particle-displacement correlation peak. This has made it necessary to resort to tedious image-masking and background-subtraction procedures to recover the relatively small-amplitude particle-displacement correlation peak. SPOF is a variant of phase-only filtering (POF), which, in turn, is a variant of matched spatial filtering (MSF). In MSF, one projects a first image (denoted the input image) onto a second image (denoted the filter) as part of a computation to determine how much and what part of the filter is present in the input image. MSF is equivalent to cross-correlation. In POF, the frequency-domain content of the MSF filter is modified to produce a unit-amplitude (phase-only) object. POF is implemented by normalizing the Fourier transform of the filter by its magnitude.
The advantage of POFs is that they yield correlation peaks that are sharper and have higher signal-to-noise ratios than those obtained through traditional MSF. In the SPOF, these benefits of POF are extended to PIV data processing. The SPOF approach, which is uniquely applicable to PIV-type image data, yields even better performance than the POF approach. In SPOF as now applied to PIV data processing, a subregion of the first image is treated as the input image and the corresponding subregion of the second image is treated as the filter. The Fourier transforms from both the first- and second-image subregions are normalized by the square roots of their respective magnitudes. This scheme yields optimal performance because the amounts of normalization applied to the spatial-frequency contents of the input and filter scenes are just enough to enhance their high-spatial-frequency contents while reducing their spurious low-spatial-frequency content. As a result, in SPOF PIV processing, particle-displacement correlation peaks can readily be detected above spurious background peaks, without need for masking or background subtraction.
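
    A minimal sketch of the SPOF correlation of one subregion pair, following the description above: both spectra are normalised by the square roots of their magnitudes before the cross-power spectrum is formed, and the displacement is read from the location of the correlation peak. The epsilon guard and the sign convention of the returned displacement are implementation choices.

      import numpy as np

      def spof_displacement(sub1, sub2, eps=1e-12):
          F1 = np.fft.fft2(sub1 - sub1.mean())
          F2 = np.fft.fft2(sub2 - sub2.mean())
          F1 /= np.sqrt(np.abs(F1)) + eps          # symmetric phase-only filtering: each spectrum
          F2 /= np.sqrt(np.abs(F2)) + eps          # keeps only the square root of its magnitude
          corr = np.fft.fftshift(np.fft.ifft2(F1 * np.conj(F2)).real)
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          center = np.array(corr.shape) // 2
          return np.array(peak) - center           # particle displacement in pixels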

  12. A Comparison of 3D3C Velocity Measurement Techniques

    NASA Astrophysics Data System (ADS)

    La Foy, Roderick; Vlachos, Pavlos

    2013-11-01

    The velocity measurement fidelities of several 3D3C PIV measurement techniques, including tomographic PIV, synthetic aperture PIV, plenoptic PIV, defocusing PIV, and 3D PTV, are compared in simulations. A physically realistic ray-tracing algorithm is used to generate synthetic images of a standard calibration grid and of illuminated particle fields advected by homogeneous isotropic turbulence. The simulated images for the tomographic, synthetic aperture, and plenoptic PIV cases are then used to create three-dimensional reconstructions upon which cross-correlations are performed to yield the measured velocity field. Particle tracking algorithms are applied to the images for the defocusing PIV and 3D PTV cases to directly yield the three-dimensional velocity field. In all cases the measured velocity fields are compared to one another and to the true velocity field using several metrics.

  13. Dual-model automatic detection of nerve-fibres in corneal confocal microscopy images.

    PubMed

    Dabbah, M A; Graham, J; Petropoulos, I; Tavakoli, M; Malik, R A

    2010-01-01

    Corneal Confocal Microscopy (CCM) imaging is a non-invasive surrogate for detecting, quantifying and monitoring diabetic peripheral neuropathy. This paper presents an automated method for detecting nerve-fibres from CCM images using a dual-model detection algorithm and compares its performance to well-established texture and feature detection methods. The algorithm comprises two separate models, one for the background and another for the foreground (nerve-fibres), which work interactively. Our evaluation shows a significant improvement (p ≈ 0) in both error rate and signal-to-noise ratio of this model over the competitor methods. The automatic method is also evaluated against manual ground truth analysis in assessing diabetic neuropathy on the basis of nerve-fibre length, and shows a strong correlation (r = 0.92). Both analyses significantly separate diabetic patients from control subjects (p ≈ 0).

  14. A Novel General Imaging Formation Algorithm for GNSS-Based Bistatic SAR.

    PubMed

    Zeng, Hong-Cheng; Wang, Peng-Bo; Chen, Jie; Liu, Wei; Ge, LinLin; Yang, Wei

    2016-02-26

    Global Navigation Satellite System (GNSS)-based bistatic Synthetic Aperture Radar (SAR) is playing an increasingly significant role in remote sensing applications thanks to its low cost and real-time global coverage capability. In this paper, a general imaging formation algorithm is proposed for accurately and efficiently focusing GNSS-based bistatic SAR data, which avoids the interpolation processing in traditional back projection algorithms (BPAs). A two-dimensional point target spectrum model is first presented, and the bulk range cell migration correction (RCMC) is derived from it for reducing range cell migration (RCM) and for coarse focusing. As the bulk RCMC significantly changes the range history of the radar signal, a modified and much more efficient hybrid correlation operation is introduced for compensating residual phase errors. Simulation results are presented based on a general geometric topology with non-parallel trajectories and unequal velocities for both transmitter and receiver platforms, showing the satisfactory performance of the proposed method.

  15. A Novel General Imaging Formation Algorithm for GNSS-Based Bistatic SAR

    PubMed Central

    Zeng, Hong-Cheng; Wang, Peng-Bo; Chen, Jie; Liu, Wei; Ge, LinLin; Yang, Wei

    2016-01-01

    Global Navigation Satellite System (GNSS)-based bistatic Synthetic Aperture Radar (SAR) is playing an increasingly significant role in remote sensing applications thanks to its low cost and real-time global coverage capability. In this paper, a general imaging formation algorithm is proposed for accurately and efficiently focusing GNSS-based bistatic SAR data, which avoids the interpolation processing in traditional back projection algorithms (BPAs). A two-dimensional point target spectrum model is first presented, and the bulk range cell migration correction (RCMC) is derived from it for reducing range cell migration (RCM) and for coarse focusing. As the bulk RCMC significantly changes the range history of the radar signal, a modified and much more efficient hybrid correlation operation is introduced for compensating residual phase errors. Simulation results are presented based on a general geometric topology with non-parallel trajectories and unequal velocities for both transmitter and receiver platforms, showing the satisfactory performance of the proposed method. PMID:26927117

  16. No-reference image quality assessment based on natural scene statistics and gradient magnitude similarity

    NASA Astrophysics Data System (ADS)

    Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang

    2014-11-01

    The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image in agreement with human opinion, in which feature extraction is an important issue. However, the features used in state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant; therefore, the performance of these models is limited. To further improve the performance of NR-IQA, we propose a general-purpose NR-IQA algorithm which combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract the point-wise statistics of single pixel values, which are characterized by a generalized Gaussian distribution model to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. Then a mapping is learned to predict quality scores using support vector regression. The experimental results on benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.

  17. Smile detectors correlation

    NASA Astrophysics Data System (ADS)

    Yuksel, Kivanc; Chang, Xin; Skarbek, Władysław

    2017-08-01

    A novel smile recognition algorithm is presented, based on the extraction of 68 facial salient points (fp68) using an ensemble of regression trees. The smile detector exploits a linear Support Vector Machine (SVM) model, trained with a few hundred exemplar images in a 136-dimensional feature space. It is shown by strict statistical data analysis that such a geometric detector depends strongly on the geometry of the mouth opening area, measured by triangulation of the outer lip contour. To this end, two Bayesian detectors were developed and compared with the SVM detector. The first uses the mouth area in the 2D image, while the second refers to the mouth area in a 3D animated face model. The 3D modeling is based on the Candide-3 model and is performed in real time along with the three smile detectors and statistics estimators. The mouth-area/Bayesian detectors exhibit high correlation with the fp68/SVM detector, in the range [0.8, 1.0], depending mainly on lighting conditions and individual features, with an advantage for the 3D technique, especially in hard lighting conditions.

  18. Long-range depth profiling of camouflaged targets using single-photon detection

    NASA Astrophysics Data System (ADS)

    Tobin, Rachael; Halimi, Abderrahim; McCarthy, Aongus; Ren, Ximing; McEwan, Kenneth J.; McLaughlin, Stephen; Buller, Gerald S.

    2018-03-01

    We investigate the reconstruction of depth and intensity profiles from data acquired using a custom-designed time-of-flight scanning transceiver based on the time-correlated single-photon counting technique. The system had an operational wavelength of 1550 nm and used a Peltier-cooled InGaAs/InP single-photon avalanche diode detector. Measurements were made of human figures, in plain view and obscured by camouflage netting, from a stand-off distance of 230 m in daylight using only submilliwatt average optical powers. These measurements were analyzed using a pixelwise cross correlation approach and compared to analysis using a bespoke algorithm designed for the restoration of multilayered three-dimensional light detection and ranging images. This algorithm is based on the optimization of a convex cost function composed of a data fidelity term and regularization terms, and the results obtained show that it achieves significant improvements in image quality for multidepth scenarios and for reduced acquisition times.

  19. Evaluation of various deformable image registrations for point and volume variations

    NASA Astrophysics Data System (ADS)

    Han, Su Chul; Lee, Soon Sung; Kim, Mi-Sook; Ji, Young Hoon; Kim, Kum Bae; Choi, Sang Hyun; Park, Seungwoo; Jung, Haijo; Yoo, Hyung Jun; Yi, Chul Young

    2015-07-01

    The accuracy of deformable image registration (DIR) has a significant dosimetric impact in radiation treatment planning. Many groups have studied the accuracy of DIR. In this study, we evaluated the accuracy of various DIR algorithms by using variations of the deformation point and volume. The reference image (I_ref) and volume (V_ref) were first generated using virtual deformation QA software (ImSimQA, Oncology System Limited, UK). We deformed I_ref with axial movement of the deformation point and deformed V_ref depending on the type of deformation in the ImSimQA software; the deformation was classified into two types, with deformation 1 increasing V_ref (relaxation) and deformation 2 decreasing V_ref (contraction). The deformed image (I_def) and volume (V_def) acquired with the ImSimQA software were then inversely deformed back toward I_ref and V_ref using DIR algorithms. As a result, we acquired an inversely deformed image (I_id) from I_def and an inversely deformed volume (V_id) from V_def. Four intensity-based algorithms were tested: Horn-Schunck optical flow (HS), iterative optical flow (IOF), modified demons (MD) and fast demons (FD), using the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART) for MATLAB. The image similarity between I_ref and I_id was calculated to evaluate the accuracy of the DIR algorithms using the Normalized Mutual Information (NMI) and Normalized Cross Correlation (NCC) metrics. When the deformation point was moved by 4 mm, the value of NMI was above 1.81 and that of NCC was above 0.99 for all DIR algorithms. As the degree of deformation increased, the image similarity decreased. When V_ref was increased or decreased by about 12%, the difference between V_ref and V_id was within ±5% regardless of the type of deformation. The Dice Similarity Coefficient (DSC) was above 0.95 for deformation 1 except for the MD algorithm; for deformation 2, the DSC was above 0.95 for all DIR algorithms. I_def and V_def were not completely restored to I_ref and V_ref, and the accuracy of the DIR algorithms differed depending on the degree of deformation. Hence, the performance of DIR algorithms should be verified for the desired applications.
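
    Two of the agreement metrics used above can be sketched compactly: the normalized mutual information between the reference and inversely deformed images (this Studholme-style definition ranges from 1 for independent images to 2 for identical ones, consistent with the values above 1.81 reported here), and the Dice similarity coefficient between binary volumes. The bin count is an illustrative choice.

      import numpy as np

      def normalized_mutual_information(a, b, bins=64):
          hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = hist / hist.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          hx = -(px[px > 0] * np.log(px[px > 0])).sum()
          hy = -(py[py > 0] * np.log(py[py > 0])).sum()
          hxy = -(pxy[pxy > 0] * np.log(pxy[pxy > 0])).sum()
          return float((hx + hy) / hxy)    # 1 = independent, 2 = identical

      def dice_similarity(mask_a, mask_b):
          intersection = np.logical_and(mask_a, mask_b).sum()
          return float(2.0 * intersection / (mask_a.sum() + mask_b.sum()))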

  20. Designing a composite correlation filter based on iterative optimization of training images for distortion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Elbouz, M.; Alfalou, A.; Brosseau, C.

    2017-06-01

    We present a novel method to optimize the discrimination ability and noise robustness of composite correlation filters. The method is based on iterative preprocessing of the training images, which extracts boundary and detailed feature information from authentic training faces, thereby improving the peak-to-correlation energy (PCE) ratio of authentic faces and making the filter immune to intra-class variance and noise interference. By adding the training images directly, one can obtain a composite template with high discrimination ability and robustness for the face recognition task. The proposed composite correlation filter does not involve the complicated mathematical analysis and computation that are often required in the design of correlation algorithms. Simulation tests have been conducted to check the effectiveness and feasibility of our proposal. Moreover, to assess the robustness of composite filters using receiver operating characteristic (ROC) curves, we devise a new method for counting the true-positive and false-positive rates that takes into account the difference between the PCE and the decision threshold.
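
    The peak-to-correlation energy criterion targeted by the filter optimisation can be sketched as the squared correlation peak divided by the mean energy of the rest of the correlation plane; the small exclusion zone around the peak is a common convention, and its size here is an illustrative choice.

      import numpy as np

      def pce(correlation_plane, exclude=5):
          peak_idx = np.unravel_index(np.argmax(np.abs(correlation_plane)), correlation_plane.shape)
          peak = correlation_plane[peak_idx] ** 2
          mask = np.ones_like(correlation_plane, dtype=bool)
          r, c = peak_idx
          mask[max(r - exclude, 0):r + exclude + 1, max(c - exclude, 0):c + exclude + 1] = False
          energy = (correlation_plane[mask] ** 2).mean()   # energy of the plane away from the peak
          return float(peak / energy)

      # decision rule: declare an authentic face if pce(corr_plane) exceeds a chosen threshold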

  1. Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.

    1990-01-01

    A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
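
    A minimal sketch of the ordered-dither variant described in the last sentence follows; it assumes a standard 4x4 Bayer threshold matrix and arbitrarily picks the blue component as the one whose threshold matrix is inverted, so it illustrates the idea rather than the author's implementation.

```python
# Hedged sketch: ordered dither in which one colour component (blue, chosen
# arbitrarily here) uses an inverted threshold matrix, so its quantization
# errors are negatively correlated with those of the other components.
import numpy as np

BAYER4 = np.array([[ 1,  9,  3, 11],
                   [13,  5, 15,  7],
                   [ 4, 12,  2, 10],
                   [16,  8, 14,  6]]) / 17.0

def ordered_dither_rgb(img):
    """img: float RGB image in [0, 1], shape (h, w, 3); returns binary RGB."""
    h, w, _ = img.shape
    t = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    out = np.zeros_like(img)
    out[..., 0] = (img[..., 0] > t).astype(float)          # red
    out[..., 1] = (img[..., 1] > t).astype(float)          # green
    out[..., 2] = (img[..., 2] > (1.0 - t)).astype(float)  # blue: inverted thresholds
    return out
```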

  2. Comparison of remote sensing algorithms for retrieval of suspended particulate matter concentration from reflectance in coastal waters

    NASA Astrophysics Data System (ADS)

    Freeman, Lauren A.; Ackleson, Steven G.; Rhea, William Joseph

    2017-10-01

    Suspended particulate matter (SPM) is a key environmental indicator for rivers, estuaries, and coastal waters, and it can be calculated from remote sensing reflectance obtained by an airborne or satellite imager. Here, algorithms from prior studies are applied to a dataset of in-situ, at-surface hyperspectral remote sensing reflectance collected in three geographic regions representing different water types. These data show the inherently exponential nature of the optical relationship between reflectance and sediment concentration. However, linear models are also shown to provide a reasonable estimate of sediment concentration when used with care under conditions similar to those for which the algorithms were developed, particularly at lower SPM values (0 to 20 mg/L). Fifteen published SPM algorithms are tested, returning strong correlations of R2 > 0.7 and, in most cases, R2 > 0.8. Very low SPM values show a weaker, wavelength-independent correlation with algorithm-calculated SPM. None of the tested algorithms performs well for high SPM values (>30 mg/L), with most algorithms underestimating SPM. A shift toward a smaller number of simple exponential or linear models relating satellite remote sensing reflectance to suspended sediment concentration, with regional consideration, will greatly aid larger spatiotemporal studies of suspended sediment trends.

  3. Dual energy x-ray imaging and scoring of coronary calcium: physics-based digital phantom and clinical studies

    NASA Astrophysics Data System (ADS)

    Zhou, Bo; Wen, Di; Nye, Katelyn; Gilkeson, Robert C.; Wilson, David L.

    2016-03-01

    Coronary artery calcification (CAC) as assessed with the CT calcium score is the best biomarker of coronary artery disease. Dual energy x-ray provides an inexpensive, low-radiation-dose alternative. A two-shot system (GE Revolution-XRd) is used, raw images are processed with a custom algorithm, and a coronary calcium image (DECCI) is created, similar to the bone image but optimized for CAC visualization rather than lung visualization. In this report, we developed a physics-based digital phantom containing heart, lung, CAC, spine, ribs, pulmonary artery, and adipose elements, examined effects on DECCI, suggested physics-inspired algorithms to improve CAC contrast, and evaluated the correlation between CT calcium scores and a proposed DE calcium score. In the simulation experiments, beam hardening from increasing adipose thickness (2 cm to 8 cm) reduced Cg by 19% and 27% in 120 kVp and 60 kVp images, respectively, but reduced Cg by less than 7% in DECCI. If a pulmonary artery moves or pulsates with blood filling between exposures, it can give rise to a significantly confounding PA signal in DECCI, similar in amplitude to CAC. These observations suggest modifications to DECCI processing, which can further improve CAC contrast by a factor of 2 in clinical exams. The DE score had the best correlation with the "CT mass score" among three commonly used CT scores. The results suggest that DE x-ray is a promising tool for imaging and scoring CAC, and that there remains opportunity for further improvements to DECCI processing.

  4. Classification Features of US Images Liver Extracted with Co-occurrence Matrix Using the Nearest Neighbor Algorithm

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Bibicu, Dorin; Moraru, Luminita; Nicolae, Mariana Carmen

    2011-12-01

    The co-occurrence matrix has been applied successfully for echographic image characterization because it contains information about the spatial distribution of grey-scale levels in an image. The paper deals with the analysis of pixels in selected regions of interest of US images of the liver. The useful information obtained refers to texture features such as entropy, contrast, dissimilarity and correlation, extracted with the co-occurrence matrix. The analyzed US images were grouped into two distinct sets: healthy liver and steatosis (fatty) liver. These two sets of echographic images of the liver form a database that includes only histologically confirmed cases: 10 images of healthy liver and 10 images of steatosis liver. The healthy subjects were used to compute the four textural indices and served as the control dataset. We chose to study this disease because steatosis is the abnormal retention of lipids in cells. The texture features are statistical measures and can be used to characterize the irregularity of tissues. The goal is to extract this information using the Nearest Neighbor classification algorithm. The k-NN algorithm is a powerful tool for classifying texture features, grouping the healthy-liver features into a training set and the steatosis-liver features into a holdout set. The results could be used to quantify the texture information and will allow a clear distinction between healthy and steatosis liver.
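
    The following sketch illustrates the kind of pipeline described: co-occurrence (GLCM) texture features from a liver ROI followed by k-NN classification. It assumes scikit-image >= 0.19 and scikit-learn; the 64-level quantization and k = 3 are arbitrary choices, not taken from the paper.

```python
# Hedged sketch: GLCM texture features (entropy, contrast, dissimilarity,
# correlation) from a liver ROI, to be classified with k-NN.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(roi, levels=64):
    q = np.uint8(np.floor(roi / (roi.max() + 1e-12) * (levels - 1)))
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return [entropy,
            graycoprops(glcm, 'contrast')[0, 0],
            graycoprops(glcm, 'dissimilarity')[0, 0],
            graycoprops(glcm, 'correlation')[0, 0]]

knn = KNeighborsClassifier(n_neighbors=3)
# knn.fit(training_features, labels); knn.predict([glcm_features(new_roi)])
```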

  5. Parallelization and Algorithmic Enhancements of High Resolution IRAS Image Construction

    NASA Technical Reports Server (NTRS)

    Cao, Yu; Prince, Thomas A.; Tereby, Susan; Beichman, Charles A.

    1996-01-01

    The Infrared Astronomical Satellite carried out a nearly complete survey of the infrared sky, and the survey data are important for the study of many astrophysical phenomena. However, many data sets at other wavelengths have higher resolutions than that of the co-added IRAS maps, and high resolution IRAS images are strongly desired both for their own information content and for their usefulness in correlation studies. The HIRES program was developed by the Infrared Processing and Analysis Center (IPAC) to produce high resolution (approx. 1') images from IRAS data using the Maximum Correlation Method (MCM). We describe the port of HIRES to the Intel Paragon, a massively parallel supercomputer, other software developments for mass production of HIRES images, and the IRAS Galaxy Atlas, a project to map the Galactic plane at 60 and 100 μm.

  6. A feed-forward Hopfield neural network algorithm (FHNNA) with a colour satellite image for water quality mapping

    NASA Astrophysics Data System (ADS)

    Asal Kzar, Ahmed; Mat Jafri, M. Z.; Hwee San, Lim; Al-Zuky, Ali A.; Mutter, Kussay N.; Hassan Al-Saleh, Anwar

    2016-06-01

    Many techniques have been proposed for the water quality problem, but remote sensing techniques have proven successful, especially when artificial neural networks are used as mathematical models with them. The Hopfield neural network is a common, fast, simple, and efficient type of artificial neural network, but it has limitations when dealing with images that have more than two colours, such as remote sensing images. This work attempts to solve this problem by modifying the network so that it can deal with colour remote sensing images for water quality mapping. A Feed-forward Hopfield Neural Network Algorithm (FHNNA) was developed and used with a colour satellite image from the Thailand Earth Observation System (THEOS) for TSS mapping in the Penang Strait, Malaysia, through the classification of TSS concentrations. The new algorithm is based on three modifications: using the HNN as a feed-forward network, considering the weights of bit planes, and using a non-self architecture (zero diagonal of the weight matrix); in addition, it depends on validation data. The resulting map was colour-coded for visual interpretation. The efficiency of the new algorithm was demonstrated by the high correlation coefficient (R = 0.979) and the low root mean square error (RMSE = 4.301) obtained with the validation data, which were divided into two groups: one used for the algorithm and the other used for validating the results. The comparison was with the minimum distance classifier. Therefore, TSS mapping of polluted water in the Penang Strait, Malaysia, can be performed using FHNNA with the remote sensing technique (THEOS). It is a new and useful application of the HNN, providing a new model for remote-sensing-based water quality mapping, an important environmental problem.

  7. SSM/I Rainfall Volume Correlated with Deepening Rate in Extratropical Cyclones

    NASA Technical Reports Server (NTRS)

    Petty, Grant W.; Miller, Douglas K.

    1994-01-01

    With the emergence of reasonably robust, physically based rain rate algorithms designed for the Special Sensor Microwave/Imager (SSM/I), a unique opportunity exists to directly observe a physical component which can contribute to or be a signature of cyclone deepening (latent heat release). The emphasis of the research in this paper is to seek systematic differences in rain rate observed by the SSM/I, using the algorithm of Petty in cases of explosive and nonexplosive cyclone deepening.

  8. Using phase information to enhance speckle noise reduction in the ultrasonic NDE of coarse grain materials

    NASA Astrophysics Data System (ADS)

    Lardner, Timothy; Li, Minghui; Gachagan, Anthony

    2014-02-01

    Materials with a coarse grain structure are becoming increasingly prevalent in industry due to their resilience to stress and corrosion. These materials are difficult to inspect with ultrasound because reflections from the grains lead to high noise levels which obscure the echoes of interest. Spatially Averaged Sub-Aperture Correlation Imaging (SASACI) is an advanced array beamforming technique that uses the cross-correlation between images from array sub-apertures to generate an image weighting matrix, in order to reduce noise levels. This paper presents a method inspired by SASACI that further improves imaging by using phase information to refine focusing and reduce noise. A-scans from adjacent array elements are cross-correlated using both signal amplitude and phase to refine delay laws and minimize phase aberration. The phase-based and amplitude-based corrected images are used as inputs to a two-dimensional cross-correlation algorithm that outputs a weighting matrix which can be applied to any conventional image. This approach was validated experimentally using a 5 MHz array and a coarse-grained Inconel 625 step wedge, and compared to the Total Focusing Method (TFM). Initial results show SNR improvements of over 20 dB compared to TFM, along with much higher resolution.

  9. LCC-Demons: a robust and accurate symmetric diffeomorphic registration algorithm.

    PubMed

    Lorenzi, M; Ayache, N; Frisoni, G B; Pennec, X

    2013-11-01

    Non-linear registration is a key instrument for computational anatomy to study the morphology of organs and tissues. However, in order to be an effective instrument for clinical practice, registration algorithms must be computationally efficient, accurate and, most importantly, robust to the multiple biases affecting medical images. In this work we propose a fast and robust registration framework based on the log-Demons diffeomorphic registration algorithm. The transformation is parameterized by stationary velocity fields (SVFs), and the similarity metric implements a symmetric local correlation coefficient (LCC). Moreover, we show how the SVF setting provides a stable and consistent numerical scheme for the computation of the Jacobian determinant and the flux of the deformation across the boundaries of a given region. Thus, it provides a robust evaluation of spatial changes. We tested the LCC-Demons in the inter-subject registration setting, by comparing with state-of-the-art registration algorithms on publicly available datasets, and in the intra-subject longitudinal registration problem, for the statistically powered measurements of the longitudinal atrophy in Alzheimer's disease. Experimental results show that LCC-Demons is a generic, flexible, efficient and robust algorithm for the accurate non-linear registration of images, which can find several applications in the field of medical imaging. Without any additional optimization, it solves intra- and inter-subject registration problems equally well, and compares favorably to state-of-the-art methods. Copyright © 2013 Elsevier Inc. All rights reserved.
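
    A minimal sketch of a local correlation coefficient (LCC) similarity of the kind used by LCC-Demons follows; local means and variances are computed with Gaussian smoothing, and the smoothing width sigma is an assumed parameter, not the paper's setting.

```python
# Hedged sketch of a symmetric local correlation coefficient (LCC) similarity
# with Gaussian local statistics.
import numpy as np
from scipy.ndimage import gaussian_filter

def lcc(fixed, moving, sigma=2.0):
    g = lambda x: gaussian_filter(x, sigma)
    mf, mm = g(fixed), g(moving)
    cov = g(fixed * moving) - mf * mm
    var_f = g(fixed ** 2) - mf ** 2
    var_m = g(moving ** 2) - mm ** 2
    return float(np.mean(cov / np.sqrt(np.maximum(var_f * var_m, 1e-12))))
```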

  10. Non-rigid registration between 3D ultrasound and CT images of the liver based on intensity and gradient information

    NASA Astrophysics Data System (ADS)

    Lee, Duhgoon; Nam, Woo Hyun; Lee, Jae Young; Ra, Jong Beom

    2011-01-01

    In order to utilize both ultrasound (US) and computed tomography (CT) images of the liver concurrently for medical applications such as diagnosis and image-guided intervention, non-rigid registration between these two types of images is an essential step, as local deformation between US and CT images exists due to the different respiratory phases involved and due to the probe pressure that occurs in US imaging. This paper introduces a voxel-based non-rigid registration algorithm between the 3D B-mode US and CT images of the liver. In the proposed algorithm, to improve the registration accuracy, we utilize the surface information of the liver and gallbladder in addition to the information of the vessels inside the liver. For an effective correlation between US and CT images, we treat those anatomical regions separately according to their characteristics in US and CT images. Based on a novel objective function using a 3D joint histogram of the intensity and gradient information, vessel-based non-rigid registration is followed by surface-based non-rigid registration in sequence, which improves the registration accuracy. The proposed algorithm is tested for ten clinical datasets and quantitative evaluations are conducted. Experimental results show that the registration error between anatomical features of US and CT images is less than 2 mm on average, even with local deformation due to different respiratory phases and probe pressure. In addition, the lesion registration error is less than 3 mm on average with a maximum of 4.5 mm that is considered acceptable for clinical applications.

  11. Small maritime target detection through false color fusion

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Wu, Tirui

    2008-04-01

    We present an algorithm that produces a fused false color representation of a combined multiband IR and visual imaging system for maritime applications. Multispectral IR imaging techniques are increasingly deployed in maritime operations, to detect floating mines or to find small dinghies and swimmers during search and rescue operations. However, maritime backgrounds usually contain a large amount of clutter that severely hampers the detection of small targets. Our new algorithm exploits the correlation between the target signatures in two different IR frequency bands (3-5 and 8-12 μm) to construct a fused IR image with a reduced amount of clutter. The fused IR image is then combined with a visual image in a false color RGB representation for display to a human operator. The algorithm works as follows. First, both individual IR bands are filtered with a morphological opening top-hat transform to extract small details. Second, a common image is extracted from the two filtered IR bands, and assigned to the red channel of an RGB image. Regions of interest that appear in both IR bands remain in this common image, while most uncorrelated noise details are filtered out. Third, the visual band is assigned to the green channel and, after multiplication with a constant (typically 1.6) also to the blue channel. Fourth, the brightness and colors of this intermediate false color image are renormalized by adjusting its first order statistics to those of a representative reference scene. The result of these four steps is a fused color image, with naturalistic colors (bluish sky and grayish water), in which small targets are clearly visible.
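
    The four steps above map naturally onto a short script. The sketch below assumes SciPy's grey top-hat for step 1 and a pixel-wise minimum as the "common image" operator in step 2 (the abstract does not name the operator); the 1.6 blue-channel gain and the first-order statistics matching follow the text.

```python
# Hedged sketch of the four fusion steps described above (not the authors' code).
import numpy as np
from scipy.ndimage import white_tophat

def fuse(ir_35, ir_812, visual, reference_rgb, size=15):
    d1 = white_tophat(ir_35, size=size)        # 1. top-hat on both IR bands
    d2 = white_tophat(ir_812, size=size)
    common = np.minimum(d1, d2)                # 2. details present in both bands
    rgb = np.zeros(ir_35.shape + (3,))
    rgb[..., 0] = common                       #    red channel
    rgb[..., 1] = visual                       # 3. green channel
    rgb[..., 2] = 1.6 * visual                 #    blue channel (constant gain)
    for c in range(3):                         # 4. match first-order statistics
        rgb[..., c] = (rgb[..., c] - rgb[..., c].mean()) / (rgb[..., c].std() + 1e-6)
        rgb[..., c] = rgb[..., c] * reference_rgb[..., c].std() + reference_rgb[..., c].mean()
    return rgb
```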

  12. Improving best-phase image quality in cardiac CT by motion correction with MAM optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohkohl, Christopher; Bruder, Herbert; Stierstorfer, Karl

    2013-03-15

    Purpose: Research in image reconstruction for cardiac CT aims at using motion correction algorithms to improve the image quality of the coronary arteries. The key to those algorithms is motion estimation, which is currently based on 3-D/3-D registration to align the structures of interest in images acquired in multiple heart phases. The need for an extended scan data range covering several heart phases is critical in terms of radiation dose to the patient and limits the clinical potential of the method. Furthermore, literature reports only slight quality improvements of the motion corrected images when compared to the most quiet phase (best-phase) that was actually used for motion estimation. In this paper a motion estimation algorithm is proposed which does not require an extended scan range but works with a short scan data interval, and which markedly improves the best-phase image quality. Methods: Motion estimation is based on the definition of motion artifact metrics (MAM) to quantify motion artifacts in a 3-D reconstructed image volume. The authors use two different MAMs, entropy, and positivity. By adjusting the motion field parameters, the MAM of the resulting motion-compensated reconstruction is optimized using a gradient descent procedure. In this way motion artifacts are minimized. For a fast and practical implementation, only analytical methods are used for motion estimation and compensation. Both the MAM-optimization and a 3-D/3-D registration-based motion estimation algorithm were investigated by means of a computer-simulated vessel with a cardiac motion profile. Image quality was evaluated using normalized cross-correlation (NCC) with the ground truth template and root-mean-square deviation (RMSD). Four coronary CT angiography patient cases were reconstructed to evaluate the clinical performance of the proposed method. Results: For the MAM-approach, the best-phase image quality could be improved for all investigated heart phases, with a maximum improvement of the NCC value by 100% and of the RMSD value by 81%. The corresponding maximum improvements for the registration-based approach were 20% and 40%. In phases with very rapid motion the registration-based algorithm obtained better image quality, while the image quality of the MAM algorithm was superior in phases with less motion. The image quality improvement of the MAM optimization was visually confirmed for the different clinical cases. Conclusions: The proposed method allows a software-based best-phase image quality improvement in coronary CT angiography. A short scan data interval at the target heart phase is sufficient, no additional scan data in other cardiac phases are required. The algorithm is therefore directly applicable to any standard cardiac CT acquisition protocol.
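
    A minimal sketch of the two motion artifact metrics named above, entropy and positivity, follows; the exact definitions in the paper may differ, and the -1000 HU positivity floor is an assumption made here for illustration only.

```python
# Hedged sketch of two candidate motion-artifact metrics (MAMs) on a
# reconstructed volume; sharper, less artifacted volumes score lower.
import numpy as np

def entropy_mam(volume, bins=256):
    hist, _ = np.histogram(volume, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))      # lower entropy -> fewer artifacts

def positivity_mam(volume, floor=-1000.0):
    # penalize voxels below a physically plausible lower bound (assumed -1000 HU)
    return float(np.sum(np.minimum(volume - floor, 0.0) ** 2))
```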

  13. Image analysis approach for development of a decision support system for detection of malaria parasites in thin blood smear images.

    PubMed

    Prasad, Keerthana; Winter, Jan; Bhat, Udayakrishna M; Acharya, Raviraja V; Prabhu, Gopalakrishna K

    2012-08-01

    This paper describes development of a decision support system for diagnosis of malaria using color image analysis. A hematologist has to study around 100 to 300 microscopic views of Giemsa-stained thin blood smear images to detect malaria parasites, evaluate the extent of infection and to identify the species of the parasite. The proposed algorithm picks up the suspicious regions and detects the parasites in images of all the views. The subimages representing all these parasites are put together to form a composite image which can be sent over a communication channel to obtain the opinion of a remote expert for accurate diagnosis and treatment. We demonstrate the use of the proposed technique for use as a decision support system by developing an android application which facilitates the communication with a remote expert for the final confirmation on the decision for treatment of malaria. Our algorithm detects around 96% of the parasites with a false positive rate of 20%. The Spearman correlation r was 0.88 with a confidence interval of 0.838 to 0.923, p<0.0001.

  14. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
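
    MBCn builds on the N-dimensional pdf transform (random rotation, univariate quantile mapping of each rotated coordinate, rotation back). The sketch below shows only that core iteration, without MBCn's preservation of projected climate trends; the (samples x variables) array layout, iteration count and seed are assumptions for illustration.

```python
# Hedged sketch of the core N-dimensional pdf transform iteration underlying
# MBCn-style multivariate bias correction (not the full MBCn algorithm).
import numpy as np

def quantile_map(x, ref):
    """Map the empirical quantiles of x onto those of ref (1-D arrays)."""
    ranks = np.argsort(np.argsort(x))
    idx = np.round(ranks * (len(ref) - 1) / (len(x) - 1)).astype(int)
    return np.sort(ref)[idx]

def npdf_transform(model, obs, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(model, dtype=float).copy()
    obs = np.asarray(obs, dtype=float)
    k = x.shape[1]
    for _ in range(n_iter):
        q, _ = np.linalg.qr(rng.standard_normal((k, k)))  # random rotation
        xr, obr = x @ q, obs @ q
        for j in range(k):                                # map each rotated coordinate
            xr[:, j] = quantile_map(xr[:, j], obr[:, j])
        x = xr @ q.T                                      # rotate back
    return x
```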

  15. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.

  16. Ex vivo brain tumor analysis using spectroscopic optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lenz, Marcel; Krug, Robin; Welp, Hubert; Schmieder, Kirsten; Hofmann, Martin R.

    2016-03-01

    A big challenge during neurosurgeries is to distinguish between healthy tissue and cancerous tissue, but currently a suitable non-invasive real time imaging modality is not available. Optical Coherence Tomography (OCT) is a potential technique for such a modality. OCT has a penetration depth of 1-2 mm and a resolution of 1-15 μm which is sufficient to illustrate structural differences between healthy tissue and brain tumor. Therefore, we investigated gray and white matter of healthy central nervous system and meningioma samples with a Spectral Domain OCT System (Thorlabs Callisto). Additional OCT images were generated after paraffin embedding and after the samples were cut into 10 μm thin slices for histological investigation with a bright field microscope. All samples were stained with Hematoxylin and Eosin. In all cases B-scans and 3D images were made. Furthermore, a camera image of the investigated area was made by the built-in video camera of our OCT system. For orientation, the backsides of all samples were marked with blue ink. The structural differences between healthy tissue and meningioma samples were most pronounced directly after removal. After paraffin embedding these differences diminished. A correlation between OCT en face images and microscopy images can be seen. In order to increase contrast, post processing algorithms were applied. Hence we employed Spectroscopic OCT, pattern recognition algorithms and machine learning algorithms such as k-means Clustering and Principal Component Analysis.

  17. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage. PMID:24694143

  18. Comparative analysis of methods for extracting vessel network on breast MRI images

    NASA Astrophysics Data System (ADS)

    Gaizer, Bence T.; Vassiou, Katerina G.; Lavdas, Eleftherios; Arvanitis, Dimitrios L.; Fezoulidis, Ioannis V.; Glotsos, Dimitris T.

    2017-11-01

    Digital processing of MRI images aims to provide automated diagnostic evaluation for regular health screenings. Cancerous lesions are proven to cause an alteration in the vessel structure of the diseased organ. Currently there are several methods used for extraction of the vessel network in order to quantify its properties. In this work MRI images (Signa HDx 3.0T, GE Healthcare, courtesy of University Hospital of Larissa) of 30 female breasts were subjected to three different vessel extraction algorithms to determine the location of their vascular network. The first method is an experiment to build a graph over known points of the vessel network; the second algorithm aims to determine the direction and diameter of vessels at these points; the third approach is a seed growing algorithm, spreading the selection to neighbors of the known vessel pixels. The possibilities shown by the different methods were analyzed, and quantitative measurements were performed. The data provided by these measurements showed no clear correlation with the presence or malignancy of tumors, based on the radiological diagnosis of skilled physicians.

  19. SU-E-T-497: Semi-Automated in Vivo Radiochromic Film Dosimetry Using a Novel Image Processing Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reyhan, M; Yue, N

    Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5 x 1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. The algorithm was validated by comparing the in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2-886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (5.5 cGy, -6.1 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic=0.997*Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in Matlab on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize user interaction and processing time of radiochromic film used for in vivo dosimetry.
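
    A minimal sketch of the described ROI detection (thresholding followed by erosion, then per-film mean pixel value and calibration) follows; the threshold, erosion depth and calibration function are user-supplied placeholders, not the validated implementation.

```python
# Hedged sketch: threshold the scanned film image, erode to drop edges and
# markings, then average pixel values per film piece and apply a calibration.
import numpy as np
from scipy.ndimage import binary_erosion, label

def film_doses(scan, threshold, calibration, erosion_iter=5):
    mask = scan < threshold                               # film darker than background
    mask = binary_erosion(mask, iterations=erosion_iter)  # remove edges/markings
    labels, n = label(mask)                               # one label per film piece
    return [calibration(scan[labels == i].mean()) for i in range(1, n + 1)]
```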

  20. Marine Targets Classification in PolInSAR Data

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Yang, Jingsong; Ren, Lin

    2014-11-01

    In this paper, marine stationary and moving targets are studied using PolInSAR data from Radarsat-2. A new method of stationary target detection is proposed. The method computes the correlation coefficient image of the InSAR data and uses the histogram of that image. A Constant False Alarm Rate (CFAR) algorithm and a Probabilistic Neural Network model are then applied to detect stationary targets. To find moving targets, azimuth ambiguity is shown to be an important feature; we use the length of the azimuth ambiguity to estimate the target's moving direction and speed. Furthermore, target classification is studied by reconstructing the surface elevation of the marine targets.
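
    A minimal sketch of the first two ingredients, an InSAR correlation coefficient (coherence) image and a simple histogram-based constant-false-alarm threshold, follows; the window size and false-alarm rate are assumed values, and the Probabilistic Neural Network stage is not shown.

```python
# Hedged sketch: coherence of two co-registered complex SAR images, plus a
# simple quantile threshold at a chosen false-alarm rate.
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    cross = s1 * np.conj(s2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                  uniform_filter(np.abs(s2) ** 2, win))
    return np.abs(num) / (den + 1e-12)

def cfar_threshold(coh, pfa=1e-3):
    # stationary targets keep high coherence; take the (1 - pfa) quantile
    return float(np.quantile(coh, 1.0 - pfa))
```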

  2. Fuzzy geometry, entropy, and image information

    NASA Technical Reports Server (NTRS)

    Pal, Sankar K.

    1991-01-01

    Presented here are various uncertainty measures arising from grayness ambiguity and spatial ambiguity in an image, and their possible applications as image information measures. Definitions are given of an image in the light of fuzzy set theory, and of information measures and tools relevant for processing/analysis, e.g., fuzzy geometrical properties, correlation, bound functions and entropy measures. Also given is a formulation of algorithms, along with management of uncertainties, for segmentation and object extraction, and edge detection. The output obtained here is both fuzzy and nonfuzzy. Ambiguity in the evaluation and assessment of membership functions is also described.

  3. Cross-correlation least-squares reverse time migration in the pseudo-time domain

    NASA Astrophysics Data System (ADS)

    Li, Qingyang; Huang, Jianping; Li, Zhenchun

    2017-08-01

    The least-squares reverse time migration (LSRTM) method with higher image resolution and amplitude is becoming increasingly popular. However, the LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, large computational cost and mismatch of amplitudes between the synthetic and observed data. To overcome the shortcomings of the conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in pseudo-time domain (PTCLSRTM). Our algorithm not only reduces the depth/velocity ambiguities, but also reduces the effect of velocity error on the imaging results. It relieves the accuracy requirements on the migration velocity model of least-squares migration (LSM). The pseudo-time domain algorithm eliminates the irregular wavelength sampling in the vertical direction, thus it can reduce the vertical grid points and memory requirements used during computation, which makes our method more computationally efficient than the standard implementation. Besides, for field data applications, matching the recorded amplitudes is a very difficult task because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the requirement for strong amplitude matching of LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is only sensitive to the similarity between the predicted and the observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability for complex models.
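
    A minimal sketch of a normalized cross-correlation misfit between predicted and observed data follows; per-trace normalization is assumed, and this illustrates only the kind of amplitude-insensitive objective referred to above, not the authors' pseudo-time-domain LSRTM.

```python
# Hedged sketch: normalized cross-correlation misfit that is insensitive to
# trace-by-trace amplitude scaling.
import numpy as np

def ncc_misfit(d_pred, d_obs):
    p = d_pred / (np.linalg.norm(d_pred, axis=-1, keepdims=True) + 1e-12)
    o = d_obs / (np.linalg.norm(d_obs, axis=-1, keepdims=True) + 1e-12)
    return float(np.sum(1.0 - np.sum(p * o, axis=-1)))   # 0 for proportional traces
```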

  4. Comparison study of noise reduction algorithms in dual energy chest digital tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lee, D.; Kim, Y.-S.; Choi, S.; Lee, H.; Choi, S.; Kim, H.-J.

    2018-04-01

    Dual energy chest digital tomosynthesis (CDT) is a recently developed medical technique that takes advantage of both tomosynthesis and dual energy X-ray images. However, quantum noise, which occurs in dual energy X-ray images, strongly interferes with diagnosis in various clinical situations. Therefore, noise reduction is necessary in dual energy CDT. In this study, noise-compensating algorithms, including a simple smoothing of high-energy images (SSH) and anti-correlated noise reduction (ACNR), were evaluated in a CDT system. We used a newly developed prototype CDT system and anthropomorphic chest phantom for experimental studies. The resulting images demonstrated that dual energy CDT can selectively image anatomical structures, such as bone and soft tissue. Among the resulting images, those acquired with ACNR showed the best image quality. Both coefficient of variation and contrast to noise ratio (CNR) were the highest in ACNR among the three different dual energy techniques, and the CNR of bone was significantly improved compared to the reconstructed images acquired at a single energy. This study demonstrated the clinical value of dual energy CDT and quantitatively showed that ACNR is the most suitable among the three developed dual energy techniques, including standard log subtraction, SSH, and ACNR.

  5. Fast gradient-based algorithm on extended landscapes for wave-front reconstruction of Earth observation satellite

    NASA Astrophysics Data System (ADS)

    Thiebaut, C.; Perraud, L.; Delvit, J. M.; Latry, C.

    2016-07-01

    We present an on-board satellite implementation of a gradient-based (optical flow) algorithm for estimating the shifts between images of a Shack-Hartmann wave-front sensor on extended landscapes. The proposed algorithm has low complexity compared with classical correlation methods, which is a major advantage for on-board use at a high instrument data rate and in real time. The electronic board used for this implementation is designed for space applications and is composed of radiation-hardened software and hardware. The processing times of both the shift estimation and the pre-processing steps are compatible with on-board real-time computation.
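
    A minimal sketch of a gradient-based (optical-flow) estimate of a single global shift between a reference sub-aperture image and a displaced one follows; small shifts are assumed, which is the usual assumption behind such low-complexity estimators.

```python
# Hedged sketch: least-squares optical-flow estimate of one global (dx, dy)
# shift between two sub-aperture images.
import numpy as np

def shift_estimate(ref, img):
    gy, gx = np.gradient(ref.astype(float))      # spatial gradients of the reference
    gt = img.astype(float) - ref.astype(float)   # temporal difference
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * gt), np.sum(gy * gt)])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy
```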

  6. Interpreting CARS images of tissue within the C-H-stretching region

    NASA Astrophysics Data System (ADS)

    Dietzek, Benjamin; Meyer, Tobias; Medyukhina, Anna; Bergner, Norbert; Krafft, Christoph; Romeike, Bernd F. M.; Reichart, Rupert; Kalff, Rolf; Schmitt, Michael; Popp, Jürgen

    2014-03-01

    Single band coherent anti-Stokes Raman scattering (CARS) microscopy within the C-H-stretching region is applied to detect individual cells and nuclei of human brain tissue and brain tumors - information which allows for histopathologic grading of the tissue. The CARS image contrast within the C-H-stretching region is correlated with the tissue composition. Based on the specific application example of identifying nuclei within (coherent) Raman images of neurotissue sections, we derive general design parameters for lasers optimally suited to a clinical environment and discuss the potential of recently developed methods to analyze spectrally resolved CARS images and of image segmentation algorithms.

  7. Image compression using quad-tree coding with morphological dilation

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Jiang, Weiwei; Jiao, Licheng; Wang, Lei

    2007-11-01

    In this paper, we propose a new algorithm which integrates a morphological dilation operation into quad-tree coding; the aim is to let quad-tree coding and morphological dilation compensate for each other's drawbacks. The new algorithm can not only quickly find the seed significant coefficient for dilation but also break the block-boundary limit of quad-tree coding. We also make full use of both within-subband and cross-subband correlation to avoid the expensive cost of representing insignificant coefficients. Experimental results show that our algorithm outperforms SPECK and SPIHT. Without using any arithmetic coding, our algorithm achieves good performance with low computational cost, and it is more suitable for mobile devices or scenarios with strict real-time requirements.

  8. CHARACTERIZATION OF THE COMPLETE FIBER NETWORK TOPOLOGY OF PLANAR FIBROUS TISSUES AND SCAFFOLDS

    PubMed Central

    D'Amore, Antonio; Stella, John A.; Wagner, William R.; Sacks, Michael S.

    2010-01-01

    Understanding how engineered tissue scaffold architecture affects cell morphology, metabolism, phenotypic expression, as well as predicting material mechanical behavior have recently received increased attention. In the present study, an image-based analysis approach that provides an automated tool to characterize engineered tissue fiber network topology is presented. Micro-architectural features that fully defined fiber network topology were detected and quantified, which include fiber orientation, connectivity, intersection spatial density, and diameter. Algorithm performance was tested using scanning electron microscopy (SEM) images of electrospun poly(ester urethane)urea (ES-PEUU) scaffolds. SEM images of rabbit mesenchymal stem cell (MSC) seeded collagen gel scaffolds and decellularized rat carotid arteries were also analyzed to further evaluate the ability of the algorithm to capture fiber network morphology regardless of scaffold type and the evaluated size scale. The image analysis procedure was validated qualitatively and quantitatively, comparing fiber network topology manually detected by human operators (n=5) with that automatically detected by the algorithm. Correlation values between manual detected and algorithm detected results for the fiber angle distribution and for the fiber connectivity distribution were 0.86 and 0.93 respectively. Algorithm detected fiber intersections and fiber diameter values were comparable (within the mean ± standard deviation) with those detected by human operators. This automated approach identifies and quantifies fiber network morphology as demonstrated for three relevant scaffold types and provides a means to: (1) guarantee objectivity, (2) significantly reduce analysis time, and (3) potentiate broader analysis of scaffold architecture effects on cell behavior and tissue development both in vitro and in vivo. PMID:20398930

  9. A cross correlation method for chemical profiles in minerals, with an application to zircons of the Kilgore Tuff (USA)

    NASA Astrophysics Data System (ADS)

    Probst, L. C.; Sheldrake, T. E.; Gander, M. J.; Wallace, G.; Simpson, G.; Caricchi, L.

    2018-03-01

    Magmatic crystals are characterised by chemical zonation patterns that reflect the thermal and chemical conditions within magma reservoirs in which they grew. Crystals that exhibit similar patterns of zonation are often interpreted to have experienced similar conditions of growth. These patterns of zonation may represent continuous processes such as cooling, or more instantaneous events such as magma injection, and provide an insight into the structure and evolution of a magmatic system, both temporally and spatially. We have developed an algorithm that is objectively able to quantify the similarity within and between suites of magmatic crystals from different samples. Significantly, the algorithm is able to identify correlation that occurs between the interiors of two crystals, but does not extend to the rim, which provides an opportunity to understand the long-term evolution of magmatic systems. We develop and explain the mathematical basis for our algorithm and introduce its application using cathodoluminescence images of zircons from the Kilgore Tuff (USA). The results allow us to correlate samples from two different outcrops that are found over 80 km apart.
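
    A minimal sketch of a windowed normalized cross-correlation between two one-dimensional zoning profiles follows, illustrating how similarity confined to crystal interiors (not extending to the rim) can still produce a high local score; the window length is an assumed parameter and this is not the authors' algorithm.

```python
# Hedged sketch: per-window normalized cross-correlation along two 1-D
# rim-to-core zoning profiles.
import numpy as np

def windowed_ncc(p1, p2, win=32):
    n = min(len(p1), len(p2)) - win + 1
    scores = np.empty(n)
    for i in range(n):
        a, b = p1[i:i + win], p2[i:i + win]
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        scores[i] = np.mean(a * b)
    return scores        # high values flag locally similar zoning intervals
```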

  10. A multichannel block-matching denoising algorithm for spectral photon-counting CT images.

    PubMed

    Harrison, Adam P; Xu, Ziyue; Pourmorteza, Amir; Bluemke, David A; Mollura, Daniel J

    2017-06-01

    We present a denoising algorithm designed for a whole-body prototype photon-counting computed tomography (PCCT) scanner with up to 4 energy thresholds and associated energy-binned images. Spectral PCCT images can exhibit low signal to noise ratios (SNRs) due to the limited photon counts in each simultaneously-acquired energy bin. To help address this, our denoising method exploits the correlation and exact alignment between energy bins, adapting the highly-effective block-matching 3D (BM3D) denoising algorithm for PCCT. The original single-channel BM3D algorithm operates patch-by-patch. For each small patch in the image, a patch grouping action collects similar patches from the rest of the image, which are then collaboratively filtered together. The resulting performance hinges on accurate patch grouping. Our improved multi-channel version, called BM3D_PCCT, incorporates two improvements. First, BM3D_PCCT uses a more accurate shared patch grouping based on the image reconstructed from photons detected in all 4 energy bins. Second, BM3D_PCCT performs a cross-channel decorrelation, adding a further dimension to the collaborative filtering process. These two improvements produce a more effective algorithm for PCCT denoising. Preliminary results compare BM3D_PCCT against BM3D_Naive, which denoises each energy bin independently. Experiments use a three-contrast PCCT image of a canine abdomen. Within five regions of interest, selected from paraspinal muscle, liver, and visceral fat, BM3D_PCCT reduces the noise standard deviation by 65.0%, compared to 40.4% for BM3D_Naive. Attenuation values of the contrast agents in calibration vials also cluster much more tightly around their respective lines of best fit. Mean angular differences (in degrees) for the original, BM3D_Naive, and BM3D_PCCT images, respectively, were 15.61, 7.34, and 4.45 (iodine); 12.17, 7.17, and 4.39 (gadolinium); and 12.86, 6.33, and 3.96 (bismuth). We outline a multi-channel denoising algorithm tailored for spectral PCCT images, demonstrating improved performance over an independent, yet state-of-the-art, single-channel approach. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  11. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    PubMed

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  12. SEMG signal compression based on two-dimensional techniques.

    PubMed

    de Melo, Wheidima Carneiro; de Lima Filho, Eddie Batista; da Silva Júnior, Waldir Sabino

    2016-04-18

    Recently, two-dimensional techniques have been successfully employed for compressing surface electromyographic (SEMG) records as images, through the use of image and video encoders. Such schemes usually provide specific compressors, which are tuned for SEMG data, or employ preprocessing techniques, before the two-dimensional encoding procedure, in order to provide a suitable data organization, whose correlations can be better exploited by off-the-shelf encoders. Besides preprocessing input matrices, one may also depart from those approaches and employ an adaptive framework, which is able to directly tackle SEMG signals reassembled as images. This paper proposes a new two-dimensional approach for SEMG signal compression, which is based on a recurrent pattern matching algorithm called multidimensional multiscale parser (MMP). The mentioned encoder was modified, in order to efficiently work with SEMG signals and exploit their inherent redundancies. Moreover, a new preprocessing technique, named as segmentation by similarity (SbS), which has the potential to enhance the exploitation of intra- and intersegment correlations, is introduced, the percentage difference sorting (PDS) algorithm is employed, with different image compressors, and results with the high efficiency video coding (HEVC), H.264/AVC, and JPEG2000 encoders are presented. Experiments were carried out with real isometric and dynamic records, acquired in laboratory. Dynamic signals compressed with H.264/AVC and HEVC, when combined with preprocessing techniques, resulted in good percent root-mean-square difference versus compression factor figures, for low and high compression factors, respectively. Besides, regarding isometric signals, the modified two-dimensional MMP algorithm outperformed state-of-the-art schemes, for low compression factors, the combination between SbS and HEVC proved to be competitive, for high compression factors, and JPEG2000, combined with PDS, provided good performance allied to low computational complexity, all in terms of percent root-mean-square difference versus compression factor. The proposed schemes are effective and, specifically, the modified MMP algorithm can be considered as an interesting alternative for isometric signals, regarding traditional SEMG encoders. Besides, the approach based on off-the-shelf image encoders has the potential of fast implementation and dissemination, given that many embedded systems may already have such encoders available, in the underlying hardware/software architecture.

  13. Feasibility of single-beat full-volume capture real-time three-dimensional echocardiography and auto-contouring algorithm for quantification of left ventricular volume: validation with cardiac magnetic resonance imaging.

    PubMed

    Chang, Sung-A; Lee, Sang-Chol; Kim, Eun-Young; Hahm, Seung-Hee; Jang, Shin Yi; Park, Sung-Ji; Choi, Jin-Oh; Park, Seung Woo; Choe, Yeon Hyeon; Oh, Jae K

    2011-08-01

    With recent developments in echocardiographic technology, a new system using real-time three-dimensional echocardiography (RT3DE) that allows single-beat acquisition of the entire volume of the left ventricle and incorporates algorithms for automated border detection has been introduced. Provided that these techniques are acceptably reliable, three-dimensional echocardiography may be much more useful for clinical practice. The aim of this study was to evaluate the feasibility and accuracy of left ventricular (LV) volume measurements by RT3DE using the single-beat full-volume capture technique. One hundred nine consecutive patients scheduled for cardiac magnetic resonance imaging and RT3DE using the single-beat full-volume capture technique on the same day were recruited. LV end-systolic volume, end-diastolic volume, and ejection fraction were measured using an auto-contouring algorithm from data acquired on RT3DE. The data were compared with the same measurements obtained using cardiac magnetic resonance imaging. Volume measurements on RT3DE with single-beat full-volume capture were feasible in 84% of patients. Both interobserver and intraobserver variability of three-dimensional measurements of end-systolic and end-diastolic volumes showed excellent agreement. Pearson's correlation analysis showed a close correlation of end-systolic and end-diastolic volumes between RT3DE and cardiac magnetic resonance imaging (r = 0.94 and r = 0.91, respectively, P < .0001 for both). Bland-Altman analysis showed reasonable limits of agreement. After application of the auto-contouring algorithm, the rate of successful auto-contouring (cases requiring minimal manual corrections) was <50%. RT3DE using single-beat full-volume capture is an easy and reliable technique to assess LV volume and systolic function in clinical practice. However, the image quality and low frame rate still limit its application for dilated left ventricles, and the automated volume analysis program needs more development to make it clinically efficacious. Copyright © 2011 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.

  14. Multi-Complementary Model for Long-Term Tracking

    PubMed Central

    Zhang, Deng; Zhang, Junchang; Xia, Chenyang

    2018-01-01

    In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutter, motion blur, low illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines a correlation filter model and a color model, greatly improving tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency than traditional target detection algorithms. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmark 2013 (OTB-13) and Object Tracking Benchmark 2015 (OTB-15) datasets. On the OTB-13 benchmark, our algorithm improves on the classic LCT (Long-term Correlation Tracking) algorithm by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE, and SRE, respectively. On the OTB-15 benchmark, compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, it should be emphasized that, owing to the high computational efficiency of the color model and the object detection model (which uses efficient data structures), together with the speed advantage of correlation filters, our tracking algorithm still achieves good tracking speed. PMID:29425170

  15. Hybrid Correlation Algorithms. A Bridge Between Feature Matching and Image Correlation,

    DTIC Science & Technology

    1979-11-01

    spatially into groups of pixels. The intensity level preprocessing is designed to compensate for any biases or gain changes in the system; whereas ... number of error sources that affect the performance of the system. It would be desirable to lump these errors into generic categories in discussing ... system performance rather than treating each error source separately. Such a generic categorization should possess the following properties: 1. The

  16. Predicting detection performance with model observers: Fourier domain or spatial domain?

    PubMed

    Chen, Baiyu; Yu, Lifeng; Leng, Shuai; Kofler, James; Favazza, Christopher; Vrieze, Thomas; McCollough, Cynthia

    2016-02-27

    The use of the Fourier domain model observer is challenged by iterative reconstruction (IR), because IR algorithms are nonlinear and IR images have noise texture different from that of FBP. A modified Fourier domain model observer, which incorporates nonlinear noise and resolution properties, has been proposed for IR and needs to be validated with human detection performance. On the other hand, the spatial domain model observer is theoretically applicable to IR, but more computationally intensive than the Fourier domain method. The purpose of this study is to compare the modified Fourier domain model observer to the spatial domain model observer with both FBP and IR images, using human detection performance as the gold standard. A phantom with inserts of various low contrast levels and sizes was repeatedly scanned 100 times on a third-generation, dual-source CT scanner at 5 dose levels and reconstructed using FBP and IR algorithms. The human detection performance of the inserts was measured via a 2-alternative-forced-choice (2AFC) test. In addition, two model observer performances were calculated, including a Fourier domain non-prewhitening model observer and a spatial domain channelized Hotelling observer. The performance of these two model observers was compared in terms of how well they correlated with human observer performance. Our results demonstrated that the spatial domain model observer correlated well with human observers across various dose levels, object contrast levels, and object sizes. The Fourier domain observer correlated well with human observers using FBP images, but overestimated the detection performance using IR images.
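
    The spatial-domain observers mentioned above reduce, in their most basic form, to template correlations. The sketch below computes a detectability index d' for a plain (unchannelized, non-prewhitening) matched filter from a known signal template and repeated noise-only images; it is a simplified illustration under assumed data shapes, not the modified Fourier-domain observer or the channelized Hotelling observer used in the study.

      import numpy as np

      def npw_detectability(signal_img, noise_imgs):
          """Non-prewhitening matched-filter detectability d' from a known signal template
          and an ensemble of signal-absent noise realizations (all 2D arrays)."""
          w = signal_img.ravel()                         # NPW template = the signal itself
          scores = np.array([w @ n.ravel() for n in noise_imgs])
          return (w @ w) / scores.std(ddof=1)            # template response over noise std

      rng = np.random.default_rng(1)
      sig = np.zeros((32, 32)); sig[12:20, 12:20] = 0.5   # hypothetical low-contrast insert
      noise = rng.standard_normal((100, 32, 32))          # stand-in for repeated CT noise scans
      print(f"d' = {npw_detectability(sig, noise):.2f}")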

  17. Predicting detection performance with model observers: Fourier domain or spatial domain?

    PubMed Central

    Chen, Baiyu; Yu, Lifeng; Leng, Shuai; Kofler, James; Favazza, Christopher; Vrieze, Thomas; McCollough, Cynthia

    2016-01-01

    The use of the Fourier domain model observer is challenged by iterative reconstruction (IR), because IR algorithms are nonlinear and IR images have noise texture different from that of FBP. A modified Fourier domain model observer, which incorporates nonlinear noise and resolution properties, has been proposed for IR and needs to be validated with human detection performance. On the other hand, the spatial domain model observer is theoretically applicable to IR, but more computationally intensive than the Fourier domain method. The purpose of this study is to compare the modified Fourier domain model observer to the spatial domain model observer with both FBP and IR images, using human detection performance as the gold standard. A phantom with inserts of various low contrast levels and sizes was repeatedly scanned 100 times on a third-generation, dual-source CT scanner at 5 dose levels and reconstructed using FBP and IR algorithms. The human detection performance of the inserts was measured via a 2-alternative-forced-choice (2AFC) test. In addition, two model observer performances were calculated, including a Fourier domain non-prewhitening model observer and a spatial domain channelized Hotelling observer. The performance of these two model observers was compared in terms of how well they correlated with human observer performance. Our results demonstrated that the spatial domain model observer correlated well with human observers across various dose levels, object contrast levels, and object sizes. The Fourier domain observer correlated well with human observers using FBP images, but overestimated the detection performance using IR images. PMID:27239086

  18. Image registration for a UV-Visible dual-band imaging system

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Yuan, Shuang; Li, Jianping; Xing, Sheng; Zhang, Honglong; Dong, Yuming; Chen, Liangpei; Liu, Peng; Jiao, Guohua

    2018-06-01

    The detection of corona discharge is an effective way to achieve early fault diagnosis of power equipment. UV-Visible dual-band imaging can detect and locate corona discharge spots under all-weather conditions. In this study, we introduce an image registration protocol for this dual-band imaging system. The protocol consists of UV image denoising and affine transformation model establishment. We report the algorithm details of UV image preprocessing and affine transformation model establishment, together with the relevant experiments carried out to verify their feasibility. The denoising algorithm is based on a correlation operation between raw UV images and a continuous mask, and the transformation model is established using corner features and a statistical method. Finally, an image fusion test was carried out to verify the accuracy of the affine transformation model. It showed that the average position displacement errors between the corona discharge and the equipment fault, at distances in the 2.5 m-20 m range, are 1.34 mm and 1.92 mm in the horizontal and vertical directions, respectively, which is precise enough for most industrial applications. The resulting protocol is not only expected to improve the efficiency and accuracy of such imaging systems for locating corona discharge spots, but is also intended to provide a more general reference for the calibration of various dual-band imaging systems in practice.
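
    The registration step described above comes down to estimating a 2D affine transform from matched corner points in the UV and visible images. Below is a hedged least-squares sketch; the point coordinates are made-up placeholders, and the paper's statistical corner-selection step is not reproduced.

      import numpy as np

      def fit_affine(src, dst):
          """Least-squares affine transform mapping src -> dst, each an (N, 2) array of matched points."""
          n = len(src)
          A = np.zeros((2 * n, 6))
          b = dst.reshape(-1)                 # interleaved x, y targets
          A[0::2, 0:2] = src; A[0::2, 2] = 1.0
          A[1::2, 3:5] = src; A[1::2, 5] = 1.0
          p, *_ = np.linalg.lstsq(A, b, rcond=None)
          return p.reshape(2, 3)              # [[a, b, tx], [c, d, ty]]

      # hypothetical corner correspondences between the UV and visible images (pixels)
      uv_pts = np.array([[10., 12.], [200., 15.], [18., 180.], [210., 190.]])
      vis_pts = np.array([[14., 18.], [205., 20.], [21., 186.], [215., 196.]])
      print(fit_affine(uv_pts, vis_pts))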

  19. Object recognition of real targets using modelled SAR images

    NASA Astrophysics Data System (ADS)

    Zherdev, D. A.

    2017-12-01

    In this work, the problem of target recognition is studied using SAR images. The recognition algorithm is based on the computation of conjugation indices with class vectors. The support subspaces for each class are constructed by excluding the most and the least correlated vectors in a class. In the study, we examine the possibility of significantly reducing the feature vector size, which leads to a decrease in recognition time. The target images form the feature vectors, which are transformed using a pre-trained convolutional neural network (CNN).

  20. PathBot: A Radiology-Pathology Correlation Dashboard.

    PubMed

    Kelahan, Linda C; Kalaria, Amit D; Filice, Ross W

    2017-12-01

    Pathology is considered the "gold standard" of diagnostic medicine. The importance of radiology-pathology correlation is seen in interdepartmental patient conferences such as "tumor boards" and by the tradition of radiology resident immersion in a radiologic-pathology course at the American Institute of Radiologic Pathology. In practice, consistent pathology follow-up can be difficult due to time constraints and cumbersome electronic medical records. We present a radiology-pathology correlation dashboard that presents radiologists with pathology reports matched to their dictations, for both diagnostic imaging and image-guided procedures. In creating our dashboard, we utilized the RadLex ontology and National Center for Biomedical Ontology (NCBO) Annotator to identify anatomic concepts in pathology reports that could subsequently be mapped to relevant radiology reports, providing an automated method to match related radiology and pathology reports. Radiology-pathology matches are presented to the radiologist on a web-based dashboard. We found that our algorithm was highly specific in detecting matches. Our sensitivity was slightly lower than expected and could be attributed to missing anatomy concepts in the RadLex ontology, as well as limitations in our parent term hierarchical mapping and synonym recognition algorithms. By automating radiology-pathology correlation and presenting matches in a user-friendly dashboard format, we hope to encourage pathology follow-up in clinical radiology practice for purposes of self-education and to augment peer review. We also hope to provide a tool to facilitate the production of quality teaching files, lectures, and publications. Diagnostic images have a richer educational value when they are backed up by the gold standard of pathology.

  1. Novel cooperative neural fusion algorithms for image restoration and image fusion.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-02-01

    To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed neural fusion algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.

  2. Fully automatic segmentation of arbitrarily shaped fiducial markers in cone-beam CT projections

    NASA Astrophysics Data System (ADS)

    Bertholet, J.; Wan, H.; Toftegaard, J.; Schmidt, M. L.; Chotard, F.; Parikh, P. J.; Poulsen, P. R.

    2017-02-01

    Radio-opaque fiducial markers of different shapes are often implanted in or near abdominal or thoracic tumors to act as surrogates for the tumor position during radiotherapy. They can be used for real-time treatment adaptation, but this requires a robust, automatic segmentation method able to handle arbitrarily shaped markers in a rotational imaging geometry such as cone-beam computed tomography (CBCT) projection images and intra-treatment images. In this study, we propose a fully automatic dynamic programming (DP) assisted template-based (TB) segmentation method. Based on an initial DP segmentation, the DPTB algorithm generates and uses a 3D marker model to create 2D templates at any projection angle. The 2D templates are used to segment the marker position as the position with highest normalized cross-correlation in a search area centered at the DP segmented position. The accuracy of the DP algorithm and the new DPTB algorithm was quantified as the 2D segmentation error (pixels) compared to a manual ground truth segmentation for 97 markers in the projection images of CBCT scans of 40 patients. Also the fraction of wrong segmentations, defined as 2D errors larger than 5 pixels, was calculated. The mean 2D segmentation error of DP was reduced from 4.1 pixels to 3.0 pixels by DPTB, while the fraction of wrong segmentations was reduced from 17.4% to 6.8%. DPTB allowed rejection of uncertain segmentations as deemed by a low normalized cross-correlation coefficient and contrast-to-noise ratio. For a rejection rate of 9.97%, the sensitivity in detecting wrong segmentations was 67% and the specificity was 94%. The accepted segmentations had a mean segmentation error of 1.8 pixels and 2.5% wrong segmentations.
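
    The DPTB step that scores candidate positions by normalized cross-correlation can be illustrated with a brute-force NCC template match; this toy version (synthetic image, hypothetical bright blob standing in for a marker) ignores the dynamic-programming initialization and the 3D marker model.

      import numpy as np

      def ncc_match(image, template):
          """Exhaustive normalized cross-correlation; returns the (row, col) of the best match
          and the peak NCC value. Small images only: this is an O(N*M) illustration."""
          th, tw = template.shape
          t = template - template.mean()
          t_norm = np.sqrt((t ** 2).sum())
          best, best_pos = -1.0, (0, 0)
          for r in range(image.shape[0] - th + 1):
              for c in range(image.shape[1] - tw + 1):
                  patch = image[r:r + th, c:c + tw]
                  p = patch - patch.mean()
                  denom = np.sqrt((p ** 2).sum()) * t_norm
                  if denom > 0:
                      score = float((p * t).sum() / denom)
                      if score > best:
                          best, best_pos = score, (r, c)
          return best_pos, best

      rng = np.random.default_rng(2)
      img = rng.standard_normal((64, 64))
      img[30:38, 40:48] += 3.0                 # bright blob standing in for a fiducial marker
      tmpl = img[30:38, 40:48].copy()
      print(ncc_match(img, tmpl))              # expected best position (30, 40)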

  3. Pilot study of an automated method to determine Melasma Area and Severity Index.

    PubMed

    Tay, E Y; Gan, E Y; Tan, V W D; Lin, Z; Liang, Y; Lin, F; Wee, S; Thng, T G

    2015-06-01

    Objective outcome measures for melasma severity are essential for the evaluation of severity as well as of treatment results. The modified Melasma Area and Severity Index (mMASI) score is a validated tool for assessing melasma severity but is often subject to inter-observer variability. The aim was to develop and validate novel image analysis software designed to automatically calculate the area and degree of hyperpigmentation in melasma from computer analysis of whole-face digital photographs, thereby deriving an automated mMASI score (aMASI). The algorithm was developed in collaboration between dermatologists and image analysis experts. Firstly, using an adaptive threshold method, the algorithm identifies, segments and calculates the areas involved. It then calculates the darkness. Finally, the derived area and darkness are used to calculate mMASI. The scores derived from the algorithm were validated prospectively. Twenty-nine patients with melasma using depigmenting agents were recruited for validation. Three dermatologists scored mMASI at baseline and post-treatment using standardized photographs. These scores were compared with aMASI scores derived from computer analysis. aMASI scores correlated well with clinical mMASI pre-treatment (r = 0.735, P < 0.001) and post-treatment (r = 0.608, P < 0.001). aMASI was reliable in detecting changes with treatment. These changes in aMASI scores correlated well with changes in clinician-assessed mMASI (r = 0.622, P < 0.001). This study proposes a novel approach to melasma scoring using digital image analysis. It holds promise as a tool that would enable clinicians worldwide to standardize melasma severity scoring and outcome measures in an easy and reproducible manner, enabling different treatment options to be compared accurately. © 2015 British Association of Dermatologists.

  4. Toward an Objective Enhanced-V Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Moses, John F.; Brunner, Jason C.; Feltz, Wayne F.; Ackerman, Steven A.; Rabin, Robert M.

    2007-01-01

    The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather. This study describes an algorithmic approach to objectively detect overshooting tops, temperature couplets, and enhanced-V features with observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit data. The methodology consists of temperature, temperature difference, and distance thresholds for the overshooting top and temperature couplet detection parts of the algorithm, and of cross correlation statistics of pixels for the enhanced-V detection part of the algorithm. The effectiveness of the overshooting top and temperature couplet detection components of the algorithm is examined using GOES and MODIS image data for case studies in the 2003-2006 seasons. The main goal is for the algorithm to be useful for operations with future sensors, such as GOES-R.

  5. Quantitative analysis of airway abnormalities in CT

    NASA Astrophysics Data System (ADS)

    Petersen, Jens; Lo, Pechin; Nielsen, Mads; Edula, Goutham; Ashraf, Haseem; Dirksen, Asger; de Bruijne, Marleen

    2010-03-01

    A coupled surface graph cut algorithm for airway wall segmentation from Computed Tomography (CT) images is presented. Using cost functions that highlight both inner and outer wall borders, the method combines the search for both borders into one graph cut. The proposed method is evaluated on 173 manually segmented images extracted from 15 different subjects and shown to give accurate results, with 37% fewer errors than the Full Width at Half Maximum (FWHM) algorithm and 62% fewer than a similar graph cut method without coupled surfaces. Common measures of airway wall thickness, such as the Interior Area (IA) and Wall Area percentage (WA%), were measured by the proposed method on a total of 723 CT scans from a lung cancer screening study. These measures were significantly different for participants with Chronic Obstructive Pulmonary Disease (COPD) compared to asymptomatic participants. Furthermore, reproducibility was good as confirmed by repeat scans, and the measures correlated well with the outcomes of pulmonary function tests, demonstrating the use of the algorithm as a COPD diagnostic tool. Additionally, a new measure of airway wall thickness is proposed, the Normalized Wall Intensity Sum (NWIS). NWIS is shown to correlate better with lung function test values and to be more reproducible than the previous measures IA, WA%, and airway wall thickness at a lumen perimeter of 10 mm (PI10).

  6. An Integrated Processing Strategy for Mountain Glacier Motion Monitoring Based on SAR Images

    NASA Astrophysics Data System (ADS)

    Ruan, Z.; Yan, S.; Liu, G.; LV, M.

    2017-12-01

    Mountain glacier dynamic variables are important parameters in studies of environment and climate change in High Mountain Asia. Due to the increasing number of abnormal glacier-related hazard events, research on monitoring glacier movements has attracted growing interest in recent years. Glacier velocities are sensitive and change quickly under the complex conditions of high mountain regions, which implies that analysis of glacier dynamic changes requires comprehensive and frequent observations with relatively high accuracy. Synthetic aperture radar (SAR) has been successfully exploited to detect glacier motion in a number of previous studies, usually with pixel-tracking and interferometry methods. However, the traditional algorithms applied to mountain glacier regions are constrained by the complex terrain and diverse glacial motion types. Interferometry techniques are prone to fail on mountain glaciers because of their narrow size and the steep terrain, while the pixel-tracking algorithm, which is more robust in high mountain areas, is subject to accuracy loss. In order to derive glacier velocities continually and efficiently, we propose a modified strategy for exploiting SAR data over mountain glaciers. In our approach, we integrate a set of algorithms for compensating non-glacial-motion-related signals that exist in the offset values retrieved by sub-pixel cross-correlation of SAR image pairs. We exploit a modified elastic deformation model to remove the offsets associated with orbit and sensor attitude, and for the topographic residual offset we utilize a set of operations including a DEM-assisted compensation algorithm and a wavelet-based algorithm. At the last step of the flow, an integrated algorithm combining phase and intensity information of SAR images is used to improve regional motion results where cross-correlation-based processing fails. The proposed strategy is applied to the West Kunlun Mountain and Muztagh Ata region in western China using ALOS/PALSAR data. The results show that the strategy can effectively improve the accuracy of velocity estimation, reducing the mean and standard deviation values from 0.32 m and 0.4 m to 0.16 m. It proves to be highly appropriate for monitoring glacier motion over a widely varying range of ice velocities with relatively high accuracy.
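
    The offset-tracking core of such pipelines is a sub-pixel cross-correlation between SAR image patches. The sketch below estimates a shift by FFT cross-correlation with a three-point parabolic peak fit; it is a generic illustration on synthetic data with an integer test shift, not the paper's full compensation chain.

      import numpy as np

      def subpixel_shift(ref, moved):
          """Estimate the (dy, dx) shift of `moved` relative to `ref` by FFT cross-correlation,
          refined with a 3-point parabolic fit around the integer peak (toy offset-tracking step)."""
          xc = np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))).real
          ny, nx = xc.shape
          py, px = np.unravel_index(np.argmax(xc), xc.shape)

          def refine(c_minus, c_0, c_plus):
              denom = c_minus - 2.0 * c_0 + c_plus
              return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

          dy = py + refine(xc[(py - 1) % ny, px], xc[py, px], xc[(py + 1) % ny, px])
          dx = px + refine(xc[py, (px - 1) % nx], xc[py, px], xc[py, (px + 1) % nx])
          # map peaks in the upper half of the spectrum to negative shifts
          if dy > ny / 2: dy -= ny
          if dx > nx / 2: dx -= nx
          return dy, dx

      rng = np.random.default_rng(0)
      a = rng.random((64, 64))
      b = np.roll(a, (3, -5), axis=(0, 1))     # integer test shift; real SAR offsets are sub-pixel
      print(subpixel_shift(a, b))              # expected roughly (3.0, -5.0)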

  7. Information retrieval based on single-pixel optical imaging with quick-response code

    NASA Astrophysics Data System (ADS)

    Xiao, Yin; Chen, Wen

    2018-04-01

    Quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code. Then the QR code is treated as an input image in the input plane of a ghost imaging setup. After measurements, traditional correlation algorithm of ghost imaging is utilized to reconstruct an image (QR code form) with low quality. With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which is actually a post processing. Taking advantage of high error correction capability of QR code, original information can be recovered with high quality. Compared to the previous method, our method can obtain a high-quality image with comparatively fewer measurements, which means that the time-consuming postprocessing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size is, the more measurements are needed. However, for our method, images with different sizes can be converted into QR code with the same small size by using a QR generator. Hence, for the larger-size images, the time required to recover original information with high quality will be dramatically reduced. Our method makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and respectively recover them.
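
    The traditional correlation reconstruction referenced above recovers the object from the covariance between the single-pixel (bucket) signal and the illumination patterns. Below is a minimal sketch with synthetic patterns and a hypothetical QR-like object; the Gerchberg-Saxton-like refinement step is not included.

      import numpy as np

      rng = np.random.default_rng(3)
      obj = np.zeros((16, 16)); obj[4:12, 4:12] = 1.0   # stand-in for a small QR-like object
      patterns = rng.random((4000, 16, 16))             # random illumination patterns
      bucket = (patterns * obj).sum(axis=(1, 2))        # single-pixel (bucket) measurements

      # traditional correlation reconstruction: <I*B> - <I><B>; more patterns sharpen the estimate
      gi = (bucket[:, None, None] * patterns).mean(axis=0) - bucket.mean() * patterns.mean(axis=0)
      print(gi.shape, float(gi[8, 8]), float(gi[0, 0]))  # object pixels should score higher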

  8. Group sparse multiview patch alignment framework with view consistency for image classification.

    PubMed

    Gui, Jie; Tao, Dacheng; Sun, Zhenan; Luo, Yong; You, Xinge; Tang, Yuan Yan

    2014-07-01

    No single feature can satisfactorily characterize the semantic concepts of an image. Multiview learning aims to unify different kinds of features to produce a consensual and efficient representation. This paper redefines part optimization in the patch alignment framework (PAF) and develops a group sparse multiview patch alignment framework (GSM-PAF). The new part optimization considers not only the complementary properties of different views, but also view consistency. In particular, view consistency models the correlations between all possible combinations of any two kinds of views. In contrast to conventional dimensionality reduction algorithms that perform feature extraction and feature selection independently, GSM-PAF enjoys joint feature extraction and feature selection by exploiting the l2,1-norm on the projection matrix to achieve row sparsity, which leads to the simultaneous selection of relevant features and learning of the transformation, and thus makes the algorithm more discriminative. Experiments on two real-world image data sets demonstrate the effectiveness of GSM-PAF for image classification.

  9. Correction of clipped pixels in color images.

    PubMed

    Xu, Di; Doutre, Colin; Nasiopoulos, Panos

    2011-03-01

    Conventional images store a very limited dynamic range of brightness. The true luma in the bright area of such images is often lost due to clipping. When clipping changes the R, G, B color ratios of a pixel, color distortion also occurs. In this paper, we propose an algorithm to enhance both the luma and chroma of the clipped pixels. Our method is based on the strong chroma spatial correlation between clipped pixels and their surrounding unclipped area. After identifying the clipped areas in the image, we partition the clipped areas into regions with similar chroma, and estimate the chroma of each clipped region based on the chroma of its surrounding unclipped region. We correct the clipped R, G, or B color channels based on the estimated chroma and the unclipped color channel(s) of the current pixel. The last step involves smoothing of the boundaries between regions of different clipping scenarios. Both objective and subjective experimental results show that our algorithm is very effective in restoring the color of clipped pixels. © 2011 IEEE

  10. Application of MCM image construction to IRAS comet observations

    NASA Technical Reports Server (NTRS)

    Schlapfer, Martin F.; Walker, Russell G.

    1994-01-01

    There is a wealth of IRAS comet data, obtained in both the survey and pointed observations modes. However, these measurements have remained largely untouched due to difficulties in removing instrumental effects from the data. We have developed a version of the Maximum Correlation Method for Image Construction algorithm (MCM) which operates in the moving coordinate system of the comet and properly treats both real cometary motion and apparent motion due to spacecraft parallax. This algorithm has been implemented on a 486/33 PC in FORTRAN and IDL codes. Preprocessing of the IRAS CRDD includes baseline removal, deglitching, and removal of long tails due to dielectric time constants of the detectors. The resulting images are virtually free from instrumental effects and have the highest possible spatial resolution consistent with the data sampling. We present examples of high resolution IRAS images constructed from survey observations of Comets P/Tempel 1 and P/Tempel 2, and pointed observations of IRAS-Araki-Alcock.

  11. Fractal analysis of INSAR and correlation with graph-cut based image registration for coastline deformation analysis: post seismic hazard assessment of the 2011 Tohoku earthquake region

    NASA Astrophysics Data System (ADS)

    Dutta, P. K.; Mishra, O. P.

    2012-04-01

    Satellite imagery for the 2011 earthquake off the Pacific coast of Tohoku has provided an opportunity to conduct image transformation analyses by employing multi-temporal image retrieval techniques. In this study, we used a new image segmentation algorithm to map coastline deformation by adopting a graph-cut energy minimization framework. Comprehensive analysis of available InSAR images using coastline deformation analysis helped extract disaster information for the affected region of the 2011 Tohoku tsunamigenic earthquake source zone. We attempted to correlate fractal analysis of seismic clustering behavior with image processing analogies, and our observations suggest that an increase in the fractal dimension distribution is associated with clustering of events that may determine the level of devastation of the region. The graph-cut based image registration technique helps us detect the devastation along the Tohoku coastline through changes in pixel intensity, producing a regional segmentation of the change in coastal boundary after the tsunami. The study applies transformation parameters to remotely sensed images by manually segmenting the image and recovering the translation parameters from two images that differ by rotation. Based on the satellite image analysis through image segmentation, an area of 0.997 sq km in the Honshu region is found to be the maximum damage zone, localized in the coastal belt of the NE Japan forearc region. Analysis in MATLAB suggests that the proposed graph-cut algorithm is robust and more accurate than other image registration methods. It also shows that the method can give a realistic estimate of the recovered deformation fields, in pixels, corresponding to coastline change, which may help formulate assessment strategies during post-disaster needs assessment for coastal belts damaged by strong shaking and tsunamis under disaster risk mitigation programs.

  12. A comparison study of image features between FFDM and film mammogram images

    PubMed Central

    Jing, Hao; Yang, Yongyi; Wernick, Miles N.; Yarusso, Laura M.; Nishikawa, Robert M.

    2012-01-01

    Purpose: This work is to provide a direct, quantitative comparison of image features measured by film and full-field digital mammography (FFDM). The purpose is to investigate whether there is any systematic difference between film and FFDM in terms of quantitative image features and their influence on the performance of a computer-aided diagnosis (CAD) system. Methods: The authors make use of a set of matched film-FFDM image pairs acquired from cadaver breast specimens with simulated microcalcifications consisting of bone and teeth fragments using both a GE digital mammography system and a screen-film system. To quantify the image features, the authors consider a set of 12 textural features of lesion regions and six image features of individual microcalcifications (MCs). The authors first conduct a direct comparison on these quantitative features extracted from film and FFDM images. The authors then study the performance of a CAD classifier for discriminating between MCs and false positives (FPs) when the classifier is trained on images of different types (film, FFDM, or both). Results: For all the features considered, the quantitative results show a high degree of correlation between features extracted from film and FFDM, with the correlation coefficients ranging from 0.7326 to 0.9602 for the different features. Based on a Fisher sign rank test, there was no significant difference observed between the features extracted from film and those from FFDM. For both MC detection and discrimination of FPs from MCs, FFDM had a slight but statistically significant advantage in performance; however, when the classifiers were trained on different types of images (acquired with FFDM or SFM) for discriminating MCs from FPs, there was little difference. Conclusions: The results indicate good agreement between film and FFDM in quantitative image features. While FFDM images provide better detection performance in MCs, FFDM and film images may be interchangeable for the purposes of training CAD algorithms, and a single CAD algorithm may be applied to either type of images. PMID:22830771

  13. A comparison of neural network and fuzzy clustering techniques in segmenting magnetic resonance images of the brain.

    PubMed

    Hall, L O; Bensaid, A M; Clarke, L P; Velthuizen, R P; Silbiger, M S; Bezdek, J C

    1992-01-01

    Magnetic resonance (MR) brain section images are segmented and then synthetically colored to give visual representations of the original data with three approaches: the literal and approximate fuzzy c-means unsupervised clustering algorithms, and a supervised computational neural network. Initial clinical results are presented on normal volunteers and selected patients with brain tumors surrounded by edema. Supervised and unsupervised segmentation techniques provide broadly similar results. Unsupervised fuzzy algorithms were visually observed to show better segmentation when compared with raw image data for volunteer studies. For a more complex segmentation problem with tumor/edema or cerebrospinal fluid boundaries, where the tissues have similar MR relaxation behavior, inconsistency in rating among experts was observed, with fuzzy c-means approaches being slightly preferred over feedforward cascade correlation results. Various facets of both approaches, such as supervised versus unsupervised learning, time complexity, and utility for the diagnostic process, are compared.

  14. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4 m), Landsat-7/ETM+ (30 m), MODIS (500 m), and SeaWiFS (1000 m).

  15. Correlating objective and subjective evaluation of texture appearance with applications to camera phone imaging

    NASA Astrophysics Data System (ADS)

    Phillips, Jonathan B.; Coppola, Stephen M.; Jin, Elaine W.; Chen, Ying; Clark, James H.; Mauer, Timothy A.

    2009-01-01

    Texture appearance is an important component of photographic image quality as well as object recognition. Noise cleaning algorithms are used to decrease sensor noise of digital images, but can hinder texture elements in the process. The Camera Phone Image Quality (CPIQ) initiative of the International Imaging Industry Association (I3A) is developing metrics to quantify texture appearance. Objective and subjective experimental results of the texture metric development are presented in this paper. Eight levels of noise cleaning were applied to ten photographic scenes that included texture elements such as faces, landscapes, architecture, and foliage. Four companies (Aptina Imaging, LLC, Hewlett-Packard, Eastman Kodak Company, and Vista Point Technologies) have performed psychophysical evaluations of overall image quality using one of two methods of evaluation. Both methods presented paired comparisons of images on thin film transistor liquid crystal displays (TFT-LCD), but the display pixel pitch and viewing distance differed. CPIQ has also been developing objective texture metrics and targets that were used to analyze the same eight levels of noise cleaning. The correlation of the subjective and objective test results indicates that texture perception can be modeled with an objective metric. The two methods of psychophysical evaluation exhibited high correlation despite the differences in methodology.

  16. A satellite snow depth multi-year average derived from SSM/I for the high latitude regions

    USGS Publications Warehouse

    Biancamaria, S.; Mognard, N.M.; Boone, A.; Grippa, M.; Josberger, E.G.

    2008-01-01

    The hydrological cycle for high latitude regions is inherently linked with the seasonal snowpack. Thus, accurately monitoring the snow depth and the associated areal coverage are critical issues for monitoring the global climate system. Passive microwave satellite measurements provide an optimal means to monitor the snowpack over the arctic region. While the temporal evolution of snow extent can be observed globally from microwave radiometers, the determination of the corresponding snow depth is more difficult. A dynamic algorithm that accounts for the dependence of the microwave scattering on the snow grain size has been developed to estimate snow depth from Special Sensor Microwave/Imager (SSM/I) brightness temperatures and was validated over the U.S. Great Plains and Western Siberia. The purpose of this study is to assess the dynamic algorithm performance over the entire high latitude (land) region by computing a snow depth multi-year field for the time period 1987-1995. This multi-year average is compared to the Global Soil Wetness Project-Phase2 (GSWP2) snow depth computed from several state-of-the-art land surface schemes and averaged over the same time period. The multi-year average obtained by the dynamic algorithm is in good agreement with the GSWP2 snow depth field (the correlation coefficient for January is 0.55). The static algorithm, which assumes a constant snow grain size in space and time, does not correlate with the GSWP2 snow depth field (the correlation coefficient with GSWP2 data for January is -0.03), but exhibits a very high anti-correlation with the NCEP average January air temperature field (correlation coefficient -0.77), the deepest satellite snowpack being located in the coldest regions, where the snow grain size may be significantly larger than the average value used in the static algorithm. The dynamic algorithm performs better over Eurasia (with a correlation coefficient with GSWP2 snow depth equal to 0.65) than over North America (where the correlation coefficient decreases to 0.29). © 2007 Elsevier Inc. All rights reserved.

  17. Cloud Screening and Quality Control Algorithm for Star Photometer Data: Assessment with Lidar Measurements and with All-sky Images

    NASA Technical Reports Server (NTRS)

    Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Navas-Guzman, F.; Alados-Arboledas, L.

    2012-01-01

    This paper presents the development and setup of a cloud screening and data quality control algorithm for a star photometer based on a CCD camera as detector. These algorithms are necessary for passive remote sensing techniques to retrieve the columnar aerosol optical depth, δAe(λ), and precipitable water vapor content, W, at nighttime. The cloud screening procedure consists of calculating moving averages of δAe(λ) and W over different time windows, combined with a procedure for detecting outliers. Additionally, to avoid undesirable δAe(λ) and W fluctuations caused by atmospheric turbulence, the data are averaged over 30 min. The algorithm is applied to the star photometer deployed in the city of Granada (37.16 N, 3.60 W, 680 m a.s.l.; southeastern Spain) for the measurements acquired between March 2007 and September 2009. The algorithm is evaluated against correlative measurements registered by a lidar system and against all-sky images obtained at the sunset and sunrise of the previous and following days. Promising results are obtained in detecting cloud-affected data. Additionally, the cloud screening algorithm has been evaluated under different aerosol conditions, including Saharan dust intrusions, biomass burning, and pollution events.

  18. An image mosaic method based on corner

    NASA Astrophysics Data System (ADS)

    Jiang, Zetao; Nie, Heting

    2015-08-01

    In view of the shortcomings of traditional image mosaicking, this paper describes a new image mosaic algorithm based on Harris corners. Firstly, a Harris operator, combined with a constructed low-pass smoothing filter based on spline functions and a circular window search, is applied to detect image corners, which gives better localisation performance and effectively avoids corner clustering. Secondly, correlation-based feature registration is used to find registration pairs, and false matches are removed using random sample consensus. Finally, a weighted trigonometric method combined with an interpolation function is used for image fusion. Experiments show that this method can effectively remove splicing ghosting and improve the accuracy of the image mosaic.
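
    The corner detection step above is a variant of the standard Harris response. A plain-vanilla sketch of that response (Sobel gradients plus a Gaussian-weighted structure tensor) is shown below; the paper's spline-based smoothing filter and circular window search are not reproduced.

      import numpy as np
      from scipy.ndimage import gaussian_filter, sobel

      def harris_response(gray, sigma=1.5, k=0.04):
          """Harris corner response R = det(M) - k * trace(M)^2, where M is the structure
          tensor built from Gaussian-smoothed image gradients."""
          gray = gray.astype(float)
          ix, iy = sobel(gray, axis=1), sobel(gray, axis=0)
          ixx = gaussian_filter(ix * ix, sigma)
          iyy = gaussian_filter(iy * iy, sigma)
          ixy = gaussian_filter(ix * iy, sigma)
          det = ixx * iyy - ixy ** 2
          trace = ixx + iyy
          return det - k * trace ** 2

      # corners are local maxima of the response above a threshold, e.g.:
      # resp = harris_response(img); corners = np.argwhere(resp > 0.01 * resp.max())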

  19. Influence of the internal wall thickness of electrical capacitance tomography sensors on image quality

    NASA Astrophysics Data System (ADS)

    Liang, Shiguo; Ye, Jiamin; Wang, Haigang; Wu, Meng; Yang, Wuqiang

    2018-03-01

    In the design of electrical capacitance tomography (ECT) sensors, the internal wall thickness can vary with specific applications, and it is a key factor that influences the sensitivity distribution and image quality. This paper will discuss the effect of the wall thickness of ECT sensors on image quality. Three flow patterns are simulated for wall thicknesses of 2.5 mm to 15 mm on eight-electrode ECT sensors. The sensitivity distributions and potential distributions are compared for different wall thicknesses. Linear back-projection and Landweber iteration algorithms are used for image reconstruction. Relative image error and correlation coefficients are used for image evaluation using both simulation and experimental data.

  20. A VLSI implementation for synthetic aperture radar image processing

    NASA Technical Reports Server (NTRS)

    Premkumar, A.; Purviance, J.

    1990-01-01

    A simple physical model for the Synthetic Aperture Radar (SAR) is presented. This model explains the one dimensional and two dimensional nature of the received SAR signal in the range and azimuth directions. A time domain correlator, its algorithm, and features are explained. The correlator is ideally suited for VLSI implementation. A real time SAR architecture using these correlators is proposed. In the proposed architecture, the received SAR data is processed using one dimensional correlators for determining the range while two dimensional correlators are used to determine the azimuth of a target. The architecture uses only three different types of custom VLSI chips and a small amount of memory.

  1. Automated aortic calcification detection in low-dose chest CT images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Htwe, Yu Maw; Padgett, Jennifer; Henschke, Claudia; Yankelevitz, David; Reeves, Anthony P.

    2014-03-01

    The extent of aortic calcification has been shown to be a risk indicator for vascular events including cardiac events. We have developed a fully automated computer algorithm to segment and measure aortic calcification in low-dose noncontrast, non-ECG gated, chest CT scans. The algorithm first segments the aorta using a pre-computed Anatomy Label Map (ALM). Then, based on the segmented aorta, aortic calcification is detected and measured in terms of the Agatston score, mass score, and volume score. The automated scores are compared with reference scores obtained from manual markings. For aorta segmentation, the aorta is modeled as a series of discrete overlapping cylinders and the aortic centerline is determined using a cylinder-tracking algorithm. Then the aortic surface location is detected using the centerline and a triangular mesh model. The segmented aorta is used as a mask for the detection of aortic calcification. For calcification detection, the image is first filtered, then an elevated threshold of 160 Hounsfield units (HU) is used within the aorta mask region to reduce the effect of noise in low-dose scans, and finally non-aortic calcification voxels (bony structures, calcification in other organs) are eliminated. The remaining candidates are considered true aortic calcification. The computer algorithm was evaluated on 45 low-dose non-contrast CT scans. Using linear regression, the automated Agatston score is 98.42% correlated with the reference Agatston score. The automated mass and volume scores are 98.46% and 98.28% correlated with the reference mass and volume scores, respectively.
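
    The Agatston scoring mentioned above weights each calcified lesion's area by its peak attenuation. A simplified per-slice sketch of the standard definition follows (connected-component labelling with SciPy); the paper's elevated 160 HU threshold for low-dose scans can be passed via the threshold argument, and the minimum-lesion-area rule is omitted.

      import numpy as np
      from scipy.ndimage import label

      def agatston_score(hu_slice, pixel_area_mm2, threshold=130):
          """Agatston score for one axial slice: for each connected lesion above `threshold` HU,
          multiply its area (mm^2) by a weight based on its peak HU (standard 130/200/300/400 bins)."""
          mask = hu_slice >= threshold
          labels, n = label(mask)
          score = 0.0
          for i in range(1, n + 1):
              lesion = hu_slice[labels == i]
              peak = lesion.max()
              weight = 1 + min(int(peak // 100) - 1, 3)  # 130-199 -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4
              score += lesion.size * pixel_area_mm2 * weight
          return score

      # rough usage on a synthetic 512x512 slice with 0.5 x 0.5 mm pixels, elevated threshold:
      # score = agatston_score(ct_slice_hu, pixel_area_mm2=0.25, threshold=160)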

  2. Implementation of several mathematical algorithms to breast tissue density classification

    NASA Astrophysics Data System (ADS)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-02-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics, since dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These techniques are based on intrinsic property calculations and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross correlation, and index Q) as categorization parameters. The algorithms were evaluated on 100 cases from the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina—Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The breast classifications obtained were compared with expert medical diagnoses, showing good performance. The implemented algorithms showed high potential for classifying breasts into tissue density categories.
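
    Mutual information, one of the categorization parameters listed above, can be computed from the joint histogram of the mammogram and a reference image. A minimal sketch follows; the bin count and the random inputs are illustrative assumptions.

      import numpy as np

      def mutual_information(img_a, img_b, bins=64):
          """Mutual information between two same-size grayscale images from their joint histogram."""
          hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = hist / hist.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

      rng = np.random.default_rng(6)
      a = rng.random((128, 128))
      b = 0.7 * a + 0.3 * rng.random((128, 128))   # partially dependent stand-in image
      print(mutual_information(a, b))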

  3. Optical correlation based pose estimation using bipolar phase grayscale amplitude spatial light modulators

    NASA Astrophysics Data System (ADS)

    Outerbridge, Gregory John, II

    Pose estimation techniques have been developed on both optical and digital correlator platforms to aid in the autonomous rendezvous and docking of spacecraft. This research has focused on the optical architecture, which utilizes high-speed bipolar-phase grayscale-amplitude spatial light modulators as the image and correlation filter devices. The optical approach has the primary advantage of optical parallel processing: an extremely fast and efficient way of performing complex correlation calculations. However, the constraints imposed on optically implementable filters make optical correlator based pose estimation technically incompatible with the popular weighted composite filter designs successfully used on the digital platform. This research employs a much simpler "bank of filters" approach to optical pose estimation that exploits the inherent efficiency of optical correlation devices. A novel logarithmically mapped optically implementable matched filter combined with a pose search algorithm resulted in sub-degree standard deviations in angular pose estimation error. These filters were extremely simple to generate, requiring no complicated training sets, and resulted in excellent performance even in the presence of significant background noise. Common edge detection and scaling of the input image were the only image pre-processing necessary for accurate pose detection at all alignment distances of interest.

  4. Acoustic change detection algorithm using an FM radio

    NASA Astrophysics Data System (ADS)

    Goldman, Geoffrey H.; Wolfe, Owen

    2012-06-01

    The U.S. Army is interested in developing low-cost, low-power, non-line-of-sight sensors for monitoring human activity. One modality that is often overlooked is active acoustics using sources of opportunity such as speech or music. Active acoustics can be used to detect human activity by generating acoustic images of an area at different times, then testing for changes among the imagery. A change detection algorithm was developed to detect physical changes in a building, such as a door changing positions or a large box being moved using acoustics sources of opportunity. The algorithm is based on cross correlating the acoustic signal measured from two microphones. The performance of the algorithm was shown using data generated with a hand-held FM radio as a sound source and two microphones. The algorithm could detect a door being opened in a hallway.
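
    The detection statistic described above is a cross-correlation between the signals recorded by the two microphones. Below is a small sketch of a normalized cross-correlation peak and lag estimate on synthetic inputs; the change-detection thresholding used in the paper is not reproduced, and the lag sign follows NumPy's correlate convention.

      import numpy as np

      def xcorr_peak(x, y):
          """Peak normalized cross-correlation value and its lag (samples) between two signals."""
          x = (x - x.mean()) / (x.std() * len(x))
          y = (y - y.mean()) / y.std()
          corr = np.correlate(x, y, mode="full")
          lag = int(np.argmax(corr)) - (len(y) - 1)
          return float(corr.max()), lag

      rng = np.random.default_rng(5)
      src = rng.standard_normal(8000)                       # stand-in for an FM-radio sound field
      mic1 = src[:-10]
      mic2 = src[10:] + 0.1 * rng.standard_normal(7990)     # delayed, noisy copy at the second mic
      print(xcorr_peak(mic1, mic2))                         # prints the correlation peak and its lag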

  5. Detection of hypercholesterolemia using hyperspectral imaging of human skin

    NASA Astrophysics Data System (ADS)

    Milanic, Matija; Bjorgan, Asgeir; Larsson, Marcus; Strömberg, Tomas; Randeberg, Lise L.

    2015-07-01

    Hypercholesterolemia is characterized by high blood levels of cholesterol and is associated with increased risk of atherosclerosis and cardiovascular disease. Xanthelasma is a subcutaneous lesion appearing in the skin around the eyes and is related to hypercholesterolemia. Identifying micro-xanthelasma can therefore provide a means for early detection of hypercholesterolemia and help prevent onset and progression of disease. The goal of this study was to investigate spectral and spatial characteristics of hypercholesterolemia in facial skin. Optical techniques like hyperspectral imaging (HSI) might be suitable tools for such characterization, as HSI simultaneously provides high resolution spatial and spectral information. In this study a 3D Monte Carlo model of lipid inclusions in human skin was developed to create hyperspectral images in the spectral range 400-1090 nm. Four lesions with diameters 0.12-1.0 mm were simulated for three different skin types. The simulations were analyzed using three algorithms: the Tissue Indices (TI), the two-layer Diffusion Approximation (DA), and the Minimum Noise Fraction transform (MNF). The simulated lesions were detected by all methods, but the best performance was obtained by the MNF algorithm. The results were verified using data from 11 volunteers with known cholesterol levels. The faces of the volunteers were imaged by an LCTF system (400-720 nm), and the images were analyzed using the previously mentioned algorithms. The identified features were then compared to the known cholesterol levels of the subjects. Significant correlation was obtained for the MNF algorithm only. This study demonstrates that HSI can be a promising, rapid modality for detection of hypercholesterolemia.

  6. Terrestrial laser scanning point clouds time series for the monitoring of slope movements: displacement measurement using image correlation and 3D feature tracking

    NASA Astrophysics Data System (ADS)

    Bornemann, Pierrick; Jean-Philippe, Malet; André, Stumpf; Anne, Puissant; Julien, Travelletti

    2016-04-01

    Dense multi-temporal point clouds acquired with terrestrial laser scanning (TLS) have proved useful for the study of structure and kinematics of slope movements. Most of the existing deformation analysis methods rely on the use of interpolated data. Approaches that use multiscale image correlation provide a precise and robust estimation of the observed movements; however, for non-rigid motion patterns, these methods tend to underestimate all the components of the movement. Further, for rugged surface topography, interpolated data introduce a bias and a loss of information in some local places where the point cloud information is not sufficiently dense. Those limits can be overcome by using deformation analysis exploiting directly the original 3D point clouds assuming some hypotheses on the deformation (e.g. the classic ICP algorithm requires an initial guess by the user of the expected displacement patterns). The objective of this work is therefore to propose a deformation analysis method applied to a series of 20 3D point clouds covering the period October 2007 - October 2015 at the Super-Sauze landslide (South East French Alps). The dense point clouds have been acquired with a terrestrial long-range Optech ILRIS-3D laser scanning device from the same base station. The time series are analyzed using two approaches: 1) a method of correlation of gradient images, and 2) a method of feature tracking in the raw 3D point clouds. The estimated surface displacements are then compared with GNSS surveys on reference targets. Preliminary results tend to show that the image correlation method provides a good estimation of the displacement fields at first order, but shows limitations such as the inability to track some deformation patterns, and the use of a perspective projection that does not maintain original angles and distances in the correlated images. Results obtained with 3D point clouds comparison algorithms (C2C, ICP, M3C2) bring additional information on the displacement fields. Displacement fields derived from both approaches are then combined and provide a better understanding of the landslide kinematics.

  7. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
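
    The third approach above applies a separable three-dimensional wavelet transform to the hyperspectral cube before quantization and entropy coding. A toy single-level Haar version of that transform, written with NumPy only, is sketched below; quantization and entropy coding are omitted, and the cube dimensions are invented.

      import numpy as np

      def haar_1d(a, axis):
          """One level of the Haar DWT along one axis (axis length assumed even)."""
          a = np.moveaxis(a, axis, 0)
          approx = (a[0::2] + a[1::2]) / np.sqrt(2)
          detail = (a[0::2] - a[1::2]) / np.sqrt(2)
          return np.moveaxis(np.concatenate([approx, detail]), 0, axis)

      # hypothetical hyperspectral cube: (bands, rows, cols)
      cube = np.random.default_rng(4).random((64, 32, 32))
      # separable 3D transform over spectral and both spatial dimensions
      coeffs = haar_1d(haar_1d(haar_1d(cube, 0), 1), 2)
      print(coeffs.shape)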

  8. A Perceptually Weighted Rank Correlation Indicator for Objective Image Quality Assessment

    NASA Astrophysics Data System (ADS)

    Wu, Qingbo; Li, Hongliang; Meng, Fanman; Ngan, King N.

    2018-05-01

    In the field of objective image quality assessment (IQA), Spearman's ρ and Kendall's τ are the two most popular rank correlation indicators; they straightforwardly assign uniform weight to all quality levels and assume each pair of images is sortable. They succeed in measuring the average accuracy of an IQA metric in ranking multiple processed images. However, they also ignore two important perceptual properties. Firstly, the sorting accuracy (SA) of high quality images is usually more important than that of poor quality ones in many real-world applications, where only the top-ranked images are pushed to users. Secondly, due to the subjective uncertainty in making judgements, two perceptually similar images are usually hardly sortable, and their ranks do not contribute to the evaluation of an IQA metric. To compare different IQA algorithms more accurately, this paper explores a perceptually weighted rank correlation indicator, which rewards the capability of correctly ranking high quality images and suppresses attention to insensitive rank mistakes. More specifically, we focus on activating 'valid' pairwise comparisons of image quality, whose difference exceeds a given sensory threshold (ST). Meanwhile, each image pair is assigned a unique weight, determined by both the quality level and the rank deviation. By varying the perception threshold, we can illustrate the sorting accuracy with a more sophisticated SA-ST curve, rather than a single rank correlation coefficient. The proposed indicator offers new insight for interpreting visual perception behaviors. Furthermore, the applicability of our indicator is validated by recommending robust IQA metrics for both degraded and enhanced image data.
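
    The sensory-threshold idea above can be illustrated with a stripped-down pairwise accuracy: only pairs whose subjective quality gap exceeds the threshold are counted. This sketch deliberately omits the paper's per-pair weighting, so it is an unweighted reading of the SA-ST curve, with invented score arrays.

      import numpy as np

      def thresholded_pairwise_accuracy(subjective, objective, st=0.0):
          """Fraction of 'valid' image pairs (subjective quality gap > st) that the objective
          metric ranks in the same order as the subjective scores."""
          s = np.asarray(subjective, float)
          o = np.asarray(objective, float)
          correct = total = 0
          for i in range(len(s)):
              for j in range(i + 1, len(s)):
                  if abs(s[i] - s[j]) > st:          # only sortable pairs count
                      total += 1
                      correct += (s[i] - s[j]) * (o[i] - o[j]) > 0
          return correct / total if total else float("nan")

      subjective = [4.5, 4.4, 3.0, 1.5]      # hypothetical mean opinion scores
      objective = [0.92, 0.95, 0.70, 0.30]   # hypothetical IQA metric outputs
      print(thresholded_pairwise_accuracy(subjective, objective, st=0.5))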

  9. Application of optical correlation techniques to particle imaging velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1988-01-01

    Pulsed laser sheet velocimetry yields nonintrusive measurements of velocity vectors across an extended 2-dimensional region of the flow field. The application of optical correlation techniques to the analysis of multiple exposure laser light sheet photographs can reduce and/or simplify the data reduction time and hardware. Here, Matched Spatial Filters (MSF) are used in a pattern recognition system. MSFs are typically used to identify parts on an assembly line; in this application they are used to identify the iso-velocity contours in the flow. The patterns to be recognized are the recorded particle images in a pulsed laser light sheet photograph. Measurement of the direction of the particle image displacements between exposures yields the velocity vector. The particle image exposure sequence is designed such that the velocity vector direction is determined unambiguously. This global analysis technique is compared with the more common particle tracking algorithms and Young's fringe analysis technique.
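
    While the paper uses optical (matched spatial filter) correlation, the same displacement measurement can be sketched digitally: FFT-based cross-correlation of two exposures of one interrogation window, with the correlation peak giving the particle-image displacement. This is a generic sketch, not the authors' optical implementation.

```python
import numpy as np

def displacement_by_correlation(window_a, window_b):
    """Estimate the particle-image displacement of `window_b` relative to
    `window_a` (two exposures of one interrogation window) via FFT-based
    circular cross-correlation."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    return np.array(peak) - centre   # (dy, dx) displacement in pixels
```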

  10. A Pixel Correlation Technique for Smaller Telescopes to Measure Doubles

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2013-04-01

    Pixel correlation uses the same reduction techniques as speckle imaging but relies on autocorrelation among captured pixel hits rather than true speckles. A video camera operating at exposure times (8-66 milliseconds) similar to those used for lucky imaging captures 400-1,000 video frames. The AVI files are converted to bitmap images and analyzed, using all frames, with the interferometric algorithms in REDUC. This yields a series of correlograms from which theta and rho can be measured. Results from a 20 cm (8") Dall-Kirkham working at f/22.5 are presented for doubles with separations between 1" and 5.7" under average seeing conditions. I conclude that this way of visualizing and analyzing visual double stars is a viable alternative to lucky imaging for telescopes whose apertures are too small to capture a sufficient number of speckles for true speckle interferometry.
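
    The reduction step can be illustrated in software: averaging the power spectra of the video frames and inverse-transforming yields a correlogram whose secondary peaks encode rho and theta, up to the 180° ambiguity inherent to autocorrelation. This is a generic sketch of interferometric reduction, not the REDUC implementation; the position-angle convention depends on camera orientation.

```python
import numpy as np

def mean_autocorrelation(frames):
    """Average the power spectrum of each video frame and inverse-transform,
    giving a correlogram whose secondary peaks encode the companion's
    separation (rho) and position angle (theta)."""
    power = np.zeros(frames[0].shape)
    for frame in frames:
        f = frame - frame.mean()
        power += np.abs(np.fft.fft2(f)) ** 2
    acf = np.fft.fftshift(np.fft.ifft2(power / len(frames)).real)
    return acf / acf.max()

def companion_offset(acf, exclude=3):
    """Locate the brightest secondary peak, masking out the central peak.
    The ACF is symmetric, so theta carries a 180-degree ambiguity."""
    masked = acf.copy()
    c = np.array(acf.shape) // 2
    masked[c[0]-exclude:c[0]+exclude+1, c[1]-exclude:c[1]+exclude+1] = 0
    peak = np.unravel_index(np.argmax(masked), masked.shape)
    dy, dx = np.array(peak) - c
    rho = np.hypot(dy, dx)                          # separation in pixels
    theta = np.degrees(np.arctan2(dx, -dy)) % 360   # assumed axis convention
    return rho, theta
```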

  11. Real-time reconstruction of three-dimensional brain surface MR image using new volume-surface rendering technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T.; Momose, T.; Oku, S.

    It is essential to obtain realistic brain surface images, in which sulci and gyri are easily recognized, when examining the correlation between functional (PET or SPECT) and anatomical (MRI) brain studies. The volume rendering technique (VRT) is commonly employed to make three-dimensional (3D) brain surface images. This technique, however, takes considerable time to produce even a single 3D image, so it has not been practical to generate brain surface images in arbitrary directions on a real-time basis using ordinary workstations or personal computers. The surface rendering technique (SRT), on the other hand, is much less computationally demanding, but the quality of the resulting images is not satisfactory for our purpose. A new computer algorithm has been developed to make 3D brain surface MR images very quickly using a volume-surface rendering technique (VSRT), in which the quality of the resulting images is comparable to that of VRT and the computation time to that of SRT. In VSRT the volume rendering step is performed only once, along the direction of the normal vector of each surface point, rather than each time a new viewpoint is chosen as in VRT. Subsequent reconstruction of the 3D image uses an algorithm similar to that of SRT. Thus we can obtain brain surface MR images of sufficient quality, viewed from any direction, on a real-time basis using an easily available personal computer (Macintosh Quadra 800). The calculation time to make a 3D image is less than 1 s with VSRT, compared with more than 15 s for conventional VRT, and the difference in resulting image quality between VSRT and VRT is almost imperceptible. In conclusion, our new technique for real-time reconstruction of 3D brain surface MR images is very useful and practical for functional and anatomical correlation studies.

  12. Local image statistics: maximum-entropy constructions and perceptual salience

    PubMed Central

    Victor, Jonathan D.; Conte, Mary M.

    2012-01-01

    The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics, but also, how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for construction of stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics—including luminance distributions, pair-wise correlations, and higher-order correlations—are explicitly specified and all other statistics are determined implicitly by maximum-entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions. PMID:22751397
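
    For the simplest case of the construction described above, the maximum-entropy binary process with zero mean and a specified nearest-neighbour correlation along one direction is a first-order Markov chain. The sketch below generates rows independently under that constraint; it illustrates only this one-dimensional special case, not the full set of pair-wise and higher-order constraints handled by the authors' algorithms.

```python
import numpy as np

def maxent_rows(shape, pair_corr, rng=None):
    """Binary (+/-1) texture whose rows are first-order Markov chains: the
    maximum-entropy process with zero mean and a specified nearest-neighbour
    correlation along the horizontal direction (all other statistics are
    left unconstrained)."""
    rng = rng or np.random.default_rng()
    h, w = shape
    p_same = (1 + pair_corr) / 2          # P(next pixel equals the current one)
    img = np.empty((h, w), dtype=int)
    img[:, 0] = rng.choice([-1, 1], size=h)
    for j in range(1, w):
        same = rng.random(h) < p_same
        img[:, j] = np.where(same, img[:, j - 1], -img[:, j - 1])
    return img

tex = maxent_rows((256, 256), pair_corr=0.4)
print((tex[:, :-1] * tex[:, 1:]).mean())   # horizontal pair correlation, ~0.4
```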

  13. Automatic image registration performance for two different CBCT systems; variation with imaging dose

    NASA Astrophysics Data System (ADS)

    Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.

    2014-03-01

    The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images, with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system, and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.
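
    The grey value correlation ratio named above measures how well one image's intensities are predicted by the other's intensity classes; a registration algorithm maximises it over rigid transforms. A minimal sketch of the metric itself (not Elekta's XVI implementation) might look as follows.

```python
import numpy as np

def correlation_ratio(fixed, moving, nbins=32):
    """Grey value correlation ratio between two overlapping images: how well
    the moving image's intensities are predicted by the fixed image's
    intensity bins (1 = perfect functional dependence, 0 = none)."""
    x = fixed.ravel()
    y = moving.ravel().astype(float)
    edges = np.linspace(x.min(), x.max(), nbins + 1)[1:-1]
    bins = np.digitize(x, edges)
    total_var = y.var()
    if total_var == 0:
        return 1.0
    within = 0.0
    for b in np.unique(bins):
        sel = y[bins == b]
        within += sel.size * sel.var()   # intensity variance within each bin
    return 1.0 - within / (y.size * total_var)
```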

  14. Semi-autonomous wheelchair system using stereoscopic cameras.

    PubMed

    Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T

    2009-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
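
    The SAD correspondence step can be sketched as classic block matching: for each block in the left image, the horizontal offset in the right image with the lowest sum of absolute differences becomes the disparity. The block size, search range, and the brute-force loops below are illustrative choices, not the wheelchair system's actual parameters.

```python
import numpy as np

def sad_disparity(left, right, block=9, max_disp=48):
    """Dense disparity map by Sum of Absolute Differences block matching:
    for every block in the left image, search horizontally in the right
    image for the offset with the smallest SAD (slow reference version)."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y-r:y+r+1, x-r:x+r+1].astype(np.int32)
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y-r:y+r+1, x-d-r:x-d+r+1].astype(np.int32)
                sad = np.abs(ref - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp   # larger disparity = closer object; depth ~ baseline*f/disparity
```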

  15. MO-F-CAMPUS-J-03: Sorting 2D Dynamic MR Images Using Internal Respiratory Signal for 4D MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, Z; Hui, C; Beddar, S

    Purpose: To develop a novel algorithm to extract an internal respiratory signal (IRS) for sorting dynamic magnetic resonance (MR) images in order to achieve four-dimensional (4D) MR imaging. Methods: Dynamic MR images were obtained with balanced steady state free precession by acquiring each two-dimensional sagittal slice repeatedly for more than one breathing cycle. To generate a robust IRS, we used 5 different representative internal respiratory surrogates in both the image space (body area) and the Fourier space (the first two low-frequency phase components in the anterior-posterior direction, and the first two low-frequency phase components in the superior-inferior direction). A clustering algorithm was then used to search for a group of similar individual internal signals, which was then used to formulate the final IRS. A phantom study and a volunteer study were performed to demonstrate the effectiveness of this algorithm. The IRS was compared to the signal from the respiratory bellows. Results: The IRS computed by our algorithm matched well with the bellows signal in both the phantom and the volunteer studies. On average, the normalized cross correlation between the IRS and the bellows signal was 0.97 in the phantom study and 0.87 in the volunteer study, respectively. The average difference between the end inspiration times in the IRS and bellows signal was 0.18 s in the phantom study and 0.14 s in the volunteer study, respectively. 4D images sorted based on the IRS showed minimal mismatch artifacts, and the motion of the anatomy was coherent with the respiratory phases. Conclusion: A novel algorithm was developed to generate an IRS from dynamic MR images to achieve 4D MR imaging. The performance of the IRS was comparable to that of the bellows signal. It can be easily implemented in the clinic and potentially could replace the use of external respiratory surrogates. This research was partially funded by the Center for Radiation Oncology Research from UT MD Anderson Cancer Center.
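
    Two of the building blocks described above, an image-space surrogate (body area per frame) and the normalized cross correlation used to compare the IRS with the bellows signal, can be sketched as follows. The threshold choice is an illustrative assumption, and the clustering step that fuses the five surrogates is not shown.

```python
import numpy as np

def body_area_signal(frames, threshold=None):
    """Image-space respiratory surrogate: the body area (number of
    above-threshold pixels) in each dynamic 2D sagittal frame."""
    stack = np.asarray(frames, dtype=float)
    thr = threshold if threshold is not None else stack.mean()   # assumed threshold
    return (stack > thr).reshape(len(stack), -1).sum(axis=1).astype(float)

def normalized_cross_correlation(a, b):
    """Zero-lag normalized cross correlation between two respiratory signals,
    e.g. the derived IRS and the bellows trace."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```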

  16. Automated measurements of metabolic tumor volume and metabolic parameters in lung PET/CT imaging

    NASA Astrophysics Data System (ADS)

    Orologas, F.; Saitis, P.; Kallergi, M.

    2017-11-01

    Patients with lung tumors or inflammatory lung disease could greatly benefit in terms of treatment and follow-up by PET/CT quantitative imaging, namely measurements of metabolic tumor volume (MTV), standardized uptake values (SUVs) and total lesion glycolysis (TLG). The purpose of this study was the development of an unsupervised or partially supervised algorithm using standard image processing tools for measuring MTV, SUV, and TLG from lung PET/CT scans. Automated metabolic lesion volume and metabolic parameter measurements were achieved through a 5 step algorithm: (i) The segmentation of the lung areas on the CT slices, (ii) the registration of the CT segmented lung regions on the PET images to define the anatomical boundaries of the lungs on the functional data, (iii) the segmentation of the regions of interest (ROIs) on the PET images based on adaptive thresholding and clinical criteria, (iv) the estimation of the number of pixels and pixel intensities in the PET slices of the segmented ROIs, (v) the estimation of MTV, SUVs, and TLG from the previous step and DICOM header data. Whole body PET/CT scans of patients with sarcoidosis were used for training and testing the algorithm. Lung area segmentation on the CT slices was better achieved with semi-supervised techniques that reduced false positive detections significantly. Lung segmentation results agreed with the lung volumes published in the literature while the agreement between experts and algorithm in the segmentation of the lesions was around 88%. Segmentation results depended on the image resolution selected for processing. The clinical parameters, SUV (either mean or max or peak) and TLG estimated by the segmented ROIs and DICOM header data provided a way to correlate imaging data to clinical and demographic data. In conclusion, automated MTV, SUV, and TLG measurements offer powerful analysis tools in PET/CT imaging of the lungs. Custom-made algorithms are often a better approach than the manufacturer’s general analysis software at much lower cost. Relatively simple processing techniques could lead to customized, unsupervised or partially supervised methods that can successfully perform the desirable analysis and adapt to the specific disease requirements.
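
    Step (v) reduces to a few standard formulas once the ROI has been segmented: body-weight SUV is the activity concentration divided by injected dose per gram of body weight, MTV is the segmented volume, and TLG is SUVmean times MTV. The sketch below assumes decay-corrected activity concentrations are already available; reading and scaling the PET pixel data and DICOM header fields is omitted.

```python
import numpy as np

def suv_metrics(roi_activity_bq_ml, voxel_volume_ml, injected_dose_bq, body_weight_g):
    """Body-weight SUV, metabolic tumor volume (MTV) and total lesion
    glycolysis (TLG) for one segmented ROI of decay-corrected activity
    concentrations (Bq/mL)."""
    suv = roi_activity_bq_ml / (injected_dose_bq / body_weight_g)
    mtv_ml = suv.size * voxel_volume_ml
    return {
        "SUVmax": float(suv.max()),
        "SUVmean": float(suv.mean()),
        "MTV_ml": float(mtv_ml),
        "TLG": float(suv.mean() * mtv_ml),
    }
```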

  17. Open-source sea ice drift algorithm for Sentinel-1 SAR imagery using a combination of feature-tracking and pattern-matching

    NASA Astrophysics Data System (ADS)

    Muckenhuber, Stefan; Sandven, Stein

    2017-04-01

    An open-source sea ice drift algorithm for Sentinel-1 SAR imagery is introduced based on the combination of feature-tracking and pattern-matching. A computationally efficient feature-tracking algorithm produces an initial drift estimate and limits the search area for the pattern-matching, which provides small to medium scale drift adjustments and normalised cross correlation values as a quality measure. The algorithm is designed to utilise the respective advantages of the two approaches and allows drift calculation at user-defined locations. The pre-processing of the Sentinel-1 data has been optimised to retrieve a feature distribution that depends less on SAR backscatter peak values. A recommended parameter set for the algorithm has been found using a representative image pair over Fram Strait and 350 manually derived drift vectors as validation. Applying the algorithm with this parameter setting, sea ice drift retrieval with a vector spacing of 8 km on Sentinel-1 images covering 400 km x 400 km takes less than 3.5 minutes on a standard 2.7 GHz processor with 8 GB memory. For validation, buoy GPS data, collected in 2015 between 15th January and 22nd April and covering an area from 81° N to 83.5° N and 12° E to 27° E, were compared to calculated drift results from 261 corresponding Sentinel-1 image pairs. We found a logarithmic distribution of the error with a peak at 300 m. All software requirements necessary for applying the presented sea ice drift algorithm are open-source to ensure free implementation and easy distribution.
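
    The division of labour described above, feature tracking for a coarse estimate and pattern matching for the fine adjustment plus a quality measure, can be sketched with a brute-force normalised cross correlation search around the initial estimate. Template and search-window sizes below are illustrative, not the recommended parameter set of the paper.

```python
import numpy as np

def ncc_refine(img1, img2, pos, init_drift, tmpl=32, search=16):
    """Refine a feature-tracking drift estimate by normalised cross
    correlation: a template around `pos` (assumed to lie well inside img1)
    is compared with candidate windows in img2 around `pos + init_drift`."""
    y, x = pos
    dy0, dx0 = init_drift
    t = img1[y-tmpl//2:y+tmpl//2, x-tmpl//2:x+tmpl//2].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-12)
    best, best_shift = -np.inf, (dy0, dx0)
    for dy in range(dy0 - search, dy0 + search + 1):
        for dx in range(dx0 - search, dx0 + search + 1):
            c = img2[y+dy-tmpl//2:y+dy+tmpl//2,
                     x+dx-tmpl//2:x+dx+tmpl//2].astype(float)
            if c.shape != t.shape:
                continue                       # candidate window off the image
            c = (c - c.mean()) / (c.std() + 1e-12)
            ncc = float(np.mean(t * c))
            if ncc > best:
                best, best_shift = ncc, (dy, dx)
    return best_shift, best   # drift in pixels and its NCC quality measure
```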

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nam, Hyeong Soo; Kim, Chang-Soo; Yoo, Hongki, E-mail: kjwmm@korea.ac.kr, E-mail: hyoo@hanyang.ac.kr

    Purpose: Intravascular optical coherence tomography (IV-OCT) is a high-resolution imaging method used to visualize the microstructure of arterial walls in vivo. IV-OCT enables the clinician to clearly observe and accurately measure stent apposition and neointimal coverage of coronary stents, which are associated with side effects such as in-stent thrombosis. In this study, the authors present an algorithm for quantifying stent apposition and neointimal coverage by automatically detecting lumen contours and stent struts in IV-OCT images. Methods: The algorithm utilizes OCT intensity images and their first and second gradient images along the axial direction to detect lumen contours and stent strut candidates. These stent strut candidates are classified into true and false stent struts based on their features, using an artificial neural network with one hidden layer and ten nodes. After segmentation, either the protrusion distance (PD) or neointimal thickness (NT) for each strut is measured automatically. In randomly selected image sets covering a large variety of clinical scenarios, the results of the algorithm were compared to those of manual segmentation by IV-OCT readers. Results: Stent strut detection showed a 96.5% positive predictive value and a 92.9% true positive rate. In addition, case-by-case validation also showed comparable accuracy for most cases. High correlation coefficients (R > 0.99) were observed for PD and NT between the algorithmic and the manual results, showing little bias (0.20 and 0.46 μm, respectively) and a narrow range of limits of agreement (36 and 54 μm, respectively). In addition, the algorithm worked well in various clinical scenarios and even in cases with a low level of stent malapposition and neointimal coverage. Conclusions: The presented automatic algorithm enables robust and fast detection of lumen contours and stent struts and provides quantitative measurements of PD and NT. In addition, the algorithm was validated using various clinical cases to demonstrate its reliability. Therefore, this technique can be effectively utilized for clinical trials on stent-related side effects, including in-stent thrombosis and in-stent restenosis.
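
    The candidate classification stage maps naturally onto a small feed-forward network, e.g. scikit-learn's MLPClassifier with a single ten-node hidden layer as described above. The feature vectors and labels below are random placeholders; the actual strut features are defined in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical candidate features (e.g. peak intensity, shadow depth,
# axial gradient strength); the real feature set comes from the paper.
X_train = np.random.rand(200, 3)
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)   # toy labels

# One hidden layer with ten nodes, as described above.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

candidates = np.random.rand(5, 3)          # features of detected candidates
is_true_strut = clf.predict(candidates)    # 1 = true strut, 0 = false positive
```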

  19. Ultrafast image-based dynamic light scattering for nanoparticle sizing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Wu; Zhang, Jie; Liu, Lili

    An ultrafast sizing method for nanoparticles is proposed, called UIDLS (Ultrafast Image-based Dynamic Light Scattering). This method makes use of the intensity fluctuation of light scattered from nanoparticles in Brownian motion, similar to the conventional DLS method. The difference in the experimental system is that the light scattered by nanoparticles is received by an image sensor instead of a photomultiplier tube. A novel data processing algorithm is proposed to directly obtain the correlation coefficient between two images at a certain time interval (from microseconds to milliseconds) by employing a two-dimensional image correlation algorithm. This coefficient has been proved to be a monotonic function of the particle diameter. Samples of standard latex particles (79/100/352/482/948 nm) were measured for validation of the proposed method. A measurement accuracy higher than 90% was found, with standard deviations of less than 3%. A sample of nanosilver particles with a nominal size of 20 ± 2 nm and a sample of polymethyl methacrylate emulsion with unknown size were also tested using the UIDLS method. The measured results were 23.2 ± 3.0 nm and 246.1 ± 6.3 nm, respectively, which is substantially consistent with the transmission electron microscope results. Since the time to acquire two successive images has been reduced to less than 1 ms and the data processing time to about 10 ms, the total measuring time can be dramatically reduced from hundreds of seconds to tens of milliseconds, which provides the potential for real-time and in situ nanoparticle sizing.
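
    The core of the data processing is a single two-dimensional correlation coefficient between two frames separated by a short interval; larger particles diffuse more slowly, so their speckle pattern decorrelates less and the coefficient stays higher at a fixed interval. A minimal sketch of that coefficient (the calibration from coefficient to diameter is not shown):

```python
import numpy as np

def frame_correlation(frame_a, frame_b):
    """Two-dimensional image correlation coefficient between two frames taken
    a short interval apart.  For a fixed interval it increases with particle
    diameter, since slower Brownian motion decorrelates the speckle less."""
    a = frame_a.astype(float).ravel()
    b = frame_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```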

  20. Evaluation of automatic image quality assessment in chest CT - A human cadaver study.

    PubMed

    Franck, Caro; De Crop, An; De Roo, Bieke; Smeets, Peter; Vergauwen, Merel; Dewaele, Tom; Van Borsel, Mathias; Achten, Eric; Van Hoof, Tom; Bacher, Klaus

    2017-04-01

    The evaluation of clinical image quality (IQ) is important to optimize CT protocols and to keep patient doses as low as reasonably achievable. Considering the significant amount of effort needed for human observer studies, automatic IQ tools are a promising alternative. The purpose of this study was to evaluate automatic IQ assessment in chest CT using Thiel-embalmed cadavers. Chest CTs of Thiel-embalmed cadavers were acquired at different exposures. Clinical IQ was determined by performing a visual grading analysis. Physical-technical IQ (noise, contrast-to-noise and contrast-detail) was assessed in a Catphan phantom. Soft and sharp reconstructions were made with filtered back projection and two strengths of iterative reconstruction. In addition to the classical IQ metrics, an automatic algorithm was used to calculate image quality scores (IQs). To be able to compare datasets reconstructed with different kernels, the IQs values were normalized. Good correlations were found between IQs and the measured physical-technical image quality: noise (ρ=-1.00), contrast-to-noise (ρ=1.00) and contrast-detail (ρ=0.96). The correlation coefficients between IQs and the observed clinical image quality of soft and sharp reconstructions were 0.88 and 0.93, respectively. The automatic scoring algorithm is a promising tool for the evaluation of thoracic CT scans in daily clinical practice. It allows monitoring of the image quality of a chest protocol over time, without human intervention. Different reconstruction kernels can be compared after normalization of the IQs.
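
    The physical-technical metrics and the reported rank correlations can be sketched in a few lines: noise and contrast-to-noise from phantom ROIs, and a Spearman coefficient between automatic and observer scores. The numeric values below are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

def noise_and_cnr(roi_object, roi_background):
    """Physical-technical metrics from phantom ROIs: image noise (SD of a
    uniform background ROI) and contrast-to-noise ratio."""
    noise = roi_background.std()
    cnr = abs(roi_object.mean() - roi_background.mean()) / noise
    return noise, cnr

# Spearman rank correlation between automatic scores and observer VGA scores
# (placeholder values, for illustration only).
auto_scores = [0.31, 0.45, 0.58, 0.70, 0.81]
vga_scores = [1.2, 1.9, 2.5, 3.1, 3.6]
rho, _ = spearmanr(auto_scores, vga_scores)
```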
