Science.gov

Sample records for improving image analysis

  1. [Decomposition of Interference Hyperspectral Images Using Improved Morphological Component Analysis].

    PubMed

    Wen, Jia; Zhao, Jun-suo; Wang, Cai-ling; Xia, Yu-li

    2016-01-01

    Because of the special imaging principle of interference hyperspectral image data, every frame contains many vertical interference stripes. The stripes' positions are fixed and their pixel values are very high, and horizontal displacements also exist in the background between frames. These special characteristics destroy the regular structure of the original interference hyperspectral image data, so the direct application of compressive sensing theory or traditional compression algorithms cannot achieve the ideal effect. Since the interference-stripe signals and the background signals have different characteristics, the orthogonal bases that sparsely represent them also differ. Following this idea, this paper adopts morphological component analysis (MCA) to separate the interference-stripe signals from the background signals. Because the huge volume of interference hyperspectral image data leads to slow iterative convergence and low computational efficiency in the traditional MCA algorithm, an improved MCA algorithm is also proposed that exploits the characteristics of these data. First, the iterative convergence condition is improved: the iteration terminates when the error between the separated image signals and the original image signals is almost unchanged. Second, based on the idea that an orthogonal basis can sparsely represent its corresponding signal but not the other signals, an adaptive threshold-update scheme is proposed to accelerate the traditional MCA algorithm: the projection coefficients of the image signals onto the different orthogonal bases are computed and compared to obtain the minimum and maximum threshold values, and their average is chosen as the optimal threshold for the adaptive update. The experimental results prove that ...
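
    The separation scheme described above can be illustrated with a minimal one-dimensional sketch. This is an assumption-laden toy, not the paper's implementation: it takes the stripes to be sparse in the identity basis and the background to be sparse in the Fourier basis, uses a linearly decreasing threshold schedule, and initializes the threshold as the average of the smallest and largest per-basis maximum projection coefficients, loosely following the adaptive-threshold idea.

```python
import numpy as np

def soft(v, t):
    """Soft-threshold real or complex coefficients by t."""
    mag = np.abs(v)
    return v * np.maximum(1.0 - t / np.maximum(mag, 1e-12), 0.0)

def mca_separate(x, n_iter=200):
    """Toy MCA: split x into a spiky 'stripe' part (sparse in the identity
    basis) and a smooth 'background' part (sparse in the Fourier basis)
    by alternating soft-thresholding with a decreasing threshold."""
    spikes = np.zeros_like(x)
    smooth = np.zeros_like(x)
    # candidate thresholds: the largest projection coefficient in each basis
    t_id = np.abs(x).max()                             # identity basis
    t_ft = np.abs(np.fft.rfft(x, norm="ortho")).max()  # Fourier basis
    t0 = 0.5 * (min(t_id, t_ft) + max(t_id, t_ft))     # average of min and max
    for i in range(n_iter):
        t = t0 * (1.0 - i / n_iter)
        # stripes: threshold the residual in the identity basis
        spikes = soft(x - smooth, t)
        # background: threshold the residual in the Fourier basis
        c = soft(np.fft.rfft(x - spikes, norm="ortho"), t)
        smooth = np.fft.irfft(c, n=len(x), norm="ortho")
    return spikes, smooth
```

    On a sinusoidal background with an added spike, the two returned components recover the spike and the sinusoid separately.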

  2. Textural Analysis of Hyperspectral Images for Improving Contaminant Detection Accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Previous studies demonstrated a hyperspectral imaging system has a potential for poultry fecal contaminant detection by measuring reflectance intensity. The simple image ratio at 565 and 517-nm images with optimal thresholding was able to detect fecal contaminants on broiler carcasses with high acc...

  3. Improving Resolution and Depth of Astronomical Observations via Modern Mathematical Methods for Image Analysis

    NASA Astrophysics Data System (ADS)

    Castellano, M.; Ottaviani, D.; Fontana, A.; Merlin, E.; Pilo, S.; Falcone, M.

    2015-09-01

    In the past years, modern mathematical methods for image analysis have led to a revolution in many fields, from computer vision to scientific imaging. However, some recently developed image processing techniques successfully exploited in other sectors have rarely, if ever, been tried on astronomical observations. We present here tests of two classes of variational image enhancement techniques, "structure-texture decomposition" and "super-resolution", showing that they are effective in improving the quality of observations. Structure-texture decomposition makes it possible to recover faint sources previously hidden by the background noise, effectively increasing the depth of available observations. Super-resolution yields a higher-resolution, better-sampled image out of a set of low-resolution frames, thus mitigating problems in data analysis arising from differences in resolution and sampling between instruments, as in the case of the EUCLID VIS and NIR imagers.

  4. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is exploited by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely to physical surfaces in the scene. The technique is largely independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented for a complex urban scene containing various man-made and natural features. This scene poses a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.

  5. Analysis of Scattering Components from Fully Polarimetric SAR Images for Improving Accuracies of Urban Density Estimation

    NASA Astrophysics Data System (ADS)

    Susaki, J.

    2016-06-01

    In this paper, we analyze probability density functions (PDFs) of scatterings derived from fully polarimetric synthetic aperture radar (SAR) images for improving the accuracies of estimated urban density. We have reported a method for estimating urban density that uses an index Tv+c obtained by normalizing the sum of volume and helix scatterings Pv+c. Validation results showed that estimated urban densities have a high correlation with building-to-land ratios (Kajimoto and Susaki, 2013b; Susaki et al., 2014). While the method is found to be effective for estimating urban density, it is not clear why Tv+c is more effective than indices derived from other scatterings, such as surface or double-bounce scatterings, observed in urban areas. In this research, we focus on PDFs of scatterings derived from fully polarimetric SAR images in terms of scattering normalization. First, we introduce a theoretical PDF that assumes that image pixels have scatterers showing random backscattering. We then generate PDFs of scatterings derived from observations of concrete blocks with different orientation angles, and from a satellite-based fully polarimetric SAR image. The analysis of the PDFs and the derived statistics reveals that the curves of the PDFs of Pv+c are the most similar to the normal distribution among all the scatterings derived from fully polarimetric SAR images. It was found that Tv+c works most effectively because of its similarity to the normal distribution.

  6. [Spine disc MR image analysis using improved independent component analysis based active appearance model and Markov random field].

    PubMed

    Hao, Shijie; Zhan, Shu; Jiang, Jianguo; Li, Hong; Ian, Rosse

    2010-02-01

    As there are few research reports on the segmentation and quantitative analysis of soft tissues in lumbar medical images, this paper presents an algorithm for segmenting and quantitatively analyzing discs in lumbar magnetic resonance imaging (MRI). Vertebrae are first segmented using an improved independent component analysis based active appearance model (ICA-AAM), and the lumbar curve is obtained with minimum description length (MDL); based on these results, fast and unsupervised Markov random field (MRF) disc segmentation combining disc imaging features and intensity profiles is then achieved; finally, disc herniation is quantitatively evaluated. The experiments prove that the proposed algorithm is fast and effective, providing doctors with aid in diagnosing and treating lumbar disc herniation.

  7. Improvement in DMSA imaging using adaptive noise reduction: an ROC analysis.

    PubMed

    Lorimer, Lisa; Gemmell, Howard G; Sharp, Peter F; McKiddie, Fergus I; Staff, Roger T

    2012-11-01

    Dimercaptosuccinic acid imaging is the 'gold standard' for the detection of cortical defects and diagnosis of scarring of the kidneys. The Siemens planar processing package, which implements adaptive noise reduction using the Pixon algorithm, is designed to allow a reduction in image noise, enabling improved image quality and reduced acquisition time/injected activity. This study aimed to establish the level of improvement in image quality achievable using this algorithm. Images were acquired of a phantom simulating a single kidney with a range of defects of varying sizes, positions and contrasts. These images were processed using the Pixon processing software and shown to 12 observers (six experienced and six novices) who were asked to rate the images on a six-point scale depending on their confidence that a defect was present. The data were analysed using a receiver operating characteristic approach. Results showed that processed images significantly improved the performance of the experienced observers in terms of their sensitivity and specificity. Although novice observers showed significant increase in sensitivity when using the software, a significant decrease in specificity was also seen. This study concludes that the Pixon software can be used to improve the assessment of cortical defects in dimercaptosuccinic acid imaging by suitably trained observers.

  8. The Comet Assay: Automated Imaging Methods for Improved Analysis and Reproducibility.

    PubMed

    Braafladt, Signe; Reipa, Vytas; Atha, Donald H

    2016-01-01

    Sources of variability in the comet assay include variations in the protocol used to process the cells, the microscope imaging system and the software used in the computerized analysis of the images. Here we focus on the effect of variations in the microscope imaging system and software analysis using fixed preparations of cells and a single cell processing protocol. To determine the effect of the microscope imaging and analysis on the measured percentage of damaged DNA (% DNA in tail), we used preparations of mammalian cells treated with etoposide or electrochemically induced DNA damage conditions and varied the settings of the automated microscope, camera, and commercial image analysis software. Manual image analysis revealed measurement variations in percent DNA in tail as high as 40% due to microscope focus, camera exposure time and the software image intensity threshold level. Automated image analysis reduced these variations as much as three-fold, but only within a narrow range of focus and exposure settings. The magnitude of variation, observed using both analysis methods, was highly dependent on the overall extent of DNA damage in the particular sample. Mitigating these sources of variability with optimal instrument settings facilitates an accurate evaluation of cell biological variability. PMID:27581626

  10. The Comet Assay: Automated Imaging Methods for Improved Analysis and Reproducibility

    PubMed Central

    Braafladt, Signe; Reipa, Vytas; Atha, Donald H.

    2016-01-01

    Sources of variability in the comet assay include variations in the protocol used to process the cells, the microscope imaging system and the software used in the computerized analysis of the images. Here we focus on the effect of variations in the microscope imaging system and software analysis using fixed preparations of cells and a single cell processing protocol. To determine the effect of the microscope imaging and analysis on the measured percentage of damaged DNA (% DNA in tail), we used preparations of mammalian cells treated with etoposide or electrochemically induced DNA damage conditions and varied the settings of the automated microscope, camera, and commercial image analysis software. Manual image analysis revealed measurement variations in percent DNA in tail as high as 40% due to microscope focus, camera exposure time and the software image intensity threshold level. Automated image analysis reduced these variations as much as three-fold, but only within a narrow range of focus and exposure settings. The magnitude of variation, observed using both analysis methods, was highly dependent on the overall extent of DNA damage in the particular sample. Mitigating these sources of variability with optimal instrument settings facilitates an accurate evaluation of cell biological variability. PMID:27581626

  11. Implementation of improved interactive image analysis at the Advanced Photon Source (APS) linac.

    SciTech Connect

    Arnold, N.

    1998-09-11

    An image-analysis system, based on commercially available data visualization software (IDL [1]), allows convenient interaction with image data while still providing calculated beam parameters at a rate of up to 2 Hz. Image data are transferred from the IOC to the workstation via EPICS [2] channel access. A custom EPICS record was created in order to overcome the channel access limit of 16k bytes per array. The user can conveniently calibrate optical transition radiation (OTR) and fluorescent screens, capture background images, acquire and average a series of images, and specify several other filtering and viewing options. The images can be saved in either IDL format or APS-standard format (SDDS [3]), allowing for rapid postprocessing of image data by numerous other software tools.

  12. Image quality analysis and improvement of Ladar reflective tomography for space object recognition

    NASA Astrophysics Data System (ADS)

    Wang, Jin-cheng; Zhou, Shi-wei; Shi, Liang; Hu, Yi-Hua; Wang, Yong

    2016-01-01

    Some problems in the application of Ladar reflective tomography for space object recognition are studied in this work. An analytic target model is adopted to investigate the image reconstruction properties under a limited relative angle range, which is useful for verifying the target shape from an incomplete image, analyzing the shadowing effect of the target, and designing satellite payloads against recognition via the reflective tomography approach. We propose an iterative maximum likelihood method based on Bayesian theory, which can effectively compress the pulse width and greatly improve the image resolution of an incoherent LRT system without loss of signal-to-noise ratio.
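
    For illustration only, a generic iterative maximum-likelihood deconvolution (Richardson-Lucy, shown here as a stand-in for the paper's Bayesian formulation; the pulse shape and iteration count below are arbitrary) compresses a broad return pulse back toward the underlying reflectivity spikes:

```python
import numpy as np

def richardson_lucy(measured, pulse, n_iter=50):
    """Iterative ML deconvolution for a nonnegative signal blurred by `pulse`.
    Each iteration multiplies the estimate by the back-projected ratio of the
    measurement to the current model, which sharpens peaks without the noise
    amplification of naive inverse filtering."""
    est = np.full_like(measured, measured.mean(), dtype=float)
    kernel_flip = pulse[::-1]  # correlation kernel (pulse reversed)
    for _ in range(n_iter):
        conv = np.convolve(est, pulse, mode="same")
        ratio = measured / np.maximum(conv, 1e-12)
        est *= np.convolve(ratio, kernel_flip, mode="same")
    return est
```

    Applied to a return made of two spikes blurred by a Gaussian pulse, the iterations concentrate the energy back onto the spike positions.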

  13. Theoretical Analysis of the Sensitivity and Speed Improvement of ISIS over a Comparable Traditional Hyperspectral Imager

    SciTech Connect

    Brian R. Stallard; Stephen M. Gentry

    1998-09-01

    The analysis presented herein predicts that, under signal-independent noise limited conditions, an Information-efficient Spectral Imaging Sensor (ISIS) style hyperspectral imaging system design can obtain significant signal-to-noise ratio (SNR) and speed increase relative to a comparable traditional hyperspectral imaging (HSI) instrument. Factors of forty are reasonable for a single vector, and factors of eight are reasonable for a five-vector measurement. These advantages can be traded with other system parameters in an overall sensor system design to allow a variety of applications to be done that otherwise would be impossible within the constraints of the traditional HSI style design.

  14. Non-rigid registration and non-local principle component analysis to improve electron microscopy spectrum images

    NASA Astrophysics Data System (ADS)

    Yankovich, Andrew B.; Zhang, Chenyu; Oh, Albert; Slater, Thomas J. A.; Azough, Feridoon; Freer, Robert; Haigh, Sarah J.; Willett, Rebecca; Voyles, Paul M.

    2016-09-01

    Image registration and non-local Poisson principal component analysis (PCA) denoising improve the quality of characteristic x-ray (EDS) spectrum imaging of Ca-stabilized Nd2/3TiO3 acquired at atomic resolution in a scanning transmission electron microscope. Image registration based on the simultaneously acquired high angle annular dark field image significantly outperforms acquisition with a long pixel dwell time or drift correction using a reference image. Non-local Poisson PCA denoising reduces noise more strongly than conventional weighted PCA while preserving atomic structure more faithfully. The reliability of, and optimal internal parameters for, non-local Poisson PCA denoising of EDS spectrum images are assessed using tests on phantom data.

  15. Image Improvement Techniques

    NASA Astrophysics Data System (ADS)

    Shine, R. A.

    1997-05-01

    Over the last decade, a repertoire of techniques has been developed and/or refined to improve the quality of high spatial resolution solar movies taken from ground-based observatories. These include real-time image motion correction, frame selection, phase diversity measurements of the wavefront, and extensive post-processing to partially remove atmospheric distortion. Their practical application has been made possible by the increasing availability and decreasing cost of large CCDs with fast digital readouts and high-speed computer workstations with large memories. Most successful have been broad-band (0.3 to 10 nm) filtergram movies, which can use exposure times of 10 to 30 ms, short enough to ``freeze'' atmospheric motions. Even so, only a handful of movies with excellent image quality for more than an hour have been obtained to date. Narrowband filtergrams (about 0.01 nm), such as those required for constructing magnetograms and Dopplergrams, have been more challenging, although some single images approach the quality of the best continuum images. Some promising new techniques and instruments, together with persistence and good luck, should continue the progress made in the last several years.

  16. Improved Dynamic Analysis method for quantitative PIXE and SXRF element imaging of complex materials

    NASA Astrophysics Data System (ADS)

    Ryan, C. G.; Laird, J. S.; Fisher, L. A.; Kirkham, R.; Moorhead, G. F.

    2015-11-01

    The Dynamic Analysis (DA) method in the GeoPIXE software provides a rapid tool to project quantitative element images from PIXE and SXRF imaging event data both for off-line analysis and in real-time embedded in a data acquisition system. Initially, it assumes uniform sample composition, background shape and constant model X-ray relative intensities. A number of image correction methods can be applied in GeoPIXE to correct images to account for chemical concentration gradients, differential absorption effects, and to correct images for pileup effects. A new method, applied in a second pass, uses an end-member phase decomposition obtained from the first pass, and DA matrices determined for each end-member, to re-process the event data with each pixel treated as an admixture of end-member terms. This paper describes the new method and demonstrates through examples and Monte-Carlo simulations how it better tracks spatially complex composition and background shape while still benefitting from the speed of DA.
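
    The admixture step can be sketched as an ordinary least-squares unmixing of each pixel spectrum into end-member contributions. This is a simplification: GeoPIXE applies per-end-member DA matrices to the event data, and `unmix_pixels` with its crude clipping is purely illustrative.

```python
import numpy as np

def unmix_pixels(spectra, endmembers):
    """Express each pixel spectrum as an admixture of end-member spectra.
    spectra: (n_pixels, n_channels); endmembers: (n_members, n_channels).
    Returns (n_pixels, n_members) mixing coefficients."""
    # solve endmembers.T @ coeffs = spectra.T in the least-squares sense
    coeffs, *_ = np.linalg.lstsq(endmembers.T, spectra.T, rcond=None)
    return np.clip(coeffs.T, 0.0, None)  # crude nonnegativity, for the sketch
```

    For a pixel that truly is a mixture of the end-members, the recovered coefficients are exact.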

  17. Analysis of explosion in enclosure based on improved method of images

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Guo, J.; Yao, X.; Chen, G.; Zhu, X.

    2016-05-01

    The aim of this paper is to present an improved method to calculate the pressure loading on walls during a confined explosion. When an explosion occurs inside an enclosure, reflected shock waves produce multiple pressure peaks at a given wall location, especially at the corners. The effects of confined blast loading may bring about more serious damage to the structure because of this multiple shock reflection. The method of images (MOI), first used by Chan to describe the tracks of shock waves based on mirror-reflection theory, is adopted to simplify internal explosion loading calculations. An improved method of images is then proposed that takes into account wall openings and oblique reflections, which cannot be considered with the standard MOI. The approach, validated using experimental data, provides a simplified and quick way to calculate the loading of a confined explosion. The results show that the peak overpressure tends to decline as the measurement point moves away from the center and increases sharply as it approaches the enclosure corners; the specific impulse increases from the center to the corners. The improved method is capable of predicting pressure-time history and impulse with an accuracy comparable to that of three-dimensional AUTODYN code predictions.
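
    A minimal sketch of the standard MOI for a 2-D rectangular enclosure, ignoring the paper's improvements for openings and oblique reflections: each wall reflection is replaced by a mirror-image source, and a constant shock speed is a simplifying assumption used only to turn path lengths into arrival times.

```python
import math
from itertools import product

def image_sources(charge, room, order):
    """Image-source positions for a 2-D rectangular enclosure [0,Lx]x[0,Ly].
    Mirroring the charge across the walls gives x-coordinates 2*m*Lx +/- cx
    and y-coordinates 2*n*Ly +/- cy; `order` caps the reflection count."""
    (cx, cy), (Lx, Ly) = charge, room
    xs = [2 * m * Lx + s * cx for m in range(-order, order + 1) for s in (1, -1)]
    ys = [2 * n * Ly + s * cy for n in range(-order, order + 1) for s in (1, -1)]
    return [(x, y) for x, y in product(xs, ys)]

def arrival_times(charge, sensor, room, speed, order=2):
    """Sorted arrival times of the direct shock and its wall reflections,
    assuming (simplistically) a constant propagation speed."""
    sx, sy = sensor
    dists = sorted(math.hypot(x - sx, y - sy)
                   for x, y in image_sources(charge, room, order))
    return [d / speed for d in dists]
```

    The earliest arrival always corresponds to the direct line-of-sight path; later arrivals are the reflections that produce the multiple pressure peaks described above.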

  18. Improving Three-Dimensional (3D) Range Gated Reconstruction Through Time-of-Flight (TOF) Imaging Analysis

    NASA Astrophysics Data System (ADS)

    Chua, S. Y.; Wang, X.; Guo, N.; Tan, C. S.; Chai, T. Y.; Seet, G. L.

    2016-04-01

    This paper performs an experimental investigation of the TOF imaging profile, which strongly influences the quality of reconstruction in accurate range sensing. Our analysis shows that the recorded reflected intensity profile deviates from the commonly assumed Gaussian model and can be perceived as a mixture of noise and the actual reflected signal. A noise-weighted average range calculation is therefore proposed to alleviate the influence of noise, based on the signal detection threshold and system noise. Our experimental results demonstrate that this alternative range solution achieves better accuracy than the conventional weighted average method and acts as a paraxial correction that improves range reconstruction in a 3D gated imaging system.
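
    The idea behind the noise-weighted average can be sketched as follows; `noise_weighted_range` is an illustrative reading of the approach (the paper's actual weighting is built from the signal detection threshold and measured system noises):

```python
import numpy as np

def centroid_range(t, intensity):
    """Conventional weighted average: every sample, noise included,
    weights the range estimate."""
    return np.sum(t * intensity) / np.sum(intensity)

def noise_weighted_range(t, intensity, noise_floor):
    """Sketch of a noise-weighted average: samples at or below the estimated
    noise floor are discarded and the floor is subtracted from the rest, so
    only the actual reflected signal contributes to the centroid."""
    w = np.where(intensity > noise_floor, intensity - noise_floor, 0.0)
    return np.sum(t * w) / np.sum(w)
```

    With a constant noise floor added to a pulse, the plain centroid is pulled toward the middle of the acquisition window, while the noise-weighted version stays on the pulse.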

  19. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machine classification is applied. Then, at each iteration, the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work is the estimation of a DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results than previously proposed methods.

  20. Improved structure, function and compatibility for CellProfiler: modular high-throughput image analysis software

    PubMed Central

    Kamentsky, Lee; Jones, Thouis R.; Fraser, Adam; Bray, Mark-Anthony; Logan, David J.; Madden, Katherine L.; Ljosa, Vebjorn; Rueden, Curtis; Eliceiri, Kevin W.; Carpenter, Anne E.

    2011-01-01

    Summary: There is a strong and growing need in the biology research community for accurate, automated image analysis. Here, we describe CellProfiler 2.0, which has been engineered to meet the needs of its growing user base. It is more robust and user friendly, with new algorithms and features to facilitate high-throughput work. ImageJ plugins can now be run within a CellProfiler pipeline. Availability and Implementation: CellProfiler 2.0 is free and open source, available at http://www.cellprofiler.org under the GPL v. 2 license. It is available as a packaged application for Macintosh OS X and Microsoft Windows and can be compiled for Linux. Contact: anne@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21349861

  1. Image analysis techniques: Used to quantify and improve the precision of coatings testing results

    SciTech Connect

    Duncan, D.J.; Whetten, A.R.

    1993-12-31

    Coating evaluations often specify tests that measure performance characteristics rather than coating physical properties, and the results of these evaluations are often very subjective. A new tool, Digital Video Image Analysis (DVIA), is successfully being used for two automotive evaluations: the cyclic (scab) corrosion test and the gravelometer (chip) test. An experimental design was carried out to evaluate variability and interactions among the instrumental factors. This analysis method has proved to be an order of magnitude more sensitive and more reproducible than the current evaluations. Coating characteristics that previously had no way of being expressed can now be described and measured. For example, DVIA chip evaluations can differentiate how much damage was done to the topcoat, to the primer, and even to the metal. DVIA, with or without magnification, has the capability to become the quantitative measuring tool for several other coating evaluations, such as T-bends, wedge bends, acid etch analysis, coating defects, and observing cure and defect formation or elimination over time.

  2. Temporal change analysis for improved tumor detection in dedicated CT breast imaging using affine and free-form deformation

    NASA Astrophysics Data System (ADS)

    Dey, Joyoni; O'Connor, J. Michael; Chen, Yu; Glick, Stephen J.

    2008-03-01

    Preliminary evidence has suggested that computerized tomographic (CT) imaging of the breast using a cone-beam, flat-panel detector system dedicated solely to breast imaging has potential for improving detection and diagnosis of early-stage breast cancer. Hypothetically, a powerful mechanism for assisting early-stage breast cancer detection from annual screening breast CT studies would be to examine temporal changes in the breast from year to year. We hypothesize that 3D image registration could be used to automatically register breast CT volumes scanned at different times (e.g., yearly screening exams). This would allow radiologists to quickly visualize small changes in the breast that have developed since the last screening CT scan, and to use this information to improve the diagnostic accuracy of early-stage breast cancer detection. To test our hypothesis, fresh mastectomy specimens were imaged with a flat-panel CT system at different time points, after moving the specimen to emulate the repositioning of the breast between yearly screening exams. Synthetic tumors were then digitally inserted into the second CT scan at clinically realistic locations (to emulate tumor growth from year to year). An affine and a spline-based 3D image registration algorithm were implemented and applied to the CT reconstructions of the specimens acquired at different times. Subtraction of the registered image volumes was then performed to better analyze temporal change. Results from this study suggest that temporal change analysis in 3D breast CT can potentially be a powerful tool for improving the visualization of small lesion growth.
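
    The register-then-subtract idea can be sketched with a much simpler translation-only registration via FFT cross-correlation; this is a stand-in for the affine and free-form deformation used in the study, shown in 2-D for brevity.

```python
import numpy as np

def register_translation(fixed, moving):
    """Estimate an integer translation by the peak of the circular
    cross-correlation, computed via FFT."""
    corr = np.fft.ifft2(np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))).real
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped indices to signed shifts
    return [(s if s <= n // 2 else s - n) for s, n in zip(shift, corr.shape)]

def temporal_difference(scan1, scan2):
    """Align the second scan to the first, then subtract, so that new
    structure (e.g., lesion growth) stands out in the difference image."""
    dy, dx = register_translation(scan1, scan2)
    aligned = np.roll(np.roll(scan2, dy, axis=0), dx, axis=1)
    return aligned - scan1
```

    With a shifted copy of a scan plus a small synthetic "tumor", the difference image is near zero everywhere except at the inserted lesion.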

  3. Improved spatial regression analysis of diffusion tensor imaging for lesion detection during longitudinal progression of multiple sclerosis in individual subjects

    NASA Astrophysics Data System (ADS)

    Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui

    2016-03-01

    Subject-specific longitudinal DTI study is vital for investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD (dubbed improved SPREAD, or iSPREAD) method by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method has been improved substantially by adapting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI.

  4. Improved spatial regression analysis of diffusion tensor imaging for lesion detection during longitudinal progression of multiple sclerosis in individual subjects.

    PubMed

    Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui

    2016-03-21

    Subject-specific longitudinal DTI study is vital for investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD (dubbed improved SPREAD, or iSPREAD) method by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method has been improved substantially by adapting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI. PMID:26948513

  5. Lucky imaging: improved localization accuracy for single molecule imaging.

    PubMed

    Cronin, Bríd; de Wet, Ben; Wallace, Mark I

    2009-04-01

    We apply the astronomical data-analysis technique, Lucky imaging, to improve resolution in single molecule fluorescence microscopy. We show that by selectively discarding data points from individual single-molecule trajectories, imaging resolution can be improved by a factor of 1.6 for individual fluorophores and up to 5.6 for more complex images. The method is illustrated using images of fluorescent dye molecules and quantum dots, and the in vivo imaging of fluorescently labeled linker for activation of T cells.
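
    The selective-discard idea can be sketched generically: score each frame, keep only the best fraction, and average. The variance score and `keep_fraction` below are crude stand-ins for the paper's localization-quality criterion.

```python
import numpy as np

def lucky_average(frames, keep_fraction=0.25):
    """Average only the best-scoring frames, discarding the rest."""
    scores = [float(np.var(f)) for f in frames]   # crude sharpness proxy
    k = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[::-1][:k]           # indices of the top-k frames
    return np.mean([frames[i] for i in best], axis=0)
```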

  6. An improved parameter estimation scheme for image modification detection based on DCT coefficient analysis.

    PubMed

    Yu, Liyang; Han, Qi; Niu, Xiamu; Yiu, S M; Fang, Junbin; Zhang, Ye

    2016-02-01

    Most existing image modification detection methods based on DCT coefficient analysis model the distribution of DCT coefficients as a mixture of a modified and an unchanged component. To separate the two components, two parameters have to be estimated: the primary quantization step, Q1, and the portion of the modified region, α; more accurate estimates of α and Q1 lead to better detection and localization results. Existing methods estimate α and Q1 in a completely blind manner, without considering the characteristics of the mixture model or the constraints to which α should conform. In this paper, we propose a more effective scheme for estimating α and Q1, based on the observations that the surface of the likelihood function corresponding to the mixture model is largely smooth and that α can take values only in a discrete set. We conduct extensive experiments to evaluate the proposed method, and the experimental results confirm its efficacy.
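
    Because α is restricted to a discrete set, its estimation reduces to a small search maximizing the mixture likelihood. The toy below uses known Gaussian component densities purely for illustration; the paper's components are DCT-coefficient distributions, and Q1 is estimated jointly rather than assumed known.

```python
import math

def estimate_alpha(data, pdf_modified, pdf_unchanged, alpha_grid):
    """Maximize the two-component mixture log-likelihood over a discrete
    set of candidate mixture weights alpha."""
    best_alpha, best_ll = None, float("-inf")
    for a in alpha_grid:
        ll = sum(math.log(a * pdf_modified(x) + (1 - a) * pdf_unchanged(x) + 1e-300)
                 for x in data)
        if ll > best_ll:
            best_alpha, best_ll = a, ll
    return best_alpha
```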

  7. Improved Helioseismic Analysis of Medium-ℓ Data from the Michelson Doppler Imager

    NASA Astrophysics Data System (ADS)

    Larson, Timothy P.; Schou, Jesper

    2015-11-01

    We present a comprehensive study of one method for measuring various parameters of global modes of oscillation of the Sun. Using velocity data taken by the Michelson Doppler Imager (MDI), we analyze spherical harmonic degrees ℓ≤300. Both current and historical methodologies are explained, and the various differences between the two are investigated to determine their effects on global-mode parameters and systematic errors in the analysis. These differences include a number of geometric corrections made during spherical harmonic decomposition; updated routines for generating window functions, detrending time series, and filling gaps; and consideration of physical effects such as mode-profile asymmetry, horizontal displacement at the solar surface, and distortion of eigenfunctions by differential rotation. We apply these changes one by one to three years of data, and then reanalyze the entire MDI mission applying all of them, using both the original 72-day time series and 360-day time series. We find significant changes in mode parameters, both as a result of the various changes to the processing and between the 72-day and 360-day analyses. We find reduced residuals of inversions for internal rotation, although apparent artifacts remain, such as the peak in the rotation rate near the surface at high latitudes. An annual periodicity in the f-mode frequencies is also investigated.

  9. Computerized multiple image analysis on mammograms: performance improvement of nipple identification for registration of multiple views using texture convergence analyses

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Paramagul, Chintana

    2004-05-01

    Automated registration of multiple mammograms for CAD depends on accurate nipple identification. We developed two new image analysis techniques based on geometric and texture convergence analyses to improve the performance of our previously developed nipple identification method. A gradient-based algorithm is used to automatically track the breast boundary. The nipple search region along the boundary is then defined by geometric convergence analysis of the breast shape. Three nipple candidates are identified by detecting the changes along the gray level profiles inside and outside the boundary and the changes in the boundary direction. A texture orientation-field analysis method is developed to estimate the fourth nipple candidate based on the convergence of the tissue texture pattern towards the nipple. The final nipple location is determined from the four nipple candidates by a confidence analysis. Our training and test data sets consisted of 419 and 368 randomly selected mammograms, respectively. The nipple location identified on each image by an experienced radiologist was used as the ground truth. For 118 of the training and 70 of the test images, the radiologist could not positively identify the nipple, but provided an estimate of its location. These were referred to as invisible nipple images. In the training data set, 89.37% (269/301) of the visible nipples and 81.36% (96/118) of the invisible nipples could be detected within 1 cm of the truth. In the test data set, 92.28% (275/298) of the visible nipples and 67.14% (47/70) of the invisible nipples were identified within 1 cm of the truth. In comparison, our previous nipple identification method without using the two convergence analysis techniques detected 82.39% (248/301), 77.12% (91/118), 89.93% (268/298) and 54.29% (38/70) of the nipples within 1 cm of the truth for the visible and invisible nipples in the training and test sets, respectively. The results indicate that the nipple on mammograms can be

  10. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

  11. Improvement of CAT scanned images

    NASA Technical Reports Server (NTRS)

    Roberts, E., Jr.

    1980-01-01

    Digital enhancement procedure improves definition of images. Tomogram is generated from large number of X-ray beams. Beams are collimated and small in diameter. Scanning device passes beams sequentially through human subject at many different angles. Battery of transducers opposite subject senses attenuated signals. Signals are transmitted to computer where they are used in construction of image on transverse plane through body.

  12. Robust biological parametric mapping: an improved technique for multimodal brain image analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.

    2011-03-01

    Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
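
    The robust-regression machinery can be illustrated in miniature with a Huber M-estimator fitted by iteratively reweighted least squares; this generic sketch is not the biological parametric mapping package's implementation, and `delta` and the iteration count are arbitrary choices.

```python
def huber_line_fit(x, y, delta=1.0, n_iter=50):
    """Fit y ~ a + b*x with Huber weights: residuals beyond `delta`
    are down-weighted so a gross outlier cannot dominate the fit."""
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        w = [1.0 if abs(ri) <= delta else delta / abs(ri) for ri in r]
        # weighted least squares in closed form
        sw = sum(w)
        sx = sum(wi * xi for wi, xi in zip(w, x))
        sy = sum(wi * yi for wi, yi in zip(w, y))
        sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
        a = (sy - b * sx) / sw
    return a, b
```

    With one wildly mis-registered observation, ordinary least squares drags the slope far from truth, while the reweighted fit stays close to it.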

  13. Improvements of image fusion methods

    NASA Astrophysics Data System (ADS)

    Ben-Shoshan, Yotam; Yitzhaky, Yitzhak

    2014-03-01

    Fusion of images from different imaging modalities, obtained by conventional fusion methods, may cause artifacts, including destructive superposition and brightness irregularities, in certain cases. This paper proposes two methods for improving image multimodal fusion quality. Based on the finding that a better fusion can be achieved when the images have a more positive correlation, the first method is a decision algorithm that runs at the preprocessing fusion stage and determines whether a complementary gray level of one of the input images should be used instead of the original one. The second method is suitable for multiresolution fusion, and it suggests choosing only one image from the lowest-frequency sub-bands in the pyramids, instead of combining values from both sub-bands. Experimental results indicate that the proposed fusion enhancement can reduce fusion artifacts. Quantitative fusion quality measures that support this conclusion are shown.
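
    The second method's rule of taking the lowest-frequency sub-band from only one image can be sketched with a single-level base/detail split; the 5-point box blur stands in for a real pyramid decomposition, and always keeping image A's base band is an arbitrary placeholder for the paper's selection criterion.

```python
import numpy as np

def fuse_two_band(img_a, img_b):
    """Split each image into base (low-pass) and detail bands, keep the
    base band of one image instead of mixing both, and fuse details by
    max-magnitude selection."""
    def split(img):
        # 5-point box blur as a toy low-pass filter (periodic borders)
        base = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
        return base, img - base
    base_a, det_a = split(img_a)
    _, det_b = split(img_b)
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base_a + detail  # one low-frequency band, fused high frequencies
```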

  14. Improved defect analysis of Gallium Arsenide solar cells using image enhancement

    NASA Technical Reports Server (NTRS)

    Kilmer, Louis C.; Honsberg, Christiana; Barnett, Allen M.; Phillips, James E.

    1989-01-01

    A new technique has been developed to capture, digitize, and enhance the image of light emission from a forward biased direct bandgap solar cell. Since the forward biased light emission from a direct bandgap solar cell has been shown to display both qualitative and quantitative information about the solar cell's performance and its defects, signal processing techniques can be applied to the light emission images to identify and analyze shunt diodes. Shunt diodes are of particular importance because they have been found to be the type of defect which is likely to cause failure in a GaAs solar cell. The presence of a shunt diode can be detected from the light emission by using a photodetector to measure the quantity of light emitted at various current densities. However, to analyze how the shunt diodes affect the quality of the solar cell the pattern of the light emission must be studied. With the use of image enhancement routines, the light emission can be studied at low light emission levels where shunt diode effects are dominant.

  15. Magnetic Resonance Imaging Analysis of Dyssynchrony and Myocardial Scar Predicts Function Class Improvement following Cardiac Resynchronization Therapy

    PubMed Central

    Bilchick, Kenneth C.; Dimaano, Veronica; Wu, Katherine C.; Helm, Robert H.; Weiss, Robert G.; Lima, Joao A.; Berger, Ronald D.; Tomaselli, Gordon F.; Bluemke, David A.; Halperin, Henry R.; Abraham, Theodore; Kass, David A.; Lardo, Albert C.

    2009-01-01

    STRUCTURED ABSTRACT Objective We tested a circumferential mechanical dyssynchrony index (circumferential uniformity ratio estimate, or CURE; 0–1, 1=synchrony) derived from magnetic resonance myocardial tagging (MR-MT) for predicting clinical function class improvement following cardiac resynchronization therapy (CRT). Background There remains a significant nonresponse rate to CRT, with recent data questioning the reproducibility of standard echocardiography-based dyssynchrony metrics. MR-MT provides high quality mechanical activation data throughout the heart, and delayed enhancement magnetic resonance imaging (DEMRI) offers precise characterization of myocardial scar and scar distribution. Methods MR-MT was performed in patients with cardiomyopathy, divided into: 1) a CRT-HF cohort (n=20) with mean (SD) LVEF 0.23 (0.057) in order to evaluate the clinical use of MR-MT and DEMRI prior to CRT; and 2) a multimodality cohort (n=27) with mean (SD) LVEF 0.20 (0.066) in order to compare MR-MT and tissue Doppler imaging (TDI) assessments of mechanical dyssynchrony. MR-MT was also performed in 9 healthy control subjects. Results MR-MT showed that control subjects had highly synchronous contraction (mean [SD] CURE 0.96 [0.01]) while TDI septal-lateral delay indicated dyssynchrony in 44% of normal controls. Using a cutoff of <0.75 for CURE based on ROC analysis (AUC 0.889), 56% of patients tested positive for mechanical dyssynchrony, and the MR-MT CURE predicted improved function class in CRT-HF patients with 90% accuracy (PPV 87%; NPV 100%). Adding DEMRI (% total scar<15%) data improved accuracy further to 95% (PPV 93%; NPV 100%). The correlation between CURE and QRSd was modest in all cardiomyopathy subjects (r=0.58, p<0.001), and somewhat less in the CRT-HF group (r=0.40, p=0.08). The multimodality cohort showed a 30% discordance rate between CURE and TDI septal-lateral delay. 
Conclusions MR-MT assessment of circumferential mechanical dyssynchrony predicts improvement in
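
    CURE summarizes how uniformly circumferential strain is distributed around the ventricle. A common harmonic-power formulation is sketched below as an assumption; the study's exact computation from tagged-MRI strain may differ in detail.

```python
import math

def cure_index(strain_by_frame):
    """Circumferential uniformity ratio estimate from per-segment strain.

    For each time frame, compare the zeroth-order (uniform) spatial
    Fourier power S0 of strain around the circumference with the
    first-order power S1; perfectly synchronous contraction gives 1,
    while dyssynchrony pushes the index toward 0.
    """
    ratios = []
    for frame in strain_by_frame:  # frame: strain per circumferential segment
        n = len(frame)
        s0 = (sum(frame) / n) ** 2
        re = sum(s * math.cos(2 * math.pi * k / n) for k, s in enumerate(frame)) / n
        im = sum(s * math.sin(2 * math.pi * k / n) for k, s in enumerate(frame)) / n
        s1 = re * re + im * im
        ratios.append(s0 / (s0 + s1) if s0 + s1 > 0 else 1.0)
    return sum(ratios) / len(ratios)
```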

  16. Applying Chemical Imaging Analysis to Improve Our Understanding of Cold Cloud Formation

    NASA Astrophysics Data System (ADS)

    Laskin, A.; Knopf, D. A.; Wang, B.; Alpert, P. A.; Roedel, T.; Gilles, M. K.; Moffet, R.; Tivanski, A.

    2012-12-01

    The impact that atmospheric ice nucleation has on the global radiation budget is one of the least understood problems in atmospheric sciences. This is in part due to the incomplete understanding of various ice nucleation pathways that lead to ice crystal formation from pre-existing aerosol particles. Studies investigating the ice nucleation propensity of laboratory generated particles indicate that individual particle types are highly selective in their ice nucleating efficiency. This description of heterogeneous ice nucleation presents a challenge when applied to the atmosphere, which contains a complex mixture of particles. Here, we employ a combination of micro-spectroscopic and optical single particle analytical methods to relate particle physical and chemical properties with observed water uptake and ice nucleation. Field-collected particles from urban environments impacted by anthropogenic and marine emissions and aging processes are investigated. Single particle characterization is provided by computer controlled scanning electron microscopy with energy dispersive analysis of X-rays (CCSEM/EDX) and scanning transmission X-ray microscopy with near edge X-ray absorption fine structure spectroscopy (STXM/NEXAFS). A particle-on-substrate approach coupled to a vapor controlled cooling-stage and a microscope system is applied to determine the onsets of water uptake and ice nucleation including immersion freezing and deposition ice nucleation as a function of temperature (T) as low as 200 K and relative humidity (RH) up to water saturation. We observe for urban aerosol particles that for T > 230 K the oxidation level affects initial water uptake and that subsequent immersion freezing depends on particle mixing state, e.g. by the presence of insoluble particles. For T < 230 K the particles initiate deposition ice nucleation well below the homogeneous freezing limit. 
Particles collected throughout one day for similar meteorological conditions show very similar

  17. Improving image segmentation performance and quantitative analysis via a computer-aided grading methodology for optical coherence tomography retinal image analysis

    NASA Astrophysics Data System (ADS)

    Cabrera Debuc, Delia; Salinas, Harry M.; Ranganathan, Sudarshan; Tátrai, Erika; Gao, Wei; Shen, Meixiao; Wang, Jianhua; Somfai, Gábor M.; Puliafito, Carmen A.

    2010-07-01

    We demonstrate quantitative analysis and error correction of optical coherence tomography (OCT) retinal images by using a custom-built, computer-aided grading methodology. A total of 60 Stratus OCT (Carl Zeiss Meditec, Dublin, California) B-scans collected from ten normal healthy eyes are analyzed by two independent graders. The average retinal thickness per macular region is compared with the automated Stratus OCT results. Intergrader and intragrader reproducibility is calculated by Bland-Altman plots of the mean difference between both gradings and by Pearson correlation coefficients. In addition, the correlation between Stratus OCT and our methodology-derived thickness is also presented. The mean thickness difference between Stratus OCT and our methodology is 6.53 μm and 26.71 μm when using the inner segment/outer segment (IS/OS) junction and outer segment/retinal pigment epithelium (OS/RPE) junction as the outer retinal border, respectively. Overall, the median of the thickness differences as a percentage of the mean thickness is less than 1% and 2% for the intragrader and intergrader reproducibility test, respectively. The measurement accuracy range of the OCT retinal image analysis (OCTRIMA) algorithm is between 0.27 and 1.47 μm and 0.6 and 1.76 μm for the intragrader and intergrader reproducibility tests, respectively. Pearson correlation coefficients demonstrate R2>0.98 for all Early Treatment Diabetic Retinopathy Study (ETDRS) regions. Our methodology facilitates a more robust and localized quantification of the retinal structure in normal healthy controls and patients with clinically significant intraretinal features.
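
    The intergrader and intragrader comparisons rest on standard Bland-Altman statistics, which reduce to a bias and 95% limits of agreement over paired thickness measurements; a minimal sketch:

```python
import statistics

def bland_altman(measure_a, measure_b):
    """Bland-Altman agreement statistics for paired measurements:
    returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    diffs = [a - b for a, b in zip(measure_a, measure_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```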

  18. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm.

    PubMed

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-01-01

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to exhaustively search the rotation angle, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the properties of severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm suited to such situations was proposed. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on searching the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms. PMID:26251910
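
    A generic particle swarm search over a 1-D angle is sketched below. The paper's specific improvements for the gyrator objective's severe local convergence and global fluctuations are only loosely approximated here by re-scattering the worst particle each iteration; all coefficients are conventional PSO defaults, not the authors' values.

```python
import math
import random

def pso_search(objective, lo=0.0, hi=math.pi, n_particles=20, n_iter=100, seed=0):
    """Minimize a 1-D objective over [lo, hi] with particle swarm optimization."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # standard update: inertia + cognitive + social terms
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            v = objective(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i], v
        # crude diversity kick against premature convergence:
        # re-scatter the particle with the worst personal best
        w = max(range(n_particles), key=lambda i: pbest_val[i])
        pos[w], vel[w] = rng.uniform(lo, hi), 0.0
    return gbest, gbest_val
```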

  1. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  2. Principal component analysis with pre-normalization improves the signal-to-noise ratio and image quality in positron emission tomography studies of amyloid deposits in Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Razifar, Pasha; Engler, Henry; Blomquist, Gunnar; Ringheim, Anna; Estrada, Sergio; Långström, Bengt; Bergström, Mats

    2009-06-01

    This study introduces a new approach for the application of principal component analysis (PCA) with pre-normalization on dynamic positron emission tomography (PET) images. These images are generated using the amyloid imaging agent N-methyl [11C]2-(4'-methylaminophenyl)-6-hydroxy-benzothiazole ([11C]PIB) in patients with Alzheimer's disease (AD) and healthy volunteers (HVs). The aim was to introduce a method which, by using the whole dataset and without assuming a specific kinetic model, could generate images with improved signal-to-noise and detect, extract and illustrate changes in kinetic behavior between different regions in the brain. Eight AD patients and eight HVs from a previously published study with [11C]PIB were used. The approach includes enhancement of brain regions where the kinetics of the radiotracer are different from what is seen in the reference region, pre-normalization for differences in noise levels and removal of negative values. This is followed by slice-wise application of PCA (SW-PCA) on the dynamic PET images. Results obtained using the new approach were compared with results obtained using reference Patlak and summed images. The new approach generated images with good quality in which cortical brain regions in AD patients showed high uptake, compared to cerebellum and white matter. Cortical structures in HVs showed low uptake as expected and in good agreement with data generated using kinetic modeling. The introduced approach generated images with enhanced contrast and improved signal-to-noise ratio (SNR) and discrimination power (DP) compared to summed images and parametric images. This method is expected to be an important clinical tool in the diagnosis and differential diagnosis of dementia.
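
    Slice-wise PCA on a dynamic sequence can be sketched as follows; dividing each frame by its standard deviation is a crude stand-in for the paper's noise-level pre-normalization, and the leading component is returned as a composite image.

```python
import numpy as np

def slice_pca_image(frames):
    """Project each pixel's time-activity curve onto the leading principal
    component of a dynamic slice, after per-frame normalization (dividing
    by the frame's standard deviation as a stand-in for noise-based
    pre-normalization) and removal of the per-frame mean."""
    n_frames, h, w = frames.shape
    X = frames.reshape(n_frames, -1).T        # rows: pixels, cols: frames
    X = X / (X.std(axis=0) + 1e-12)           # per-frame normalization
    X = X - X.mean(axis=0)                    # center each frame
    cov = np.cov(X, rowvar=False)             # frames x frames covariance
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    pc1 = vecs[:, -1]                         # leading principal component
    return (X @ pc1).reshape(h, w)
```

    Regions whose kinetics differ from the rest of the slice project to values of different magnitude and sign, which is what enhances contrast between, e.g., cortex and white matter.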

  3. Principal component analysis with pre-normalization improves the signal-to-noise ratio and image quality in positron emission tomography studies of amyloid deposits in Alzheimer's disease.

    PubMed

    Razifar, Pasha; Engler, Henry; Blomquist, Gunnar; Ringheim, Anna; Estrada, Sergio; Långström, Bengt; Bergström, Mats

    2009-06-01

    This study introduces a new approach for the application of principal component analysis (PCA) with pre-normalization on dynamic positron emission tomography (PET) images. These images are generated using the amyloid imaging agent N-methyl [(11)C]2-(4'-methylaminophenyl)-6-hydroxy-benzothiazole ([(11)C]PIB) in patients with Alzheimer's disease (AD) and healthy volunteers (HVs). The aim was to introduce a method which, by using the whole dataset and without assuming a specific kinetic model, could generate images with improved signal-to-noise and detect, extract and illustrate changes in kinetic behavior between different regions in the brain. Eight AD patients and eight HVs from a previously published study with [(11)C]PIB were used. The approach includes enhancement of brain regions where the kinetics of the radiotracer are different from what is seen in the reference region, pre-normalization for differences in noise levels and removal of negative values. This is followed by slice-wise application of PCA (SW-PCA) on the dynamic PET images. Results obtained using the new approach were compared with results obtained using reference Patlak and summed images. The new approach generated images with good quality in which cortical brain regions in AD patients showed high uptake, compared to cerebellum and white matter. Cortical structures in HVs showed low uptake as expected and in good agreement with data generated using kinetic modeling. The introduced approach generated images with enhanced contrast and improved signal-to-noise ratio (SNR) and discrimination power (DP) compared to summed images and parametric images. This method is expected to be an important clinical tool in the diagnosis and differential diagnosis of dementia.

  4. Histopathological Image Analysis: A Review

    PubMed Central

    Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent

    2010-01-01

    Over the past decade, dramatic increases in computational power and improvement in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe. PMID:20671804

  5. Oncological image analysis.

    PubMed

    Brady, Sir Michael; Highnam, Ralph; Irving, Benjamin; Schnabel, Julia A

    2016-10-01

    Cancer is one of the world's major healthcare challenges and, as such, an important application of medical image analysis. After a brief introduction to cancer, we summarise some of the major developments in oncological image analysis over the past 20 years, concentrating on those from the authors' laboratories, and then outline opportunities and challenges for the next decade.

  6. Improvements to a Grating-Based Spectral Imaging Microscope and Its Application to Reflectance Analysis of Blue Pen Inks.

    PubMed

    McMillan, Leilani C; Miller, Kathleen P; Webb, Michael R

    2015-08-01

    A modified design of a chromatically resolved optical microscope (CROMoscope), a grating-based spectral imaging microscope, is described. By altering the geometry and adding a beam splitter, a twisting aberration that was present in the first version of the CROMoscope has been removed. Wavelength adjustment has been automated to decrease analysis time. Performance of the new design in transmission-absorption spectroscopy has been evaluated and found to be generally similar to the performance of the previous design. Spectral bandpass was found to be dependent on the sizes of apertures, and the smallest measured spectral bandpass was 1.8 nm with 1.0 mm diameter apertures. Wavelength was found to vary linearly with the sine of the grating angle (R² = 0.9999995), and wavelength repeatability was found to be much better than the spectral bandpass. Reflectance spectral imaging with a CROMoscope is reported for the first time, and was applied to blue ink samples on white paper. As a proof of concept, linear discriminant analysis was used to classify the inks by brand. In a leave-one-out cross-validation, 97.6% of samples were correctly classified. PMID:26162719
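
    The reported wavelength calibration (linearity of wavelength against the sine of the grating angle) can be checked with an ordinary least-squares fit. The angles and wavelengths below are invented for illustration and are not the paper's measurements:

```python
import math

def linear_r2(x, y):
    """Least-squares fit y = a*x + b; returns slope, intercept and R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# The grating equation makes the selected wavelength proportional to
# sin(theta), so calibration checks wavelength vs. sin(grating angle).
angles_deg = [10, 15, 20, 25, 30]          # invented grating angles, degrees
wavelengths = [400, 520, 640, 750, 860]    # invented wavelengths, nm
sines = [math.sin(math.radians(t)) for t in angles_deg]
slope, intercept, r2 = linear_r2(sines, wavelengths)
```

    An R² very close to 1, as reported in the paper, indicates the sine-linear calibration holds across the scan range.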

  7. Multiresponse Imaging For Improved Resolution

    NASA Technical Reports Server (NTRS)

    Fales, Carl L., Jr.; Huck, Friedrich O.

    1993-01-01

    Images restored with resolution finer than sampling lattice. Multiresponse imaging overcomes sampling-passband constraint and critically constrained only by optical response and sensitivity of image-gathering device. Process consists of multiresponse image gathering and Wiener-matrix restoration. Image-gathering process acquires sequence of images, each with different optical response so within-passband and aliased signal components weighted differently. Wiener-matrix filter, in turn, unscrambles within-passband and aliased frequency components of undersampled signal and restores them up to cutoff frequency of optical response.

  8. Principles and clinical applications of image analysis.

    PubMed

    Kisner, H J

    1988-12-01

    Image processing has traveled to the lunar surface and back, finding its way into the clinical laboratory. Advances in digital computers have improved the technology of image analysis, resulting in a wide variety of medical applications. Offering improvements in turnaround time, standardized systems, increased precision, and walkaway automation, digital image analysis has likely found a permanent home as a diagnostic aid in the interpretation of microscopic as well as macroscopic laboratory images.

  9. Suggestions for automatic quantitation of endoscopic image analysis to improve detection of small intestinal pathology in celiac disease patients.

    PubMed

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-10-01

    Although many groups have attempted to develop an automated computerized method to detect pathology of the small intestinal mucosa caused by celiac disease, the efforts have thus far failed. This is due in part to the occult presence of the disease. When pathological evidence of celiac disease exists in the small bowel, it is often visually patchy and subtle. Due to the presence of extraneous substances such as air bubbles and opaque fluids, computerized automation methods have been only partially successful in detecting the hallmarks of the disease in the small intestine: villous atrophy, fissuring, and a mottled appearance. By using a variety of computerized techniques and assigning a weight or vote to each technique, it is possible to improve the detection of abnormal regions which are indicative of celiac disease, and of treatment progress in diagnosed patients. Herein a paradigm is suggested for improving the efficacy of automated methods for measuring celiac disease manifestation in the small intestinal mucosa. The suggestions are applicable to both standard and videocapsule endoscopic imaging, since both methods could potentially benefit from computerized quantitation to improve celiac disease diagnosis.
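
    The weighted voting idea can be sketched as follows; the individual abnormality measures are placeholders, and the weights and threshold are hypothetical, not from the paper:

```python
def weighted_vote(scores, weights, threshold=0.5):
    """Combine per-technique abnormality scores (each in [0, 1]) for one
    image region into a single decision by weighted voting. The measures
    themselves (texture, brightness, etc.) are placeholders here."""
    total = float(sum(weights))
    combined = sum(s * w for s, w in zip(scores, weights)) / total
    return combined, combined >= threshold

# Three hypothetical detectors vote on one mucosal region; the more
# reliable technique gets the larger weight.
score, abnormal = weighted_vote([0.9, 0.7, 0.4], weights=[3, 2, 1])
```

    Regions flagged by the combined vote can then be ranked for review, rather than relying on any single measure.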

  10. [Improving the surgeon's image: introduction].

    PubMed

    Tajima, Tomoo

    2004-05-01

    The number of medical students who aspire to become surgeons has been decreasing in recent years. With a vicious spiral in the decreasing number and the growing deterioration of surgeons' working conditions, there is fear of deterioration of surgical care and subsequent disintegration of overall health care in Japan. The purpose of this issue is to devise a strategy for improving surgeons' image and their working conditions to attract future medical students. However, we cannot expect a quick cure for the problem of the decreasing number of applicants for surgery since this issue is deeply related to many fundamental problems in the health care system in Japan. The challenge for surgical educators in coming years will be to solve the problem of chronic sleep deprivation and overwork of surgery residents and to develop an efficient program to meet the critical educational needs of surgical residents. To solve this problem it is necessary to ensure well-motivated surgical residents and to develop an integrated research program. No discussion of these issues would be complete without attention to the allocation of scarce medical resources, especially in relation to financial incentives for young surgeons. The authors, who are conscientious representatives of this society, would like to highlight these critical problems and issues that are particularly relevant to our modern surgical practice, and it is our sincere hope that all members of this society fully recognize these critical issues in the Japanese health care system to take leadership in improving the system. By demonstrating the withholding of unnecessary medical practices we may be able to initiate a renewal of the system and eventually to fulfill our dreams of Japan becoming a nation that can attract many patients from all over the world. Furthermore, verification of discipline with quality control and effective surgical treatment is needed to avoid criticism by other disciplines for being a self

  11. Improving automatic analysis of the electrocardiogram acquired during magnetic resonance imaging using magnetic field gradient artefact suppression.

    PubMed

    Abächerli, Roger; Hornaff, Sven; Leber, Remo; Schmid, Hans-Jakob; Felblinger, Jacques

    2006-10-01

    The electrocardiogram (ECG) used for patient monitoring during magnetic resonance imaging (MRI) unfortunately suffers from severe artefacts. These artefacts are due to the special environment of the MRI. Modeling helped in finding solutions for the suppression of these artefacts superimposed on the ECG signal. After we validated the linear and time-invariant model for the magnetic field gradient artefact generation, we applied offline and online filters for their suppression. Wiener filtering (offline) helped in generating reference annotations of the ECG beats. In online filtering, the least-mean-square filter suppressed the magnetic field gradient artefacts before the acquired ECG signal was input to the arrhythmia algorithm. Comparing the results of two runs (one run using online filtering and one run without) to our reference annotations, we found a marked improvement in the arrhythmia module's performance, enabling reliable patient monitoring and MRI synchronization based on the ECG signal. PMID:17015063
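
    A least-mean-square (LMS) adaptive canceller of the kind described can be sketched as below, assuming a reference signal correlated with the gradient artefact is available; the tap count and step size are illustrative, not the authors' settings:

```python
def lms_filter(primary, reference, n_taps=4, mu=0.05):
    """Least-mean-square adaptive noise canceller.

    primary:   measured signal (ECG plus gradient artefact)
    reference: artefact-correlated reference (e.g. the gradient waveform)
    Returns the error signal, i.e. the artefact-suppressed ECG.
    """
    w = [0.0] * n_taps
    cleaned = []
    for n in range(len(primary)):
        # tap vector of the most recent reference samples
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        est = sum(wk * xk for wk, xk in zip(w, x))   # artefact estimate
        e = primary[n] - est                         # cleaned sample
        w = [wk + 2.0 * mu * e * xk for wk, xk in zip(w, x)]
        cleaned.append(e)
    return cleaned
```

    Because the ECG is uncorrelated with the gradient reference, the filter converges toward cancelling only the artefact component.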

  12. Interferogram conditioning for improved Fourier analysis and application to X-ray phase imaging by grating interferometry.

    PubMed

    Montaux-Lambert, Antoine; Mercère, Pascal; Primot, Jérôme

    2015-11-01

    An interferogram conditioning procedure, for subsequent phase retrieval by Fourier demodulation, is presented here as a fast iterative approach aiming at fulfilling the classical boundary conditions imposed by Fourier transform techniques. Interference fringe patterns with typical edge discontinuities were simulated in order to reveal the edge artifacts that classically appear in traditional Fourier analysis, and were consecutively used to demonstrate the correction efficiency of the proposed conditioning technique. Optimization of the algorithm parameters is also presented and discussed. Finally, the procedure was applied to grating-based interferometric measurements performed in the hard X-ray regime. The proposed algorithm enables nearly edge-artifact-free retrieval of the phase derivatives. A similar enhancement of the retrieved absorption and fringe visibility images is also achieved.

  13. Application of 3-D digital deconvolution to optically sectioned images for improving the automatic analysis of fluorescent-labeled tumor specimens

    NASA Astrophysics Data System (ADS)

    Lockett, Stephen J.; Jacobson, Kenneth A.; Herman, Brian

    1992-06-01

    The analysis of fluorescently stained clusters of cells has been improved by recording multiple images of the same microscopic scene at different focal planes and then applying a three-dimensional (3-D) out-of-focus background subtraction algorithm. The algorithm significantly reduced the out-of-focus signal and improved the spatial resolution. The method was tested on specimens of 10 micrometer diameter (φ) beads embedded in agarose and on a 5 micrometer breast tumor section labeled with a fluorescent DNA stain. The images were analyzed using an algorithm for automatically detecting fluorescent objects. The proportion of correctly detected in-focus beads and breast nuclei increased from 1/8 to 8/8 and from 56/104 to 81/104, respectively, after processing by the subtraction algorithm. Furthermore, the subtraction algorithm reduced the proportion of out-of-focus relative to in-focus total intensity detected in the bead images from 51% to 33%. Further developments of these techniques, which utilize the 3-D point spread function (PSF) of the imaging system and a 3-D segmentation algorithm, should result in the correct detection and precise quantification of virtually all cells in solid tumor specimens. Thus the approach should serve as a highly reliable automated screening method for a wide variety of clinical specimens.
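
    A greatly simplified version of out-of-focus background subtraction is sketched below. It uses a nearest-neighbour heuristic in place of the measured 3-D point spread function, so it only illustrates the idea, not the paper's algorithm:

```python
def subtract_defocus(stack, k=0.5):
    """Nearest-neighbour defocus subtraction: from each optical section,
    subtract a fraction k of the mean of the two adjacent focal planes,
    clipping at zero. A simplified stand-in for PSF-based 3-D background
    subtraction; the fraction k is an assumed, illustrative parameter.

    stack: list of 2-D images, each a list of rows of floats."""
    result = []
    for z, plane in enumerate(stack):
        below = stack[max(z - 1, 0)]
        above = stack[min(z + 1, len(stack) - 1)]
        result.append([
            [max(p - k * 0.5 * (b + a), 0.0)
             for p, b, a in zip(prow, brow, arow)]
            for prow, brow, arow in zip(plane, below, above)])
    return result
```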

  14. Improved Diagnostics Using Polarization Imaging and Artificial Neural Networks

    PubMed Central

    Xuan, Jianhua; Klimach, Uwe; Zhao, Hongzhi; Chen, Qiushui; Zou, Yingyin; Wang, Yue

    2007-01-01

    In recent years, there has been an increasing interest in studying the propagation of polarized light in biological cells and tissues. This paper presents a novel approach to cell or tissue imaging using a full Stokes imaging system with advanced polarization image analysis algorithms for improved diagnostics. The key component of the Stokes imaging system is the electrically tunable retarder, enabling high-speed operation of the system to acquire four intensity images sequentially. From the acquired intensity images, four Stokes vector images can be computed to obtain complete polarization information. Polarization image analysis algorithms are then developed to analyze Stokes polarization images for cell or tissue classification. Specifically, wavelet transforms are first applied to the Stokes components for initial feature analysis and extraction. Artificial neural networks (ANNs) are then used to extract diagnostic features for improved classification and prediction. In this study, phantom experiments have been conducted using a prototyped Stokes polarization imaging device. In particular, several types of phantoms, consisting of polystyrene latex spheres in various diameters, were prepared to simulate different conditions of epidermal layer of skin. The experimental results from phantom studies and a plant cell study show that the classification performance using Stokes images is significantly improved over that using the intensity image only. PMID:18274657
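
    Computing Stokes vector images from four sequentially acquired intensity images can be sketched per pixel as follows, using one common analyser scheme (the actual retarder settings of the described system may differ):

```python
def stokes_from_intensities(i0, i45, i90, icirc):
    """Stokes vector from four intensity measurements: linear analyser
    at 0, 45 and 90 degrees plus one right-circular measurement through
    a quarter-wave plate (one common scheme; the tunable-retarder
    settings of the actual instrument may differ)."""
    s0 = i0 + i90        # total intensity
    s1 = i0 - i90        # horizontal vs. vertical linear
    s2 = 2 * i45 - s0    # +45 vs. -45 linear
    s3 = 2 * icirc - s0  # right vs. left circular
    return s0, s1, s2, s3

# Horizontally polarized light: all power passes the 0-degree analyser,
# half passes the 45-degree and circular analysers.
stokes = stokes_from_intensities(1.0, 0.5, 0.0, 0.5)
```

    Applying this to every pixel of the four acquired intensity images yields the four Stokes component images used for classification.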

  15. Improving VERITAS sensitivity by fitting 2D Gaussian image parameters

    NASA Astrophysics Data System (ADS)

    Christiansen, Jodi; VERITAS Collaboration

    2012-12-01

    Our goal is to improve the acceptance and angular resolution of VERITAS by implementing a camera image-fitting algorithm. Elliptical image parameters are extracted from 2D Gaussian distribution fits using a χ² minimization instead of the standard technique based on the principal moments of an island of pixels above threshold. We optimize the analysis cuts and then characterize the improvements using simulations. We find a 20% reduction in the observing time required to reach 5 sigma for weak point sources.
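
    For contrast, the standard moments-based technique that the 2D Gaussian fit replaces can be sketched as follows (a minimal Hillas-style parametrization; function and variable names are illustrative):

```python
import math

def moment_parameters(pixels):
    """Elliptical image parameters from principal moments.

    pixels: list of (x, y, amplitude) tuples for pixels above threshold.
    Returns the centroid and the RMS length/width of the light
    distribution along its principal axes, i.e. the moments-based
    parametrization that the chi-square Gaussian fit replaces."""
    s = sum(a for _, _, a in pixels)
    mx = sum(x * a for x, _, a in pixels) / s
    my = sum(y * a for _, y, a in pixels) / s
    sxx = sum((x - mx) ** 2 * a for x, _, a in pixels) / s
    syy = sum((y - my) ** 2 * a for _, y, a in pixels) / s
    sxy = sum((x - mx) * (y - my) * a for x, y, a in pixels) / s
    # eigenvalues of the 2x2 second-moment matrix give the axes
    d = math.hypot(sxx - syy, 2.0 * sxy)
    length = math.sqrt((sxx + syy + d) / 2.0)
    width = math.sqrt(max((sxx + syy - d) / 2.0, 0.0))
    return (mx, my), length, width
```

    A χ² fit of a 2D Gaussian to the full pixel amplitudes uses the same ellipse model but is less sensitive to the hard pixel threshold.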

  16. Tissue Doppler Imaging Combined with Advanced 12-Lead ECG Analysis Might Improve Early Diagnosis of Hypertrophic Cardiomyopathy in Childhood

    NASA Technical Reports Server (NTRS)

    Femlund, E.; Schlegel, T.; Liuba, P.

    2011-01-01

    Optimization of early diagnosis of childhood hypertrophic cardiomyopathy (HCM) is essential in lowering the risk of HCM complications. Standard echocardiography (ECHO) has been shown to be less sensitive in this regard. In this study, we sought to assess whether spatial QRS-T angle deviation, which has been shown to predict HCM in adults with high sensitivity, and myocardial Tissue Doppler Imaging (TDI) could be additional tools in early diagnosis of HCM in childhood. Methods: Children and adolescents with familial HCM (n=10, median age 16, range 5-27 years), and without obvious hypertrophy but with heredity for HCM (n=12, median age 16, range 4-25 years, HCM or sudden death with autopsy-verified HCM in ≥ 1 first-degree relative, HCM-risk) were additionally investigated with TDI and advanced 12-lead ECG analysis using Cardiax® (IMED Co Ltd, Budapest, Hungary and Houston). Spatial QRS-T angle (SA) was derived from Kors regression-related transformation. Healthy age-matched controls (n=21) were also studied. All participants underwent thorough clinical examination. Results: Spatial QRS-T angle (Figure, Panel A) and septal E/Ea ratio (Figure, Panel B) were most increased in the HCM group as compared to the HCM-risk and control groups (p < 0.05). Of note, these 2 variables showed a trend toward higher levels in the HCM-risk group than in the control group (p=0.05 for E/Ea and 0.06 for QRS/T by ANOVA). In a logistic regression model, increased SA and septal E/Ea ratio appeared to significantly predict both the disease (Chi-square in HCM group: 9 and 5, respectively, p < 0.05 for both) and the risk for HCM (Chi-square in HCM-risk group: 5 and 4, respectively, p < 0.05 for both), with further increased predictability when these 2 variables were combined (Chi-square 10 in HCM group, and 7 in HCM-risk group, p < 0.01 for both).
Conclusions: In this small cohort, Tissue Doppler Imaging and spatial mean QRS-T angle

  17. Image Analysis of Foods.

    PubMed

    Russ, John C

    2015-09-01

    The structure of foods, both natural and processed, is controlled by many variables ranging from biology to chemistry and mechanical forces. The structure in turn controls many of the properties of the food, including consumer acceptance, taste, mouthfeel, appearance, and nutrition. Imaging provides an important tool for measuring the structure of foods. This includes 2-dimensional (2D) images of surfaces and sections, for example, viewed in a microscope, as well as 3-dimensional (3D) images of internal structure as may be produced by confocal microscopy, or computed tomography and magnetic resonance imaging. The use of images also guides robotics for harvesting and sorting. Processing of images may be needed to calibrate colors, reduce noise, enhance detail, and delineate structure and dimensions. Measurement of structural information such as volume fraction and internal surface areas, as well as the analysis of object size, location, and shape in both 2- and 3-dimensional images, is illustrated and described, with primary references and examples from a wide range of applications. PMID:26270611

  18. Improved real-time imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Lambert, James L. (Inventor); Chao, Tien-Hsin (Inventor); Yu, Jeffrey W. (Inventor); Cheng, Li-Jen (Inventor)

    1993-01-01

    An improved AOTF-based imaging spectrometer that offers several advantages over prior art AOTF imaging spectrometers is presented. The ability to electronically set the bandpass wavelength provides observational flexibility. Various improvements in optical architecture provide simplified magnification variability, improved image resolution and light throughput efficiency and reduced sensitivity to ambient light. Two embodiments of the invention are: (1) operation in the visible/near-infrared domain of wavelength range 0.48 to 0.76 microns; and (2) infrared configuration which operates in the wavelength range of 1.2 to 2.5 microns.

  19. Improved Interactive Medical-Imaging System

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Twombly, Ian A.; Senger, Steven

    2003-01-01

    An improved computational-simulation system for interactive medical imaging has been invented. The system displays high-resolution, three-dimensional-appearing images of anatomical objects based on data acquired by such techniques as computed tomography (CT) and magnetic-resonance imaging (MRI). The system enables users to manipulate the data to obtain a variety of views; for example, to display cross sections in specified planes or to rotate images about specified axes. Relative to prior such systems, this system offers enhanced capabilities for synthesizing images of surgical cuts and for collaboration by users at multiple, remote computing sites.

  20. Errors from Image Analysis

    SciTech Connect

    Wood, William Monford

    2015-02-23

    We present a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and make suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  1. Advanced endoscopic imaging to improve adenoma detection

    PubMed Central

    Neumann, Helmut; Nägel, Andreas; Buda, Andrea

    2015-01-01

    Advanced endoscopic imaging is revolutionizing how we diagnose and treat colorectal lesions. In recent years a variety of modern endoscopic imaging techniques has been introduced to improve adenoma detection rates. These include high-definition imaging, dye-less chromoendoscopy techniques and novel, highly flexible endoscopes, some equipped with balloons or multiple lenses. In this review we focus on the newest developments in the field of colonoscopic imaging to improve adenoma detection rates. Described techniques include high-definition imaging, optical chromoendoscopy techniques, virtual chromoendoscopy techniques, the Third Eye Retroscope and other retroviewing devices, the G-EYE endoscope and the Full Spectrum Endoscopy system. PMID:25789092

  2. Can coffee improve image guidance?

    NASA Astrophysics Data System (ADS)

    Wirz, Raul; Lathrop, Ray A.; Godage, Isuru S.; Burgner-Kahrs, Jessica; Russell, Paul T.; Webster, Robert J.

    2015-03-01

    Anecdotally, surgeons sometimes observe large errors when using image guidance in endonasal surgery. We hypothesize that one contributing factor is the possibility that operating room personnel might accidentally bump the optically tracked rigid body attached to the patient after registration has been performed. In this paper we explore the registration error at the skull base that can be induced by simulated bumping of the rigid body, and find that large errors can occur when simulated bumps are applied to the rigid body. To address this, we propose a new fixation method for the rigid body based on granular jamming (i.e. using particles like ground coffee). Our results show that our granular jamming fixation prototype reduces registration error by 28%-68% (depending on bump direction) in comparison to a standard Brainlab reference headband.

  3. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  4. IMAGE ANALYSIS ALGORITHMS FOR DUAL MODE IMAGING SYSTEMS

    SciTech Connect

    Robinson, Sean M.; Jarman, Kenneth D.; Miller, Erin A.; Misner, Alex C.; Myjak, Mitchell J.; Pitts, W. Karl; Seifert, Allen; Seifert, Carolyn E.; Woodring, Mitchell L.

    2010-06-11

    The level of detail discernable in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes where information barriers are mandatory. However, if a balance can be struck between sufficient information barriers and feature extraction to verify or identify objects of interest, imaging may significantly advance verification efforts. This paper describes the development of combined active (conventional) radiography and passive (auto) radiography techniques for imaging sensitive items assuming that comparison images cannot be furnished. Three image analysis algorithms are presented, each of which reduces full image information to non-sensitive feature information and ultimately is intended to provide only a yes/no response verifying features present in the image. These algorithms are evaluated on both their technical performance in image analysis and their application with or without an explicitly constructed information barrier. The first algorithm reduces images to non-invertible pixel intensity histograms, retaining only summary information about the image that can be used in template comparisons. This one-way transform is sufficient to discriminate between different image structures (in terms of area and density) without revealing unnecessary specificity. The second algorithm estimates the attenuation cross-section of objects of known shape based on transition characteristics around the edge of the object’s image. The third algorithm compares the radiography image with the passive image to discriminate dense, radioactive material from point sources or inactive dense material. By comparing two images and reporting only a single statistic from the combination thereof, this algorithm can operate entirely behind an information barrier stage. Together with knowledge of the radiography system, the use of these algorithms in combination can be used to improve verification capability to inspection regimes and improve
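
    The first algorithm's one-way reduction to a pixel-intensity histogram can be sketched as follows; the bin count, intensity range, and function names are illustrative, not from the paper:

```python
def histogram_signature(image, n_bins=8, lo=0.0, hi=1.0):
    """Reduce an image to a non-invertible intensity histogram.

    Only bin counts survive; all spatial arrangement is discarded, so
    the signature can cross an information barrier and be compared
    against a template."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for row in image:
        for v in row:
            counts[min(int((v - lo) / width), n_bins - 1)] += 1
    return counts

def signature_distance(a, b):
    """L1 distance between two signatures for template comparison."""
    return sum(abs(x - y) for x, y in zip(a, b))
```

    Comparing only such signatures lets a template check report agreement or disagreement without revealing image content.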

  5. Improving photoacoustic imaging contrast of brachytherapy seeds

    NASA Astrophysics Data System (ADS)

    Pan, Leo; Baghani, Ali; Rohling, Robert; Abolmaesumi, Purang; Salcudean, Septimiu; Tang, Shuo

    2013-03-01

    Prostate brachytherapy is a form of radiotherapy for treating prostate cancer where the radiation sources are seeds inserted into the prostate. Accurate localization of seeds during prostate brachytherapy is essential to the success of intraoperative treatment planning. The current standard modality used in intraoperative seed localization is transrectal ultrasound. Transrectal ultrasound, however, suffers in image quality due to several factors such as speckle, shadowing, and off-axis seed orientation. Photoacoustic imaging, based on the photoacoustic phenomenon, is an emerging imaging modality. The contrast-generating mechanism in photoacoustic imaging is optical absorption, which is fundamentally different from conventional B-mode ultrasound, which depicts changes in acoustic impedance. A photoacoustic imaging system is developed using a commercial ultrasound system. To improve imaging contrast and depth penetration, an absorption-enhancing coating is applied to the seeds. In comparison to bare seeds, an approximately 18.5 dB increase in signal-to-noise ratio as well as a doubling of imaging depth are achieved. Our results demonstrate that the coating of the seeds can further improve the discernibility of the seeds.

  6. Multiscale Analysis of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C. A.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also cursed us with an increased amount of higher-complexity data than previous missions. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade. Many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to model the human ability to view and comprehend phenomena on different scales. So these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-d wavelet transform and related transforms with EIT, LASCO and TRACE images. This work was supported by NASA contract NAS5-00220.

  7. Improving high resolution retinal image quality using speckle illumination HiLo imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-01-01

    Retinal image quality from flood illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as that of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently-developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood illumination microscopes and produce pseudo-confocal images with significantly improved image quality. In this work, we adopted the HiLo technique to a flood AO ophthalmoscope and performed AO imaging in both (physical) model and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively by using spatial spectral analysis. PMID:25136486

  8. Improved Guided Image Fusion for Magnetic Resonance and Computed Tomography Imaging

    PubMed Central

    Jameel, Amina

    2014-01-01

    Improved guided image fusion for magnetic resonance and computed tomography imaging is proposed. The existing guided filtering scheme uses a Gaussian filter and two-level weight maps, which limit its performance on noisy images. Different modifications to the filter (based on a linear minimum mean square error estimator) and weight maps (with different levels) are proposed to overcome these limitations. Simulation results based on visual and quantitative analysis show the significance of the proposed scheme. PMID:24695586

  9. Improving Performance During Image-Guided Procedures

    PubMed Central

    Duncan, James R.; Tabriz, David

    2015-01-01

    Objective Image-guided procedures have become a mainstay of modern health care. This article reviews how human operators process imaging data and use it to plan procedures and make intraprocedural decisions. Methods A series of models from human factors research, communication theory, and organizational learning were applied to the human-machine interface that occupies the center stage during image-guided procedures. Results Together, these models suggest several opportunities for improving performance as follows: 1. Performance will depend not only on the operator’s skill but also on the knowledge embedded in the imaging technology, available tools, and existing protocols. 2. Voluntary movements consist of planning and execution phases. Performance subscores should be developed that assess quality and efficiency during each phase. For procedures involving ionizing radiation (fluoroscopy and computed tomography), radiation metrics can be used to assess performance. 3. At a basic level, these procedures consist of advancing a tool to a specific location within a patient and using the tool. Paradigms from mapping and navigation should be applied to image-guided procedures. 4. Recording the content of the imaging system allows one to reconstruct the stimulus/response cycles that occur during image-guided procedures. Conclusions When compared with traditional “open” procedures, the technology used during image-guided procedures places an imaging system and long thin tools between the operator and the patient. Taking a step back and reexamining how information flows through an imaging system and how actions are conveyed through human-machine interfaces suggest that much can be learned from studying system failures. In the same way that flight data recorders revolutionized accident investigations in aviation, much could be learned from recording video data during image-guided procedures. PMID:24921628

  10. Do photographic images of pain improve communication during pain consultations?

    PubMed Central

    Padfield, Deborah; Zakrzewska, Joanna M; de C Williams, Amanda C

    2015-01-01

    BACKGROUND: Visual images may facilitate the communication of pain during consultations. OBJECTIVES: To assess whether photographic images of pain enrich the content and/or process of pain consultation by comparing patients’ and clinicians’ ratings of the consultation experience. METHODS: Photographic images of pain previously co-created by patients with a photographer were provided to new patients attending pain clinic consultations. Seventeen patients selected and used images that best expressed their pain and were compared with 21 patients who were not shown images. Ten clinicians conducted assessments in each condition. After consultation, patients and clinicians completed ratings of aspects of communication and, when images were used, how they influenced the consultation. RESULTS: The majority of both patients and clinicians reported that images enhanced the consultation. Ratings of communication were generally high, with no differences between those with and without images (with the exception of confidence in treatment plan, which was rated more highly in the image group). However, patients’ and clinicians’ ratings of communication were inversely related only in consultations with images. Methodological shortcomings may underlie the present findings of no difference. It is also possible that using images raised patients’ and clinicians’ expectations and encouraged emotional disclosure, in response to which clinicians were dissatisfied with their performance. CONCLUSIONS: Using images in clinical encounters did not have a negative impact on the consultation, nor did it improve communication or satisfaction. These findings will inform future analysis of behaviour in the video-recorded consultations. PMID:25996763

  11. Single Image Dehazing for Visibility Improvement

    NASA Astrophysics Data System (ADS)

    Zhai, Y.; Ji, D.

    2015-08-01

    Images captured in foggy weather often suffer from poor visibility, which degrades the performance of outdoor computer vision systems such as video surveillance, intelligent transportation assistance, and remote sensing space cameras. In this paper, we propose a new transmission-estimation method that improves the visibility of a single input image (with fog or haze) while preserving its details. Our approach stems from two statistical observations about haze-free images and the haze itself: first, the well-known dark channel prior, a statistic of haze-free outdoor images, can be used to estimate the thickness of the haze; second, a gradient prior holds for transmission maps derived from the dark channel prior. By integrating these two priors, estimating the unknown scene transmission map is modeled as a TV-regularized optimization problem. Experimental results show that the proposed approach effectively improves the visibility of fog-degraded images while keeping their details.
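    The dark channel prior mentioned above is simple to state: in haze-free outdoor images, most local patches contain at least one pixel that is dark in some color channel. A minimal sketch of the resulting transmission estimate follows; the patch size, omega, and airlight handling are illustrative assumptions, not the paper's exact TV-regularized formulation:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel: per-pixel min over RGB, then a local min filter."""
    dark = img.min(axis=2)                    # min over color channels
    return minimum_filter(dark, size=patch)   # min over local patch

def estimate_transmission(img, atmosphere, omega=0.95, patch=15):
    """Coarse transmission map t(x) = 1 - omega * dark_channel(I / A)."""
    norm = img / atmosphere                   # normalize by airlight A
    return 1.0 - omega * dark_channel(norm, patch)
```

Here `img` is a float image in [0, 1] and `atmosphere` would be estimated from the brightest dark-channel pixels; the paper refines such a coarse map via TV regularization rather than using it directly.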

  12. Improving Synthetic Aperture Image by Image Compounding in Beamforming Process

    NASA Astrophysics Data System (ADS)

    Martínez-Graullera, Oscar; Higuti, Ricardo T.; Martín, Carlos J.; Ullate, Luis. G.; Romero, David; Parrilla, Montserrat

    2011-06-01

    In this work, signal processing techniques are used to improve the quality of images based on multi-element synthetic aperture techniques. Using several apodization functions to obtain different side-lobe distributions, a polarity function and a threshold criterion are used to develop an image compounding technique. The spatial diversity is increased using an additional array, which generates complementary information about the defects, improving the results of the proposed algorithm and producing high-resolution, high-contrast images. The inspection of isotropic plate-like structures using linear arrays and Lamb waves is presented. Experimental results are shown for a 1-mm-thick isotropic aluminum plate with artificial defects, using linear arrays of 30 piezoelectric elements and the low-dispersion symmetric mode S0 at a frequency of 330 kHz.
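    The compounding idea exploits the fact that different apodizations leave the main lobe largely unchanged while moving the side lobes. A minimal illustration of one common compounding variant, the pixel-wise minimum across apodized envelope images, is shown below; the paper's polarity/threshold scheme is more elaborate and is not reproduced here:

```python
import numpy as np

def min_compound(envelopes):
    """Pixel-wise minimum across images beamformed with different
    apodizations: the main lobe (similar in all images) survives,
    while side lobes (which move with the apodization) are suppressed."""
    stack = np.stack(envelopes, axis=0)
    return stack.min(axis=0)
```

A pixel that is bright in every apodized image is kept; a pixel that is bright in only some of them (a side-lobe artifact) is removed.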

  13. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image-capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming and expensive one, e.g. requiring a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability is such a closed system. The Fraunhofer IOSB has started to build a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around an IR scene projector, which is necessary for the thermal display (projection) of an image sequence to the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection, and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image-forming path, are discussed.

  14. Improvement in volume estimation from confocal sections after image deconvolution.

    PubMed

    Difato, F; Mazzone, F; Scaglione, S; Fato, M; Beltrame, F; Kubínová, L; Janácek, J; Ramoino, P; Vicidomini, G; Diaspro, A

    2004-06-01

    The confocal microscope can image a specimen in its natural environment, forming a 3D image of the whole structure by scanning it and collecting light through a small aperture (pinhole), allowing in vivo and in vitro observations. The confocal fluorescence microscope (CFM) is considered a true volume imager because the pinhole rejects information coming from out-of-focus planes. Unfortunately, the intrinsic imaging properties of the optical scheme presently employed yield a corrupted image that can hamper quantitative analysis of successive image planes. By post-collection restoration it is possible to obtain an estimate of the true object, with respect to a given optimization criterion, utilizing the impulse response of the system, or point spread function (PSF). The PSF can be measured or predicted so as to have a mathematical and physical model of the image-formation process. Modelling the recording noise as an additive Gaussian process, we used the regularized Iterative Constrained Tikhonov-Miller (ICTM) restoration algorithm to solve the inverse problem. This algorithm finds the best estimate by iteratively searching among the possible positive solutions; in the Fourier domain, such an approach is relatively fast and elegant. In order to assess the effective improvement in quantitative image analysis, we measured the volume of reference objects before and after image restoration, using the isotropic Fakir method.
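    The Fourier-domain core of Tikhonov-regularized restoration can be sketched in a few lines. This is only the single-step, non-iterative version; the ICTM algorithm used above adds positivity constraints and iteration, which are omitted here:

```python
import numpy as np

def tikhonov_deconvolve(blurred, psf, lam=1e-2):
    """Single-step Tikhonov-regularized inverse filter in the Fourier
    domain: F_hat = conj(H) * G / (|H|^2 + lam).  The regularizer lam
    prevents division by near-zero frequencies of the PSF."""
    H = np.fft.fft2(psf, s=blurred.shape)     # PSF transfer function
    G = np.fft.fft2(blurred)                  # observed image spectrum
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(F))
```

With `lam = 0` this reduces to the (unstable) inverse filter; larger `lam` trades resolution for noise suppression.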

  15. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
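    One of the basic toolbox operations mentioned above, correlation, is how frame-to-frame image motion (and hence eye movement) can be estimated off-line. A minimal integer-pixel phase-correlation sketch is shown below; the subpixel refinement on which the resolution gains actually depend is omitted:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer-pixel shift of image b relative to a by
    locating the peak of the phase-correlation surface."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12                     # keep phase only
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peaks to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

In practice one would interpolate around the correlation peak to reach the sub-arc-minute resolutions discussed above.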

  16. Method of improving a digital image

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur (Inventor); Jobson, Daniel J. (Inventor); Woodell, Glenn A. (Inventor)

    1999-01-01

    A method of improving a digital image is provided. The image is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value R_i(x,y) in accordance with R_i(x,y) = Σ_n W_n { log I_i(x,y) − log[ F_n(x,y) * I_i(x,y) ] }, where S is the number of unique spectral bands included in said digital data (i = 1, ..., S), W_n is a weighting factor for the n-th surround function, and * denotes the convolution operator. Each surround function F_n(x,y) is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band is filtered with a common function and then presented to a display device. For color images, a novel color restoration step is added to give the image true-to-life color that closely matches human observation.
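    This is the multi-scale retinex form: each scale n subtracts a log-blurred version of the band from its log, and the scales are combined with weights W_n. A minimal sketch follows; the Gaussian surrounds, scale values, and equal weights are illustrative assumptions, and the patented method adds the common filtering and color restoration steps described above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250), weights=None):
    """R_i(x,y) = sum_n W_n [log I_i - log(F_n * I_i)] with Gaussian
    surround functions F_n of different scales.  img is H x W x bands."""
    if weights is None:
        weights = np.full(len(sigmas), 1.0 / len(sigmas))
    eps = 1e-6                                  # avoid log(0)
    out = np.zeros(img.shape, dtype=float)
    for w, s in zip(weights, sigmas):
        # blur each spectral band independently (sigma 0 on band axis)
        blur = gaussian_filter(img.astype(float), sigma=(s, s, 0))
        out += w * (np.log(img + eps) - np.log(blur + eps))
    return out
```

Small sigmas emphasize dynamic range compression; large sigmas favor color constancy, which is why several scales are combined.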

  17. Imaging Arrays With Improved Transmit Power Capability

    PubMed Central

    Zipparo, Michael J.; Bing, Kristin F.; Nightingale, Kathy R.

    2010-01-01

    Bonded multilayer ceramics and composites incorporating low-loss piezoceramics have been applied to arrays for ultrasound imaging to improve acoustic transmit power levels and to reduce internal heating. Commercially available hard PZT from multiple vendors has been characterized for microstructure, ability to be processed, and electroacoustic properties. Multilayers using the best materials demonstrate the tradeoffs compared with the softer PZT5-H typically used for imaging arrays. Three-layer PZT4 composites exhibit an effective dielectric constant that is three times that of single layer PZT5H, a 50% higher mechanical Q, a 30% lower acoustic impedance, and only a 10% lower coupling coefficient. Application of low-loss multilayers to linear phased and large curved arrays results in equivalent or better element performance. A 3-layer PZT4 composite array achieved the same transmit intensity at 40% lower transmit voltage and with a 35% lower face temperature increase than the PZT-5 control. Although B-mode images show similar quality, acoustic radiation force impulse (ARFI) images show increased displacement for a given drive voltage. An increased failure rate for the multilayers following extended operation indicates that further development of the bond process will be necessary. In conclusion, bonded multilayer ceramics and composites allow additional design freedom to optimize arrays and improve the overall performance for increased acoustic output while maintaining image quality. PMID:20875996

  18. Improved terahertz imaging with a sparse synthetic aperture array

    NASA Astrophysics Data System (ADS)

    Zhang, Zhuopeng; Buma, Takashi

    2010-02-01

    Sparse arrays are highly attractive for implementing two-dimensional arrays, but come at the cost of degraded image quality. We demonstrate significantly improved performance by exploiting the coherent ultrawideband nature of single-cycle THz pulses. We apply two weighting factors to each time-delayed signal before the final summation that forms the reconstructed image. The first factor employs cross-correlation analysis to measure the degree of walk-off between time-delayed signals of neighboring elements. The second factor measures the spatial coherence of the time-delayed signals. Synthetic aperture imaging experiments are performed with a THz time-domain system employing a mechanically scanned single transceiver element. Cross-sectional imaging of wire targets is performed with a one-dimensional sparse array with an inter-element spacing of 1.36 mm (over four λ at 1 THz). The proposed image reconstruction technique improves image contrast by 15 dB, which is impressive considering the relatively few elements in the array. En-face imaging of a razor blade is also demonstrated with a 56 x 56 element two-dimensional array, showing reduced image artifacts with adaptive reconstruction. These encouraging results suggest that the proposed image reconstruction technique can be highly beneficial to the development of large-area two-dimensional THz arrays.
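    A spatial-coherence weight of the kind described is commonly computed as a coherence factor: the ratio of coherent to incoherent summation energy across elements. A minimal per-pixel sketch is given below; the paper's exact definition and its cross-correlation walk-off factor are not reproduced here:

```python
import numpy as np

def coherence_factor(delayed):
    """Coherence factor per pixel: |sum_i s_i|^2 / (N * sum_i s_i^2),
    for an array `delayed` of shape (N_elements, N_pixels).  Near 1
    when the time-delayed element signals agree (a true target),
    near 0 when they are incoherent (side lobes, clutter)."""
    delayed = np.asarray(delayed, dtype=float)
    num = np.abs(delayed.sum(axis=0)) ** 2
    den = delayed.shape[0] * (delayed ** 2).sum(axis=0) + 1e-12
    return num / den
```

Multiplying each reconstructed pixel by its coherence factor suppresses the incoherent contributions that sparse sampling would otherwise leave as artifacts.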

  19. Improving secondary ion mass spectrometry image quality with image fusion.

    PubMed

    Tarolli, Jay G; Jackson, Lauren M; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure. PMID:24912432
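    Pan-sharpening in this setting treats the SEM image as the high-resolution "panchromatic" band and injects its fine detail into each (upsampled) SIMS channel. A minimal high-pass detail-injection sketch follows; this is one common pan-sharpening family, with an illustrative sigma and additive injection, and the paper's exact algorithm may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pan_sharpen(low_res, pan, sigma=2.0):
    """High-pass detail injection: add the fine structure of the
    high-resolution pan image (here, SEM) to a low-resolution channel
    (here, a SIMS ion image) already resampled onto the same grid."""
    detail = pan - gaussian_filter(pan, sigma)   # high-frequency content
    return low_res + detail
```

The chemical contrast still comes entirely from the SIMS channel; the SEM contributes only the high spatial frequencies it resolves better.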

  1. Reflections on ultrasound image analysis.

    PubMed

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably over the past twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research, owing to the real-time acquisition capability of ultrasound, and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps changed more fundamentally. From its roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis - both of which have richer anatomical definition and were thus better suited to the earlier eras of medical image analysis, dominated by model-based methods - ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and by growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis, which may have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time. PMID:27503078

  3. Improved target identification using synthetic infrared images

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.; Penn, Joseph A.

    2002-07-01

    The performance of infrared (IR) target identification classifiers, trained on randomly selected subsets of target chips taken from larger databases of either synthetic or measured data, is shown to improve rapidly with increasing subset size. This increase continues until the new data no longer provides additional information, or the classifier cannot handle the information, at which point classifier performance levels off. It is also shown that subsets of data selected with advance knowledge can significantly outperform randomly selected sets, suggesting that classifier training sets must be carefully selected if optimal performance is desired. Performance is also subject to the quality of the data used to train the classifier. Thus, while increasing training-set size generally improves classifier performance, the level to which performance can be raised depends on the similarity between the training data and the testing data. In fact, if the data to be added to a given training set is unlike the testing data, performance will often not improve and may diminish: having too much data can reduce performance as much as having too little. Our results again demonstrate that an IR target-identification classifier, trained on synthetic images of targets and tested on measured images, can perform as well as a classifier trained on measured images alone. We also demonstrate that the combination of the measured and synthetic image databases can be used to train a classifier whose performance exceeds that of classifiers trained on either database alone. The results suggest that it may be possible to select data subsets from image databases that optimize target-classifier performance for specific locations and operational scenarios.

  4. Perceived Image Quality Improvements from the Application of Image Deconvolution to Retinal Images from an Adaptive Optics Fundus Imager

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Nemeth, S. C.; Erry, G. R. G.; Otten, L. J.; Yang, S. Y.

    Aim: The objective of this project was to apply an image restoration methodology based on wavefront measurements obtained with a Shack-Hartmann sensor, and to evaluate the restored image quality against medical criteria. Methods: Implementing an adaptive optics (AO) technique, a fundus imager was used to achieve low-order correction to images of the retina; high-order correction was provided by deconvolution. A Shack-Hartmann wavefront sensor measures aberrations, and the wavefront measurement is the basis for activating a deformable mirror. Image restoration to remove remaining aberrations is achieved by direct deconvolution using the point spread function (PSF), or by blind deconvolution. The PSF is estimated from the measured wavefront aberrations. Direct application of classical deconvolution methods such as inverse filtering, Wiener filtering, or iterative blind deconvolution (IBD) to the AO retinal images obtained from the adaptive optical imaging system is not satisfactory because of the very large image size, the difficulty of modeling the system noise, and inaccuracy in PSF estimation. Our approach combines direct and blind deconvolution to exploit available system information and to avoid non-convergence and time-consuming iterative processes. Results: The deconvolution was applied to human subject data, and the resulting restored images were compared by a trained ophthalmic researcher. Qualitative analysis showed significant improvements: neovascularization can be visualized with the adaptive optics device that cannot be resolved with the standard fundus camera, the individual nerve fiber bundles are easily resolved, and so are melanin structures in the choroid. Conclusion: This project demonstrated that computer-enhanced, adaptive optics images have greater detail of anatomical and pathological structures.

  5. Digital Image Analysis for DETECHIP® Code Determination

    PubMed Central

    Lyon, Marcus; Wilson, Mark V.; Rouhier, Kerry A.; Symonsbergen, David J.; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E.

    2013-01-01

    DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® relied on human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained by measuring red-green-blue (RGB) values using software such as GIMP, Photoshop, and ImageJ. Several different techniques were used to evaluate the color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for ImageJ showed improved consistency versus previous methods. PMID:25267940
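    The underlying RGB measurement is straightforward: average the channel values over a region of interest, as one would with the measure tools in ImageJ or GIMP. A minimal sketch, with an illustrative region-of-interest convention:

```python
import numpy as np

def mean_rgb(img, box):
    """Average R, G, B over a rectangular region of interest.
    box = (y0, y1, x0, x1) in pixel coordinates (half-open ranges)."""
    y0, y1, x0, x1 = box
    roi = np.asarray(img, dtype=float)[y0:y1, x0:x1]
    return roi.reshape(-1, roi.shape[-1]).mean(axis=0)
```

Comparing such mean RGB triplets before and after analyte exposure is one way to turn the observed color changes into a reproducible code.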

  6. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  7. Improved Scanners for Microscopic Hyperspectral Imaging

    NASA Technical Reports Server (NTRS)

    Mao, Chengye

    2009-01-01

    Improved scanners to be incorporated into hyperspectral microscope-based imaging systems have been invented. Heretofore, in microscopic imaging, including spectral imaging, it has been customary to either move the specimen relative to the optical assembly that includes the microscope or else move the entire assembly relative to the specimen. It becomes extremely difficult to control such scanning when submicron translation increments are required, because the high magnification of the microscope enlarges all movements in the specimen image on the focal plane. To overcome this difficulty, in a system based on this invention, no attempt would be made to move either the specimen or the optical assembly. Instead, an objective lens would be moved within the assembly so as to cause translation of the image at the focal plane: the effect would be equivalent to scanning in the focal plane. The upper part of the figure depicts a generic proposed microscope-based hyperspectral imaging system incorporating the invention. The optical assembly of this system would include an objective lens (normally, a microscope objective lens) and a charge-coupled-device (CCD) camera. The objective lens would be mounted on a servomotor-driven translation stage, which would be capable of moving the lens in precisely controlled increments, relative to the camera, parallel to the focal-plane scan axis. The output of the CCD camera would be digitized and fed to a frame grabber in a computer. The computer would store the frame-grabber output for subsequent viewing and/or processing of images. The computer would contain a position-control interface board, through which it would control the servomotor. There are several versions of the invention. An essential feature common to all versions is that the stationary optical subassembly containing the camera would also contain a spatial window, at the focal plane of the objective lens, that would pass only a selected portion of the image. 

  8. Oncological image analysis: medical and molecular image analysis

    NASA Astrophysics Data System (ADS)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  9. Improving the Blanco Telescope's delivered image quality

    NASA Astrophysics Data System (ADS)

    Abbott, Timothy M. C.; Montane, Andrés; Tighe, Roberto; Walker, Alistair R.; Gregory, Brooke; Smith, R. Christopher; Cisternas, Alfonso

    2010-07-01

    The V. M. Blanco 4-m telescope at Cerro Tololo Inter-American Observatory is undergoing a number of improvements in preparation for the delivery of the Dark Energy Camera. The program includes upgrades having potential to deliver gains in image quality and stability. To this end, we have renovated the support structure of the primary mirror, incorporating innovations to improve both the radial support performance and the registration of the mirror and telescope top end. The resulting opto-mechanical condition of the telescope is described. We also describe some improvements to the environmental control. Upgrades to the telescope control system and measurements of the dome environment are described in separate papers in this conference.

  10. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

    This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points becomes more accurate than without the optimization, and false edge information is effectively reduced, which facilitates follow-up processing. To calculate crack geometry, we extract the skeleton of a single crack to measure its length; to calculate crack area, we construct an area template by a logical bitwise AND operation on the crack image. Experiments show that the differences between the proposed crack detection method and manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and can provide more valid data for proper planning and appropriate performance of bridge maintenance and rehabilitation processes.
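    The eight-direction Sobel operator referred to above convolves the image with the standard 3x3 Sobel kernel rotated in 45-degree steps and keeps the maximum response at each pixel. A minimal sketch of the base operator, without the paper's optimization:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard horizontal-gradient Sobel kernel; rotating its border in
# 45-degree steps yields the eight directional kernels.
BASE = np.array([[-1.0, 0.0, 1.0],
                 [-2.0, 0.0, 2.0],
                 [-1.0, 0.0, 1.0]])

def rotate45(k):
    """One 45-degree step: shift the 8 border entries of a 3x3 kernel."""
    out = k.copy()
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    for (r0, c0), (r1, c1) in zip(ring, ring[1:] + ring[:1]):
        out[r1, c1] = k[r0, c0]
    return out

def sobel8(img):
    """Maximum absolute response over the eight directional kernels."""
    k, responses = BASE, []
    for _ in range(8):
        responses.append(np.abs(convolve(img.astype(float), k)))
        k = rotate45(k)
    return np.max(responses, axis=0)
```

Taking the per-pixel maximum makes the detector respond to edges (crack boundaries) of any orientation, at the cost of the extra false edges the paper's optimization then suppresses.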

  11. Improved wheal detection from skin prick test images

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan

    2014-03-01

    The skin prick test is a commonly used method for diagnosing allergic diseases (e.g., pollen allergy, food allergy) in allergy clinics. The test provokes erythema and a wheal on the skin where it is applied, and the patient's sensitivity to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection in these images is an important step toward precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. Finally, we perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments, performed on images acquired from 36 different patients, show the efficiency of the proposed method for wheal detection on skin prick test images captured in an uncontrolled environment.
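    The chroma-PCA stage of this pipeline can be sketched directly: convert to YCbCr, keep only Cb and Cr, project onto the first principal component, threshold, and clean up with morphology. The threshold rule and sign convention below are illustrative assumptions, and the calibration-mark localization step is omitted:

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def detect_wheal(rgb):
    """RGB -> YCbCr (ITU-R BT.601), drop luminance, PCA on (Cb, Cr),
    threshold the first principal component, clean with morphology."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    chroma = np.stack([cb.ravel(), cr.ravel()], axis=1)
    chroma -= chroma.mean(axis=0)
    _, _, vt = np.linalg.svd(chroma, full_matrices=False)
    pc1 = (chroma @ vt[0]).reshape(cb.shape)   # first principal component
    if abs(pc1.min()) > abs(pc1.max()):
        pc1 = -pc1                             # fix the arbitrary PCA sign
    mask = pc1 > pc1.mean() + pc1.std()        # crude threshold (assumption)
    return binary_closing(binary_opening(mask))
```

Discarding the Y channel makes the result largely insensitive to uneven lighting, which is the point of working in chroma space for images taken in an uncontrolled environment.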

  12. Radiologist and automated image analysis

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.

    1999-07-01

    Significant advances are being made in the area of automated medical image analysis. Part of the progress is due to the general advances being made in the types of algorithms used to process images and perform various detection and recognition tasks. A more important reason for this growth in medical image analysis processes, may be due however to a very different reason. The use of computer workstations, digital image acquisition technologies and the use of CRT monitors for display of medical images for primary diagnostic reading is becoming more prevalent in radiology departments around the world. With the advance in computer-based displays, however, has come the realization that displaying images on a CRT monitor is not the same as displaying film on a viewbox. There are perceptual, cognitive and ergonomic issues that must be considered if radiologists are to accept this change in technology and display. The bottom line is that radiologists' performance must be evaluated with these new technologies and image analysis techniques in order to verify that diagnostic performance is at least as good with these new technologies and image analysis procedures as with film-based displays. The goal of this paper is to address some of the perceptual, cognitive and ergonomic issues associated with reading radiographic images from digital displays.

  13. Narrow-band imaging does not improve detection of colorectal polyps when compared to conventional colonoscopy: a randomized controlled trial and meta-analysis of published studies

    PubMed Central

    2011-01-01

    Background Colonoscopy may frequently miss polyps and cancers. A number of techniques have emerged to improve visualization and to reduce the adenoma miss rate. Methods We conducted a randomized controlled trial (RCT) in two clinics of the Gastrointestinal Department of the Sanitas University Foundation in Bogota, Colombia. Eligible adult patients presenting for screening or diagnostic elective colonoscopy were randomly allocated to undergo conventional colonoscopy or narrow-band imaging (NBI) during instrument withdrawal by three experienced endoscopists. For the systematic review, studies were identified from the Cochrane Library, PUBMED and LILACS and assessed using the Cochrane risk of bias tool. Results We enrolled a total of 482 patients (62.5% female), with a mean age of 58.33 years (SD 12.91); 241 into the intervention (NBI) colonoscopy group and 241 into the conventional colonoscopy group. Most patients presented for diagnostic colonoscopy (75.3%). The overall rate of polyp detection was significantly higher in the conventional group compared to the NBI group (RR 0.75, 95% CI 0.60 to 0.96). However, no significant differences were found in the mean number of polyps (MD -0.1; 95% CI -0.25 to 0.05) or the mean number of adenomas (MD 0.04; 95% CI -0.09 to 0.17). Meta-analysis of studies (regardless of indication) did not find any significant differences in the mean number of polyps (5 RCTs, 2479 participants; WMD -0.07, 95% CI -0.21 to 0.07; I2 68%), the mean number of adenomas (8 RCTs, 3517 participants; WMD -0.08, 95% CI -0.17 to 0.01; I2 62%) or the rate of patients with at least one adenoma (8 RCTs, 3512 participants; RR 0.96, 95% CI 0.88 to 1.04; I2 0%). Conclusion NBI does not improve detection of colorectal polyps when compared to conventional colonoscopy (Australian New Zealand Clinical Trials Registry ACTRN12610000456055). PMID:21943365

  14. Digital TV image quality improvement considering distributions of edge characteristic

    NASA Astrophysics Data System (ADS)

    Hong, Sang-Gi; Kim, Jae-Chul; Park, Jong-Hyun

    2003-12-01

    Sharpness enhancement is a widely used technique for improving the perceptual quality of an image by emphasizing its high-frequency components. In this paper, a psychophysical experiment is conducted with 20 observers using simple linear unsharp masking for sharpness enhancement. The experimental result is extracted using z-score analysis and linear regression. Finally, using this result, we suggest an observer-preferred sharpness enhancement method for digital television.
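    A minimal sketch of linear unsharp masking of the kind used in this experiment (the box blur, kernel size and gain here are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur with edge padding, standing in for the
    low-pass filter used in linear unsharp masking."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0, k=3):
    """Linear unsharp masking: sharpened = img + amount * (img - blur)."""
    return img + amount * (img - box_blur(img, k))

# A step edge gains overshoot/undershoot, the classic sharpening effect.
img = np.zeros((8, 8))
img[:, 4:] = 100.0
sharp = unsharp_mask(img, amount=1.0)
```

    The `amount` gain is exactly the knob a z-score preference study would tune: larger values emphasize high frequencies more strongly.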

  15. Using Image Analysis to Build Reading Comprehension

    ERIC Educational Resources Information Center

    Brown, Sarah Drake; Swope, John

    2010-01-01

    Content area reading remains a primary concern of history educators. In order to better prepare students for encounters with text, the authors propose the use of two image analysis strategies tied with a historical theme to heighten student interest in historical content and provide a basis for improved reading comprehension.

  16. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image, and it facilitates a wide range of fast image-processing functions. The toolkit is applicable to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. Commercially, FIIAT can support intelligent video cameras used in surveillance, and it is also useful for object recognition by robots or other autonomous vehicles.
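    The integral-image trick that FIIAT exploits can be illustrated in a few lines (a numpy sketch for clarity; FIIAT itself is a C library and its actual API is not shown here):

```python
import numpy as np

def integral_image(img):
    """Cumulative-sum table with a zero row/column prepended, so that
    sums over arbitrary rectangles become four table lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3))  # 5 + 6 + 9 + 10 = 30
```

    Building the table costs one pass over the image; afterwards every rectangular sum, whatever its size, costs four lookups, which is what makes real-time texture descriptors feasible on constrained hardware.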

  17. Image Analysis in Surgical Pathology.

    PubMed

    Lloyd, Mark C; Monaco, James P; Bui, Marilyn M

    2016-06-01

    Digitization of glass slides of surgical pathology samples facilitates a number of value-added capabilities beyond what a pathologist could previously do with a microscope. Image analysis is one of the most fundamental opportunities to leverage the advantages that digital pathology provides. The ability to quantify aspects of a digital image is an extraordinary opportunity to collect data with exquisite accuracy and reliability. In this review, we describe the history of image analysis in pathology and the present state of technology processes as well as examples of research and clinical use. PMID:27241112

  18. Principal component analysis of scintimammographic images.

    PubMed

    Bonifazzi, Claudio; Cinti, Maria Nerina; Vincentis, Giuseppe De; Finos, Livio; Muzzioli, Valerio; Betti, Margherita; Nico, Lanconelli; Tartari, Agostino; Pani, Roberto

    2006-01-01

    The recent development of new gamma imagers based on scintillation arrays with high spatial resolution has strongly improved the possibility of detecting sub-centimeter cancers in scintimammography. However, Compton scattering contamination remains the main drawback, since it limits the sensitivity of tumor detection. Principal component analysis (PCA), recently introduced in scintimammographic imaging, is a data reduction technique able to represent the radiation emitted from the chest and from healthy and damaged breast tissue as separate images. From these images a scintimammogram can be obtained in which the Compton contamination is "removed". In the present paper we compared the PCA-reconstructed images with the conventional scintimammographic images resulting from the photopeak (Ph) energy window. Data coming from a clinical trial were used. For both kinds of images the tumor presence was quantified by evaluating the t-statistic for independent samples as a measure of the signal-to-noise ratio (SNR). Due to the absence of Compton scattering, the PCA-reconstructed images show better noise suppression and allow more reliable diagnostics in comparison with the images obtained from the photopeak energy window, reducing the tendency to produce false positives. PMID:17646004

  19. Image analysis for DNA sequencing

    NASA Astrophysics Data System (ADS)

    Palaniappan, Kannappan; Huang, Thomas S.

    1991-07-01

    There is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA, as in the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automation of the manual autoradiograph reading step through image analysis, in order to reduce the time required to obtain sequence data and to reduce transcription errors. Various adaptive image enhancement, segmentation and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image, such as noise, background signal, or the presence of edges. Once the two-dimensional data is converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band, which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.
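    The band-location step on an aligned one-dimensional profile can be sketched as a simple peak search (an illustrative stand-in for the waveform analysis described; the threshold and the synthetic lane are assumptions):

```python
import numpy as np

def find_bands(profile, threshold):
    """Locate bands in a 1-D lane profile as local maxima above a
    background threshold; each band corresponds to one nucleotide."""
    peaks = []
    for i in range(1, len(profile) - 1):
        if (profile[i] > threshold
                and profile[i] >= profile[i - 1]
                and profile[i] > profile[i + 1]):
            peaks.append(i)
    return peaks

# Synthetic lane: three Gaussian "bands" on a flat background.
x = np.arange(100, dtype=float)
profile = sum(np.exp(-0.5 * ((x - c) / 2.0) ** 2) for c in (20, 50, 80)) + 0.01
bands = find_bands(profile, threshold=0.5)
print(bands)  # [20, 50, 80]
```

    Real autoradiograph profiles need the adaptive background estimation the abstract mentions before a fixed threshold like this becomes meaningful.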

  20. Basic image analysis and manipulation in ImageJ.

    PubMed

    Hartig, Sean M

    2013-01-01

    Image analysis methods have been developed to provide quantitative assessment of microscopy data. In this unit, basic aspects of image analysis are outlined, including software installation, data import, image processing functions, and analytical tools that can be used to extract information from microscopy data using ImageJ. Step-by-step protocols for analyzing objects in a fluorescence image and extracting information from two-color tissue images collected by bright-field microscopy are included.

  1. Automatic analysis of macroarrays images.

    PubMed

    Caridade, C R; Marcal, A S; Mendonca, T; Albuquerque, P; Mendes, M V; Tavares, F

    2010-01-01

    The analysis of dot blot (macroarray) images is currently based on the human identification of positive/negative dots, which is a subjective and time-consuming process. This paper presents a system for the automatic analysis of dot blot images, using a pre-defined grid of markers, including a number of ON and OFF controls. The geometric deformations of the input image are corrected and the individual markers are detected, both fully automatically. Based on a previous training stage, the probability for each marker to be ON is established. This information is provided together with quality parameters for training, noise and classification, allowing for a fully automatic evaluation of a dot blot image. PMID:21097139

  2. Figure content analysis for improved biomedical article retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Apostolova, Emilia; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2009-01-01

    Biomedical images are invaluable in medical education and in establishing clinical diagnoses. Clinical decision support (CDS) can be improved by combining biomedical text with automatically annotated images extracted from relevant biomedical publications. In a previous study on the feasibility of automatically classifying images by their usefulness in finding clinical evidence, we reported 76.6% accuracy using supervised machine learning on combined figure captions and image content. Image content extraction is traditionally applied to entire images or to pre-determined image regions. Figure images in articles vary greatly, however, which limits the benefit of whole-image extraction beyond gross categorization for CDS. Text annotations and pointers on the images indicate regions of interest (ROI) that are then referenced in the caption or in the discussion in the article text. We previously reported 72.02% accuracy in localizing text and symbols, but failed to take advantage of the referenced image locality. In this work we combine article text analysis and figure image analysis to localize pointers (arrows, symbols) and extract the ROIs they indicate, which can then be used to measure meaningful image content and associate it with the identified biomedical concepts for improved (text and image) content-based retrieval of biomedical articles. Biomedical concepts are identified using the National Library of Medicine's Unified Medical Language System (UMLS) Metathesaurus. Our methods achieve an average precision and recall of 92.3% and 75.3%, respectively, in identifying pointing symbols in images from a randomly selected image subset made available through the ImageCLEF 2008 campaign.

  3. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
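    The kind of speedup described comes from rewriting per-pixel loops as data-parallel operations. A language-neutral illustration of that rewrite (numpy vectorization here standing in for the AltiVec vector hardware of the actual G5 system):

```python
import numpy as np

def threshold_loop(img, t):
    """Scalar per-pixel loop (the 'conventional' processing path)."""
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = 255 if img[i, j] > t else 0
    return out

def threshold_vec(img, t):
    """The same operation expressed as one data-parallel expression,
    which the runtime can map onto vector hardware."""
    return np.where(img > t, 255, 0).astype(img.dtype)

img = np.random.default_rng(1).integers(0, 256, size=(256, 256), dtype=np.uint8)
a = threshold_loop(img, 128)
b = threshold_vec(img, 128)
print(np.array_equal(a, b))  # True
```

    The results are identical; only the execution model changes, which is what allows more advanced diagnostics to fit in the real-time budget.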

  4. Multispectral Imaging Broadens Cellular Analysis

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  5. Denoising of Ultrasound Cervix Image Using Improved Anisotropic Diffusion Filter

    PubMed Central

    Rose, R Jemila; Allwin, S

    2015-01-01

    ABSTRACT Objective: The purpose of this study was to evaluate an improved oriented speckle reducing anisotropic diffusion (IADF) filter that suppresses speckle noise in ultrasound B-mode images and shows better results than previous filters such as anisotropic diffusion, wavelet denoising and local statistics. Methods: Clinical ultrasound images of the cervix were obtained with an ATL HDI 5000 ultrasound machine at the Regional Cancer Centre, Medical College campus, Thiruvananthapuram. The images were stored in bmp format at dimensions of 256 × 256 and processed with the improved oriented speckle reducing anisotropic diffusion filter. For the analysis, 24 ultrasound cervix images were tested and the performance was measured. Results: The filter provides quality metrics of a maximum peak signal-to-noise ratio (PSNR) of 31 dB, a structural similarity index map (SSIM) of 0.88 and an edge preservation accuracy of 88%. Conclusion: The IADF filter is the optimal method among those tested and is capable of strong speckle suppression with low computational complexity. PMID:26624591
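    The baseline that the IADF filter improves on, classic Perona-Malik anisotropic diffusion, can be sketched as follows (an illustrative numpy implementation with assumed parameters and periodic boundary handling, not the authors' filter):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    """Classic Perona-Malik diffusion: smooth within regions while
    preserving strong edges. Parameters are illustrative; periodic
    boundaries via np.roll keep the sketch short."""
    u = img.astype(float)
    for _ in range(n_iter):
        # Finite differences to the four neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conduction coefficients: small across strong edges.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u = u + lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

# A noisy constant patch: diffusion reduces the noise level.
rng = np.random.default_rng(0)
noisy = 100.0 + 10.0 * rng.standard_normal((64, 64))
smooth = anisotropic_diffusion(noisy)
print(smooth.std() < noisy.std())  # True
```

    The oriented, speckle-specific variants evaluated in the paper replace the conduction coefficients with statistics suited to multiplicative speckle noise.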

  6. Improving Accuracy of Image Classification Using GIS

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Prasad, T. S.; Bala Manikavelu, P. M.; Vijayan, D.

    The remote sensing signal that reaches the sensor on board the satellite is a complex aggregation of signals (in an agricultural field, for example) from the soil (with all its variations, such as colour, texture, particle size, clay content, organic and nutrition content, inorganic content, water content, etc.), the plant (height, architecture, leaf area index, mean canopy inclination, etc.), canopy closure status and atmospheric effects, and from this we want to recover, say, the characteristics of the vegetation. If the sensor on board the satellite makes measurements in n bands (giving a measurement vector s of dimension n × 1) and the number of classes in an image is c (giving a class-fraction vector f of dimension c × 1), then under linear mixture modeling the pixel classification problem can be written as s = M f + e, where M is the transformation matrix of dimension n × c and e represents the error vector (noise). The problem is to estimate f by inverting the above equation, and the possible solutions to such a problem are many. Thus, recovering individual classes from satellite data is an ill-posed inverse problem for which a unique solution is not feasible, and this limits the obtainable classification accuracy. Maximum Likelihood (ML) is the constraint most commonly practiced in such a situation, but it suffers from the handicaps of an assumed Gaussian distribution and the assumed randomness of pixels (in fact there is high auto-correlation among the pixels of a specific class, and further high auto-correlation among the pixels in sub-classes, where the homogeneity among pixels is high). Because of this, achieving very high accuracy in the classification of remote sensing images is not a straightforward proposition. With the availability of GIS for the area under study, (i) a priori probabilities for the different classes can be assigned to the ML classifier in more realistic terms, and (ii) the purity of the training sets for the different thematic classes can be better ascertained. To what extent this can improve the accuracy of classification with the ML classifier is examined.
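    The linear mixture model described above (measurements = mixing matrix × class fractions + noise) can be inverted, in the simplest unconstrained case, by least squares (a toy numpy sketch with invented endmembers; the abstract's point is precisely that such inversions are ill-posed and need constraints such as ML with GIS priors):

```python
import numpy as np

def unmix(s, M):
    """Least-squares estimate of the class-fraction vector f from an
    n-band pixel s, given the n x c mixing matrix M. Illustrative only:
    no non-negativity or sum-to-one constraints are enforced here."""
    f, *_ = np.linalg.lstsq(M, s, rcond=None)
    return f

# Two endmembers observed in four bands, mixed 70/30, noise-free.
M = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.3, 0.7],
              [0.1, 0.9]])
f_true = np.array([0.7, 0.3])
s = M @ f_true
f_hat = unmix(s, M)
print(np.round(f_hat, 3))  # [0.7 0.3]
```

    With noise, with more classes than independent bands, or with correlated endmembers, this inversion becomes unstable, which is exactly why priors and purer training sets from GIS help.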

  7. Super-resolution analysis of microwave image using WFIPOCS

    NASA Astrophysics Data System (ADS)

    Wang, Xue; Wu, Jin

    2013-03-01

    Microwave images are often blurred and distorted, so super-resolution analysis is crucial in microwave image processing. In this paper, we propose the WFIPOCS algorithm, which combines wavelet-based fractal interpolation with an improved projection onto convex sets (IPOCS) technique. First, we apply down-sampling and Wiener filtering to a low-resolution (LR) microwave image. Then, wavelet-based fractal interpolation is applied to preprocess the LR image. Finally, the IPOCS technique is applied to correct the artifacts introduced by interpolation and to approach a high-resolution (HR) image. The experimental results indicate that the WFIPOCS algorithm improves the spatial resolution of microwave images.

  8. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and the probability distribution of brightness, and by calculation of various parameters, including the average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram, and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed. Program summary: Title of program: HAWGC. Catalogue identifier: ADXG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Mobile Intel Pentium III, AMD Duron. Installations: no installation necessary; executable file together with necessary files for the LabVIEW run-time engine. Operating systems under which the program has been tested: Windows ME/2000/XP. Programming language used: LabVIEW 7.0. Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for
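    The histogram parameters listed above can be computed directly (a numpy sketch; the original program is a LabVIEW routine, so this is only a conceptual equivalent):

```python
import numpy as np

def histogram_stats(gray):
    """Brightness statistics matching the parameters listed in the
    abstract: mean, variance, min/max, skewness and kurtosis."""
    x = gray.ravel().astype(float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    return {
        "mean": mu,
        "variance": sigma ** 2,
        "min": x.min(),
        "max": x.max(),
        "skewness": (z ** 3).mean(),
        "kurtosis": (z ** 4).mean() - 3.0,  # excess kurtosis
    }

# For a Gaussian test image, skewness and excess kurtosis are near zero.
rng = np.random.default_rng(0)
stats = histogram_stats(rng.normal(128.0, 20.0, size=(256, 256)))
print(round(stats["mean"]))  # 128
```

    ROI selection and thresholding, as in the original program, amount to masking `gray` before these statistics are computed.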

  9. An improved visual enhancement method for color images

    NASA Astrophysics Data System (ADS)

    Wang, Wenhan; Zhang, Biao

    2013-07-01

    This paper presents an improved method for color image enhancement based on the dynamic range compression and shadow compensation method proposed by Felix Albu et al. The improved method adds intensity contrast stretching, implemented under the logarithmic image processing (LIP) model, to the dynamic range compression and shadow compensation. The previous method enhances an image while preserving image details and color information without generating visual artifacts. While retaining the advantages of the previous method, our improved method enhances the intensity of the whole image, especially the low-light areas. The experimental results illustrate the effectiveness of our proposed method and its superiority over the previous one.

  10. Independent component analysis for improving the quality of interferometric products

    NASA Astrophysics Data System (ADS)

    Saqellari Likoka, A.; Vafeiadi-Bila, E.; Karathanassi, V.

    2016-05-01

    The accuracy of InSAR DEMs is affected by the temporal decorrelation of SAR images, which is due to atmosphere, land use/cover, soil moisture, and roughness changes. Eliminating the temporal decorrelation between the master and slave images improves DEM accuracy. In this study, Independent Component Analysis (ICA) was applied before the interferometric processing. It was observed that, using three ICA inputs, the independent sources can be interpreted as a background image and change images. When ICA is performed on the master and slave images using the same pair of additional images, it produces two background images that enable the production of high-quality DEMs. However, limitations exist in the proposed approach.

  11. Hybrid µCT-FMT imaging and image analysis

    PubMed Central

    Zafarnia, Sara; Babler, Anne; Jahnen-Dechent, Willi; Lammers, Twan; Lederle, Wiltrud; Kiessling, Fabian

    2015-01-01

    Fluorescence-mediated tomography (FMT) enables longitudinal and quantitative determination of the fluorescence distribution in vivo and can be used to assess the biodistribution of novel probes and to assess disease progression using established molecular probes or reporter genes. The combination with an anatomical modality, e.g., micro computed tomography (µCT), is beneficial for image analysis and for fluorescence reconstruction. We describe a protocol for multimodal µCT-FMT imaging including the image processing steps necessary to extract quantitative measurements. After preparing the mice and performing the imaging, the multimodal data sets are registered. Subsequently, an improved fluorescence reconstruction is performed, which takes into account the shape of the mouse. For quantitative analysis, organ segmentations are generated based on the anatomical data using our interactive segmentation tool. Finally, the biodistribution curves are generated using a batch-processing feature. We show the applicability of the method by assessing the biodistribution of a well-known probe that binds to bones and joints. PMID:26066033

  12. Improving resolution of optical coherence tomography for imaging of microstructures

    NASA Astrophysics Data System (ADS)

    Shen, Kai; Lu, Hui; Wang, James H.; Wang, Michael R.

    2015-03-01

    A multi-frame super-resolution technique has been used to improve the lateral resolution of spectral domain optical coherence tomography (SD-OCT) for imaging of 3D microstructures. By adjusting the voltages applied to the x and y galvanometer scanners in the measurement arm, small lateral positional shifts have been introduced among different C-scans. Utilizing the x-y plane en face image frames extracted from these specially offset C-scan image sets at the same axial position, we have reconstructed laterally high-resolution images with an efficient multi-frame super-resolution technique. To further improve the image quality, we applied the latest K-SVD and bilateral total variation denoising algorithms to the raw SD-OCT lateral images before and along with the super-resolution processing, respectively. The performance of the SD-OCT with improved lateral resolution is demonstrated by 3D imaging of a microstructure fabricated by photolithography and a double-layer microfluidic device.

  13. Improvement of passive THz camera images

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies with the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited region of the electromagnetic spectrum, because of the difficulty of generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials; however, automated processing of THz images can be challenging. The THz frequency band is especially suited to imaging through clothing because the radiation has no harmful ionizing effects and is thus safe for human beings. Strong technological development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a challenging and urgent topic, and digital THz image processing is a promising and cost-effective approach for demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion for images captured by a commercially available passive THz camera, by means of various combined methods. Our research is focused on the detection of dangerous objects (guns, knives and bombs) hidden under popular types of clothing.

  14. Method for improving visualization of infrared images

    NASA Astrophysics Data System (ADS)

    Cimbalista, Mario

    2014-05-01

    Thermography differs in an extremely important way from other electronic image-converting systems, such as X-rays or ultrasound: the infrared camera operator usually spends hour after hour looking only at infrared images, sometimes several intermittent hours a day if not six or more continuous hours. This operational characteristic has a very important impact on yield, precision, errors and misinterpretation of infrared image content. Despite great hardware development over the last fifty years, quality infrared thermography still lacks a solution for these problems. The human eye has not evolved to see infrared radiation, nor does the brain have the capability to understand and decode infrared information. Chemical processes inside the human eye and the functional distribution of its cells, as well as the cognitive-perceptual impact of images, play a crucial role in the perception, detection, and other steps of dealing with infrared images. The system presented here, called ThermoScala and patented in the USA, solves this problem using a coding process applicable to an original infrared image, generated from any value matrix from any kind of infrared camera, to make it much more suitable for human use, causing a substantial difference in the way the retina and the brain process the resulting images. The result is a much less exhausting way to see, identify and interpret infrared images generated by any infrared camera that uses this conversion process.

  15. An improved piecewise linear chaotic map based image encryption algorithm.

    PubMed

    Hu, Yuping; Zhu, Congxu; Wang, Zhijian

    2014-01-01

    An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Exploiting the sensitivity to initial key values and system parameters, and the ergodicity of the chaotic system, two pseudorandom sequences are designed and used in the permutation and diffusion processes. The order in which pixels are processed does not follow the pixel index; instead, pixels are taken from the beginning and the end alternately. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attack.
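    A piecewise linear chaotic map keystream of the kind described can be sketched as follows (a toy illustration of the principle only; the actual MPWLCM construction, key schedule and alternating diffusion order in the paper differ):

```python
def pwlcm(x, p):
    """One iteration of a piecewise linear chaotic map on [0, 1]
    with control parameter p in (0, 0.5)."""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)  # the map is symmetric about 0.5

def keystream(x0, p, n):
    """Derive n pseudorandom bytes from the chaotic orbit; the initial
    state x0 and parameter p play the role of the secret key."""
    x, out = x0, []
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 256) % 256)
    return out

ks = keystream(x0=0.123456, p=0.3, n=8)
data = [10, 20, 30, 40, 50, 60, 70, 80]
cipher = [b ^ k for b, k in zip(data, ks)]
plain = [c ^ k for c, k in zip(cipher, ks)]
print(plain == data)  # True
```

    The XOR diffusion here is the simplest possible; the paper additionally feeds ciphertext back into the diffusion so that each output pixel depends on the preceding ones.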

  16. Research on super-resolution image reconstruction based on an improved POCS algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

    Super-resolution image reconstruction (SRIR) can improve the resolution of a blurry image, addressing insufficient spatial resolution, excessive noise, and low image quality. First, we introduce the image degradation model to show that, in essence, the super-resolution reconstruction process is an ill-posed inverse problem in mathematics. Second, we analyze the causes of blurring in the optical imaging process: light diffraction and small-angle scattering are the main causes of the blur. We propose an image point spread function estimation method and an improved projection onto convex sets (POCS) algorithm, whose effectiveness is indicated by analyzing the changes between the time domain and the frequency domain during reconstruction; we point out that the improved POCS algorithm, based on prior knowledge, is able to restore and approach the high-frequency content of the original scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images, and then use these images to reconstruct three-dimensional slice images. Comparing the original method and the super-resolution algorithm, it is clear that the improved POCS algorithm can suppress noise and enhance image resolution, indicating that the algorithm is effective. This study of super-resolution image reconstruction by an improved POCS algorithm is shown to be an effective approach, with important significance and broad application prospects, for example in CT medical image processing and SRCT analysis of the mechanisms of microstructure evolution in ceramic sintering.
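    The core POCS idea, projecting the current estimate onto a constraint set defined by data consistency, can be illustrated with a toy degradation model (plain 2x2 block averaging is an assumption for this sketch; the paper's model uses an estimated point spread function):

```python
import numpy as np

def project_data_consistency(u, lr):
    """Project an HR estimate u onto the convex set of images whose
    2x2 block averages equal the observed LR image: add to each block
    exactly the correction that makes its average match."""
    h, w = lr.shape
    blocks = u.reshape(h, 2, w, 2).mean(axis=(1, 3))
    correction = np.repeat(np.repeat(lr - blocks, 2, axis=0), 2, axis=1)
    return u + correction

lr = np.array([[10.0, 20.0],
               [30.0, 40.0]])
u0 = np.zeros((4, 4))                        # initial HR guess
u1 = project_data_consistency(u0, lr)
down = u1.reshape(2, 2, 2, 2).mean(axis=(1, 3))
print(np.allclose(down, lr))  # True
```

    A full POCS reconstruction alternates such projections with projections onto other convex sets (amplitude bounds, smoothness priors), which is where the prior knowledge mentioned above enters.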

  17. Improved cancer diagnostics by different image processing techniques on OCT images

    NASA Astrophysics Data System (ADS)

    Kanawade, Rajesh; Lengenfelder, Benjamin; Marini Menezes, Tassiana; Hohmann, Martin; Kopfinger, Stefan; Hohmann, Tim; Grabiec, Urszula; Klämpfl, Florian; Gonzales Menezes, Jean; Waldner, Maximilian; Schmidt, Michael

    2015-07-01

    Optical coherence tomography (OCT) is a promising non-invasive, high-resolution imaging modality that can be used for cancer diagnosis and therapeutic assessment. However, speckle noise makes detection of cancer boundaries and image segmentation problematic and unreliable. Therefore, to improve image analysis for precise detection of cancer borders, the performance of different image processing algorithms, such as the mean, median, and hybrid median filters and the rotational kernel transformation (RKT), is investigated. This is done on OCT images acquired ex vivo from human cancerous mucosa and in vitro from cultivated tumour cells applied to organotypic hippocampal slice cultures. The preliminary results confirm that the border between healthy tissue and cancerous lesions can be identified precisely; the results are verified with fluorescence microscopy. This research can improve cancer diagnosis and the detection of borders between healthy and cancerous tissue, and could thereby reduce the number of biopsies required during screening endoscopy by providing better guidance to the physician.
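
    As one example of the compared filters, a plain median filter (the simplest of the four) can be sketched in a few lines; the window size and the toy "speckled" image below are illustrative only:

    ```python
    import numpy as np

    def median_filter(img, k=3):
        """k x k median filter: a common first step for suppressing speckle noise."""
        pad = k // 2
        padded = np.pad(img, pad, mode='edge')
        out = np.empty_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.median(padded[i:i + k, j:j + k])
        return out

    # A constant region with one speckle-like outlier: the median removes it.
    img = np.full((5, 5), 10.0)
    img[2, 2] = 200.0
    clean = median_filter(img)
    ```

    Unlike a mean filter, the median suppresses the outlier without smearing it into the neighbouring pixels, which is why median-type filters preserve edges better in speckled images.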

  18. An improved fusion algorithm for infrared and visible images based on multi-scale transform

    NASA Astrophysics Data System (ADS)

    Li, He; Liu, Lei; Huang, Wei; Yue, Chao

    2016-01-01

    In this paper, an improved fusion algorithm for infrared and visible images based on multi-scale transform is proposed. First, the Morphology-Hat transform is applied to the infrared image and the visible image separately. Then the two images are decomposed into high-frequency and low-frequency images by the contourlet transform (CT). The fusion strategy for the high-frequency images is based on mean gradient, and the fusion strategy for the low-frequency images is based on principal component analysis (PCA). Finally, the fused image is obtained by applying the inverse contourlet transform (ICT). The experiments and results demonstrate that the proposed method significantly improves image fusion performance, retaining notable target information and high contrast while preserving rich detail.
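
    The PCA fusion rule for the low-frequency sub-bands can be sketched as follows: the fusion weights come from the leading eigenvector of the two bands' covariance matrix. The contourlet decomposition and the high-frequency mean-gradient rule are omitted, and the two bands below are random stand-ins:

    ```python
    import numpy as np

    def pca_fuse(a, b):
        """Fuse two low-frequency bands with PCA weights: the weights are the
        normalized components of the leading eigenvector of the 2x2 covariance."""
        data = np.vstack([a.ravel(), b.ravel()])
        vals, vecs = np.linalg.eigh(np.cov(data))
        v = np.abs(vecs[:, np.argmax(vals)])   # leading eigenvector
        w = v / v.sum()                        # normalize to fusion weights
        return w[0] * a + w[1] * b, w

    rng = np.random.default_rng(0)
    ir_low = rng.normal(5.0, 2.0, (16, 16))    # stand-in IR low-frequency band
    vis_low = rng.normal(5.0, 0.5, (16, 16))   # stand-in visible low-frequency band
    fused, w = pca_fuse(ir_low, vis_low)
    ```

    The band with the larger variance (here the IR stand-in) dominates the principal component and therefore receives the larger weight.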

  19. Institutional Image: How to Define, Improve, Market It.

    ERIC Educational Resources Information Center

    Topor, Robert S.

    Advice for colleges on how to identify, develop, and communicate a positive image for the institution is offered in this handbook. The use of market research techniques to measure image is discussed along with advice on how to improve an image so that it contributes to a unified marketing plan. The first objective is to create and communicate some…

  20. Improving image segmentation by learning region affinities

    SciTech Connect

    Prasad, Lakshman; Yang, Xingwei; Latecki, Longin J

    2010-11-03

    We utilize the context information of other regions in hierarchical image segmentation to learn new region affinities. It is well known that a single quantization of an image space is highly unlikely to be optimal for all categories; each quantization level has its own benefits. Therefore, we utilize the hierarchical information among different quantizations as well as the spatial proximity of their regions. The proposed affinity learning takes into account higher-order relations among image regions, both local and long-range, making it robust to instabilities and errors in the original pairwise region affinities. Once the learnt affinities are obtained, we use a standard image segmentation algorithm to obtain the final segmentation. Moreover, the learnt affinities can be naturally utilized in interactive segmentation. Experimental results on the Berkeley Segmentation Dataset and the MSRC Object Recognition Dataset are comparable to, and in some respects better than, state-of-the-art methods.

  1. Target identification by image analysis.

    PubMed

    Fetz, V; Prochnow, H; Brönstrup, M; Sasse, F

    2016-05-01

    Covering: 1997 to the end of 2015. Each biologically active compound induces phenotypic changes in target cells that are characteristic of its mode of action. These phenotypic alterations can be directly observed under the microscope or made visible by labelling structural elements or selected proteins of the cells with dyes. Comparing the cellular phenotype induced by a compound of interest with the phenotypes of reference compounds with known cellular targets allows its mode of action to be predicted. While this approach has been successfully applied to the characterization of natural products based on visual inspection of images, recent studies have used automated microscopy and analysis software to increase speed and to reduce subjective interpretation. In this review, we give a general outline of the workflow for manual and automated image analysis, and we highlight natural products whose bacterial and eukaryotic targets could be identified through such approaches. PMID:26777141

  2. Uncooled thermal imaging and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Shiyun; Chang, Benkang; Yu, Chunyu; Zhang, Junju; Sun, Lianjun

    2006-09-01

    A thermal imager converts differences in temperature into differences in electrical signal level, so it can be applied in medicine, for example to estimate blood flow speed and vessel location [1] and to assess pain [2]. As uncooled focal plane array (UFPA) technology matures, some simple medical functions can be performed with an uncooled thermal imager, for example rapid fever screening, as during the SARS outbreak. This requires stable imaging performance and sufficiently high spatial and temperature resolution. Among all performance parameters, noise equivalent temperature difference (NETD) is often used as the criterion of overall performance. The 320 x 240 α-Si microbolometer UFPA is now widely applied for its stable performance and sensitive response. In this paper, the NETD of a UFPA and the relation between NETD and temperature are investigated; several vital parameters that affect NETD are listed and a general formula is presented. Finally, images from this kind of thermal imager are analyzed with the aim of detecting persons with fever. An applied thermal image intensification method is introduced.
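
    The role of NETD can be illustrated with its simplest definition: the detector's temporal noise divided by its response slope per kelvin of scene temperature. The numbers below are invented for illustration and are not measured values for any real UFPA, nor the paper's general formula:

    ```python
    # NETD = rms noise / responsivity (signal change per kelvin of scene temperature).
    # A smaller NETD means a smaller temperature difference is resolvable above noise.
    v_noise = 0.2e-3        # rms temporal noise, volts (illustrative)
    responsivity = 4.0e-3   # signal slope dV/dT, volts per kelvin (illustrative)
    netd = v_noise / responsivity
    print(f"NETD = {netd * 1000:.0f} mK")   # -> NETD = 50 mK
    ```

    For fever screening, the NETD must be well below the roughly 1 K skin-temperature difference that separates febrile from normal subjects.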

  3. Improved microgrid arrangement for integrated imaging polarimeters.

    PubMed

    LeMaster, Daniel A; Hirakawa, Keigo

    2014-04-01

    For almost 20 years, microgrid polarimetric imaging systems have been built using a 2×2 repeating pattern of polarization analyzers. In this Letter, we show that superior spatial resolution is achieved over this 2×2 case when the analyzers are arranged in a 2×4 repeating pattern. This unconventional result, in which a more distributed sampling pattern results in finer spatial resolution, is also achieved without affecting the conditioning of the polarimetric data-reduction matrix. Proof is provided theoretically and through Stokes image reconstruction of synthesized data. PMID:24686611
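
    The idea of a repeating analyzer unit cell can be sketched with the conventional 2x2 cell of 0/45/90/135 degree analyzers; the Letter's improved 2x4 arrangement is not given in the abstract, so only the generic tiling mechanism is shown:

    ```python
    import numpy as np

    # Conventional 2x2 microgrid unit cell of polarization analyzer angles (degrees).
    cell = np.array([[0, 45],
                     [90, 135]])

    def tile(cell, rows, cols):
        """Tile a unit cell of analyzer angles across a rows x cols focal plane array."""
        r, c = cell.shape
        return np.tile(cell, (rows // r + 1, cols // c + 1))[:rows, :cols]

    fpa = tile(cell, 6, 8)
    ```

    A 2x4 cell would simply replace `cell` with a 2x4 array of angles; the Letter's point is that the choice of arrangement, not just the set of angles, determines the achievable spatial resolution.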

  4. Planning applications in image analysis

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  5. Image enhancement algorithm based on improved lateral inhibition network

    NASA Astrophysics Data System (ADS)

    Yun, Haijiao; Wu, Zhiyong; Wang, Guanjun; Tong, Gang; Yang, Hua

    2016-05-01

    Images captured by cameras often contain substantial noise and blurred details. To solve this problem, we propose a novel image enhancement algorithm combined with an improved lateral inhibition network. First, we build a mathematical model of a lateral inhibition network based on biological visual perception; this model enhances contrast and improves edge definition in images. Second, we propose that the adaptive lateral inhibition coefficient follow an exponential distribution, making the model more flexible and more universal. Finally, we add median filtering and a compensation measure factor to build a framework with high-pass filtering functionality, eliminating image noise and improving edge contrast, and thereby addressing blurred image edges. Our experimental results show that our algorithm eliminates noise and blurring and enhances the details of visible and infrared images.
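
    A minimal discrete lateral-inhibition model (with a fixed inhibition coefficient, not the paper's adaptive exponentially distributed one) subtracts a fraction of each pixel's neighbourhood mean, producing the characteristic overshoot at edges:

    ```python
    import numpy as np

    def lateral_inhibition(img, k=0.6):
        """Subtract a fraction k of the 8-neighbour mean from each pixel,
        a discrete lateral-inhibition model that sharpens edges."""
        pad = np.pad(img.astype(float), 1, mode='edge')
        surround = np.zeros_like(img, dtype=float)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di or dj:
                    surround += pad[1 + di:1 + di + img.shape[0],
                                    1 + dj:1 + dj + img.shape[1]]
        return img - k * surround / 8.0

    # A step edge: inhibition overshoots at the boundary (a Mach-band-like effect).
    step = np.repeat([[0.0] * 4 + [1.0] * 4], 8, axis=0)
    out = lateral_inhibition(step)
    ```

    Pixels just inside the bright side of the edge end up brighter than the flat bright region, and pixels just inside the dark side end up darker, which is how the network enhances edge contrast.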

  6. Quantitative image analysis of celiac disease

    PubMed Central

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-01-01

    We outline the use of quantitative techniques that are currently employed for the analysis of celiac disease. Image processing techniques can be useful to statistically analyze the pixel data of endoscopic images that are acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions of focus for the development of methodology for diagnosis and treatment of this disease are suggested. It is evident that there remain broad areas where there is potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients. PMID:25759524

  7. Quantitative image analysis of celiac disease.

    PubMed

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-03-01

    We outline the use of quantitative techniques that are currently employed for the analysis of celiac disease. Image processing techniques can be useful to statistically analyze the pixel data of endoscopic images that are acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions of focus for the development of methodology for diagnosis and treatment of this disease are suggested. It is evident that there remain broad areas where there is potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients.

  8. Characterisation of mycelial morphology using image analysis.

    PubMed

    Paul, G C; Thomas, C R

    1998-01-01

    Image analysis is now well established in quantifying and characterising microorganisms from fermentation samples. In filamentous fermentations it has become an invaluable tool for characterising complex mycelial morphologies, although it is not yet used extensively in industry. Recent method developments include characterisation of spore germination from the inoculum stage and of the subsequent dispersed and pellet forms. Further methods include characterising vacuolation and simple structural differentiation of mycelia, also from submerged cultures. Image analysis can provide better understanding of the development of mycelial morphology, of the physiological states of the microorganisms in the fermenter, and of their interactions with the fermentation conditions. This understanding should lead to improved design and operation of mycelial fermentations. PMID:9468800

  9. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the computational time required for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with elements from the differential evolution and artificial bee colony algorithms. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
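
    The objective being optimized is typically Otsu's between-class variance over a set of thresholds. The sketch below evaluates it on a toy histogram and finds the best single threshold by the exhaustive search that metaheuristics like the bat algorithm are meant to replace at higher threshold counts:

    ```python
    import numpy as np

    def between_class_variance(hist, thresholds):
        """Otsu's between-class variance for a sorted list of thresholds;
        multilevel thresholding maximizes this objective."""
        p = hist / hist.sum()
        levels = np.arange(len(hist))
        mu_total = (p * levels).sum()
        edges = [0] + list(thresholds) + [len(hist)]
        var = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()                       # class probability
            if w > 0:
                mu = (p[lo:hi] * levels[lo:hi]).sum() / w   # class mean
                var += w * (mu - mu_total) ** 2
        return var

    # Tiny 8-level histogram with two well-separated modes: exhaustive search
    # over all single thresholds recovers a threshold between the modes.
    hist = np.array([10, 30, 10, 0, 0, 10, 30, 10], dtype=float)
    best = max(range(1, 8), key=lambda t: between_class_variance(hist, [t]))
    ```

    With k thresholds over L gray levels the exhaustive search costs O(L^k) evaluations of this objective, which is exactly the exponential growth that motivates swarm-based search.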

  10. Scanning probe image wizard: a toolbox for automated scanning probe microscopy data analysis.

    PubMed

    Stirling, Julian; Woolley, Richard A J; Moriarty, Philip

    2013-11-01

    We describe SPIW (scanning probe image wizard), a new image processing toolbox for SPM (scanning probe microscope) images. SPIW can be used to automate many aspects of SPM data analysis, even for images with surface contamination and step edges present. Specialised routines are available for images with atomic or molecular resolution to improve image visualisation and generate statistical data on surface structure.

  11. Scanning probe image wizard: A toolbox for automated scanning probe microscopy data analysis

    NASA Astrophysics Data System (ADS)

    Stirling, Julian; Woolley, Richard A. J.; Moriarty, Philip

    2013-11-01

    We describe SPIW (scanning probe image wizard), a new image processing toolbox for SPM (scanning probe microscope) images. SPIW can be used to automate many aspects of SPM data analysis, even for images with surface contamination and step edges present. Specialised routines are available for images with atomic or molecular resolution to improve image visualisation and generate statistical data on surface structure.

  12. A linear mixture analysis-based compression for hyperspectral image analysis

    SciTech Connect

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly for target detection and classification. Unlike most compression techniques, which deal directly with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis; on some occasions, it even improves analysis performance. Airborne visible infrared imaging spectrometer (AVIRIS) data experiments demonstrate that the technique can effectively detect and classify targets while achieving very high compression ratios.
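
    The flavour of the abundance estimation can be shown with the sum-to-one constrained least-squares solution; the paper's fully constrained version additionally enforces non-negativity, which has no closed form and is omitted here. The endmember spectra below are made up:

    ```python
    import numpy as np

    def scls_unmix(M, x):
        """Sum-to-one constrained least-squares abundance estimate for pixel
        spectrum x given endmember matrix M (one endmember per column)."""
        G_inv = np.linalg.inv(M.T @ M)
        a_ls = G_inv @ M.T @ x                       # unconstrained least squares
        z = G_inv @ np.ones(M.shape[1])
        # Lagrange correction pulling the solution onto the sum-to-one plane.
        return a_ls - z * (a_ls.sum() - 1.0) / z.sum()

    # Two made-up endmember spectra (columns) and a pixel that is a 70/30 mix.
    M = np.array([[1.0, 0.2],
                  [0.5, 0.9],
                  [0.1, 0.8]])
    x = M @ np.array([0.7, 0.3])
    abundances = scls_unmix(M, x)
    ```

    The abundance vector per pixel replaces the full spectrum, which is where the compression comes from: a scene with hundreds of bands reduces to one fractional image per candidate endmember.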

  13. IMPROVEMENTS IN CODED APERTURE THERMAL NEUTRON IMAGING.

    SciTech Connect

    VANIER,P.E.

    2003-08-03

    A new thermal neutron imaging system has been constructed, based on a 20-cm x 17-cm He-3 position-sensitive detector with spatial resolution better than 1 mm. New compact custom-designed position-decoding electronics are employed, as well as high-precision cadmium masks with Modified Uniformly Redundant Array patterns. Fast Fourier Transform algorithms are incorporated into the deconvolution software to provide rapid conversion of shadowgrams into real images. The system demonstrates the principles for locating sources of thermal neutrons by a stand-off technique, as well as for visualizing the shapes of nearby sources. The data acquisition time could potentially be reduced by two orders of magnitude by building larger detectors.
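
    The FFT-based decoding of a shadowgram can be sketched in 1D, with a perfect-difference-set mask standing in for the 2D Modified Uniformly Redundant Array; real systems correlate with a decoding array rather than dividing spectra, but the toy below shows the recoverability idea:

    ```python
    import numpy as np

    # The detector records the source circularly convolved with the mask.
    # mask has ones at {0, 1, 3}, a (7, 3, 1) perfect difference set, whose
    # flat off-peak autocorrelation gives it a zero-free Fourier spectrum.
    mask = np.array([1, 1, 0, 1, 0, 0, 0], dtype=float)
    source = np.array([0, 0, 5, 0, 0, 2, 0], dtype=float)

    shadowgram = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(mask)))

    # Deconvolve in the Fourier domain (safe here: the mask spectrum has no zeros).
    recovered = np.real(np.fft.ifft(np.fft.fft(shadowgram) / np.fft.fft(mask)))
    ```

    Because every source pixel illuminates many detector pixels through the mask, the open fraction (and hence throughput) is far higher than for a single pinhole, while the FFT makes decoding fast.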

  14. Imaging system design for improved information capacity

    NASA Technical Reports Server (NTRS)

    Fales, C. L.; Huck, F. O.; Samms, R. W.

    1984-01-01

    Shannon's theory of information for communication channels is used to assess the performance of line-scan and sensor-array imaging systems and to optimize the design trade-offs involving sensitivity, spatial response, and sampling intervals. Formulations and computational evaluations account for spatial responses typical of line-scan and sensor-array mechanisms, lens diffraction and transmittance shading, defocus blur, and square and hexagonal sampling lattices.

  15. Improved Calibration Shows Images True Colors

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.

  16. Automated image analysis of uterine cervical images

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality among women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and, most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of the screening device. Second, the anatomy of the uterine cervix is analyzed in terms of the cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed, and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores indicating the different levels of acetowhite. The system has been evaluated on data from 40 human subjects and demonstrates high correlation with experts' annotations.

  17. RAMAN spectroscopy imaging improves the diagnosis of papillary thyroid carcinoma

    PubMed Central

    Rau, Julietta V.; Graziani, Valerio; Fosca, Marco; Taffon, Chiara; Rocchia, Massimiliano; Crucitti, Pierfilippo; Pozzilli, Paolo; Onetti Muda, Andrea; Caricato, Marco; Crescenzi, Anna

    2016-01-01

    Recent investigations strongly suggest that Raman spectroscopy (RS) can be used as a clinical tool in cancer diagnosis to improve diagnostic accuracy. In this study, we evaluated the efficiency of Raman imaging microscopy to discriminate between healthy and neoplastic thyroid tissue, by analyzing main variants of Papillary Thyroid Carcinoma (PTC), the most common type of thyroid cancer. We performed Raman imaging of large tissue areas (from 100 × 100 μm2 up to 1 × 1 mm2), collecting 38 maps containing about 9000 Raman spectra. Multivariate statistical methods, including Linear Discriminant Analysis (LDA), were applied to translate Raman spectra differences between healthy and PTC tissues into diagnostically useful information for a reliable tissue classification. Our study is the first demonstration of specific biochemical features of the PTC profile, characterized by significant presence of carotenoids with respect to the healthy tissue. Moreover, this is the first evidence of Raman spectra differentiation between classical and follicular variant of PTC, discriminated by LDA with high efficiency. The combined histological and Raman microscopy analyses allow clear-cut integration of morphological and biochemical observations, with dramatic improvement of efficiency and reliability in the differential diagnosis of neoplastic thyroid nodules, paving the way to integrative findings for tumorigenesis and novel therapeutic strategies. PMID:27725756

  18. RAMAN spectroscopy imaging improves the diagnosis of papillary thyroid carcinoma

    NASA Astrophysics Data System (ADS)

    Rau, Julietta V.; Graziani, Valerio; Fosca, Marco; Taffon, Chiara; Rocchia, Massimiliano; Crucitti, Pierfilippo; Pozzilli, Paolo; Onetti Muda, Andrea; Caricato, Marco; Crescenzi, Anna

    2016-10-01

    Recent investigations strongly suggest that Raman spectroscopy (RS) can be used as a clinical tool in cancer diagnosis to improve diagnostic accuracy. In this study, we evaluated the efficiency of Raman imaging microscopy to discriminate between healthy and neoplastic thyroid tissue, by analyzing main variants of Papillary Thyroid Carcinoma (PTC), the most common type of thyroid cancer. We performed Raman imaging of large tissue areas (from 100 × 100 μm2 up to 1 × 1 mm2), collecting 38 maps containing about 9000 Raman spectra. Multivariate statistical methods, including Linear Discriminant Analysis (LDA), were applied to translate Raman spectra differences between healthy and PTC tissues into diagnostically useful information for a reliable tissue classification. Our study is the first demonstration of specific biochemical features of the PTC profile, characterized by significant presence of carotenoids with respect to the healthy tissue. Moreover, this is the first evidence of Raman spectra differentiation between classical and follicular variant of PTC, discriminated by LDA with high efficiency. The combined histological and Raman microscopy analyses allow clear-cut integration of morphological and biochemical observations, with dramatic improvement of efficiency and reliability in the differential diagnosis of neoplastic thyroid nodules, paving the way to integrative findings for tumorigenesis and novel therapeutic strategies.
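
    The LDA step can be sketched with a two-class Fisher discriminant on toy two-feature "spectra"; real Raman spectra are high-dimensional, and the paper's preprocessing and multi-class handling are omitted:

    ```python
    import numpy as np

    # Toy two-feature stand-ins for healthy and tumour spectra.
    rng = np.random.default_rng(1)
    healthy = rng.normal([1.0, 0.0], 0.3, (50, 2))
    tumour = rng.normal([0.0, 1.0], 0.3, (50, 2))

    # Fisher discriminant: project onto w = Sw^-1 (mu_healthy - mu_tumour).
    Sw = np.cov(healthy.T) + np.cov(tumour.T)            # within-class scatter
    w = np.linalg.solve(Sw, healthy.mean(0) - tumour.mean(0))
    threshold = ((healthy @ w).mean() + (tumour @ w).mean()) / 2

    pred_healthy = healthy @ w > threshold
    pred_tumour = tumour @ w > threshold
    ```

    The projection direction maximizes between-class separation relative to within-class spread, which is why LDA translates subtle, correlated spectral differences into a single discriminative score.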

  19. Contrast and harmonic imaging improves accuracy and efficiency of novice readers for dobutamine stress echocardiography

    NASA Technical Reports Server (NTRS)

    Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; Klein, Allan L.; Thomas, James D.

    2002-01-01

    BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation and, therefore, improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance the accuracy and efficiency of an inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHz) and harmonic (1.7 and 3.5 MHz) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously as a bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader with the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for the fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader.

  20. Improved visibility of colorectal flat tumors using image-enhanced endoscopy.

    PubMed

    Oka, Shiro; Tamai, Naoto; Ikematsu, Hiroaki; Kawamura, Takuji; Sawaya, Manabu; Takeuchi, Yoji; Uraoka, Toshio; Moriyama, Tomohiko; Kawano, Hiroshi; Matsuda, Takahisa

    2015-04-01

    Colonoscopy is considered the gold standard for detecting colorectal tumors; however, conventional colonoscopy can miss flat tumors. We aimed to determine whether visualization of colorectal flat lesions was improved by autofluorescence imaging and narrow-band imaging image analysis in conjunction with a new endoscopy system. Eight physicians compared autofluorescent, narrow-band, and chromoendoscopy images to 30 corresponding white-light images of flat tumors. Physicians rated tumor visibility from each image set as follows: +2 (improved), +1 (somewhat improved), 0 (equivalent to white light), -1 (somewhat decreased), and -2 (decreased). The eight scores for each image were totalled and evaluated. Interobserver agreement was also examined. Autofluorescent, narrow-band, and chromoendoscopy images showed improvements of 63.3% (19/30), 6.7% (2/30), and 73.3% (22/30), respectively, with no instances of decreased visibility. Autofluorescence scores were generally greater than narrow-band scores. Interobserver agreement was 0.65 for autofluorescence, 0.80 for narrow-band imaging, and 0.70 for chromoendoscopy. In conclusion, using a new endoscopy system in conjunction with autofluorescent imaging improved visibility of colorectal flat tumors, equivalent to the visibility achieved using chromoendoscopy.

  1. Improving the Quality of Imaging in the Emergency Department.

    PubMed

    Blackmore, C Craig; Castro, Alexandra

    2015-12-01

    Imaging is critical for the care of emergency department (ED) patients. However, much of the imaging performed for acute care today is overutilization, creating substantial cost without significant benefit. Further, the value of imaging is not easily defined, as imaging only affects outcomes indirectly, through interaction with treatment. Improving the quality, including appropriateness, of emergency imaging requires understanding of how imaging contributes to patient care. The six-tier efficacy hierarchy of Fryback and Thornbury enables understanding of the value of imaging on multiple levels, ranging from technical efficacy to medical decision-making and higher-level patient and societal outcomes. The imaging efficacy hierarchy also allows definition of imaging quality through the Institute of Medicine (IOM)'s quality domains of safety, effectiveness, patient-centeredness, timeliness, efficiency, and equitability and provides a foundation for quality improvement. In this article, the authors elucidate the Fryback and Thornbury framework to define the value of imaging in the ED and to relate emergency imaging to the IOM quality domains.

  2. Automated Microarray Image Analysis Toolbox for MATLAB

    SciTech Connect

    White, Amanda M.; Daly, Don S.; Willse, Alan R.; Protic, Miroslava; Chandler, Darrell P.

    2005-09-01

    The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize the analysis of sets of microarray images. This tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to provide intensity estimates and to modify them easily if desired.

  3. Improving a DWT-based compression algorithm for high image-quality requirement of satellite images

    NASA Astrophysics Data System (ADS)

    Thiebaut, Carole; Latry, Christophe; Camarero, Roberto; Cazanave, Grégory

    2011-10-01

    Past and current optical Earth observation systems designed by CNES use fixed-rate data compression performed at high rate in a pushbroom mode (also called scan-based mode). This process generates fixed-length data for the mass memory, and the data downlink is likewise performed at a fixed rate. Because of on-board memory limitations and the need for high-data-rate processing, the rate allocation procedure is performed over a small image area called a "segment". For both the PLEIADES compression algorithm and the CCSDS Image Data Compression recommendation, this rate allocation is realised by truncating, to the desired rate, a hierarchical bitstream of coded and quantized wavelet coefficients for each segment. Because the quantisation induced by truncating the bit-plane description is the same for the whole segment, some parts of the segment have poor image quality. These artefacts generally occur in low-energy areas within a segment with a higher overall level of energy. In order to correct these areas locally, CNES has studied "exceptional processing" targeted at DWT-based compression algorithms. According to a criterion computed for each part of the segment (called a block), the wavelet coefficients can be amplified before bit-plane encoding. As with usual region-of-interest handling, these amplified coefficients are processed earlier by the encoder than in the nominal case (without exceptional processing). The image quality improvement brought by the exceptional processing has been confirmed by visual image analysis and fidelity criteria. The complexity of the proposed improvement for on-board application has also been analysed.
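
    The interaction between bit-plane truncation and coefficient amplification can be sketched as follows. The amplification factor and coefficients are invented, and real codecs truncate an embedded entropy-coded stream rather than raw integers, so this is only the rate-allocation idea:

    ```python
    import numpy as np

    def truncate_bitplanes(coeffs, keep_planes):
        """Keep only the top `keep_planes` bit planes of quantized integer
        coefficients, mimicking rate allocation by bitstream truncation."""
        top = int(np.abs(coeffs).max()).bit_length()
        drop = max(top - keep_planes, 0)
        return np.sign(coeffs) * ((np.abs(coeffs).astype(int) >> drop) << drop)

    # A low-energy block loses everything at a coarse truncation; amplifying it
    # first (the "exceptional processing" idea) carries it through truncation.
    block = np.array([3, -2, 1, 100])          # quantized wavelet coefficients
    gain = np.array([8, 8, 8, 1])              # invented per-coefficient boost
    plain = truncate_bitplanes(block, 3)
    boosted = truncate_bitplanes(block * gain, 3) / gain
    ```

    Without the boost, the three small coefficients quantize to zero; with it, they survive the same three-plane budget, at the cost of the large coefficient's precision elsewhere in the budget.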

  4. Advances in Image Pre-Processing to Improve Automated 3d Reconstruction

    NASA Astrophysics Data System (ADS)

    Ballabeni, A.; Apollonio, F. I.; Gaiani, M.; Remondino, F.

    2015-02-01

    Tools and algorithms for automated image processing and 3D reconstruction have become more and more available, making it possible to process any dataset of unoriented and markerless images. Typically, dense 3D point clouds (or textured 3D polygonal models) are produced in reasonable processing time. In this paper, we evaluate how radiometric pre-processing of image datasets (particularly in RAW format) can help improve the performance of state-of-the-art automated image processing tools. Besides a review of common pre-processing methods, an efficient pipeline based on color enhancement, image denoising, RGB-to-gray conversion and image content enrichment is presented. The tests performed, only partly reported for reasons of space, demonstrate how effective image pre-processing that considers the entire dataset under analysis can improve the automated orientation procedure and dense 3D point cloud reconstruction, even for poorly textured scenes.
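
    Two of the pipeline stages, RGB-to-gray conversion and denoising, can be sketched minimally. The Rec. 601 luminance weights are a common choice but not necessarily the paper's, and the box blur is only a stand-in for its denoising method:

    ```python
    import numpy as np

    def rgb_to_gray(img):
        """Luminance-weighted RGB-to-gray conversion (Rec. 601 weights)."""
        return img @ np.array([0.299, 0.587, 0.114])

    def denoise(img, k=3):
        """Simple k x k box-blur denoising, a stand-in for stronger methods."""
        pad = k // 2
        padded = np.pad(img, pad, mode='edge')
        out = np.zeros_like(img, dtype=float)
        for di in range(k):
            for dj in range(k):
                out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
        return out / (k * k)

    rgb = np.ones((4, 4, 3)) * np.array([0.2, 0.5, 0.3])   # toy flat RGB image
    gray = denoise(rgb_to_gray(rgb))
    ```

    The ordering matters in practice: denoising before gray conversion or feature extraction reduces spurious keypoints, which is exactly the kind of effect the paper measures on the orientation step.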

  5. Improvement of fringe quality at LDA measuring volume using compact two hololens imaging system

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhijit; Nirala, A. K.

    2015-03-01

    Design, analysis and construction of an LDA optical setup using a conventional as well as a compact two-hololens imaging system have been performed. Fringes formed at the measurement volume by both imaging systems have been recorded. After experimentally analyzing these fringes, it is found that the fringes obtained using the compact two-hololens imaging system are improved both qualitatively and quantitatively compared with those obtained using the conventional imaging system. Hence it is concluded that the compact two-hololens imaging system is a better choice than the conventional one for an LDA optical setup.

  6. Improved Imaging With Laser-Induced Eddy Currents

    NASA Technical Reports Server (NTRS)

    Chern, Engmin J.

    1993-01-01

    System tests specimen of material nondestructively by laser-induced eddy-current imaging improved by changing method of processing of eddy-current signal. Changes in impedance of eddy-current coil measured in absolute instead of relative units.

  7. Statistical analysis of biophoton image

    NASA Astrophysics Data System (ADS)

    Wang, Susheng

    1998-08-01

    A photon-count imaging system has been developed to obtain ultra-weak bioluminescence images. Photon images of some plants, animals and a human hand have been detected. The biophoton image differs from a usual image. In this paper, three characteristics of the biophoton image are analyzed. On the basis of these characteristics, the detection probability and detection limit of the photon-count imaging system, and the detection limit of biophoton imaging, are discussed. This research provides a scientific basis for experiment design and photon-image processing.

  8. Computer analysis of mammography phantom images (CAMPI)

    NASA Astrophysics Data System (ADS)

    Chakraborty, Dev P.

    1997-05-01

    Computer analysis of mammography phantom images (CAMPI) is a method for objective and precise measurements of phantom image quality in mammography. This investigation applied CAMPI methodology to the Fischer Mammotest Stereotactic Digital Biopsy machine. Images of an American College of Radiology phantom centered on the largest two microcalcification groups were obtained on this machine under a variety of x-ray conditions. Analyses of the images revealed that the precise behavior of the CAMPI measures could be understood from basic imaging physics principles. We conclude that CAMPI is sensitive to subtle image quality changes and can perform accurate evaluations of images, especially of directly acquired digital images.

  9. Difference Image Analysis of Galactic Microlensing. I. Data Analysis

    SciTech Connect

    Alcock, C.; Allsman, R. A.; Alves, D.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K.

    1999-08-20

    This is a preliminary report on the application of Difference Image Analysis (DIA) to Galactic bulge images. The aim of this analysis is to increase the sensitivity of the detection of gravitational microlensing. We discuss how the DIA technique simplifies the process of discovering microlensing events by detecting only objects that have variable flux. We illustrate how the DIA technique is not limited to detection of so-called "pixel lensing" events but can also be used to improve photometry for classical microlensing events by removing the effects of blending. We present a method whereby DIA can be used to reveal the true unblended colors, positions, and light curves of microlensing events. We discuss the need for a technique to obtain accurate microlensing timescales from blended sources and present a possible solution to this problem using the existing Hubble Space Telescope color-magnitude diagrams of the Galactic bulge and LMC. The use of such a solution with both classical and pixel microlensing searches is discussed. We show that one of the major causes of systematic noise in DIA is differential refraction. A technique for removing this systematic error by effectively registering images to a common air mass is presented. Improvements to commonly used image differencing techniques are discussed. (c) 1999 The American Astronomical Society.

  10. Photoacoustic imaging with rotational compounding for improved signal detection

    NASA Astrophysics Data System (ADS)

    Forbrich, A.; Heinmiller, A.; Jose, J.; Needles, A.; Hirson, D.

    2015-03-01

    Photoacoustic microscopy with linear array transducers enables fast two-dimensional, cross-sectional photoacoustic imaging. Unfortunately, most ultrasound transducers are sensitive only to a very narrow angular acceptance range and preferentially detect signals along the main axis of the transducer. This often prevents photoacoustic microscopy from detecting blood vessels, which can extend in any direction. Rotational compounded photoacoustic imaging is introduced to overcome the angular dependency of detecting acoustic signals with linear array transducers. An integrated system is designed to control the image acquisition using a linear array transducer, a motorized rotational stage, and a motorized lateral stage. Images acquired at multiple angular positions are combined to form a rotational compounded image. We found that the signal-to-noise ratio improved, while the sidelobe and reverberation artifacts were substantially reduced. Furthermore, the rotational compounded images of excised kidneys and hindlimb tumors of mice showed more structural information compared with any single image collected.
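    The compounding idea itself can be illustrated with a short numpy sketch: rotate each acquisition back into a common frame, then average. For simplicity this sketch only handles rotations that are multiples of 90 degrees via np.rot90, whereas the actual motorized stage supports arbitrary angles:

```python
import numpy as np

def rotational_compound(acquisitions):
    """Combine images acquired at different rotation angles by rotating
    each back to a common frame and averaging. `acquisitions` is a list of
    (image, angle_deg) pairs; angles must be multiples of 90 degrees in
    this sketch (a real system would use arbitrary-angle resampling)."""
    aligned = [np.rot90(img, k=-(angle // 90)) for img, angle in acquisitions]
    return np.mean(aligned, axis=0)
```

    Averaging aligned acquisitions suppresses angle-dependent artifacts (sidelobes, reverberation) that do not rotate with the scene, which is the mechanism behind the reported SNR gain.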

  11. High resolution PET breast imager with improved detection efficiency

    DOEpatents

    Majewski, Stanislaw

    2010-06-08

    A highly efficient PET breast imager for detecting lesions in the entire breast, including those located close to the patient's chest wall. The breast imager includes a ring of imaging modules surrounding the imaged breast. Each imaging module includes a slant imaging light guide inserted between a gamma radiation sensor and a photodetector. The slant light guide permits the gamma radiation sensors to be placed in close proximity to the skin of the chest wall, thereby extending the sensitive region of the imager to the base of the breast. Several types of photodetectors are proposed for use in the detector modules, with compact silicon photomultipliers as the preferred choice due to their compactness. The geometry of the detector heads and the arrangement of the detector ring significantly reduce dead regions, thereby improving detection efficiency for lesions located close to the chest wall.

  12. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data-compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
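    The building blocks described above can be sketched in pure Python. The run-length parsing and the two code families are standard; the adaptive parameter selection is omitted, and the Golomb code is shown in its power-of-two (Golomb-Rice) form:

```python
def golomb_rice_encode(n, k):
    """Golomb-Rice code (Golomb code with parameter m = 2**k) for a
    nonnegative integer n: unary-coded quotient, a '0' terminator,
    then the k-bit binary remainder."""
    q = n >> k
    r = n & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, '0{}b'.format(k)) if k else '')

def exp_golomb_encode(n):
    """Order-0 exponential-Golomb code for a nonnegative integer n:
    write n+1 in binary, prefixed by (length-1) zeros."""
    bits = bin(n + 1)[2:]
    return '0' * (len(bits) - 1) + bits

def run_length_parse(data):
    """Parse a sequence into (zero-run-length, nonzero-value) pairs, as in
    the run-length scheme above (trailing zeros are ignored in this sketch)."""
    pairs, run = [], 0
    for v in data:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    return pairs
```

    For example, `run_length_parse([0, 0, 5, 0, 3])` yields `[(2, 5), (1, 3)]`; the run lengths would then be sent through `exp_golomb_encode` and the nonzero values (after sign mapping, not shown) through `golomb_rice_encode`.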

  13. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

    Among all the existing segmentation techniques, the thresholding technique is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum-entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive searching strategy. As a population-based optimization algorithm, differential evolution (DE) uses a population of potential solutions and decision-making processes. It has shown considerable success in solving complex optimization problems within a reasonable time limit. Thus, applying this method to segmentation should be a good choice owing to its fast computation. In this paper, we first propose a new differential evolution algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of the already sampled regions. Then, we apply the new DE to the traditional Otsu method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with the EA-based thresholding methods, the proposed DE algorithm produces more effective and efficient results. It also shortens the computation time of the traditional Otsu method.
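    The combination of Otsu's between-class variance criterion with a DE search can be sketched as follows. This is a minimal canonical DE loop: the paper's exploration/exploitation balance strategy is not reproduced, and the population size and control parameters are arbitrary choices:

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu's between-class variance for a gray-level histogram split at
    the given thresholds (higher is better)."""
    levels = np.arange(len(hist))
    total = hist.sum()
    mu_total = (levels * hist).sum() / total
    edges = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum() / total
        if w > 0:
            mu = (levels[lo:hi] * hist[lo:hi]).sum() / hist[lo:hi].sum()
            var += w * (mu - mu_total) ** 2
    return var

def de_otsu(hist, n_thresh=1, pop=20, iters=100, f=0.7, cr=0.9, seed=0):
    """Minimal differential evolution search for Otsu thresholds."""
    rng = np.random.default_rng(seed)
    L = len(hist)
    x = rng.uniform(1, L - 1, size=(pop, n_thresh))
    fit = np.array([between_class_variance(hist, xi) for xi in x])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = x[rng.choice(pop, 3, replace=False)]
            # binomial crossover between the DE mutant and the current member
            trial = np.where(rng.random(n_thresh) < cr, a + f * (b - c), x[i])
            trial = np.clip(trial, 1, L - 1)
            tfit = between_class_variance(hist, trial)
            if tfit > fit[i]:  # greedy selection (maximization)
                x[i], fit[i] = trial, tfit
    return sorted(int(t) for t in x[np.argmax(fit)])
```

    The point of the DE search is that its cost grows roughly linearly with the number of thresholds, avoiding the exponential exhaustive scan.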

  14. Independent component analysis based filtering for penumbral imaging

    SciTech Connect

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-10-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, used as a preprocessing step before reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison to reconstruction without the noise-removal filter.
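    The shrinkage step is the standard soft-thresholding operator, sketched here with numpy; the ICA transform itself and the choice of threshold are omitted:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-thresholding (shrinkage): zero out coefficients with magnitude
    below t and shrink the rest toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```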

  15. An improved active imaging method for upgrading low-light-level image detection sensitivity

    NASA Astrophysics Data System (ADS)

    Tang, Hongying

    2013-09-01

    Active imaging is an essential tool for low-light-level imaging. However, it has some drawbacks, such as limited imaging range and lack of security. We optimize the imaging approach by casting a saw-tooth-wave auxiliary light signal over the sensor. Here, the auxiliary signal is superposed with a low-light-level signal that is too weak to be measured by the sensor on its own. After acquiring a superimposed image set over one saw-tooth-wave cycle, a low-light-level image estimate is obtained by applying a least-squares algorithm during data processing. This improved method not only allows active imaging to overcome the drawbacks mentioned above, but also provides a feasible way to improve low-light-level image detection sensitivity.
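    Under a simple additive model y_i = s + a_i + noise, where the a_i are the known saw-tooth auxiliary levels, the least-squares estimate of the weak static image s has a closed form (the mean residual). The sketch below assumes this hypothetical model, which the abstract does not spell out:

```python
import numpy as np

def estimate_weak_signal(frames, aux_levels):
    """Least-squares estimate of a weak static image s from frames
    y_i = s + a_i + noise, with a_i the known saw-tooth auxiliary light
    levels. Assumed additive model, not taken from the paper; the LS
    solution reduces to the mean of the residuals y_i - a_i."""
    frames = np.asarray(frames, dtype=float)
    aux = np.asarray(aux_levels, dtype=float).reshape(-1, 1, 1)
    return (frames - aux).mean(axis=0)
```

    Averaging over a full saw-tooth cycle is what lifts the weak signal above the sensor's single-frame detection floor in this picture.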

  16. Retinex Preprocessing for Improved Multi-Spectral Image Classification

    NASA Technical Reports Server (NTRS)

    Thompson, B.; Rahman, Z.; Park, S.

    2000-01-01

    The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides better within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, the classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. The principal advantage that the retinex offers is that for different lighting conditions classifications derived from the retinex-preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original
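    The core of the retinex family of algorithms is a log-ratio between the image and a smoothed version of itself; the numpy sketch below shows a single-scale variant with a separable Gaussian surround (the multi-scale retinex used for classification preprocessing combines several such scales):

```python
import numpy as np

def single_scale_retinex(img, sigma=2.0):
    """Single-scale retinex: log(I) - log(I * G_sigma), with the Gaussian
    surround built from np.convolve (illustrative single-channel sketch)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()  # normalized 1-D Gaussian kernel
    padded = np.pad(img, radius, mode='edge')
    # separable Gaussian blur: filter rows, then columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode='valid'), 1, padded)
    blur = np.apply_along_axis(lambda c: np.convolve(c, g, mode='valid'), 0, rows)
    # offset of 1 avoids log(0) for dark pixels
    return np.log(img + 1.0) - np.log(blur + 1.0)
```

    Because the surround cancels slowly varying illumination, the output depends mostly on local reflectance structure, which is why class signatures become less sensitive to diffuse-versus-direct lighting.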

  17. Resolution and quantitative accuracy improvements in ultrasound transmission imaging

    NASA Astrophysics Data System (ADS)

    Chenevert, T. L.

    The type of ultrasound transmission imaging referred to as ultrasonic computed tomography (UCT) reconstructs distributions of tissue speed of sound and sound attenuation properties from measurements of acoustic pulse time of flight (TOF) and energy received through tissue. Although clinical studies with experimental UCT scanners have demonstrated that UCT is sensitive to certain tissue pathologies not easily detected with conventional ultrasound imaging, they have also shown UCT to suffer from artifacts due to physical differences between the acoustic beam and its ray model implicit in image reconstruction algorithms. Artifacts are expressed as large quantitative errors in attenuation images, and as poor spatial resolution and size distortion (exaggerated size of high-speed-of-sound regions) in speed-of-sound images. Methods are introduced and investigated which alleviate these problems in UCT imaging by providing improved measurements of pulse TOF and energy.

  18. An improved optical identity authentication system with significant output images

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Liu, Ming-tang; Yao, Shu-xia; Xin, Yan-hui

    2012-06-01

    An improved method for an optical identity authentication system with significant output images is proposed. In this method, a predefined image is digitally encoded into two phase-masks related to a fixed phase-mask, and this fixed phase-mask acts as a lock to the system. When the two phase-masks, serving as the key, are presented to the system, the predefined image is generated at the output. In addition to simple verification, our method is capable of identifying the type of input phase-mask; the duties of identity verification and recognition are separated and assigned, respectively, to the amplitude and phase of the output image. Numerical simulation results show that our proposed method is feasible and that an output image with better image quality can be obtained.

  19. Failure Analysis for Improved Reliability

    NASA Technical Reports Server (NTRS)

    Sood, Bhanu

    2016-01-01

    Outline: Section 1 - What is reliability and root cause? Section 2 - Overview of failure mechanisms. Section 3 - Failure analysis techniques (1. Nondestructive analysis techniques, 2. Destructive analysis, 3. Materials characterization). Section 4 - Summary and closure.

  20. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system has been developed for real-time imaging of lymphatic propulsion by intradermal injection of a microdose of an NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze an NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.
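    The frequency-of-propulsion measurement can be illustrated with a small sketch: count upward threshold crossings in a fluorescence intensity time series and convert to events per minute. The threshold-crossing detector here is a hypothetical stand-in, not the paper's algorithm, which operates on the stabilized image sequence and its flow map:

```python
import numpy as np

def propulsion_frequency(intensity, fps, rel_thresh=0.5):
    """Count propulsion events as upward crossings of a relative intensity
    threshold and return events per minute (illustrative detector)."""
    x = np.asarray(intensity, dtype=float)
    level = x.min() + rel_thresh * (x.max() - x.min())
    above = x >= level
    # an event starts wherever the series rises through the threshold
    crossings = np.count_nonzero(above[1:] & ~above[:-1])
    duration_min = len(x) / fps / 60.0
    return crossings / duration_min
```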

  1. Improved biliary detection and diagnosis through intelligent machine analysis.

    PubMed

    Logeswaran, Rajasvaran

    2012-09-01

    This paper reports on work undertaken to improve automated detection of bile ducts in magnetic resonance cholangiopancreatography (MRCP) images, with the objective of conducting preliminary classification of the images for diagnosis. The proposed I-BDeDIMA (Improved Biliary Detection and Diagnosis through Intelligent Machine Analysis) scheme is a multi-stage framework consisting of successive phases of image normalization, denoising, structure identification, object labeling, feature selection and disease classification. A combination of multiresolution wavelet, dynamic intensity thresholding, segment-based region growing, region elimination, statistical analysis and neural networks is used in this framework to achieve good structure detection and preliminary diagnosis. Tests conducted on over 200 clinical images with known diagnoses have shown promising results of over 90% accuracy. The scheme outperforms related work in the literature, making it a viable framework for computer-aided diagnosis of biliary diseases.

  2. Improving face image extraction by using deep learning technique

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-i multimodal (text+image) search engine. Many images are visible-light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, the detection precision was improved significantly. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.

  3. Flattening filter removal for improved image quality of megavoltage fluoroscopy

    SciTech Connect

    Christensen, James D.; Kirichenko, Alexander; Gayou, Olivier

    2013-08-15

    Purpose: Removal of the linear accelerator (linac) flattening filter enables a high rate of dose deposition with reduced treatment time. When used for megavoltage imaging, an unflat beam has reduced primary beam scatter, resulting in sharper images. In fluoroscopic imaging mode, the unflat beam has a higher photon count per image frame, yielding a higher contrast-to-noise ratio. The authors’ goal was to quantify the effects of an unflat beam on the image quality of megavoltage portal and fluoroscopic images. Methods: 6 MV projection images were acquired in fluoroscopic and portal modes using an electronic flat-panel imager. The effects of the flattening filter on the relative modulation transfer function (MTF) and contrast-to-noise ratio were quantified using the QC3 phantom. The impact of flattening-filter removal on the contrast-to-noise ratio of gold fiducial markers was also studied under various scatter conditions. Results: The unflat beam had improved contrast resolution, with up to a 40% increase in MTF contrast at the highest frequency measured (0.75 line pairs/mm). The contrast-to-noise ratio was increased, as expected from the increased photon flux. The visualization of fiducial markers was markedly better using the unflat beam under all scatter conditions, enabling visualization of thin gold fiducial markers, the thinnest of which was not visible using the flat beam. Conclusions: The removal of the flattening filter from a clinical linac leads to quantifiable improvements in the image quality of megavoltage projection images. These gains enable observers to more easily visualize thin fiducial markers and track their motion on fluoroscopic images.

  4. MR brain image analysis in dementia: From quantitative imaging biomarkers to ageing brain models and imaging genetics.

    PubMed

    Niessen, Wiro J

    2016-10-01

    MR brain image analysis has constantly been a prominent research area in medical image analysis over the past two decades. In this article, it is discussed how the field developed from the construction of tools for automatic quantification of brain morphology, function, connectivity and pathology, to creating models of the ageing brain in normal ageing and disease, and tools for integrated analysis of imaging and genetic data. The current and future role of the field in improved understanding of the development of neurodegenerative disease is discussed, together with its potential for aiding in early and differential diagnosis and prognosis of different types of dementia. For the latter, the use of reference imaging data and reference models derived from large clinical and population imaging studies, and the application of machine learning techniques to these reference data, are expected to play a key role. PMID:27344937

  6. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  7. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.

  8. An improved dehazing algorithm of aerial high-definition image

    NASA Astrophysics Data System (ADS)

    Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying

    2016-01-01

    For unmanned aerial vehicle (UAV) images, the sensor cannot acquire high-quality images in foggy and hazy weather. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crude estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crude estimated transmission map into different areas and applies different guided filters to the different areas to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filtering, while its average computation time is around 40% of that algorithm's, and the detection ability of UAV images in foggy and hazy weather is improved effectively.
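    The crude transmission map that the algorithm starts from comes from the dark channel prior. A naive numpy sketch of that starting point is shown below; the edge extraction, area splitting, and area-dependent guided filtering of the improved algorithm are omitted, and the patch size and omega are the commonly used defaults rather than values taken from the paper:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: per-pixel minimum over RGB, then a local minimum
    filter over a patch (naive sliding-window implementation)."""
    mins = img.min(axis=2)
    padded = np.pad(mins, patch // 2, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for r in range(h):
        for c in range(w):
            out[r, c] = padded[r:r+patch, c:c+patch].min()
    return out

def estimate_transmission(img, atmosphere, omega=0.95, patch=3):
    """Crude transmission map t = 1 - omega * dark_channel(I / A); this is
    the map the improved algorithm then refines with guided filtering."""
    return 1.0 - omega * dark_channel(img / atmosphere, patch)
```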

  9. Image-plane processing for improved computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    The proper combination of optical design with image-plane processing, as in the mechanism of human vision, was examined as a means of improving the performance of sensor-array imaging systems for edge detection and location. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission. It permits control of the spatial imaging system response to trade off edge enhancement against sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal-intensity transitions, from which the location of edges is determined for raw primal sketches. Shading the lens transmittance to increase depth of field, and using a hexagonal instead of a square sensor-array lattice to decrease sensitivity to edge orientation, improves edge information by about 10%.

  10. Improved target detection by IR dual-band image fusion

    NASA Astrophysics Data System (ADS)

    Adomeit, U.; Ebert, R.

    2009-09-01

    Dual-band thermal imagers acquire information simultaneously in both the 8-12 μm (long-wave infrared, LWIR) and the 3-5 μm (mid-wave infrared, MWIR) spectral ranges. Compared to single-band thermal imagers, they are expected to have several advantages in military applications. These advantages include the opportunity to use the best band for given atmospheric conditions (e.g. cold climate: LWIR; hot and humid climate: MWIR), the potential to better detect camouflaged targets, and improved discrimination between targets and decoys. Most of these advantages have not yet been verified and/or quantified. It is expected that image fusion allows better exploitation of the information content available with dual-band imagers, especially with respect to detection of targets. We have developed a method for dual-band image fusion based on the apparent temperature differences in the two bands. This method showed promising results in laboratory tests. In order to evaluate its performance under operational conditions, we conducted a field trial in an area with high thermal clutter. In such areas, targets are hard to detect in single-band images because they vanish in the clutter structure. The image data collected in this field trial was used for a perception experiment. This perception experiment showed an enhanced target detection range and a reduced false alarm rate for the fused images compared to the single-band images.

  11. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  12. Cascaded diffractive optical elements for improved multiplane image reconstruction.

    PubMed

    Gülses, A Alkan; Jenkins, B Keith

    2013-05-20

    Computer-generated phase-only diffractive optical elements in a cascaded setup are designed by one deterministic and one stochastic algorithm for multiplane image formation. It is hypothesized that increasing the number of elements as wavefront modulators in the longitudinal dimension would enlarge the available solution space, thus enabling enhanced image reconstruction. Numerical results show that increasing the number of holograms improves quality at the output. Design principles, computational methods, and specific conditions are discussed.

  13. Automated digital image analysis of islet cell mass using Nikon's inverted eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88), which increased to r(2) = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method versus the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this

  14. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M.

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
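The edge-matching criterion above can be sketched as a brute-force search over integer translations. The function names, the search window, and the use of wraparound shifts are illustrative assumptions; the patent's statistical criterion for identifying "not significantly worse" registration points is omitted:

```python
import numpy as np

def edge_match_percentage(edges_ref, edges_test, dx, dy):
    """Percentage of edge pixels in the shifted test image that coincide
    with edge pixels in the reference image (illustrative only)."""
    shifted = np.roll(np.roll(edges_test, dy, axis=0), dx, axis=1)
    n_edges = shifted.sum()
    if n_edges == 0:
        return 0.0
    return 100.0 * np.logical_and(shifted, edges_ref).sum() / n_edges

def best_registration(edges_ref, edges_test, search=3):
    """Exhaustive search over a small window of integer translations;
    returns (percentage, dx, dy) for the best-matching shift."""
    return max((edge_match_percentage(edges_ref, edges_test, dx, dy), dx, dy)
               for dx in range(-search, search + 1)
               for dy in range(-search, search + 1))
```

In practice the edge maps would come from an edge detector (e.g. Canny) applied to both images before matching.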

  15. Applications of process improvement techniques to improve workflow in abdominal imaging.

    PubMed

    Tamm, Eric Peter

    2016-03-01

    Major changes in the management and funding of healthcare are underway that will markedly change the way radiology studies will be reimbursed. The result will be the need to deliver radiology services in a highly efficient manner while maintaining quality. The science of process improvement provides a practical approach to improve the processes utilized in radiology. This article will address in a step-by-step manner how to implement process improvement techniques to improve workflow in abdominal imaging.

  16. Millimeter-wave sensor image analysis

    NASA Technical Reports Server (NTRS)

    Wilson, William J.; Suess, Helmut

    1989-01-01

Images from an airborne scanning radiometer operating at a frequency of 98 GHz have been analyzed. The mm-wave images were obtained in 1985/1986 using the JPL mm-wave imaging sensor. The goal of this study was to enhance the information content of these images and make their interpretation easier for human analysis. In this paper, a visual interpretative approach was used for information extraction from the images. This included application of nonlinear transform techniques for noise reduction and for color, contrast and edge enhancement. Results of the techniques on selected mm-wave images are presented.

  17. Structural anisotropy quantification improves the final superresolution image of localization microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Yina; Huang, Zhen-li

    2016-07-01

Superresolution localization microscopy initially produces a dataset of fluorophore coordinates instead of a conventional digital image. Therefore, superresolution localization microscopy requires additional data analysis to present a final superresolution image. However, methods of employing the structural information within the localization dataset to improve the data analysis performance remain poorly developed. Here, we quantify the structural information in a localization dataset using structural anisotropy, and propose to use it as a figure of merit for localization event filtering. With simulated as well as experimental data of a biological specimen, we demonstrate that exploiting structural anisotropy yields superresolution images with a much cleaner background.
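One plausible way to quantify the structural anisotropy of a local set of localization coordinates is through the eigenvalues of their covariance matrix (near 0 for an isotropic blob of background localizations, near 1 for a line-like structure). This is an illustrative metric; the paper's exact definition may differ:

```python
import numpy as np

def structural_anisotropy(points):
    """Anisotropy of a 2D point cloud from the eigenvalues of its
    coordinate covariance matrix: ~0 for isotropic clouds, ->1 for
    line-like structures (illustrative, not the paper's exact formula)."""
    cov = np.cov(np.asarray(points, dtype=float).T)
    lam = np.linalg.eigvalsh(cov)          # eigenvalues, ascending
    return 1.0 - lam[0] / lam[-1]
```

Localization events whose neighborhood anisotropy falls below a threshold could then be filtered out as likely background, in the spirit of the figure-of-merit filtering described above.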

  18. A 3D image analysis tool for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
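The intensity-based thresholding and volume-calculation steps described above can be sketched as follows; the threshold fraction, voxel volume, and function name are illustrative parameters, not values from the study:

```python
import numpy as np

def gastric_volume(spect, threshold_frac=0.4, voxel_ml=0.1):
    """Delineate an organ in a 3D SPECT volume by intensity thresholding
    (as in the semi-automated tool) and estimate its volume from the
    voxel count. threshold_frac and voxel_ml are illustrative values."""
    mask = spect >= threshold_frac * spect.max()   # simple global threshold
    volume_ml = mask.sum() * voxel_ml              # count voxels, scale to ml
    return volume_ml, mask
```

The fully automated fuzzy-connectedness segmentation mentioned in the abstract would replace the global threshold with region-growing based on voxel affinity, but the volume calculation from the resulting mask is the same.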

  19. Note: Improving low-light-level image detection sensitivity with higher speed using auxiliary sinusoidal light signal.

    PubMed

    Tang, Hongying; Yu, Zhengtao

    2015-06-01

An improved active imaging method, which upgraded the detection sensitivity by applying an auxiliary sawtooth wave light signal, was previously reported. However, that method sacrificed imaging speed. To speed up imaging, a sinusoidal light signal is used instead and superposed with the undetectable low-light-level signal on the image sensor. After acquiring a superimposed image set in one sine wave cycle, an unbiased low-light-level image estimate is obtained by least-square optimization. Through probabilistic analysis and experimental study, we demonstrate that the sinusoidal signal achieves the same detection-sensitivity improvement 1/3 faster than the sawtooth wave signal.

  20. Note: Improving low-light-level image detection sensitivity with higher speed using auxiliary sinusoidal light signal

    NASA Astrophysics Data System (ADS)

    Tang, Hongying; Yu, Zhengtao

    2015-06-01

An improved active imaging method, which upgraded the detection sensitivity by applying an auxiliary sawtooth wave light signal, was previously reported. However, that method sacrificed imaging speed. To speed up imaging, a sinusoidal light signal is used instead and superposed with the undetectable low-light-level signal on the image sensor. After acquiring a superimposed image set in one sine wave cycle, an unbiased low-light-level image estimate is obtained by least-square optimization. Through probabilistic analysis and experimental study, we demonstrate that the sinusoidal signal achieves the same detection-sensitivity improvement 1/3 faster than the sawtooth wave signal.
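Under a simple model in which each acquired frame is the static low-light image plus a known sinusoid plus noise, the unbiased least-squares estimate reduces to an ordinary linear fit per pixel, with the intercept recovering the image. A sketch (the model and all names are assumptions, not the authors' exact formulation):

```python
import numpy as np

def estimate_lowlight(frames, phases):
    """Least-squares estimate of a static low-light image from frames
    superimposed with a known sinusoidal auxiliary signal.
    frames: (K, H, W) image stack; phases: (K,) sinusoid phase per frame.
    Per-pixel model: y_k = s + a*sin(phi_k) + b*cos(phi_k) + noise."""
    K = len(phases)
    X = np.column_stack([np.ones(K), np.sin(phases), np.cos(phases)])
    Y = frames.reshape(K, -1)                     # one column per pixel
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)  # solve all pixels at once
    return coef[0].reshape(frames.shape[1:])      # intercept = image estimate
```

Because the sinusoid is spanned by the sin/cos columns, the intercept estimate is unbiased regardless of the auxiliary signal's amplitude and phase offset.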

  1. Quantitative analysis of digital microscope images.

    PubMed

    Wolf, David E; Samarasekera, Champika; Swedlow, Jason R

    2013-01-01

This chapter discusses quantitative analysis of digital microscope images and presents several exercises to provide examples to explain the concept. This chapter also presents the basic concepts in quantitative analysis for imaging, but these concepts rest on a well-established foundation of signal theory and quantitative data analysis. This chapter presents several examples for understanding the imaging process as a transformation from sample to image and the limits and considerations of quantitative analysis. This chapter introduces the concept of digitally correcting the images and also focuses on some of the more critical types of data transformation and some of the frequently encountered issues in quantization. Image processing represents a form of data processing. There are many examples of data processing such as fitting the data to a theoretical curve. In all these cases, it is critical that care is taken during all steps of transformation, processing, and quantization.

  2. Quantitative analysis of digital microscope images.

    PubMed

    Wolf, David E; Samarasekera, Champika; Swedlow, Jason R

    2013-01-01

This chapter discusses quantitative analysis of digital microscope images and presents several exercises to provide examples to explain the concept. This chapter also presents the basic concepts in quantitative analysis for imaging, but these concepts rest on a well-established foundation of signal theory and quantitative data analysis. This chapter presents several examples for understanding the imaging process as a transformation from sample to image and the limits and considerations of quantitative analysis. This chapter introduces the concept of digitally correcting the images and also focuses on some of the more critical types of data transformation and some of the frequently encountered issues in quantization. Image processing represents a form of data processing. There are many examples of data processing such as fitting the data to a theoretical curve. In all these cases, it is critical that care is taken during all steps of transformation, processing, and quantization. PMID:23931513

  3. Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.

    PubMed

    Steimers, A; Farnung, W; Kohl-Bareis, M

    2016-01-01

We demonstrate an efficient algorithm for the temporal and spatial calculation of speckle contrast for imaging blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and decouples temporal and spatial resolution from SNR. The new algorithm was evaluated for both spatial and temporal analysis of speckle patterns with different image sizes and numbers of recruited pixels as sequential, multi-core, and many-core code. PMID:26782241
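The statistic at the heart of LASCA is the speckle contrast K = σ/⟨I⟩ over a local window. A sketch of the spatial variant using summed-area tables, one standard way to reduce the per-pixel cost to O(1) independent of window size (the window size and implementation details are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def spatial_speckle_contrast(img, w=7):
    """Spatial speckle contrast K = sigma/mean over a sliding w x w window,
    computed from integral images so every window costs O(1). Returns the
    'valid' region only (windows fully inside the image)."""
    img = img.astype(float)

    def box_mean(a):
        # summed-area table with a leading zero row/column
        s = np.cumsum(np.cumsum(np.pad(a, ((1, 0), (1, 0))), axis=0), axis=1)
        return (s[w:, w:] - s[:-w, w:] - s[w:, :-w] + s[:-w, :-w]) / (w * w)

    m = box_mean(img)
    var = np.maximum(box_mean(img ** 2) - m ** 2, 0.0)  # clamp rounding error
    return np.sqrt(var) / m
```

The temporal variant applies the same mean/variance computation along the frame axis of an image stack instead of over a spatial window.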

  4. Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence

    PubMed Central

    Beijbom, Oscar; Treibitz, Tali; Kline, David I.; Eyal, Gal; Khen, Adi; Neal, Benjamin; Loya, Yossi; Mitchell, B. Greg; Kriegman, David

    2016-01-01

Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data-products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction of classification error-rate when using both image types compared to using only reflectance images. The improvements were particularly large for the coral reef genera Platygyra, Acropora and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery in order to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck. PMID:27021133

  5. Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence

    NASA Astrophysics Data System (ADS)

    Beijbom, Oscar; Treibitz, Tali; Kline, David I.; Eyal, Gal; Khen, Adi; Neal, Benjamin; Loya, Yossi; Mitchell, B. Greg; Kriegman, David

    2016-03-01

Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data-products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction of classification error-rate when using both image types compared to using only reflectance images. The improvements were particularly large for the coral reef genera Platygyra, Acropora and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery in order to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck.

  6. Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence.

    PubMed

    Beijbom, Oscar; Treibitz, Tali; Kline, David I; Eyal, Gal; Khen, Adi; Neal, Benjamin; Loya, Yossi; Mitchell, B Greg; Kriegman, David

    2016-01-01

Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data-products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction of classification error-rate when using both image types compared to using only reflectance images. The improvements were particularly large for the coral reef genera Platygyra, Acropora and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery in order to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck.
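A minimal sketch of how co-registered reflectance and fluorescence images might be combined into a single multi-channel CNN input and cropped around an annotation point; the function names, patch size, and reflection padding are illustrative assumptions, not details from the paper:

```python
import numpy as np

def fuse_for_cnn(reflectance_rgb, fluorescence_rgb):
    """Stack registered reflectance and fluorescence images into one
    (H, W, 6) array a CNN can consume. Assumes both images are already
    co-registered and scaled to a common range."""
    assert reflectance_rgb.shape == fluorescence_rgb.shape
    return np.concatenate([reflectance_rgb, fluorescence_rgb], axis=-1)

def extract_patch(img, row, col, size=224):
    """Crop the square patch centered on an annotation point, padding the
    borders by reflection (one common choice)."""
    pad = size // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    return padded[row:row + size, col:col + size]
```

Each such patch, labeled with the annotator's genus at its center point, would form one training example for the network.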

  7. Temporal resolution improvement using PICCS in MDCT cardiac imaging.

    PubMed

    Chen, Guang-Hong; Tang, Jie; Hsieh, Jiang

    2009-06-01

The current paradigm for temporal resolution improvement is to add more source-detector units and/or increase the gantry rotation speed. The purpose of this article is to present an innovative alternative method to potentially improve temporal resolution by approximately a factor of 2 for all MDCT scanners without requiring hardware modification. The central enabling technology is a most recently developed image reconstruction method: Prior image constrained compressed sensing (PICCS). Using the method, cardiac CT images can be accurately reconstructed using the projection data acquired in an angular range of about 120 degrees, which is roughly 50% of the standard short-scan angular range (approximately 240 degrees for an MDCT scanner). As a result, the temporal resolution of MDCT cardiac imaging can be universally improved by approximately a factor of 2. In order to validate the proposed method, two in vivo animal experiments were conducted using a state-of-the-art 64-slice CT scanner (GE Healthcare, Waukesha, WI) at different gantry rotation times and different heart rates. One animal was scanned at a heart rate of 83 beats per minute (bpm) using a 400 ms gantry rotation time, and the second at 94 bpm using a 350 ms gantry rotation time. Cardiac coronary CT imaging can be successfully performed at high heart rates using a single-source MDCT scanner and projection data from a single heart beat with gantry rotation times of 400 and 350 ms. Using the proposed PICCS method, the temporal resolution of cardiac CT imaging can be effectively improved by approximately a factor of 2 without modifying any scanner hardware. This potentially provides a new method for single-source MDCT scanners to achieve reliable coronary CT imaging for patients at higher heart rates than the current heart rate limit of 70 bpm without using the well-known multisegment FBP reconstruction algorithm.
This method also enables dual-source MDCT scanner to achieve higher

  8. Improvements to Color HRSC+OMEGA Image Mosaics of Mars

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Audouard, J.; Dumke, A.; Dunker, T.; Gross, C.; Kneissl, T.; Michael, G.; Ody, A.; Poulet, F.; Schreiner, B.; van Gasselt, S.; Walter, S. H. G.; Wendt, L.; Zuschneid, W.

    2015-10-01

The High Resolution Stereo Camera (HRSC) on the Mars Express (MEx) orbiter has acquired 3640 images (with 'preliminary level 4' processing as described in [1]) of the Martian surface since arriving in orbit in 2003, covering over 90% of the planet [2]. At resolutions that can reach 10 meters/pixel, these MEx/HRSC images [3-4] are constructed in a pushbroom manner from 9 different CCD line sensors, including a panchromatic nadir-looking (Pan) channel, 4 color channels (R, G, B, IR), and 4 other panchromatic channels for stereo imaging or photometric imaging. In [5], we discussed our first approach towards mosaicking hundreds of the MEx/HRSC RGB or Pan images together. The images were acquired under different atmospheric conditions over the entire mission and under different observation/illumination geometries. Therefore, the main challenge that we have addressed is the color (or gray-scale) matching of these images, which have varying colors (or gray scales) due to the different observing conditions. Using this first approach, our best results for a semiglobal mosaic consist of adding a high-pass-filtered version of the HRSC mosaic to a low-pass-filtered version of the MEx/OMEGA [6] global mosaic. Herein, we present our latest results using a new, improved second approach for mosaicking MEx/HRSC images [7], focusing on the RGB color processing. Currently, when the new second approach is applied to Pan images, we match local spatial averages of the Pan images to the local spatial averages of a mosaic made from the images acquired by the Mars Global Surveyor TES bolometer. Since these MGS/TES images have already been atmospherically corrected, this matching allows us to bootstrap the process of mosaicking the HRSC images without actually atmospherically correcting the HRSC images. In this work, we will adapt this technique of MEx/HRSC Pan images being matched with the MGS/TES mosaic, so that instead, MEx/HRSC RGB images

  9. Spatial image modulation to improve performance of computed tomography imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor)

    2010-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having patterns for imposing spatial structure are provided. The pattern may be imposed either directly on the object scene being imaged or at the field stop aperture. The use of the pattern improves the accuracy of the captured spatial and spectral information.

  10. Image Reconstruction Using Analysis Model Prior.

    PubMed

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those obtained without the analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171
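For intuition, the cosparsity of an image under the 2D finite difference operator mentioned above can be computed directly: apply the operator and count the (near-)zero coefficients. This sketch uses horizontal and vertical first differences as the analysis operator (an illustrative construction consistent with, but not copied from, the paper):

```python
import numpy as np

def finite_difference_operator(img):
    """Apply a 2D finite-difference analysis operator: horizontal and
    vertical first differences, stacked into one coefficient vector."""
    dx = np.diff(img, axis=1).ravel()
    dy = np.diff(img, axis=0).ravel()
    return np.concatenate([dx, dy])

def cosparsity(img, tol=1e-12):
    """Cosparsity = number of (near-)zero entries of the analysis
    coefficients; piecewise-constant images are highly cosparse."""
    coeffs = finite_difference_operator(img)
    return int(np.sum(np.abs(coeffs) <= tol))
```

A piecewise-constant image has nonzero coefficients only along region boundaries, which is why such images can be recovered from far fewer measurements under the analysis model.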

  11. Image Reconstruction Using Analysis Model Prior

    PubMed Central

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those obtained without the analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171

  12. An improved level set method for vertebra CT image segmentation

    PubMed Central

    2013-01-01

Background Clinical diagnosis and therapy for lumbar disc herniation require accurate vertebra segmentation. The complex anatomical structure and the degenerative deformations of the vertebrae make their segmentation challenging. Methods An improved level set method, namely the edge- and region-based level set (ERBLS) method, is proposed for vertebra CT image segmentation. By considering the gradient information and local region characteristics of images, the proposed model can efficiently segment images with intensity inhomogeneity and blurry or discontinuous boundaries. To reduce the dependency on manual initialization in many active contour models and to enable automatic segmentation, a simple initialization method for the level set function is built, which utilizes the Otsu threshold. In addition, the need for the costly re-initialization procedure is completely eliminated. Results Experimental results on both synthetic and real images demonstrated that the proposed ERBLS model is very robust and efficient. Compared with the well-known local binary fitting (LBF) model, our method is much more computationally efficient and much less sensitive to the initial contour. The proposed method has also been applied to 56 patient data sets and produced very promising results. Conclusions An improved level set method suitable for vertebra CT image segmentation is proposed. It has the flexibility of segmenting vertebra CT images with blurry or discontinuous edges and internal inhomogeneity, with no need for re-initialization. PMID:23714300
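The Otsu-based automatic initialization can be sketched as follows; the binary-step form of the initial level set function and the constant ±c are common choices, not necessarily the paper's exact construction:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the histogram threshold that maximizes
    the between-class variance (used here to seed the level set)."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    p = hist / hist.sum()
    omega = np.cumsum(p)               # probability of class 0
    mu = np.cumsum(p * centers)        # first moment of class 0
    mu_t = mu[-1]                      # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return centers[np.argmax(np.nan_to_num(sigma_b))]

def init_level_set(img, c=2.0):
    """Binary-step initialization of the level set function: +c inside
    the Otsu foreground, -c outside (c is an illustrative constant)."""
    return np.where(img > otsu_threshold(img), c, -c)
```

Starting the contour evolution from this data-driven step function removes the manual initialization that many active contour models depend on.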

  13. Improved QD-BRET conjugates for detection and imaging

    SciTech Connect

    Xing Yun; So, Min-kyung; Koh, Ai Leen; Sinclair, Robert; Rao Jianghong

    2008-08-01

    Self-illuminating quantum dots, also known as QD-BRET conjugates, are a new class of quantum dot bioconjugates which do not need external light for excitation. Instead, light emission relies on the bioluminescence resonance energy transfer from the attached Renilla luciferase enzyme, which emits light upon the oxidation of its substrate. QD-BRET combines the advantages of the QDs (such as superior brightness and photostability, tunable emission, multiplexing) as well as the high sensitivity of bioluminescence imaging, thus holding the promise for improved deep tissue in vivo imaging. Although studies have demonstrated the superior sensitivity and deep tissue imaging potential, the stability of the QD-BRET conjugates in biological environment needs to be improved for long-term imaging studies such as in vivo cell tracking. In this study, we seek to improve the stability of QD-BRET probes through polymeric encapsulation with a polyacrylamide gel. Results show that encapsulation caused some activity loss, but significantly improved both the in vitro serum stability and in vivo stability when subcutaneously injected into the animal. Stable QD-BRET probes should further facilitate their applications for both in vitro testing as well as in vivo cell tracking studies.

  14. Improved Diffusion Imaging through SNR-Enhancing Joint Reconstruction

    PubMed Central

    Haldar, Justin P.; Wedeen, Van J.; Nezamzadeh, Marzieh; Dai, Guangping; Weiner, Michael W.; Schuff, Norbert; Liang, Zhi-Pei

    2012-01-01

    Quantitative diffusion imaging is a powerful technique for the characterization of complex tissue microarchitecture. However, long acquisition times and limited signal-to-noise ratio (SNR) represent significant hurdles for many in vivo applications. This paper presents a new approach to reduce noise while largely maintaining resolution in diffusion weighted images, using a statistical reconstruction method that takes advantage of the high level of structural correlation observed in typical datasets. Compared to existing denoising methods, the proposed method performs reconstruction directly from the measured complex k-space data, allowing for Gaussian noise modeling and theoretical characterizations of the resolution and SNR of the reconstructed images. In addition, the proposed method is compatible with many different models of the diffusion signal (e.g., diffusion tensor modeling, q-space modeling, etc.). The joint reconstruction method can provide significant improvements in SNR relative to conventional reconstruction techniques, with a relatively minor corresponding loss in image resolution. Results are shown in the context of diffusion spectrum imaging tractography and diffusion tensor imaging, illustrating the potential of this SNR-enhancing joint reconstruction approach for a range of different diffusion imaging experiments. PMID:22392528

  15. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  16. Improved astrometric analysis of Pluto

    NASA Astrophysics Data System (ADS)

    Benedetti-Rossi, G.; Vieira-Martins, R.; Camargo, J.; Assafin, M.; Braga-Ribas, F.

    2014-07-01

Pluto is the main representative body of the transneptunian objects (TNOs). It presents an atmosphere and a satellite system with 5 known moons (three of them discovered less than 10 years ago). To learn about the physical and dynamical properties of this system, the most efficient method from the ground --- in spite of its rarity --- is stellar occultations, which showed an evident drift of about 20 milli-arcsecond (mas) in the declination when compared to the ephemerides [1]. This fact was a great motivation to repeat the reductions and analyses for a large set of our observations. Around 6500 CCD images of Pluto were obtained over 174 nights using 3 telescopes at Pico dos Dias Observatory (OPD/LNA) in Brazil, covering a time span from 1995 to 2013, and another 12 nights in 2007 and 2009 using the ESO/MPG 2.2-m telescope equipped with the Wide Field Imager (WFI). The astrometric positions were reduced using PRAIA (Platform for Reduction of Astronomical Images Automatically) [2] using UCAC4 as the reference catalog. Also, they were corrected for differential chromatic refraction and later the (x,y) center of Pluto was determined from corrections to the measured photocenter, contaminated by Charon. Both corrections were obtained with the original procedures. The final astrometric positions were then compared to the planetary ephemerides (DE421+plu021) and occultation results. We obtained the mean values of 7 mas and 36 mas for the right ascension and declination, respectively, and the standard deviations of σ_α = 45 mas and σ_δ = 49 mas for the offsets in the sense of ''observed minus ephemerides position''. Moreover, we obtained the same behavior for the declination as obtained from stellar occultations, with a drift of around 100 mas since 2005. With the imminent arrival of the New Horizons spacecraft to this system (scheduled for July 2015), new ephemerides may be corrected so that they do not present systematic drifts near the time interval that contains

  17. Improving RLRN Image Splicing Detection with the Use of PCA and Kernel PCA

    PubMed Central

    Jalab, Hamid A.; Md Noor, Rafidah

    2014-01-01

    Digital image forgery is becoming easier to perform because of the rapid development of various manipulation tools, and image splicing is one of the most prevalent techniques. Digital images have lost their trustworthiness, and researchers have exerted considerable effort to regain it, focusing mostly on algorithms. However, most of the proposed algorithms cannot handle the high dimensionality and redundancy of the extracted features, and they are further limited by high computational time. This study focuses on improving one of the image splicing detection algorithms, the run length run number (RLRN) algorithm, by applying two dimension reduction methods, namely principal component analysis (PCA) and kernel PCA. A support vector machine is used to distinguish between authentic and spliced images. Results show that kernel PCA, a nonlinear dimension reduction method, has the best effect on the R, G, B, and Y channels and on gray-scale images. PMID:25295304
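
    As a concrete illustration of the pipeline described above (dimension reduction followed by SVM classification), the sketch below applies PCA and kernel PCA to synthetic 50-dimensional feature vectors standing in for RLRN features; the data, dimensions, and parameters are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic 50-D feature vectors standing in for RLRN features:
# authentic images (label 0) vs. spliced images (label 1).
X = np.vstack([rng.normal(0.0, 1.0, (200, 50)),
               rng.normal(0.6, 1.0, (200, 50))])
y = np.repeat([0, 1], 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def accuracy_with(reducer):
    """Reduce dimensionality, then classify with an RBF-kernel SVM."""
    Z_tr = reducer.fit_transform(X_tr)
    Z_te = reducer.transform(X_te)
    return SVC(kernel="rbf").fit(Z_tr, y_tr).score(Z_te, y_te)

acc_pca = accuracy_with(PCA(n_components=10))
acc_kpca = accuracy_with(KernelPCA(n_components=10, kernel="rbf"))
```

    Reducing 50 features to 10 components before the SVM addresses exactly the dimensionality and runtime concerns raised above; on real RLRN features the comparison between the two reducers would mirror the paper's experiment.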

  18. Improved image registration by sparse patch-based deformation estimation.

    PubMed

    Kim, Minjeong; Wu, Guorong; Wang, Qian; Lee, Seong-Whan; Shen, Dinggang

    2015-01-15

    Despite intensive efforts for decades, deformable image registration is still a challenging problem due to the potentially large anatomical differences across individual images, which limit the registration performance. Fortunately, this issue can be alleviated if a good initial deformation is provided for the two images under registration, often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion by using the sparse representation technique. We argue that two image patches should follow the same deformation toward the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) we pick a small number of key points in the distinctive regions of the test subject; (2) for each key point, we extract a local patch and form a coupled appearance-deformation dictionary from the training images, where each dictionary atom consists of an image intensity patch and its respective local deformation; (3) a small set of training image patches in the coupled dictionary is selected to represent the image patch of each subject key point by sparse representation.
Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations on the selected training patches with the same sparse representation coefficients; and (4) we
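
    Steps (3) and (4) can be sketched as follows: a greedy orthogonal matching pursuit selects a few dictionary atoms to represent a patch, and the same sparse coefficients propagate the atoms' paired deformations. The dictionary, patch size, and 2-D deformation vectors below are synthetic stand-ins, not the paper's data.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily pick at most k atoms of D to represent x."""
    residual, support = x.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 30))                 # appearance atoms (e.g., 8x8 patches, flattened)
D /= np.linalg.norm(D, axis=0)
atom_deformations = rng.normal(size=(30, 2))  # a 2-D deformation paired with each atom

x = 2.0 * D[:, 3] + 0.5 * D[:, 7]             # a subject patch built from two atoms
w = omp(D, x, k=2)
predicted_deformation = atom_deformations.T @ w   # propagate with the same coefficients
```

    Because the coupled dictionary stores appearance and deformation side by side, the coefficients found for the appearance patch transfer directly to the deformation prediction, which is the core of the framework's application stage.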

  19. Quantitative image analysis of synovial tissue.

    PubMed

    van der Hall, Pascal O; Kraan, Maarten C; Tak, Paul Peter

    2007-01-01

    Quantitative image analysis is a form of imaging that includes microscopic histological quantification, video microscopy, image analysis, and image processing. Hallmarks are the generation of reliable, reproducible, and efficient measurements via strict calibration and step-by-step control of the acquisition, storage, and evaluation of images with dedicated hardware and software. Major advantages of quantitative image analysis over traditional techniques include sophisticated calibration systems, interaction, speed, and control of inter- and intraobserver variation. This results in a well-controlled environment, which is essential for quality control and reproducibility, and helps to optimize sensitivity and specificity. To achieve this, an optimal quantitative image analysis system combines solid software engineering with easy interactivity for the operator. Moreover, the system also needs to be as transparent as possible in generating the data, because a "black box design" will deliver uncontrollable results. In addition to these more general aspects, specifically for the analysis of synovial tissue the necessity of interactivity is highlighted by the added value of identifying and quantifying information present in areas such as the intimal lining layer, blood vessels, and lymphocyte aggregates. Speed is another important aspect of digital cytometry. Currently, rapidly increasing numbers of samples, together with the accumulation of a variety of markers and detection techniques, have made the use of traditional analysis techniques such as manual quantification and semi-quantitative analysis impractical. It can be anticipated that the development of even more powerful computer systems with sophisticated software will further facilitate reliable analysis at high speed.

  20. Image analysis by integration of disparate information

    NASA Technical Reports Server (NTRS)

    Lemoigne, Jacqueline

    1993-01-01

    Image analysis often starts with some preliminary segmentation which provides a representation of the scene needed for further interpretation. Segmentation can be performed in several ways, which are categorized as pixel-based, edge-based, and region-based. Each of these approaches is affected differently by various factors, and the final result may be improved by integrating several or all of these methods, thus taking advantage of their complementary nature. In this paper, we propose an approach that integrates pixel-based and edge-based results by utilizing an iterative relaxation technique. This approach has been implemented on a massively parallel computer and tested on remotely sensed imagery from the Landsat Thematic Mapper (TM) sensor.
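
    A minimal sketch of this kind of integration, under a simple assumed scheme: per-pixel label probabilities (a pixel-based result) are iteratively averaged with their neighbours except across detected edges (an edge-based result), so noise is relaxed away inside regions while boundaries stay sharp. The paper's actual relaxation rule is not reproduced here.

```python
import numpy as np

def relax(prob, edges, iters=20, alpha=0.5):
    """Iteratively blend each pixel's label probability with its 4-neighbour mean;
    pixels flagged as edges keep their value, blocking diffusion across boundaries."""
    p = prob.copy()
    for _ in range(iters):
        nbr = (np.roll(p, -1, 0) + np.roll(p, 1, 0) +
               np.roll(p, -1, 1) + np.roll(p, 1, 1)) / 4.0
        p = np.where(edges, p, (1 - alpha) * p + alpha * nbr)
    return p

rng = np.random.default_rng(0)
H, W = 32, 32
# Noisy two-region "pixel-based" probability map: left half ~0.8, right half ~0.2.
prob = np.where(np.arange(W) < W // 2, 0.8, 0.2) + rng.normal(0, 0.1, size=(H, W))
prob = np.clip(prob, 0.0, 1.0)
# "Edge-based" result: the region boundary (and the wrap-around columns).
cols = np.arange(W)
edges = np.broadcast_to((cols == 0) | (cols == W - 1) |
                        (cols == W // 2 - 1) | (cols == W // 2), (H, W))
smoothed = relax(prob, edges)
```

    After relaxation the two regions keep their mean labels but their internal noise is reduced, which is the complementarity the paper exploits.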

  1. Description, Recognition and Analysis of Biological Images

    SciTech Connect

    Yu Donggang; Jin, Jesse S.; Luo Suhuai; Pham, Tuan D.; Lai Wei

    2010-01-25

    Description, recognition, and analysis of biological images play an important role in helping humans describe and understand the related biological information. The color images are separated by color reduction. A new and efficient linearization algorithm is introduced, based on criteria of the difference chain code. A series of critical points is obtained from the linearized lines. The curvature angle, linearity, maximum linearity, convexity, concavity, and bend angle of the linearized lines are calculated from the starting line to the end line along all smoothed contours. This method can be used for shape description and recognition. The analysis, decision, and classification of the biological images are based on the description of morphological structures, color information, and prior knowledge, which are associated with each other. The efficiency of the algorithms is demonstrated in two applications. One is the description, recognition, and analysis of color flower images; the other concerns the dynamic description, recognition, and analysis of cell-cycle images.
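
    The linearization idea can be illustrated with Freeman chain codes: along a contour, the difference chain code is zero on straight runs, and nonzero entries mark critical points where the direction changes. This is a generic sketch of difference chain codes, not the paper's specific criteria.

```python
import numpy as np

# 8-connected Freeman directions: code i -> (dy, dx) step
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(points):
    """Freeman chain code of a contour given as an ordered list of (y, x) pixels."""
    return [DIRS.index((y1 - y0, x1 - x0))
            for (y0, x0), (y1, x1) in zip(points, points[1:])]

def difference_code(code):
    """Difference chain code: zero runs mark straight (linearizable) segments,
    nonzero values mark critical points (corners)."""
    return [(b - a) % 8 for a, b in zip(code, code[1:])]

# An L-shaped contour: three steps right, then two steps down.
points = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3)]
code = chain_code(points)
diff = difference_code(code)
```

    The single nonzero entry in `diff` locates the corner of the L, exactly the kind of critical point from which linearity, convexity, and bend angles would then be computed.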

  2. A Meta-Analytic Review of Stand-Alone Interventions to Improve Body Image

    PubMed Central

    Alleva, Jessica M.; Sheeran, Paschal; Webb, Thomas L.; Martijn, Carolien; Miles, Eleanor

    2015-01-01

    Objective Numerous stand-alone interventions to improve body image have been developed. The present review used meta-analysis to estimate the effectiveness of such interventions, and to identify the specific change techniques that lead to improvement in body image. Methods The inclusion criteria were that (a) the intervention was stand-alone (i.e., solely focused on improving body image), (b) a control group was used, (c) participants were randomly assigned to conditions, and (d) at least one pretest and one posttest measure of body image was taken. Effect sizes were meta-analysed and moderator analyses were conducted. A taxonomy of 48 change techniques used in interventions targeted at body image was developed; all interventions were coded using this taxonomy. Results The literature search identified 62 tests of interventions (N = 3,846). Interventions produced a small-to-medium improvement in body image (d+ = 0.38), a small-to-medium reduction in beauty ideal internalisation (d+ = -0.37), and a large reduction in social comparison tendencies (d+ = -0.72). However, the effect size for body image was inflated by bias both within and across studies, and was reliable but of small magnitude once corrections for bias were applied. Effect sizes for the other outcomes were no longer reliable once corrections for bias were applied. Several features of the sample, intervention, and methodology moderated intervention effects. Twelve change techniques were associated with improvements in body image, and three techniques were contra-indicated. Conclusions The findings show that interventions engender only small improvements in body image, and underline the need for large-scale, high-quality trials in this area. The review identifies effective techniques that could be deployed in future interventions. PMID:26418470

  3. Conductive resins improve charging and resolution of acquired images in electron microscopic volume imaging

    PubMed Central

    Nguyen, Huy Bang; Thai, Truc Quynh; Saitoh, Sei; Wu, Bao; Saitoh, Yurika; Shimo, Satoshi; Fujitani, Hiroshi; Otobe, Hirohide; Ohno, Nobuhiko

    2016-01-01

    Recent advances in serial block-face imaging using scanning electron microscopy (SEM) have enabled the rapid and efficient acquisition of 3-dimensional (3D) ultrastructural information from a large volume of biological specimens including brain tissues. However, volume imaging under SEM is often hampered by sample charging, and typically requires specific sample preparation to reduce charging and increase image contrast. In the present study, we introduced carbon-based conductive resins for 3D analyses of subcellular ultrastructures, using serial block-face SEM (SBF-SEM) to image samples. Conductive resins were produced by adding the carbon black filler, Ketjen black, to resins commonly used for electron microscopic observations of biological specimens. Carbon black mostly localized around tissues and did not penetrate cells, whereas the conductive resins significantly reduced the charging of samples during SBF-SEM imaging. When serial images were acquired, embedding into the conductive resins improved the resolution of images by facilitating the successful cutting of samples in SBF-SEM. These results suggest that improving the conductivities of resins with a carbon black filler is a simple and useful option for reducing charging and enhancing the resolution of images obtained for volume imaging with SEM. PMID:27020327

  4. Fundamental performance improvement to dispersive spectrograph based imaging technologies

    NASA Astrophysics Data System (ADS)

    Meade, Jeff T.; Behr, Bradford B.; Cenko, Andrew T.; Christensen, Peter; Hajian, Arsen R.; Hendrikse, Jan; Sweeney, Frederic D.

    2011-03-01

    Dispersive spectrometers may be characterized by their spectral resolving power and their throughput efficiency. A device known as a virtual slit is able to improve the resolving power severalfold with a minimal loss in throughput, thereby fundamentally improving the quality of the spectrometer. A virtual slit was built and incorporated into a low-performing spectrometer (R ~ 300) and was shown to increase the performance without a significant loss in signal. The operation and description of virtual slits are also given. High-performance, low-light, and high-speed imaging instruments based on a dispersive spectrometer see the greatest impact from a virtual slit. The impact of a virtual slit on spectral domain optical coherence tomography (SD-OCT) is shown to improve the imaging quality substantially.

  5. Fidelity Analysis of Sampled Imaging Systems

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.; Rahman, Zia-ur

    1999-01-01

    Many modeling, simulation and performance analysis studies of sampled imaging systems are inherently incomplete because they are conditioned on a discrete-input, discrete-output model that only accounts for blurring during image acquisition and additive noise. For those sampled imaging systems where the effects of digital image acquisition, digital filtering and reconstruction are significant, the modeling, simulation and performance analysis should be based on a more comprehensive continuous-input, discrete-processing, continuous-output end-to-end model. This more comprehensive model should properly account for the low-pass filtering effects of image acquisition prior to sampling, the potentially important noiselike effects of the aliasing caused by sampling, additive noise due to device electronics and quantization, the generally high-boost filtering effects of digital processing, and the low-pass filtering effects of image reconstruction. This model should not, however, be so complex as to preclude significant mathematical analysis, particularly the mean-square (fidelity) type of analysis so common in linear system theory. We demonstrate that, although the mathematics of such a model is more complex, the increase in complexity is not so great as to prevent a complete fidelity-metric analysis at both the component level and at the end-to-end system level: that is, computable mean-square-based fidelity metrics are developed by which both component-level and system-level performance can be quantified. In addition, we demonstrate that system performance can be assessed qualitatively by visualizing the output image as the sum of three component images, each of which relates to a corresponding fidelity metric. The cascaded, or filtered, component accounts for the end-to-end system filtering of image acquisition, digital processing, and image reconstruction; the random noise component accounts for additive random noise, modulated by digital processing and image
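
    The continuous-input, discrete-processing, continuous-output model can be exercised numerically: represent the scene on a fine grid, apply an acquisition blur, sample, reconstruct, and compute the mean-square (fidelity) error. The 1-D signal, Gaussian blur, and linear reconstruction below are illustrative choices, not the paper's exact model; the example shows the trade-off the paper formalizes, where stronger pre-sample low-pass filtering loses detail but suppresses the noiselike aliasing error.

```python
import numpy as np

N, R = 4096, 8                     # fine-grid length and sampling ratio
t = np.arange(N) / N
# "Continuous" scene: a low-frequency term plus a 480-cycle term that will
# alias when every R-th point is kept (coarse-grid Nyquist = 256 cycles).
scene = np.sin(2 * np.pi * 12 * t) + 0.4 * np.sin(2 * np.pi * 480 * t)

def gaussian_blur(x, sigma):
    """Acquisition blur applied in the frequency domain (Gaussian OTF)."""
    f = np.fft.rfftfreq(x.size)
    H = np.exp(-2.0 * (np.pi * f * sigma) ** 2)
    return np.fft.irfft(np.fft.rfft(x) * H, n=x.size)

def end_to_end_mse(x, sigma):
    acquired = gaussian_blur(x, sigma)               # pre-sample low-pass filtering
    samples = acquired[::R]                          # ideal sampling
    recon = np.interp(np.arange(N), np.arange(0, N, R), samples)  # reconstruction
    return float(np.mean((x - recon) ** 2))          # mean-square fidelity error

mse_sharp = end_to_end_mse(scene, sigma=1.0)  # weak pre-filter: aliasing dominates
mse_blur = end_to_end_mse(scene, sigma=6.0)   # strong pre-filter: aliasing suppressed
```

    Here the stronger acquisition blur yields the lower end-to-end error because the spurious aliased component costs more fidelity than the blurred-away detail, the kind of conclusion the component-level metrics in the paper make quantitative.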

  6. Image guidance improves localization of sonographically occult colorectal liver metastases

    NASA Astrophysics Data System (ADS)

    Leung, Universe; Simpson, Amber L.; Adams, Lauryn B.; Jarnagin, William R.; Miga, Michael I.; Kingham, T. Peter

    2015-03-01

    Assessing the therapeutic benefit of surgical navigation systems is a challenging problem in image-guided surgery. The exact clinical indications for patients who may benefit from these systems are not always clear, particularly for abdominal surgery, where image-guidance systems have failed to take hold in the same way as in orthopedic and neurosurgical applications. We report an interim analysis of a prospective clinical trial for localizing small colorectal liver metastases using the Explorer system (Path Finder Technologies, Nashville, TN). Colorectal liver metastases are small lesions that can be difficult to identify with conventional intraoperative ultrasound due to echogenicity changes in the liver resulting from chemotherapy and other preoperative treatments. Interim analysis with eighteen patients shows that 9 of 15 (60%) of these occult lesions could be detected with image guidance. Image guidance changed intraoperative management in 3 (17%) cases. These results suggest that image guidance is a promising tool for localization of small occult liver metastases and that the indications for image-guided surgery are expanding.

  7. Optical Analysis of Microscope Images

    NASA Astrophysics Data System (ADS)

    Biles, Jonathan R.

    Microscope images were analyzed with coherent and incoherent light using analog optical techniques. These techniques were found to be useful for analyzing large numbers of nonsymbolic, statistical microscope images. In the first part, phase-coherent transparencies containing 20-100 human multiple myeloma nuclei were simultaneously photographed at 100× magnification using high-resolution holographic film developed to high contrast. An optical transform was obtained by focusing the laser onto each nuclear image and allowing the diffracted light to propagate onto a one-dimensional photosensor array. This method reduced the data to the positions of the first two intensity minima and the intensities of successive maxima. These values were used to estimate the four most important cancer detection clues of nuclear size, shape, darkness, and chromatin texture. In the second part, the geometric and holographic methods of phase-incoherent optical processing were investigated for pattern recognition of real-time, diffuse microscope images. The theory and implementation of these processors were discussed in view of their mutual problems of dimness, image bias, and detector resolution. The dimness problem was solved by either using a holographic correlator or a speckle-free laser microscope. The latter was built using a spinning tilted mirror, which caused the speckle to change so quickly that it averaged out during the exposure. To solve the bias problem, low-image-bias templates were generated by four techniques: microphotography of samples, creation of typical shapes with a computer graphics editor, transmission holography of photoplates of samples, and spatially coherent color image bias removal. The first of these templates was used to perform correlations with bacteria images. The aperture bias was successfully removed from the correlation with a video frame subtractor.
To overcome the limited detector resolution it is necessary to discover some analog nonlinear intensity

  8. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an ophthalmologist making a rational pathological diagnosis for patients with optic diseases such as glaucoma, yet it is always difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes from JPEG-formatted image files that have been converted from medical images acquired with an anterior-chamber optical coherence tomographer (AC-OCT) and the corresponding image-processing software. The algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients is analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and that it has the potential to improve the accuracy of ACV calculation. Meanwhile, measures should be taken to simplify the manual preprocessing of the images.
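
    A volume estimate of this kind can be obtained by summing segmented cross-sectional areas times the slice spacing; the sketch below checks the idea against an analytic cylinder. The segmentation itself (the demanding part of the paper's method) is assumed to be given as binary masks, and all dimensions are illustrative.

```python
import numpy as np

def chamber_volume(masks, pixel_area_mm2, slice_spacing_mm):
    """Volume (mm^3) from binary cross-section masks: sum of slice areas x spacing."""
    areas_mm2 = np.array([m.sum() * pixel_area_mm2 for m in masks])
    return float(areas_mm2.sum() * slice_spacing_mm)

# Sanity check against an analytic cylinder: radius 3 mm, height 10 x 0.25 mm.
px = 0.05                                    # pixel size in mm
yy, xx = np.mgrid[-100:100, -100:100] * px   # 10 mm x 10 mm slice grid
disc = (xx ** 2 + yy ** 2) <= 3.0 ** 2       # circular cross-section mask
masks = [disc] * 10                          # 10 identical slices
vol = chamber_volume(masks, pixel_area_mm2=px ** 2, slice_spacing_mm=0.25)
```

    The pixelated estimate agrees with the analytic value pi * r^2 * h to well under 2%, which is the kind of consistency check the paper performs against clinical observation.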

  9. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
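
    The evolutionary idea can be sketched with a much-simplified (1+1) elitist strategy: mutate the real-valued taps of a small analysis/synthesis filter pair and keep a mutation only if it lowers the reconstruction error under quantization. This stands in for, and is far simpler than, the paper's GA over matched multi-resolution transform pairs; the signal, filter size, and quantization step are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
signal = np.cumsum(rng.normal(size=256))      # smooth stand-in for an image row

def codec_error(h, x, q=8.0):
    """Analysis filter h, uniform quantization (the error source), synthesis by
    the time-reversed filter; returns the mean-square reconstruction error."""
    coeffs = np.convolve(x, h, mode="same")
    deq = np.round(coeffs / q) * q
    recon = np.convolve(deq, h[::-1], mode="same")
    return float(np.mean((x - recon) ** 2))

# (1+1) elitist evolution of the filter taps: accept a mutation only if it helps,
# so the best error is non-increasing by construction.
best = np.array([0.25, 0.5, 0.25])
best_err = init_err = codec_error(best, signal)
for _ in range(300):
    cand = best + rng.normal(scale=0.02, size=best.size)
    err = codec_error(cand, signal)
    if err < best_err:
        best, best_err = cand, err
```

    A population-based GA with crossover, as in the paper, explores the coefficient space far more aggressively, but the elitist acceptance rule shown here is what guarantees monotone improvement in both cases.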

  10. Improving analysis of radiochromic films

    NASA Astrophysics Data System (ADS)

    Baptista Neto, A. T.; Meira-Belo, L. C.; Faria, L. O.

    2014-02-01

    An appropriate radiochromic film should respond uniformly throughout its surface after exposure to a uniform radiation field. The evaluation of radiochromic films may be carried out, for example, by measuring color intensities in digitized film images. Fluctuations in color intensity may be caused by many different factors, such as the optical structure of the film's active layer, defects in the structure of the film, scratches, and external agents such as dust. The use of high spatial resolution during film scanning also increases the visibility of this microscopic non-uniformity. Since the average is strongly influenced by extreme values, the use of other statistical tools, for which this problem is inconspicuous, makes better use of higher spatial resolution and reduces standard deviations. This paper compares the calibration curves of the XR-QA2 Gafchromic® radiochromic film based on three different methods: the average of all color intensity readings, the median of the same readings, and the average of the readings that fall between the first and third quartiles. Results indicate that a higher spatial resolution may be adopted whenever the calibration curve is based on tools less influenced by extreme values, such as those generated by the factors mentioned above.
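
    The three statistics compared above can be computed as follows; on readings contaminated by a few extreme values (dust or scratches), the median and the interquartile mean stay close to the true intensity while the plain average is pulled away. The simulated readings are illustrative, not film data.

```python
import numpy as np

def robust_intensity(readings):
    """Return (mean, median, interquartile mean) of the pixel readings."""
    r = np.asarray(readings, dtype=float)
    q1, q3 = np.percentile(r, [25, 75])
    inner = r[(r >= q1) & (r <= q3)]       # readings between the first and third quartiles
    return r.mean(), float(np.median(r)), inner.mean()

rng = np.random.default_rng(0)
readings = rng.normal(120.0, 2.0, size=10_000)   # nominally uniform film response
readings[:50] = 30.0                              # a few dust/scratch outliers
mean, med, iqm = robust_intensity(readings)
```

    Only 0.5% of the readings are corrupted, yet the mean shifts noticeably while the two robust estimators remain at the true value, which is why they tolerate higher scan resolutions.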

  11. Research on an Improved Medical Image Enhancement Algorithm Based on P-M Model.

    PubMed

    Dong, Beibei; Yang, Jingjing; Hao, Shangfu; Zhang, Xiao

    2015-01-01

    Image enhancement can improve the detail of an image and thereby aid its identification. At present, image enhancement is widely used on medical images, where it can support doctors' diagnoses. IEABPM (Image Enhancement Algorithm Based on P-M Model) is one of the most common image enhancement algorithms. However, it may cause the loss of texture details and other features. To solve these problems, this paper proposes an IIEABPM (Improved Image Enhancement Algorithm Based on P-M Model). Simulation demonstrates that IIEABPM can effectively solve the problems of IEABPM and improve image clarity, contrast, and brightness. PMID:26628929
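
    The underlying P-M (Perona-Malik) model is anisotropic diffusion with an edge-stopping conductance: the image is smoothed within regions while strong edges, where the gradient is large, block the diffusion. A minimal implementation of the classical scheme is sketched below; it is not the paper's IIEABPM variant, and the parameters are illustrative.

```python
import numpy as np

def perona_malik(img, iters=20, kappa=0.1, lam=0.2):
    """Classical P-M anisotropic diffusion with exponential conductance."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conductance -> 0 at strong edges
    for _ in range(iters):
        dn = np.roll(u, -1, axis=0) - u       # differences to the four neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(3)
# Noisy step image: the step (gradient ~1) should survive, the noise should not.
img = np.where(np.arange(64) < 32, 0.0, 1.0) + rng.normal(0, 0.05, size=(64, 64))
out = perona_malik(img)
```

    With lam <= 0.25 the explicit 4-neighbour update is stable; the loss of fine texture that motivates the paper's improvement comes from the same conductance also damping weak but genuine details.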

  12. Geopositioning Precision Analysis of Multiple Image Triangulation Using LRO NAC Lunar Images

    NASA Astrophysics Data System (ADS)

    Di, K.; Xu, B.; Liu, B.; Jia, M.; Liu, Z.

    2016-06-01

    This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang'e-3 (CE-3) landing site. Nine LROC NAC images are selected for a comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations, with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions obtained using all 9 images are 0.60 m, 0.50 m, and 1.23 m in the along-track, cross-track, and height directions, which are better than those of most combinations of two or more images. Nevertheless, triangulation with fewer, well-selected images can produce better precision than using all of the images.

  13. Enhanced bone structural analysis through pQCT image preprocessing.

    PubMed

    Cervinka, T; Hyttinen, J; Sievanen, H

    2010-05-01

    Several factors, including preprocessing of the image, can affect the reliability of pQCT-measured bone traits, such as cortical area and trabecular density. Using repeated scans of four different liquid phantoms and repeated in vivo scans of distal tibiae from 25 subjects, the performance of two novel preprocessing methods, based on down-sampling of the grayscale intensity histogram and on statistical approximation of the image data, was compared to 3×3 and 5×5 median filtering. According to the phantom measurements, the signal-to-noise ratio in the raw pQCT images (XCT 3000) was low (approximately 20 dB), which posed a challenge for preprocessing. Concerning the cortical analysis, the reliability coefficient (R) was 67% for the raw image and increased to 94-97% after preprocessing, without apparent preference for any method. Concerning the trabecular density, the R-values were already high (approximately 99%) in the raw images, leaving virtually no room for improvement. However, some coarse structural patterns could be seen in the preprocessed images, in contrast to a disperse distribution of density levels in the raw image. In conclusion, preprocessing cannot suppress the high noise level to the extent that the analysis of mean trabecular density is essentially improved, whereas preprocessing can enhance cortical bone analysis and also facilitate coarse structural analyses of the trabecular region.
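
    For reference, the 3×3 median filtering used as the baseline can be written in a few lines of NumPy by stacking shifted copies of the image (periodic borders for brevity); isolated impulse noise is removed exactly.

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (periodic borders) built from the 9 shifted copies."""
    shifts = [np.roll(np.roll(img, dy, axis=0), dx, axis=1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.median(np.stack(shifts), axis=0)

# An isolated impulse in an otherwise constant image is fully removed:
img = np.full((16, 16), 100.0)
img[5, 5] = 0.0
out = median3x3(img)
```

    Every 3×3 window contains at most one corrupted value among nine, so the median restores the constant level everywhere; histogram down-sampling and statistical approximation, the paper's alternatives, trade this simplicity for better preservation of genuine structure.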

  14. Improving JWST Coronagraphic Performance with Accurate Image Registration

    NASA Astrophysics Data System (ADS)

    Van Gorkom, Kyle; Pueyo, Laurent; Lajoie, Charles-Philippe; JWST Coronagraphs Working Group

    2016-06-01

    The coronagraphs on the James Webb Space Telescope (JWST) will enable high-contrast observations of faint objects at small separations from bright hosts, such as circumstellar disks, exoplanets, and quasar disks. Despite attenuation by the coronagraphic mask, bright speckles in the host’s point spread function (PSF) remain, effectively washing out the signal from the faint companion. Suppression of these bright speckles is typically accomplished by repeating the observation with a star that lacks a faint companion, creating a reference PSF that can be subtracted from the science image to reveal any faint objects. Before this reference PSF can be subtracted, however, the science and reference images must be aligned precisely, typically to 1/20 of a pixel. Here, we present several such algorithms for performing image registration on JWST coronagraphic images. Using both simulated and pre-flight test data (taken in cryovacuum), we assess (1) the accuracy of each algorithm at recovering misaligned scenes and (2) the impact of image registration on achievable contrast. Proper image registration, combined with post-processing techniques such as KLIP or LOCI, will greatly improve the performance of the JWST coronagraphs.
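
    One standard registration algorithm of the kind evaluated is phase correlation, which recovers a translation from the normalized cross-power spectrum of the two images. The sketch below recovers integer-pixel shifts on synthetic data; JWST coronagraphy needs roughly 1/20-pixel accuracy, which in practice is reached by interpolating or upsampling around the correlation peak, omitted here for brevity.

```python
import numpy as np

def register_translation(ref, img):
    """Integer-pixel translation of img relative to ref via phase correlation."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12        # keep only the phase (whitening)
    corr = np.fft.ifft2(cross).real       # a peak at the translation
    dy, dx = np.unravel_index(int(np.argmax(corr)), corr.shape)
    # map wrap-around peak coordinates to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))                   # stand-in for a reference PSF frame
img = np.roll(ref, shift=(3, -5), axis=(0, 1))    # the same frame, translated
```

    Once the shift is known, the reference PSF can be re-aligned and subtracted from the science image before post-processing such as KLIP or LOCI.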

  15. Analysis of dynamic brain imaging data.

    PubMed Central

    Mitra, P P; Pesaran, B

    1999-01-01

    Modern imaging techniques for probing brain function, including functional magnetic resonance imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for analysis and visualization of such imaging data to separate the signal from the noise and characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging, and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: "noise" characterization and suppression, and "signal" characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for nonstationarity in the data. Of particular note are 1) the development of a decomposition technique (space-frequency singular value decomposition) that is shown to be a useful means of characterizing the image data, and 2) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources. PMID:9929474
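
    The multitaper spectral estimate at the core of these protocols averages periodograms computed with orthogonal Slepian (DPSS) tapers, trading frequency resolution for variance reduction. A minimal version using SciPy's `dpss` window is sketched below; the time-bandwidth product NW, taper count K, and test signal are illustrative choices.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs, NW=4.0, K=7):
    """Average the periodograms of x computed with K orthogonal DPSS tapers."""
    n = x.size
    tapers = dpss(n, NW, Kmax=K)                        # shape (K, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectra.mean(axis=0) / fs

fs = 200.0                                   # e.g., an MEG-like sampling rate
t = np.arange(2000) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 11.0 * t) + 0.5 * rng.normal(size=t.size)
freqs, psd = multitaper_psd(x, fs)
```

    The 11 Hz component stands out clearly above the noise floor; applying the same estimator in short moving windows gives the nonstationary, frequency-based representation the paper advocates.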

  16. NIH Image to ImageJ: 25 years of image analysis.

    PubMed

    Schneider, Caroline A; Rasband, Wayne S; Eliceiri, Kevin W

    2012-07-01

    For the past 25 years NIH Image and ImageJ software have been pioneers as open tools for the analysis of scientific images. We discuss the origins, challenges and solutions of these two programs, and how their history can serve to advise and inform other software projects.

  17. Factor Analysis of the Image Correlation Matrix.

    ERIC Educational Resources Information Center

    Kaiser, Henry F.; Cerny, Barbara A.

    1979-01-01

    Whether to factor the image correlation matrix or to use a new model with an alpha factor analysis of it is mentioned, with particular reference to the determinacy problem. It is pointed out that the distribution of the images is sensibly multivariate normal, making for "better" factor analyses. (Author/CTM)

  18. Automatic SAR and optical images registration method based on improved SIFT

    NASA Astrophysics Data System (ADS)

    Yue, Chunyu; Jiang, Wanshou

    2014-10-01

    An automatic SAR and optical image registration method based on an improved SIFT is proposed in this paper, using a two-step, coarse-to-fine strategy. The geometric relation between the images is first established from geographic information, and the images are resampled to the elevation datum plane to eliminate rotation and resolution differences. SIFT features, extracted with the dominant-direction-improved SIFT, are then matched between the two images using SSIM as the similarity measure, according to the structure information of the SIFT features. As the rotation difference is eliminated in images of flat areas after rough registration, the number of correct matches and the correct matching rate can be increased by altering the feature orientation assignment. Parallax and angle restrictions are then introduced to improve the matching performance through clustering analysis in the angle and parallax domains: the original matches are mapped, in sequence, to a parallax feature space and a rotation feature space established from custom-defined parallax and rotation parameters. Cluster analysis is applied in these feature spaces, the relationship between the cluster parameters and the matching result is analyzed, and, owing to the clustering, correct matches are retained. Finally, the perspective transform parameters for the registration are obtained with the RANSAC algorithm, which simultaneously removes the remaining false matches. Experiments show that the proposed algorithm is effective for the registration of SAR and optical images with large differences.
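
    The parallax-restriction step can be illustrated by a simplified filter: compute each match's displacement and keep the matches that lie near the dominant cluster, here located by the per-coordinate median rather than a full cluster analysis. The point sets, shift, and tolerance are synthetic assumptions, not the paper's data.

```python
import numpy as np

def filter_matches_by_parallax(p1, p2, tol=3.0):
    """Keep matches whose (dx, dy) parallax lies near the dominant cluster."""
    d = p2 - p1                          # per-match displacement vectors
    center = np.median(d, axis=0)        # robust estimate of the dominant parallax
    return np.linalg.norm(d - center, axis=1) <= tol

rng = np.random.default_rng(7)
p1 = rng.uniform(0, 100, size=(50, 2))
p2 = np.empty_like(p1)
p2[:40] = p1[:40] + np.array([10.0, -4.0]) + rng.normal(0, 0.5, size=(40, 2))  # inliers
p2[40:] = rng.uniform(0, 100, size=(10, 2))                                    # false matches
keep = filter_matches_by_parallax(p1, p2, tol=3.0)
```

    Because correct matches share a consistent displacement after rough registration, they form a tight cluster in parallax space while false matches scatter, which is what the paper's clustering in the parallax and angle domains exploits before the final RANSAC stage.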

  19. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  20. ECG-synchronized DSA exposure control: improved cervicothoracic image quality

    SciTech Connect

    Kelly, W.M.; Gould, R.; Norman, D.; Brant-Zawadzki, M.; Cox, L.

    1984-10-01

    An electrocardiogram (ECG)-synchronized x-ray exposure sequence was used to acquire digital subtraction angiographic (DSA) images during 13 arterial injection studies of the aortic arch or carotid bifurcations. These gated images were compared with matched ungated DSA images acquired using the same technical factors, contrast material volume, and patient positioning. Subjective assessments by five experienced observers of edge definition, vessel conspicuousness, and overall diagnostic quality showed overall preference for one of the two acquisition methods in 69% of cases studied. Of these, the ECG-synchronized exposure series were rated superior in 76%. These results, as well as the relatively simple and inexpensive modifications required, suggest that routine use of ECG exposure control can facilitate improved arterial DSA evaluations of suspected cervicothoracic vascular disease.

  1. Image Quality Improvement Performance Using the Synthetic Aperture Focusing Technique Data

    NASA Astrophysics Data System (ADS)

    Acevedo, P.; Durán, A.; Rubio, E.

    A wavelet application to improve image quality using Synthetic Aperture Focusing Technique (SAFT) data is presented. The application is based on wavelet packet analysis applied to RF signals sensed by transducers during a scanning process. SAFT is performed for each level in the wavelet packet decomposition tree. For image formation, a delay-and-add algorithm with per-pixel focusing has been implemented, and calculations have been carried out using the Matlab toolbox. The method has been applied to image formation for a point reflector simulated using the Field II toolbox and for a phantom constructed with nine acrylic reflectors. The results show that axial resolution is improved and that the formed images have a better signal-to-noise ratio.
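    The delay-and-add focusing at the core of SAFT can be sketched as follows, in Python rather than Matlab: for each image pixel, sum the RF samples taken by every scan position at the round-trip delay to that pixel. The monostatic geometry, sound speed, and sampling rate are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def saft_delay_and_sum(rf, positions, pixels, c=1540.0, fs=40e6):
    """Minimal delay-and-add SAFT sketch. `rf` is (n_positions, n_samples);
    `positions` are element x-coordinates; `pixels` are (x, z) points."""
    n_pos, n_samp = rf.shape
    img = np.zeros(len(pixels))
    for k, (px, pz) in enumerate(pixels):
        for i, x in enumerate(positions):
            dist = np.hypot(px - x, pz)           # one-way path to the pixel
            idx = int(round(2 * dist / c * fs))   # round-trip sample index
            if idx < n_samp:
                img[k] += rf[i, idx]
    return img

# Synthetic point reflector at (0, 20 mm): a unit echo is placed at the
# exact round-trip sample for each element position.
positions = np.linspace(-0.005, 0.005, 11)
rf = np.zeros((11, 1200))
for i, x in enumerate(positions):
    rf[i, int(round(2 * np.hypot(x, 0.02) / 1540.0 * 40e6))] = 1.0
img = saft_delay_and_sum(rf, positions, [(0.0, 0.02), (0.003, 0.015)])
```

    At the reflector pixel the echoes add coherently across all positions; at an off-target pixel they do not.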

  2. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any type of stress. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with (iii) a quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts

  3. A Robust Actin Filaments Image Analysis Framework.

    PubMed

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-08-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any type of stress. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the 'cartoon' image, we apply a multi-scale line detector coupled with (iii) a quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks.
Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in

  4. Analysis of the relationship between the volumetric soil moisture content and the NDVI from high resolution multi-spectral images for definition of vineyard management zones to improve irrigation

    NASA Astrophysics Data System (ADS)

    Martínez-Casasnovas, J. A.; Ramos, M. C.

    2009-04-01

    As suggested by previous research in the field of precision viticulture, intra-field yield variability depends on the variation of soil properties, and in particular on the soil moisture content. Since mapping this soil property in detail for precision viticulture applications is highly costly, the objective of the present research is to analyse its relationship with the normalised difference vegetation index (NDVI) from high resolution satellite images, in order to use it in the definition of vineyard management zones. The final aim is to improve irrigation in commercial vineyard blocks for better management of inputs and to deliver a more homogeneous fruit to the winery. The study was carried out in a vineyard block located in Raimat (NE Spain, Costers del Segre Designation of Origin). This is a semi-arid area with a continental Mediterranean climate and a total annual precipitation between 300-400 mm. The vineyard block (4.5 ha) is planted with Syrah vines in a 3x2 m pattern. The vines are irrigated by means of drips under a partial root drying schedule. Initially, the irrigation sectors had a quadrangular distribution, with a size of about 1 ha each. Yield is highly variable within the block, presenting a coefficient of variation of 24.9%. For the measurement of the soil moisture content a regular sampling grid of 30 x 40 m was defined. This represents a sample density of 8 samples ha-1. At the nodes of the grid, TDR (Time Domain Reflectometer) probe tubes were permanently installed down to 80 cm or until reaching a contrasting layer. Multi-temporal measurements were taken at different depths (every 20 cm) between November 2006 and December 2007. For each date, a map of the variability of the profile soil moisture content was interpolated by means of geostatistical analysis: from the measured values at the grid points the experimental variograms were computed and modelled, and global block kriging (10 m squared blocks) was undertaken with a grid spacing of 3 m x 3 m.
On the
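    The NDVI referred to above is computed per pixel from the red and near-infrared reflectances. A minimal sketch (the small epsilon guarding against division by zero over water or shadow pixels is an implementation detail, not from the paper):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# A vegetated pixel (high NIR reflectance) and a zero-signal pixel.
v = ndvi(np.array([0.5, 0.0]), np.array([0.1, 0.0]))
```

    Dense, healthy canopy pushes NDVI toward 1, while bare soil yields values near zero.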

  5. Improved Starting Materials for Back-Illuminated Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2009-01-01

    An improved type of starting material for the fabrication of silicon-based imaging integrated circuits that include back-illuminated photodetectors has been conceived, and a process for making these starting materials is undergoing development. These materials are intended to enable reductions in dark currents and increases in quantum efficiencies, relative to those of comparable imagers made from prior silicon-on-insulator (SOI) starting materials. Some background information is prerequisite to a meaningful description of the improved starting materials and process. A prior SOI starting material, depicted in the upper part of the figure, includes: a) A device layer on the front side, typically between 2 and 20 µm thick, made of p-doped silicon (that is, silicon lightly doped with an electron acceptor, which is typically boron); b) A buried oxide (BOX) layer (that is, a buried layer of oxidized silicon) between 0.2 and 0.5 µm thick; and c) A silicon handle layer (also known as a handle wafer) on the back side, between about 600 and 650 µm thick. After fabrication of the imager circuitry in and on the device layer, the handle wafer is etched away, the BOX layer acting as an etch stop. In subsequent operation of the imager, light enters from the back, through the BOX layer. The advantages of back illumination over front illumination have been discussed in prior NASA Tech Briefs articles.

  6. Improved background oriented schlieren imaging using laser speckle illumination

    NASA Astrophysics Data System (ADS)

    Meier, Alexander H.; Roesgen, Thomas

    2013-06-01

    A new variant of the background oriented schlieren technique is presented. Using a laser to generate a speckle reference pattern, several shortcomings of the conventional technique can be overcome. The arrangement decouples the achievable sensitivity from the placement constraint on the reference screen, facilitating the design of compact, high-sensitivity configurations. A new dual-pass imaging mode is introduced which further improves system performance and permits focusing on the target scene. Examples are presented that confirm the theoretical predictions.
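    The underlying BOS measurement is the apparent displacement of the background (speckle) pattern caused by refraction. A minimal sketch of displacement estimation via FFT phase correlation, assuming a pure integer-pixel shift; real BOS processing works on small interrogation windows with subpixel refinement, which this sketch omits:

```python
import numpy as np

def speckle_shift(ref, distorted):
    """Estimate the integer pixel shift between a reference speckle image
    and its refracted view via FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(distorted)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap the peak location into a signed displacement.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic speckle field, shifted by (3, -2) pixels.
rng = np.random.default_rng(1)
ref = rng.random((32, 32))
shift = speckle_shift(ref, np.roll(ref, (3, -2), axis=(0, 1)))
```

    Mapping such displacements over the whole image yields the schlieren-like field of refractive-index gradients.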

  7. Combining image features, case descriptions and UMLS concepts to improve retrieval of medical images.

    PubMed

    Ruiz, Miguel E

    2006-01-01

    This paper evaluates a system, UBMedTIRS, for retrieval of medical images. The system uses a combination of image and text features as well as mapping of free text to UMLS concepts. UBMedTIRS combines three publicly available tools: a content-based image retrieval system (GIFT), a text retrieval system (SMART), and a tool for mapping free text to UMLS concepts (MetaMap). The system is evaluated using the ImageCLEFmed 2005 collection that contains approximately 50,000 medical images with associated text descriptions in English, French and German. Our experimental results indicate that the proposed approach yields significant improvements in retrieval performance. Our system performs 156% above the GIFT system and 42% above the text retrieval system.
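    The combination of image-based and text-based scores can be illustrated with a simple late-fusion sketch. The min-max normalisation and the fixed weight are illustrative assumptions, not the actual fusion rule used by UBMedTIRS:

```python
def fuse_scores(image_scores, text_scores, alpha=0.3):
    """Linearly combine per-document scores from a content-based image
    system and a text engine, after min-max normalising each source."""
    def norm(s):
        lo, hi = min(s.values()), max(s.values())
        span = (hi - lo) or 1.0
        return {doc: (v - lo) / span for doc, v in s.items()}
    img, txt = norm(image_scores), norm(text_scores)
    docs = set(img) | set(txt)
    return {d: alpha * img.get(d, 0.0) + (1 - alpha) * txt.get(d, 0.0)
            for d in docs}

# Document 'b' scores well on text, 'a' only on image content.
fused = fuse_scores({'a': 1.0, 'b': 0.0}, {'b': 2.0, 'c': 0.0})
```

    Weighting text evidence more heavily reflects the common finding (echoed by the reported results) that text retrieval outperforms purely visual retrieval on this kind of collection.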

  8. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging.

    PubMed

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-05-07

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method's applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method's advantages in improving the accuracy of RHS reconstruction and imaging.
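    The Hilbert transformation at the heart of RHS reconstruction yields the analytic signal, whose magnitude recovers the signal envelope. A minimal FFT-based sketch of that building block (the paper's segmental variant applies it per signal segment, which is not reproduced here):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies and
    double the positive ones (the standard Hilbert-transform construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

# Amplitude-modulated carrier: the envelope is recovered exactly because
# all spectral components lie at positive frequencies.
t = np.linspace(0, 1, 400, endpoint=False)
env = 1.0 + 0.5 * np.cos(2 * np.pi * 2 * t)
x = env * np.cos(2 * np.pi * 50 * t)
err = float(np.max(np.abs(np.abs(analytic_signal(x)) - env)))
```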

  9. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging.

    PubMed

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method's applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method's advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114

  10. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging

    PubMed Central

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method’s applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method’s advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114

  11. On image analysis in fractography (Methodological Notes)

    NASA Astrophysics Data System (ADS)

    Shtremel', M. A.

    2015-10-01

    As in other areas of image analysis, fractography has no universal method for information convolution. An effective characteristic of an image is found by analyzing the essence and origin of every class of objects. As follows from the geometric definition of a fractal curve, its projection onto any straight line covers a certain segment many times; therefore, neither a time series (a one-valued function of time) nor an image (a one-valued function of the plane) can be a fractal. For applications, multidimensional multiscale characteristics of an image are necessary. "Full" wavelet series break the law of conservation of information.

  12. Multi-clues image retrieval based on improved color invariants

    NASA Astrophysics Data System (ADS)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has made great progress in indexing efficiency and memory usage, mainly benefiting from text retrieval technology such as the bag-of-features (BOF) model and the inverted-file structure. Meanwhile, because robust local feature invariants are selected to establish the BOF, its retrieval precision is enhanced, especially when applied to a large-scale database. However, these local feature invariants mainly account for the geometric variance of the objects in the images, so the color information of the objects fails to be exploited. With the development of information technology and the Internet, the majority of retrieval targets are color images. Therefore, retrieval performance can be further improved through proper utilization of color information. We propose an improved method by analyzing a flaw of the shadow-shading quasi-invariant: its response at object edges under varying lighting is enhanced. The color descriptors of the invariant regions are extracted and integrated into the BOF based on the local feature. The robustness of the algorithm and the improvement in performance are verified in the final experiments.

  13. Improving medical imaging report turnaround times: the role of technology.

    PubMed

    Marquez, Luis O; Stewart, Howard

    2005-01-01

    At Southern Ohio Medical Center (SOMC), the medical imaging department and the radiologists expressed a strong desire to improve workflow. This was a major motivating factor toward implementing a new RIS and speech recognition technology. The need to monitor workflow in real time and to evaluate productivity and resources necessitated that a new solution be found. A decision was made to roll out the new RIS product and speech recognition together to maximize the resources needed to interface and implement the new solution. Prior to implementation of the new RIS, the medical imaging department operated in a conventional manner, from electronic order entry to paper request. The paper request followed the study through exam completion to the radiologist. SOMC entered into a contract with its PACS vendor to participate in beta testing and clinical trials of a new RIS product for the US market. Backup plans were created in the event the product failed to function as planned--either during the beta testing period or during clinical trials. The last piece of the technology puzzle to improve report turnaround time was speech recognition, which enhanced the RIS technology as soon as it was implemented. The results show that the project has been a success. The new RIS, combined with speech recognition and the PACS, makes for a very effective solution to patient, exam, and results management in the medical imaging department.

  14. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

    We propose a texture image segmentation algorithm for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network are constructed in two parts: first, to reduce the effect of image noise on the features, a set of sigmoid functions is chosen depending on the type of feature; second, to enhance the contrast of the features, we adopt fuzzy mapping functions. The number of clusters in the output layer can be increased by an auto-growing mechanism whenever a new pattern appears. Experimental results and original and segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.

  15. Improvement of material decomposition and image quality in dual-energy radiography by reducing image noise

    NASA Astrophysics Data System (ADS)

    Lee, D.; Kim, Y.-s.; Choi, S.; Lee, H.; Choi, S.; Jo, B. D.; Jeon, P.-H.; Kim, H.; Kim, D.; Kim, H.; Kim, H.-J.

    2016-08-01

    Although digital radiography has been widely used for screening human anatomical structures in clinical situations, it has several limitations due to anatomical overlapping. To resolve this problem, dual-energy imaging techniques, which provide a method for decomposing overlying anatomical structures, have been suggested as alternative imaging techniques. Previous studies have reported several dual-energy techniques, each resulting in different image qualities. In this study, we compared three dual-energy techniques: simple log subtraction (SLS), simple smoothing of a high-energy image (SSH), and anti-correlated noise reduction (ACNR) with respect to material thickness quantification and image quality. To evaluate dual-energy radiography, we conducted Monte Carlo simulation and experimental phantom studies. The Geant 4 Application for Tomographic Emission (GATE) v 6.0 and tungsten anode spectral model using interpolation polynomials (TASMIP) codes were used for simulation studies and digital radiography, and human chest phantoms were used for experimental studies. The results of the simulation study showed improved image contrast-to-noise ratio (CNR) and coefficient of variation (COV) values and bone thickness estimation accuracy by applying the ACNR and SSH methods. Furthermore, the chest phantom images showed better image quality with the SSH and ACNR methods compared to the SLS method. In particular, the bone texture characteristics were well-described by applying the SSH and ACNR methods. In conclusion, the SSH and ACNR methods improved the accuracy of material quantification and image quality in dual-energy radiography compared to SLS. Our results can contribute to better diagnostic capabilities of dual-energy images and accurate material quantification in various clinical situations.
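    The simplest of the three methods, SLS, is a weighted difference of log-transformed low- and high-energy images. A toy sketch in which the weight is chosen so the soft-tissue term cancels exactly; the attenuation coefficients and thicknesses are invented for the example, not taken from the study:

```python
import numpy as np

def simple_log_subtraction(low, high, w):
    """SLS bone image: log(low-energy) minus a weighted log(high-energy).
    With w equal to the ratio of soft-tissue attenuation coefficients,
    the soft-tissue contribution cancels, leaving only bone contrast."""
    return np.log(low) - w * np.log(high)

# Two pixels with identical bone thickness (1 cm) but different soft-tissue
# thickness (0 vs 3 cm), imaged with toy attenuation coefficients:
# low energy: mu_bone=0.5, mu_soft=0.3; high energy: mu_bone=0.3, mu_soft=0.2.
t_bone, t_soft = np.array([1.0, 1.0]), np.array([0.0, 3.0])
low = np.exp(-(0.5 * t_bone + 0.3 * t_soft))
high = np.exp(-(0.3 * t_bone + 0.2 * t_soft))
bone = simple_log_subtraction(low, high, w=0.3 / 0.2)
```

    Both pixels produce the same SLS value despite very different soft-tissue overlay, which is the decomposition the paper's noise-reduction variants (SSH, ACNR) build upon.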

  16. Improved LLS imaging performance in scattering-dominant waters

    NASA Astrophysics Data System (ADS)

    Dalgleish, Fraser R.; Caimi, Frank M.; Britton, Walter B.; Andren, Carl F.

    2009-05-01

    Experimental results from two alternate approaches to underwater imaging based around the well known Laser Line Scan (LLS) serial imaging technique are presented. Traditionally employing Continuous Wave (CW) laser excitation, LLS is known to improve achievable distance and image contrast in scattering-dominant waters by reducing both the backscatter and forward scatter levels reaching the optical receiver. This study involved designing and building prototype benchtop CW-LLS and pulsed-gated LLS imagers to perform a series of experiments in the Harbor Branch Oceanographic Institute (HBOI) full-scale laser imaging tank, under controlled scattering conditions using known particle suspensions. Employing fixed laser-receiver separation (24.3cm) in a bi-static optical geometry, the CW-LLS was capable of producing crisp, high contrast images at beyond 4 beam attenuation lengths at 7 meters stand-off distance. Beyond this stand-off distance or at greater turbidity, the imaging performance began to be limited mainly by multiple backscatter and shot noise generated in the receiver, eventually reaching a complete contrast limit at around 6 beam attenuation lengths. Using identical optical geometry as the CW-LLS, a pulsed-gated laser line scan (PG-LLS) system was configured and tested, demonstrating a significant reduction in the backscatter reaching the receiver. When compared with the CW-LLS at 7 meters stand-off distance, the PG-LLS did not become limited due to multiple backscatter, instead reaching a limit (believed to be primarily due to forward-scattered light overcoming the attenuated direct target signal) beyond 7 beam attenuation lengths. This result demonstrates the potential for a greater operational limit as compared to previous CW-LLS configuration.

  17. Optimal grid point selection for improved nonrigid medical image registration

    NASA Astrophysics Data System (ADS)

    Fookes, Clinton; Maeder, Anthony

    2004-05-01

    Non-rigid image registration is an essential tool for overcoming the inherent local anatomical variations that exist between medical images acquired from different individuals or atlases, among others. This type of registration defines a deformation field that gives a translation or mapping for every pixel in the image. One popular local approach for estimating this deformation field, known as block matching, defines a grid of control points on an image, each taken as the centre of a small window. These windows are then translated in the second image to maximise a local similarity criterion. This generates two corresponding sets of control points for the two images, yielding a sparse deformation field. This sparse field can then be propagated to the entire image using well known methods such as the thin-plate spline warp or simple Gaussian convolution. Previous block matching procedures all utilise uniformly distributed grid points. This results in a sparse deformation field containing displacement estimates at uniformly spaced locations, which neglects the evidence that block matching results depend on the amount of local information content: results are better in regions of high information than in regions of low information. Consequently, this paper addresses this drawback by proposing the use of a Reversible Jump Markov Chain Monte Carlo (RJMCMC) statistical procedure to optimally select grid points of interest, with a greater concentration in regions of high information and a lower concentration in regions of low information. Results show that non-rigid registration can be improved by using optimally selected grid points of interest.
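    The block-matching step for one control point can be sketched as an exhaustive search over small translations. The similarity measure here (a plain sum of products) is a stand-in for whatever local criterion a given method uses; real implementations typically use normalised cross-correlation or mutual information:

```python
import numpy as np

def block_match(fixed, moving, centre, half=4, search=3):
    """Return the (dy, dx) displacement within +/-search pixels that best
    matches the window around `centre` in `fixed` to `moving`."""
    cy, cx = centre
    ref = fixed[cy - half:cy + half + 1, cx - half:cx + half + 1]
    best, best_d = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = moving[cy + dy - half:cy + dy + half + 1,
                         cx + dx - half:cx + dx + half + 1]
            score = float((ref * win).sum())
            if score > best:
                best, best_d = score, (dy, dx)
    return best_d

# A 5x5 blob in `fixed`, shifted by (+2, +1) pixels in `moving`.
fixed = np.zeros((32, 32))
fixed[14:19, 14:19] = 1.0
moving = np.zeros((32, 32))
moving[16:21, 15:20] = 1.0
d = block_match(fixed, moving, (16, 16))
```

    Repeating this at every grid point yields the sparse deformation field that is then propagated to the whole image.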

  18. Malware analysis using visualized image matrices.

    PubMed

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively. PMID:25133202
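    The idea of turning opcode sequences into coloured image matrices and comparing them can be sketched as follows. The MD5-based coordinate/colour scheme and the cosine similarity used here are illustrative assumptions, not the paper's exact construction:

```python
import hashlib
import numpy as np

def opcode_image(opcodes, side=8):
    """Hash successive opcode pairs to an (x, y) pixel and an RGB colour,
    accumulating into a fixed-size image matrix."""
    img = np.zeros((side, side, 3), dtype=np.uint32)
    for a, b in zip(opcodes, opcodes[1:]):
        digest = hashlib.md5(f"{a}:{b}".encode()).digest()
        x, y = digest[0] % side, digest[1] % side
        img[y, x] += np.frombuffer(digest[2:5], dtype=np.uint8)
    return img

def image_similarity(img1, img2):
    """Cosine similarity between flattened image matrices."""
    a, b = img1.ravel().astype(float), img2.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

ops = ["push", "mov", "call", "ret"] * 10
img_a = opcode_image(ops)
sim_same = image_similarity(img_a, opcode_image(ops))
sim_other = image_similarity(img_a, opcode_image(["xor", "add", "jmp", "cmp"] * 10))
```

    Samples from the same family, sharing most opcode pairs, light up the same pixels and score near 1; unrelated samples populate different regions of the matrix.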

  19. Malware Analysis Using Visualized Image Matrices

    PubMed Central

    Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively. PMID:25133202

  20. Malware analysis using visualized image matrices.

    PubMed

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.

  1. Body image change and improved eating self-regulation in a weight management intervention in women

    PubMed Central

    2011-01-01

    Background Successful weight management involves the regulation of eating behavior. However, the specific mechanisms underlying its successful regulation remain unclear. This study examined one potential mechanism by testing a model in which improved body image mediated the effects of obesity treatment on eating self-regulation. Further, this study explored the role of different body image components. Methods Participants were 239 overweight women (age: 37.6 ± 7.1 yr; BMI: 31.5 ± 4.1 kg/m2) engaged in a 12-month behavioral weight management program, which included a body image module. Self-reported measures were used to assess evaluative and investment body image, and eating behavior. Measurements occurred at baseline and at 12 months. Baseline-residualized scores were calculated to report change in the dependent variables. The model was tested using partial least squares analysis. Results The model explained 18-44% of the variance in the dependent variables. Treatment significantly improved both body image components, particularly by decreasing its investment component (f2 = .32 vs. f2 = .22). Eating behavior was positively predicted by investment body image change (p < .001) and to a lesser extent by evaluative body image (p < .05). Treatment had significant effects on 12-month eating behavior change, which were fully mediated by investment and partially mediated by evaluative body image (effect ratios: .68 and .22, respectively). Conclusions Results suggest that improving body image, particularly by reducing its salience in one's personal life, might play a role in enhancing eating self-regulation during weight control. Accordingly, future weight loss interventions could benefit from proactively addressing body image-related issues as part of their protocols. PMID:21767360

  2. Analysis of physical processes via imaging vectors

    NASA Astrophysics Data System (ADS)

    Volovodenko, V.; Efremova, N.; Efremov, V.

    2016-06-01

    Practically all modeled processes are random in one way or another. The most fully developed theoretical foundation is the theory of Markov processes, which can be represented in different forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In a Markov process, the model's prediction of the future therefore does not change when additional information about earlier times becomes available. Modeling physical fields generally involves processes that change in time, i.e. non-stationary processes. In this case, applying the Laplace transformation introduces unjustified complications into the description, whereas a transition to other representations yields an explicit simplification. The method of imaging vectors provides constructive mathematical models and the necessary transitions in the modeling process and in the analysis itself. The flexibility of a model built on a polynomial basis allows rapid switching of the mathematical model and accelerates the subsequent analysis. The mathematical description also permits an operator representation; in turn, operator representation of the structures, algorithms and data-processing procedures significantly improves the flexibility of the modeling process.
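The Markov property invoked above (the next state depends only on the current state) can be shown in a minimal sketch; the dictionary-of-dictionaries transition table is an illustrative format:

```python
import random

def markov_step(state, transitions):
    # Draw the next state using only the current state's outgoing
    # probabilities; the path taken to reach `state` is irrelevant.
    r = random.random()
    acc = 0.0
    for nxt, p in transitions[state].items():
        acc += p
        if r < acc:
            return nxt
    return nxt  # guard against floating-point round-off
```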

  3. Image analysis in comparative genomic hybridization

    SciTech Connect

    Lundsteen, C.; Maahr, J.; Christensen, B.

    1995-01-01

    Comparative genomic hybridization (CGH) is a new technique by which genomic imbalances can be detected by combining in situ suppression hybridization of whole genomic DNA and image analysis. We have developed software for rapid, quantitative CGH image analysis by a modification and extension of the standard software used for routine karyotyping of G-banded metaphase spreads in the Magiscan chromosome analysis system. The DAPI-counterstained metaphase spread is karyotyped interactively. Corrections for image shifts between the DAPI, FITC, and TRITC images are done manually by moving the three images relative to each other. The fluorescence background is subtracted. A mean filter is applied to smooth the FITC and TRITC images before the fluorescence ratio between the individual FITC and TRITC-stained chromosomes is computed pixel by pixel inside the area of the chromosomes determined by the DAPI boundaries. Fluorescence intensity ratio profiles are generated, and peaks and valleys indicating possible gains and losses of test DNA are marked if they fall below a ratio of 0.75 or rise above 1.25. By combining the analysis of several metaphase spreads, consistent findings of gains and losses in all or almost all spreads indicate chromosomal imbalance. Chromosomal imbalances are detected either by visual inspection of fluorescence ratio (FR) profiles or by a statistical approach that compares FR measurements of the individual case with measurements of normal chromosomes. The complete analysis of one metaphase can be carried out in approximately 10 minutes. 8 refs., 7 figs., 1 tab.
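The ratio-profile and flagging steps reduce to a few lines; a schematic sketch (names are illustrative, not from the Magiscan software):

```python
def ratio_profile(fitc, tritc, eps=1e-6):
    # Pixel-by-pixel green/red fluorescence ratio along a chromosome axis;
    # eps avoids division by zero in background-subtracted pixels.
    return [g / (r + eps) for g, r in zip(fitc, tritc)]

def flag_imbalances(profile, low=0.75, high=1.25):
    # Mark possible losses (ratio < 0.75) and gains (ratio > 1.25).
    return ["loss" if v < low else "gain" if v > high else "normal"
            for v in profile]
```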

  4. A Grid-Based Image Archival and Analysis System

    PubMed Central

    Hastings, Shannon; Oster, Scott; Langella, Stephen; Kurc, Tahsin M.; Pan, Tony; Catalyurek, Umit V.; Saltz, Joel H.

    2005-01-01

    Here the authors present a Grid-aware middleware system, called GridPACS, that enables management and analysis of images on a massive scale, leveraging distributed software components coupled with interconnected computation and storage platforms. The need for this infrastructure is driven by the increasing biomedical role played by complex datasets obtained through a variety of imaging modalities. The GridPACS architecture is designed to support a wide range of biomedical applications encountered in basic and clinical research, which make use of large collections of images. Imaging data yield a wealth of metabolic and anatomic information from macroscopic (e.g., radiology) to microscopic (e.g., digitized slides) scale. Whereas this information can significantly improve understanding of disease pathophysiology as well as the noninvasive diagnosis of disease in patients, the need to process, analyze, and store large amounts of image data presents a great challenge. PMID:15684129

  5. Missile placement analysis based on improved SURF feature matching algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kaida; Zhao, Wenjie; Li, Dejun; Gong, Xiran; Sheng, Qian

    2015-03-01

    Precise battle damage assessment from video images, used to analyze missile placement, is a new area of study. This article proposes an improved speeded-up robust features algorithm, named restricted speeded-up robust features (RSURF), which combines the combat application of TV-command-guided missiles with the characteristics of video imagery. Its restrictions are reflected in two aspects: first, the extraction area of feature points is restricted; second, the number of feature points is restricted. The process of missile placement analysis based on video images was designed, and a video splicing process with random sample consensus (RANSAC) purification was implemented. The RSURF algorithm is shown to have good real-time performance while guaranteeing accuracy.

  6. Poster — Thur Eve — 15: Improvements in the stability of the tomotherapy imaging beam

    SciTech Connect

    Belec, J

    2014-08-15

    Use of helical TomoTherapy-based MVCT imaging for adaptive planning requires the image values (HU) to remain stable over the course of treatment. In the past, image-value stability was suboptimal, which required frequent changes to the image-value-to-density calibration curve to avoid dose errors on the order of 2–4%. The stability of the image values at our center was recently improved by stabilizing the dose rate of the machine (dose control servo) and performing daily MVCT calibration corrections. In this work, we quantify the stability of the image values over treatment time by comparing patient treatment image densities derived using MVCT and KVCT. The analysis includes 1) an MVCT - KVCT density-difference histogram, 2) an MVCT vs. KVCT density spectrum, 3) comparison of multiple averaged density profiles and 4) density differences in homogeneous locations. Over two months, the imaging beam stability was compromised several times due to a combination of target wobbling, spectral calibration, target change and magnetron issues. The stability of the image values was analyzed over the same period. Results show that the impact on the patient dose calculation is 0.7% ± 0.6%.
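The first analysis item, the MVCT - KVCT density-difference histogram with its summary statistics, can be sketched on flat voxel lists as follows (a schematic, not the authors' implementation):

```python
from collections import Counter

def density_diff_stats(mvct, kvct, bin_width=0.01):
    # Voxel-wise MVCT - KVCT density differences: histogram of integer
    # bin indices, plus the mean and standard deviation quoted in
    # stability reports.
    diffs = [m - k for m, k in zip(mvct, kvct)]
    hist = Counter(round(d / bin_width) for d in diffs)
    mean = sum(diffs) / len(diffs)
    sd = (sum((d - mean) ** 2 for d in diffs) / len(diffs)) ** 0.5
    return hist, mean, sd
```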

  7. Improved molecular imaging contrast agent for detection of human thrombus.

    PubMed

    Winter, Patrick M; Caruthers, Shelton D; Yu, Xin; Song, Sheng-Kwei; Chen, Junjie; Miller, Brad; Bulte, Jeff W M; Robertson, J David; Gaffney, Patrick J; Wickline, Samuel A; Lanza, Gregory M

    2003-08-01

    Molecular imaging of microthrombus within fissures of unstable atherosclerotic plaques requires sensitive detection with a thrombus-specific agent. Effective molecular imaging has been previously demonstrated with fibrin-targeted Gd-DTPA-bis-oleate (BOA) nanoparticles. In this study, the relaxivity of an improved fibrin-targeted paramagnetic formulation, Gd-DTPA-phosphatidylethanolamine (PE), was compared with Gd-DTPA-BOA at 0.05-4.7 T. Ion- and particle-based r1 relaxivities (1.5 T) for Gd-DTPA-PE (33.7 (s·mM)^-1 and 2.48 × 10^6 (s·mM)^-1, respectively) were about twofold higher than for Gd-DTPA-BOA, perhaps due to faster water exchange with surface gadolinium. Gd-DTPA-PE nanoparticles bound to thrombus surfaces via anti-fibrin antibodies (1H10) induced a 72% ± 5% higher change in R1 values at 1.5 T (ΔR1 = 0.77 ± 0.02 s^-1) relative to Gd-DTPA-BOA (ΔR1 = 0.45 ± 0.02 s^-1). These studies demonstrate marked improvement in a fibrin-specific molecular imaging agent that might allow sensitive, early detection of vascular microthrombi, the antecedent to stroke and heart attack.

  8. EPA guidance on improving the image of psychiatry.

    PubMed

    Möller-Leimkühler, A M; Möller, H-J; Maier, W; Gaebel, W; Falkai, P

    2016-03-01

    This paper explores causes, explanations and consequences of the negative image of psychiatry and develops recommendations for improvement. It is primarily based on a WPA guidance paper on how to combat the stigmatization of psychiatry and psychiatrists and a Medline search on related publications since 2010. Furthermore, focussing on potential causes and explanations, the authors performed a selective literature search regarding additional image-related issues such as mental health literacy and diagnostic and treatment issues. Underestimation of psychiatry results from both unjustified prejudices of the general public, mass media and healthcare professionals and psychiatry's own unfavourable coping with external and internal concerns. Issues related to unjustified devaluation of psychiatry include overestimation of coercion, associative stigma, lack of public knowledge, need to simplify complex mental issues, problem of the continuum between normality and psychopathology, competition with medical and non-medical disciplines and psychopharmacological treatment. Issues related to psychiatry's own contribution to being underestimated include lack of a clear professional identity, lack of biomarkers supporting clinical diagnoses, limited consensus about best treatment options, lack of collaboration with other medical disciplines and low recruitment rates among medical students. Recommendations are proposed for creating and representing a positive self-concept with different components. The negative image of psychiatry is not only due to unfavourable communication with the media, but is basically a problem of self-conceptualization. Much can be improved. However, psychiatry will remain a profession with an exceptional position among the medical disciplines, which should be seen as its specific strength. PMID:26874959

  9. Repeated-Measures Analysis of Image Data

    NASA Technical Reports Server (NTRS)

    Newton, H. J.

    1983-01-01

    It is suggested that applying a modified analysis-of-variance procedure to data sampled systematically from a rectangular array of image data can provide a measure of the homogeneity of means along single directions of the array, and of how variations in perpendicular directions interact. The modification of analysis of variance required to account for spatial correlation is described theoretically and demonstrated numerically on simulated data.
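The underlying two-way decomposition of a systematically sampled grid can be sketched as follows; the spatial-correlation correction that is the paper's actual contribution is omitted here:

```python
def grid_anova(grid):
    # Two-way decomposition of a sampled image grid into row effects,
    # column effects, and residual sums of squares.
    r, c = len(grid), len(grid[0])
    grand = sum(sum(row) for row in grid) / (r * c)
    row_means = [sum(row) / c for row in grid]
    col_means = [sum(grid[i][j] for i in range(r)) / r for j in range(c)]
    ss_rows = c * sum((m - grand) ** 2 for m in row_means)
    ss_cols = r * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((grid[i][j] - grand) ** 2
                   for i in range(r) for j in range(c))
    return ss_rows, ss_cols, ss_total - ss_rows - ss_cols
```

Large row or column sums of squares relative to the residual would indicate inhomogeneity of means in that direction.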

  10. Conducting a SWOT Analysis for Program Improvement

    ERIC Educational Resources Information Center

    Orr, Betsy

    2013-01-01

    A SWOT (strengths, weaknesses, opportunities, and threats) analysis of a teacher education program, or any program, can be the driving force for implementing change. A SWOT analysis is used to assist faculty in initiating meaningful change in a program and to use the data for program improvement. This tool is useful in any undergraduate or degree…

  11. Improving reliability of live/dead cell counting through automated image mosaicing.

    PubMed

    Piccinini, Filippo; Tesei, Anna; Paganelli, Giulia; Zoli, Wainer; Bevilacqua, Alessandro

    2014-12-01

    Cell counting is one of the basic needs of most biological experiments. Numerous methods and systems have been studied to improve the reliability of counting. However, at present, manual cell counting performed with a hemocytometer still represents the gold standard, despite several problems limiting the reproducibility and repeatability of the counts and, in the end, jeopardizing their reliability in general. We present our own approach, based on image processing techniques, to improve counting reliability. It works in two stages: first building a high-resolution image of the hemocytometer's grid, then counting the live and dead cells by tagging the image with flags of different colours. In particular, we introduce GridMos (http://sourceforge.net/p/gridmos), a fully automated mosaicing method for obtaining a mosaic representing the whole hemocytometer's grid. In addition to offering more significant statistics, the mosaic "freezes" the culture status, thus permitting analysis by more than one operator. Finally, the mosaic can be tagged using an image editor, markedly improving counting reliability. The experiments performed confirm the improvements brought about by the proposed counting approach in terms of both reproducibility and repeatability, suggesting that a mosaic of the entire hemocytometer's grid, labelled through an image editor, is the best candidate for a new gold-standard method in cell counting.
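The first stage, assembling one large image of the hemocytometer's grid from registered tiles, can be sketched as follows; tile offsets are assumed already known, whereas GridMos computes them automatically:

```python
def build_mosaic(tiles, offsets):
    # Paste each tile into a shared canvas at its registered (row, col)
    # offset; overlapping pixels are simply overwritten by later tiles.
    h = max(off[0] + len(t) for t, off in zip(tiles, offsets))
    w = max(off[1] + len(t[0]) for t, off in zip(tiles, offsets))
    canvas = [[0] * w for _ in range(h)]
    for tile, (r0, c0) in zip(tiles, offsets):
        for r, row in enumerate(tile):
            for c, v in enumerate(row):
                canvas[r0 + r][c0 + c] = v
    return canvas
```

A real mosaicer would blend the overlap region rather than overwrite it, but the canvas bookkeeping is the same.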

  12. Improving image classification in a complex wetland ecosystem through image fusion techniques

    NASA Astrophysics Data System (ADS)

    Kumar, Lalit; Sinha, Priyakant; Taylor, Subhashni

    2014-01-01

    The aim of this study was to evaluate the impact of image fusion techniques on vegetation classification accuracies in a complex wetland system. Fusion of panchromatic (PAN) and multispectral (MS) Quickbird satellite imagery was undertaken using four image fusion techniques: Brovey, hue-saturation-value (HSV), principal components (PC), and Gram-Schmidt (GS) spectral sharpening. These four fusion techniques were compared in terms of their mapping accuracy to a normal MS image using maximum-likelihood classification (MLC) and support vector machine (SVM) methods. Gram-Schmidt fusion technique yielded the highest overall accuracy and kappa value with both MLC (67.5% and 0.63, respectively) and SVM methods (73.3% and 0.68, respectively). This compared favorably with the accuracies achieved using the MS image. Overall, improvements of 4.1%, 3.6%, 5.8%, 5.4%, and 7.2% in overall accuracies were obtained in case of SVM over MLC for Brovey, HSV, GS, PC, and MS images, respectively. Visual and statistical analyses of the fused images showed that the Gram-Schmidt spectral sharpening technique preserved spectral quality much better than the principal component, Brovey, and HSV fused images. Other factors, such as the growth stage of species and the presence of extensive background water in many parts of the study area, had an impact on classification accuracies.
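Of the four fusion techniques, the Brovey transform is the simplest to sketch: each multispectral band at a pixel is rescaled by the ratio of the PAN value to the sum of the bands (a per-pixel illustration, not a full image pipeline):

```python
def brovey_fuse(ms_pixel, pan):
    # Brovey transform at one pixel: band_i' = band_i * PAN / sum(bands).
    total = sum(ms_pixel)
    if total == 0:
        return tuple(0.0 for _ in ms_pixel)
    return tuple(b * pan / total for b in ms_pixel)
```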

  13. Particle Pollution Estimation Based on Image Analysis.

    PubMed

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction. PMID:26828757
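A minimal version of the feature-plus-regression idea, with one hypothetical image feature (global contrast) and one-variable ordinary least squares; the paper itself uses six features plus auxiliary data such as sun position and weather:

```python
def extract_contrast(image):
    # One possible image feature: global contrast (max - min luminance),
    # which haze tends to suppress.
    flat = [p for row in image for p in row]
    return max(flat) - min(flat)

def fit_linear(xs, ys):
    # Ordinary least squares for one feature: returns (slope, intercept)
    # for predicting a PM index from the feature value.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx
```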

  16. Image Chain Analysis For Digital Image Rectification System

    NASA Astrophysics Data System (ADS)

    Arguello, Roger J.

    1981-07-01

    An image chain analysis, utilizing a comprehensive computer program, has been generated for the key elements of a digital image rectification system. System block diagrams and analyses for three system configurations employing film scanner input have been formulated with a parametric specification of pertinent element modulation transfer functions and input film scene spectra. The major elements of the system for this analysis include a high-resolution, high-speed charge-coupled device film scanner, three candidate digital resampling option algorithms (i.e., nearest neighbor, bilinear interpolation and cubic convolution methods), and two candidate printer reconstructor implementations (solid-state light-emitting diode printer and laser beam recorder). Suitable metrics for the digital rectification system, incorporating the effects of interpolation and resolution error, were established, and the image chain analysis program was used to perform a quantitative comparison of the three resampling options with the two candidate printer reconstructor implementations. The nearest neighbor digital resampling function is found to be a good compromise choice when cascaded with either a light-emitting diode printer or laser beam recorder. The resulting composite intensity point spread functions, including resampling and both types of reconstruction, are bilinear and quadratic, respectively.

  17. Data analysis for GOPEX image frames

    NASA Technical Reports Server (NTRS)

    Levine, B. M.; Shaik, K. S.; Yan, T.-Y.

    1993-01-01

    The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration, conducted 9-16 Dec. 1992, is described. Laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and better agreement would have been found given more received laser pulses. Future experiments should consider methods to reliably measure low-intensity pulses and, through experimental planning, to locate pulse positions geometrically with greater certainty.
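The background-correction step applied before measuring received pulse intensities can be sketched as follows (the frame layout and pixel coordinates are illustrative):

```python
def pulse_intensity(frame, pulse_pixels):
    # Estimate the mean background from non-pulse pixels, then sum the
    # background-subtracted values over the pulse footprint.
    all_px = {(i, j) for i in range(len(frame)) for j in range(len(frame[0]))}
    bg = all_px - set(pulse_pixels)
    bg_mean = sum(frame[i][j] for i, j in bg) / len(bg)
    return sum(frame[i][j] - bg_mean for i, j in pulse_pixels)
```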

  18. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special-purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance at low cost offered by advances in semiconductor technology. Another key issue is a maturing understanding of the problems and of the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with an orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  19. Image-Guided Radiation Therapy: the potential for imaging science research to improve cancer treatment outcomes

    NASA Astrophysics Data System (ADS)

    Williamson, Jeffrey

    2008-03-01

    The role of medical imaging in the planning and delivery of radiation therapy (RT) is rapidly expanding. This is being driven by two developments: Image-guided radiation therapy (IGRT) and biological image-based planning (BIBP). IGRT is the systematic use of serial treatment-position imaging to improve geometric targeting accuracy and/or to refine target definition. The enabling technology is the integration of high-performance three-dimensional (3D) imaging systems, e.g., onboard kilovoltage x-ray cone-beam CT, into RT delivery systems. IGRT seeks to adapt the patient's treatment to weekly, daily, or even real-time changes in organ position and shape. BIBP uses non-anatomic imaging (PET, MR spectroscopy, functional MR, etc.) to visualize abnormal tissue biology (angiogenesis, proliferation, metabolism, etc.) leading to more accurate clinical target volume (CTV) delineation and more accurate targeting of high doses to tissue with the highest tumor cell burden. In both cases, the goal is to reduce both systematic and random tissue localization errors (2-5 mm for conventional RT) and improve conformality, so that planning target volume (PTV) margins (varying from 8 to 20 mm in conventional RT), used to ensure target volume coverage in the presence of geometric error, can be substantially reduced. Reduced PTV expansion allows more conformal treatment of the target volume, increased avoidance of normal tissue and potential for safe delivery of more aggressive dose regimens. This presentation will focus on the imaging science challenges posed by IGRT and BIBP. These issues include: Development of robust and accurate nonrigid image-registration (NIR) tools: Extracting locally nonlinear mappings that relate, voxel-by-voxel, one 3D anatomic representation of the patient to differently deformed anatomies acquired at different time points, is essential if IGRT is to move beyond simple translational treatment plan adaptations. 
NIR is needed to map segmented and labeled anatomy from the

  20. Cancer detection by quantitative fluorescence image analysis.

    PubMed

    Parry, W L; Hemstreet, G P

    1988-02-01

    Quantitative fluorescence image analysis is a rapidly evolving biophysical cytochemical technology with the potential for multiple clinical and basic research applications. We report the application of this technique for bladder cancer detection and discuss its potential usefulness as an adjunct to methods used currently by urologists for the diagnosis and management of bladder cancer. Quantitative fluorescence image analysis is a cytological method that incorporates 2 diagnostic techniques, quantitation of nuclear deoxyribonucleic acid and morphometric analysis, in a single semiautomated system to facilitate the identification of rare events, that is individual cancer cells. When compared to routine cytopathology for detection of bladder cancer in symptomatic patients, quantitative fluorescence image analysis demonstrated greater sensitivity (76 versus 33 per cent) for the detection of low grade transitional cell carcinoma. The specificity of quantitative fluorescence image analysis in a small control group was 94 per cent and with the manual method for quantitation of absolute nuclear fluorescence intensity in the screening of high risk asymptomatic subjects the specificity was 96.7 per cent. The more familiar flow cytometry is another fluorescence technique for measurement of nuclear deoxyribonucleic acid. However, rather than identifying individual cancer cells, flow cytometry identifies cellular pattern distributions, that is the ratio of normal to abnormal cells. Numerous studies by others have shown that flow cytometry is a sensitive method to monitor patients with diagnosed urological disease. 
Based upon results in separate quantitative fluorescence image analysis and flow cytometry studies, it appears that these 2 fluorescence techniques may be complementary tools for urological screening, diagnosis and management, and that they also may be useful separately or in combination to elucidate the oncogenic process, determine the biological potential of tumors

  1. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to the rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for the improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.
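At its core, a remapper fills each output pixel by looking up a mapped coordinate in the source image; a minimal nearest-neighbor sketch (no interpolation, unlike the hardware described above):

```python
def remap(src, mapping):
    # Each output pixel pulls its value from mapping(r, c) in the source;
    # out-of-range lookups are left at zero.
    h, w = len(src), len(src[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            sr, sc = mapping(r, c)
            if 0 <= sr < h and 0 <= sc < w:
                out[r][c] = src[sr][sc]
    return out
```

Rotation, scaling, or log-polar warps are all just different choices of the `mapping` function.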

  2. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements and achieve better segmentation accuracy than the original algorithm on the UW-III, UNLV and ICDAR 2009 page segmentation competition test images and on circuit diagram datasets.
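Bloomberg's multiresolution morphology rests on 2x threshold reduction of a binary image; one reduction step can be sketched as follows (the list-of-lists layout is illustrative):

```python
def threshold_reduce(img, level=1):
    # 2x threshold reduction: an output pixel is set if at least `level`
    # of its 2x2 input pixels are set. Low levels act like dilation at
    # half resolution, high levels like erosion.
    h, w = len(img) // 2, len(img[0]) // 2
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            s = (img[2 * r][2 * c] + img[2 * r][2 * c + 1] +
                 img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1])
            out[r][c] = 1 if s >= level else 0
    return out
```

Cascading a few reductions with different levels cheaply separates large halftone regions from text, which is the idea the modified algorithm builds on.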

  3. Improved proton computed tomography by dual modality image reconstruction

    SciTech Connect

    Hansen, David C. Bassler, Niels; Petersen, Jørgen Breede Baltzer; Sørensen, Thomas Sangild

    2014-03-15

    Purpose: Proton computed tomography (CT) is a promising image modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm of water of most commercial proton therapy systems limits the sites that can be scanned from a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan. The proton CT images were reconstructed using a constrained nonlinear conjugate gradient algorithm, minimizing total variation and the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone beam CT prior was retained for the fully sampled case and the 90° interval case, with the MTF = 0.5 (modulation transfer function) ranging from 5.22 to 5.65 linepairs/cm. In the 45° interval case, the MTF = 0.5 dropped to 3.91 linepairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of prior image proton CT greatly improves the resolution and stopping power estimate, but does not fully achieve the quality of a 360

  4. Advanced automated char image analysis techniques

    SciTech Connect

    Tao Wu; Edward Lester; Michael Cloke

    2006-05-15

    Char morphology is an important characteristic when attempting to understand coal behavior and coal burnout. In this study, an augmented algorithm has been proposed to identify char types using image analysis. On the basis of a series of image processing steps, a char image is singled out from the whole image, which then allows the important major features of the char particle to be measured, including size, porosity, and wall thickness. The techniques for automated char image analysis have been tested against char images taken from the ICCP Char Atlas as well as actual char particles derived from pyrolyzed char samples. Thirty different chars were prepared in a drop tube furnace operating at 1300 °C, 1% oxygen, and 100 ms from 15 different world coals sieved into two size fractions (53-75 and 106-125 μm). The results from this automated technique are comparable with those from manual analysis, and the additional detail from the automated system has potential use in applications such as combustion modeling systems. Obtaining highly detailed char information with automated methods has traditionally been hampered by the difficulty of automatic recognition of individual char particles. 20 refs., 10 figs., 3 tabs.

  5. The electronic image stabilization technology research based on improved optical-flow motion vector estimation

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Ji, Ming; Zhang, Ying; Jiang, Wentao; Lu, Xiaoyan; Wang, Jiaoying; Yang, Heng

    2016-01-01

    The electronic image stabilization technology based on an improved optical-flow motion vector estimation technique can effectively correct abnormal image motion such as jitter and rotation. Firstly, ORB features are extracted from the image and a set of regions is built on these features; secondly, the optical-flow vector is computed in the feature regions, and, in order to reduce the computational complexity, a multi-resolution pyramid strategy is used to calculate the motion vector of the frame; finally, qualitative and quantitative analysis of the effect of the algorithm is carried out. The results show that the proposed algorithm has better stability compared with image stabilization based on the traditional optical-flow motion vector estimation method.
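
    The core of optical-flow motion vector estimation can be illustrated with a single-level Lucas-Kanade solve for a global translation; this is a simplified numpy stand-in for the paper's pyramidal, region-based method, on a synthetic pair of jittered frames.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Single-level Lucas-Kanade estimate of a global translation.
    Solves the least-squares system Ix*dx + Iy*dy = -It over all pixels."""
    Iy, Ix = np.gradient(prev.astype(float))       # spatial gradients
    It = curr.astype(float) - prev.astype(float)   # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy

# Synthetic jitter: a smooth pattern shifted right by one pixel.
yy, xx = np.mgrid[0:64, 0:64]
frame0 = np.sin(xx / 6.0) + np.cos(yy / 9.0)
frame1 = np.sin((xx - 1) / 6.0) + np.cos(yy / 9.0)
dx, dy = estimate_shift(frame0, frame1)            # dx ~ 1, dy ~ 0
```

A pyramidal implementation repeats this solve from coarse to fine resolution, warping the image by the accumulated motion at each level, which is what keeps the computational cost low for large displacements.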

  6. Improved Subspace Estimation for Low-Rank Model-Based Accelerated Cardiac Imaging

    PubMed Central

    Hitchens, T. Kevin; Wu, Yijen L.; Ho, Chien; Liang, Zhi-Pei

    2014-01-01

    Sparse sampling methods have emerged as effective tools to accelerate cardiac magnetic resonance imaging (MRI). Low-rank model-based cardiac imaging uses a pre-determined temporal subspace for image reconstruction from highly under-sampled (k, t)-space data and has been demonstrated effective for high-speed cardiac MRI. The accuracy of the temporal subspace is a key factor in these methods, yet little work has been published on data acquisition strategies to improve subspace estimation. This paper investigates the use of non-Cartesian k-space trajectories to replace the Cartesian trajectories which are omnipresent but are highly sensitive to readout direction. We also propose “self-navigated” pulse sequences which collect both navigator data (for determining the temporal subspace) and imaging data after every RF pulse, allowing for even greater acceleration. We investigate subspace estimation strategies through analysis of phantom images and demonstrate in vivo cardiac imaging in rats and mice without the use of ECG or respiratory gating. The proposed methods achieved 3-D imaging of wall motion, first-pass myocardial perfusion, and late gadolinium enhancement in rats at 74 frames per second (fps), as well as 2-D imaging of wall motion in mice at 97 fps. PMID:24801352

  7. A pairwise image analysis with sparse decomposition

    NASA Astrophysics Data System (ADS)

    Boucher, A.; Cloppet, F.; Vincent, N.

    2013-02-01

    This paper aims to detect the evolution between two images representing the same scene. The evolution detection problem has many practical applications, especially in medical images. Indeed, the concept of a patient "file" implies the joint analysis of different acquisitions taken at different times, and the detection of significant modifications. The research presented in this paper is carried out within the application context of the development of computer assisted diagnosis (CAD) applied to mammograms. It is performed on already registered pairs of images. As the registration is never perfect, we must develop a comparison method sufficiently adapted to detect real small differences between comparable tissues. In many applications, the assessment of similarity used during the registration step is also used for the interpretation step that prompts suspicious regions. In our case registration is assumed to match the spatial coordinates of similar anatomical elements. In this paper, in order to process the medical images at tissue level, the image representation is based on elementary patterns, therefore seeking patterns, not pixels. Besides, as the studied images have low entropy, the decomposed signal is expressed in a parsimonious way. Parsimonious representations are known to help extract the significant structures of a signal, and generate a compact version of the data. This change of representation should allow us to compare the studied images in a short time, thanks to the low weight of the images thus represented, while maintaining a good representativeness. The good precision of our results shows the efficiency of the approach.

  8. Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.

    1991-01-01

    Digital image processing techniques are typically used to produce improved digital images through the application of successive enhancement techniques to a given image or to generate quantitative data about the objects within that image. In support of and to assist researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in the application's three main phases: image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.
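
    The object-counting and geometric-property pipeline described above can be sketched with scipy's labeling routines; the binary image here is a toy example, and connected-component labeling stands in for whatever segmentation the thesis implemented.

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary image containing two objects.
img = np.zeros((10, 12), dtype=bool)
img[1:4, 1:5] = True      # a 3x4 rectangle
img[6:9, 7:10] = True     # a 3x3 square

# Object recognition: label 8-connected components.
labels, n_objects = ndimage.label(img, structure=np.ones((3, 3)))

# Quantitative analysis: area and centroid for each labeled object.
idx = np.arange(1, n_objects + 1)
areas = ndimage.sum(img, labels, index=idx)          # pixel counts
centroids = ndimage.center_of_mass(img, labels, index=idx)
```

Further geometric properties (bounding boxes via `ndimage.find_objects`, perimeters, moments) follow the same pattern of operating on the label array.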

  9. Automated eXpert Spectral Image Analysis

    2003-11-25

    AXSIA performs automated factor analysis of hyperspectral images. In such images, a complete spectrum is collected at each point in a 1-, 2- or 3-dimensional spatial array. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful information. Multivariate factor analysis techniques have proven effective for extracting the essential information from high dimensional data sets into a limited number of factors that describe the spectral characteristics and spatial distributions of the pure components comprising the sample. AXSIA provides tools to estimate different types of factor models including Singular Value Decomposition (SVD), Principal Component Analysis (PCA), PCA with factor rotation, and Alternating Least Squares-based Multivariate Curve Resolution (MCR-ALS). As part of the analysis process, AXSIA can automatically estimate the number of pure components that comprise the data and can scale the data to account for Poisson noise. The data analysis methods are fundamentally based on eigenanalysis of the data crossproduct matrix coupled with orthogonal eigenvector rotation and constrained alternating least squares refinement. A novel method for automatically determining the number of significant components, which is based on the eigenvalues of the crossproduct matrix, has also been devised and implemented. The data can be compressed spectrally via PCA and spatially through wavelet transforms, and algorithms have been developed that perform factor analysis in the transform domain while retaining full spatial and spectral resolution in the final result. These latter innovations enable the analysis of larger-than-core-memory spectrum-images. AXSIA was designed to perform automated chemical phase analysis of spectrum-images acquired by a variety of chemical imaging techniques. Successful applications include Energy Dispersive X-ray Spectroscopy, X
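
    The eigenanalysis-based estimation of the number of significant components can be illustrated on synthetic data. This is only a sketch of the general idea (eigenvalues of the crossproduct matrix rising above a noise floor); the threshold rule and the global scaling used here are illustrative stand-ins, not AXSIA's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical spectrum image: 400 pixels x 80 channels, three pure
# components with random abundance maps plus Poisson counting noise.
n_pix, n_chan, n_comp = 400, 80, 3
spectra = rng.random((n_comp, n_chan))
abundances = rng.random((n_pix, n_comp))
data = rng.poisson(abundances @ spectra * 200) / 200.0

# Eigenanalysis of the crossproduct matrix via SVD of the centered data.
centered = data - data.mean(axis=0)
_, s, Vt = np.linalg.svd(centered, full_matrices=False)
eigvals = s**2 / (n_pix - 1)

# Estimate the number of significant components from the eigenvalue scree:
# count eigenvalues well above a noise floor taken from the trailing values.
noise_floor = np.median(eigvals[n_chan // 2:])
n_est = int(np.sum(eigvals > 10 * noise_floor))
```

The leading `n_est` right singular vectors (rows of `Vt`) then serve as the starting spectral factors for rotation or MCR-ALS refinement.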

  10. Adaptive downsampling to improve image compression at low bit rates.

    PubMed

    Lin, Weisi; Dong, Li

    2006-09-01

    At low bit rates, better coding quality can be achieved by downsampling the image prior to compression and estimating the missing portion after decompression. This paper presents a new algorithm in such a paradigm, based on the adaptive decision of appropriate downsampling directions/ratios and quantization steps, in order to achieve higher coding quality at low bit rates while taking local visual significance into account. The full-resolution image can be restored from the DCT coefficients of the downsampled pixels so that the spatial interpolation required otherwise is avoided. The proposed algorithm significantly raises the critical bit rate to approximately 1.2 bpp, from 0.15-0.41 bpp in the existing downsample-prior-to-JPEG schemes and, therefore, outperforms the standard JPEG method in a much wider bit-rate scope. The experiments have demonstrated better PSNR improvement over the existing techniques below the critical bit rate. In addition, the adaptive mode decision not only makes the critical bit rate less image-dependent, but also automates the switching of coders in variable bit-rate applications, since the algorithm reverts to the standard JPEG method whenever necessary at higher bit rates.

  11. Improving image contrast and material discrimination with nonlinear response in bimodal atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Forchheimer, Daniel; Forchheimer, Robert; Haviland, David B.

    2015-02-01

    Atomic force microscopy has recently been extended to bimodal operation, where increased image contrast is achieved through excitation and measurement of two cantilever eigenmodes. This enhanced material contrast is advantageous in analysis of complex heterogeneous materials with phase separation on the micrometre or nanometre scale. Here we show that much greater image contrast results from analysis of nonlinear response to the bimodal drive, at harmonics and mixing frequencies. The amplitude and phase of up to 17 frequencies are simultaneously measured in a single scan. Using a machine-learning algorithm we demonstrate almost threefold improvement in the ability to separate material components of a polymer blend when including this nonlinear response. Beyond the statistical analysis performed here, analysis of nonlinear response could be used to obtain quantitative material properties at high speeds and with enhanced resolution.

  12. Applications Of Binary Image Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Tropf, H.; Enderle, E.; Kammerer, H. P.

    1983-10-01

    After discussing the conditions where binary image analysis techniques can be used, three new applications of the fast binary image analysis system S.A.M. (Sensorsystem for Automation and Measurement) are reported: (1) The human view direction is measured at TV frame rate while the subject's head is freely movable. (2) Industrial parts hanging on a moving conveyor are classified prior to spray painting by a robot. (3) In automotive wheel assembly, the eccentricity of the wheel is minimized by turning the tyre relative to the rim in order to balance the eccentricity of the components.

  13. Microscopical image analysis: problems and approaches.

    PubMed

    Bradbury, S

    1979-03-01

    This article reviews some of the problems which have been encountered in the application of automatic image analysis to problems in biology. Some of the questions involved in the actual formulation of such a problem for this approach are considered, as well as the difficulties in the analysis due to lack of specific contrast in the image and to its complexity. Various practical methods which have been successful in overcoming these problems are outlined, and the question of the desirability of an opto-manual or semi-automatic system as opposed to a fully automatic version is considered.

  14. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    manner similar to that of a baseline hyperspectral- image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
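
    The mean-subtraction step can be sketched in a few lines of numpy. The cube dimensions and offsets are hypothetical, and the quantizing wavelet coder itself is not modeled; the point is only that the per-band means are cheap side information and that reconstruction adds them back exactly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical hyperspectral cube: 16 bands x 32 x 32 pixels, with large
# band-dependent offsets (the situation mean subtraction targets).
cube = rng.random((16, 32, 32)) + np.arange(16)[:, None, None] * 10.0

# Encoder side: subtract each band's spatial mean. Only these 16 scalars
# are sent as overhead alongside the compressed residual.
band_means = cube.mean(axis=(1, 2))
residual = cube - band_means[:, None, None]   # what the wavelet coder sees

# Decoder side: add the means back after decompression (exact here,
# since no quantization step is modeled).
restored = residual + band_means[:, None, None]
```

Removing the large band-to-band offsets this way shrinks the spectrally-high-pass subband energy, which is where the compression gain comes from.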

  15. Measurements and analysis in imaging for biomedical applications

    NASA Astrophysics Data System (ADS)

    Hoeller, Timothy L.

    2009-02-01

    A Total Quality Management (TQM) approach can be used to analyze data from biomedical optical and imaging platforms of tissues. A shift from individuals to teams, partnerships, and total participation are necessary from health care groups for improved prognostics using measurement analysis. Proprietary measurement analysis software is available for calibrated, pixel-to-pixel measurements of angles and distances in digital images. Feature size, count, and color are determinable on an absolute and comparative basis. Although changes in images of histomics are based on complex and numerous factors, the variation of changes in imaging analysis to correlations of time, extent, and progression of illness can be derived. Statistical methods are preferred. Applications of the proprietary measurement software are available for any imaging platform. Quantification of results provides improved categorization of illness towards better health. As health care practitioners try to use quantified measurement data for patient diagnosis, the techniques reported can be used to track and isolate causes better. Comparisons, norms, and trends are available from processing of measurement data which is obtained easily and quickly from Scientific Software and methods. Example results for the class actions of Preventative and Corrective Care in Ophthalmology and Dermatology, respectively, are provided. Improved and quantified diagnosis can lead to better health and lower costs associated with health care. Systems support improvements towards Lean and Six Sigma affecting all branches of biology and medicine. As an example for use of statistics, the major types of variation involving a study of Bone Mineral Density (BMD) are examined. Typically, special causes in medicine relate to illness and activities; whereas, common causes are known to be associated with gender, race, size, and genetic make-up. 
Such a strategy of Continuous Process Improvement (CPI) involves comparison of patient results

  16. Multispectral Imager With Improved Filter Wheel and Optics

    NASA Technical Reports Server (NTRS)

    Bremer, James C.

    2007-01-01

    Figure 1 schematically depicts an improved multispectral imaging system of the type that utilizes a filter wheel that contains multiple discrete narrow-band-pass filters and that is rotated at a constant high speed to acquire images in rapid succession in the corresponding spectral bands. The improvement, relative to prior systems of this type, consists of the measures taken to prevent the exposure of a focal-plane array (FPA) of photodetectors to light in more than one spectral band at any given time and to prevent exposure of the array to any light during readout. In prior systems, these measures have included, variously, the use of mechanical shutters or the incorporation of wide opaque sectors (equivalent to mechanical shutters) into filter wheels. These measures introduce substantial dead times into each operating cycle: intervals during which image information cannot be collected and thus incoming light is wasted. In contrast, the present improved design does not involve shutters or wide opaque sectors, and it reduces dead times substantially. The improved multispectral imaging system is preceded by an afocal telescope and includes a filter wheel positioned so that its rotation brings each filter, in its turn, into the exit pupil of the telescope. The filter wheel contains an even number of narrow-band-pass filters separated by narrow, spoke-like opaque sectors. The geometric width of each filter exceeds the cross-sectional width of the light beam coming out of the telescope. The light transmitted by the sequence of narrow-band filters is incident on a dichroic beam splitter that reflects in a broad shorter-wavelength spectral band that contains half of the narrow bands and transmits in a broad longer-wavelength spectral band that contains the other half of the narrow spectral bands. 
The filters are arranged on the wheel so that if the pass band of a given filter is in the reflection band of the dichroic beam splitter, then the pass band of the adjacent filter

  17. Note: An improved 3D imaging system for electron-electron coincidence measurements

    SciTech Connect

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-15

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  18. Note: An improved 3D imaging system for electron-electron coincidence measurements

    NASA Astrophysics Data System (ADS)

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-01

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  19. Creating the "desired mindset": Philip Morris's efforts to improve its corporate image among women.

    PubMed

    McDaniel, Patricia A; Malone, Ruth E

    2009-01-01

    Through analysis of tobacco company documents, we explored how and why Philip Morris sought to enhance its corporate image among American women. Philip Morris regarded women as an influential political group. To improve its image among women, while keeping tobacco off their organizational agendas, the company sponsored women's groups and programs. It also sought to appeal to women it defined as "active moms" by advertising its commitment to domestic violence victims. It was more successful in securing women's organizations as allies than active moms. Increasing tobacco's visibility as a global women's health issue may require addressing industry influence. PMID:19851947

  1. Motion Analysis From Television Images

    NASA Astrophysics Data System (ADS)

    Silberberg, George G.; Keller, Patrick N.

    1982-02-01

    The Department of Defense ranges have relied on photographic instrumentation for gathering data on firings for all types of ordnance. A large inventory of cameras is available on the market that can be used for these tasks. A new set of optical instrumentation is beginning to appear which, in many cases, can directly replace photographic cameras for a great deal of the work being performed now. These are television cameras modified so they can stop motion, see in the dark, perform under hostile environments, and provide real time information. This paper discusses techniques for modifying television cameras so they can be used for motion analysis.

  2. Photoacoustic Image Analysis for Cancer Detection and Building a Novel Ultrasound Imaging System

    NASA Astrophysics Data System (ADS)

    Sinha, Saugata

    Photoacoustic (PA) imaging is a rapidly emerging non-invasive soft tissue imaging modality which has the potential to detect tissue abnormality at an early stage. Photoacoustic images map the spatially varying optical absorption property of tissue. In multiwavelength photoacoustic imaging, the soft tissue is imaged with different wavelengths, tuned to the absorption peaks of the specific light absorbing tissue constituents or chromophores, to obtain images with different contrasts of the same tissue sample. From those images, the spatially varying concentration of the chromophores can be recovered. As multiwavelength PA images can provide important physiological information related to the function and molecular composition of the tissue, they can be used for diagnosis of cancer lesions and differentiation of malignant tumors from benign tumors. In this research, a number of parameters have been extracted from multiwavelength 3D PA images of freshly excised human prostate and thyroid specimens, imaged at five different wavelengths. Using marked histology slides as ground truths, regions of interest (ROIs) corresponding to cancer, benign and normal regions have been identified in the PA images. The extracted parameters belong to different categories, namely chromophore concentration, frequency parameters and PA image pixels, and they represent different physiological and optical properties of the tissue specimens. Statistical analysis has been performed to test whether the extracted parameters are significantly different between cancer, benign and normal regions. A multidimensional [29 dimensional] feature set, built with the extracted parameters from the 3D PA images, has been divided randomly into training and testing sets. The training set has been used to train support vector machine (SVM) and neural network (NN) classifiers, while the performance of the classifiers in differentiating different tissue pathologies has been determined by the testing dataset. Using the NN
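
    The train/test classification protocol described above can be sketched with scikit-learn. The 29-dimensional feature vectors here are synthetic (class 1 is offset in a few features); real features would come from the multiwavelength PA images, and the SVM hyperparameters are illustrative defaults, not the study's settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Hypothetical feature set: 29-dimensional parameter vectors per ROI.
n_per_class, n_feat = 100, 29
X0 = rng.standard_normal((n_per_class, n_feat))   # "normal/benign" ROIs
X1 = rng.standard_normal((n_per_class, n_feat))   # "cancer" ROIs
X1[:, :5] += 2.0            # separable in the first five features
X = np.vstack([X0, X1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Random division into training and testing sets, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Holding out the test set entirely from training is what makes the reported classifier performance an honest estimate rather than a fit to the training ROIs.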

  3. Improving transient analysis technology for aircraft structures

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Chargin, Mladen

    1989-01-01

    Aircraft dynamic analyses are demanding of computer simulation capabilities. The modeling complexities of semi-monocoque construction, irregular geometry, high-performance materials, and high-accuracy analysis are present. At issue are the safety of the passengers and the integrity of the structure for a wide variety of flight-operating and emergency conditions. The technology which supports engineering of aircraft structures using computer simulation is examined. Available computer support is briefly described and improvement of accuracy and efficiency are recommended. Improved accuracy of simulation will lead to a more economical structure. Improved efficiency will result in lowering development time and expense.

  4. Improved SOT (Hinode mission) high resolution solar imaging observations

    NASA Astrophysics Data System (ADS)

    Goodarzi, H.; Koutchmy, S.; Adjabshirizadeh, A.

    2015-08-01

    We consider the best observations of the Sun available today, free of turbulent Earth-atmospheric effects, taken with the Solar Optical Telescope (SOT) onboard the Hinode spacecraft. Both the instrumental smearing and the observed stray light are analyzed in order to improve the resolution. The Point Spread Function (PSF) corresponding to the blue continuum Broadband Filter Imager (BFI) near 450 nm is deduced by analyzing (i) the limb of the Sun and (ii) images taken during the transit of the planet Venus in 2012. A combination of Gaussian and Lorentzian functions is selected to construct a PSF in order to remove both smearing due to the instrumental diffraction effects (PSF core) and the large-angle stray light due to the spiders and central obscuration (wings of the PSF) that are responsible for the parasitic stray light. A maximum-likelihood deconvolution procedure based on an optimum number of iterations is discussed. It is applied to several solar field images, including the granulation near the limb. The normal non-magnetic granulation is compared to the abnormal granulation which we call magnetic. A new feature appearing for the first time at the extreme limb of the disk (the last 100 km) is discussed in the context of the definition of the solar edge and of the solar diameter. A single sunspot is considered in order to illustrate how effectively the restoration works on the sunspot core. A set of 125 consecutive deconvolved images is assembled in a 45 min long movie illustrating the complexity of the dynamical behavior inside and around the sunspot.
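
    The Gaussian-plus-Lorentzian PSF model can be sketched as a pseudo-Voigt kernel: a Gaussian core for the diffraction smearing and a Lorentzian for the broad stray-light wings. All parameter values below are illustrative, not the fitted BFI values.

```python
import numpy as np

def pseudo_voigt_psf(size, sigma, gamma, eta):
    """Radially symmetric PSF: Gaussian core (instrumental diffraction)
    plus Lorentzian wings (large-angle stray light). eta is the
    Lorentzian fraction; all parameter values are illustrative."""
    y, x = np.mgrid[-size:size + 1, -size:size + 1].astype(float)
    r2 = x**2 + y**2
    gaussian = np.exp(-r2 / (2.0 * sigma**2))
    lorentzian = gamma**2 / (r2 + gamma**2)
    psf = (1.0 - eta) * gaussian + eta * lorentzian
    return psf / psf.sum()      # normalize to unit total energy

psf = pseudo_voigt_psf(size=15, sigma=2.0, gamma=3.0, eta=0.1)
```

A kernel like this is what the iterative maximum-likelihood (Richardson-Lucy-type) deconvolution would be run against; the slowly decaying Lorentzian wings are what allow it to remove the parasitic stray light, not just the core blur.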

  5. Methods of Improving a Digital Image Having White Zones

    NASA Technical Reports Server (NTRS)

    Wodell, Glenn A. (Inventor); Jobson, Daniel J. (Inventor); Rahman, Zia-Ur (Inventor)

    2005-01-01

    The present invention is a method of processing a digital image that is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value in accordance with sum_{n=1}^{N} W_n (log I_i(x,y) - log[I_i(x,y) * F_n(x,y)]), i = 1, ..., S, where W_n is a weighting factor, "*" is the convolution operator, and S is the total number of unique spectral bands. For each n, the function F_n(x,y) is a unique surround function applied to each position (x,y), and N is the total number of unique surround functions. Each unique surround function is scaled to improve some aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band of the image is then filtered with a filter function to generate a filtered intensity value R_i(x,y). To prevent graying of white zones in the image, the maximum of the original intensity value I_i(x,y) and the filtered intensity value R_i(x,y) is selected for display.
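
    The adjustment formula is a multiscale retinex, and one spectral band of it can be sketched directly in numpy. Gaussian kernels are used as the surround functions F_n, and the scale and weight values are illustrative assumptions; the patent's subsequent filter function is omitted, with the white-zone maximum applied to the raw retinex output instead.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(I, sigmas, weights):
    """One band of sum_n W_n * (log I - log[I * F_n]) with Gaussian
    surround functions F_n of scale sigma_n."""
    I = I.astype(float) + 1e-6                 # avoid log(0)
    R = np.zeros_like(I)
    for sigma, w in zip(sigmas, weights):
        surround = gaussian_filter(I, sigma)   # I * F_n (convolution)
        R += w * (np.log(I) - np.log(surround))
    return R

# Illustrative parameters: three surround scales, equal weights.
img = np.clip(np.random.default_rng(4).random((64, 64)), 0.05, 1.0)
R = multiscale_retinex(img, sigmas=[2, 8, 32], weights=[1/3, 1/3, 1/3])

# White-zone safeguard: select the maximum of the original and the
# retinex-processed value at each position, to prevent graying.
out = np.maximum(img, R)
```

Mixing several surround scales is what trades off dynamic range compression (small sigma) against tonal rendition (large sigma) in a single output.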

  6. Combining multispectral images and selected textural features from high-resolution images to improve discrimination of forest canopies

    NASA Astrophysics Data System (ADS)

    Ruiz, Luis A.; Inan, Igor; Baridon, Juan E.; Lanfranco, Jorge W.

    1998-12-01

    Discrimination of vegetation canopies for production of forestry and land use thematic cartography from multispectral satellite images requires high spectral and spatial resolutions, usually not available in this type of image. A methodology is proposed to improve a vegetation oriented classification from a Landsat TM image by adding texture information obtained from panchromatic aerial photographs. Multispectral classification was used to create a mask of the forested areas that was applied over the aerial mosaic composition. Further vegetation classes were defined based on textural differences, and eight texture features derived from the gray level co-occurrence matrix, three textural energy indicators and a factor of edgeness were tested. A selection of optimal features and textural parameters such as number of gray levels, window size and distance between pixels was performed using principal components and stepwise discriminant analysis techniques with a set of representative samples from each class. After a texture segmentation of the panchromatic aerial imagery using optimal parameters and features was completed, a post-classification process based on morphological operations was applied to avoid the neighboring effect generated by the texture analysis. Overall accuracy in the identification of texture classes using the four best features was 86.6%, while 88% accuracy was achieved in the classification of the complete image. This method is useful for discrimination of certain vegetation classes with low spectral separability and arranged in small forest units, increasing the classification detail in those areas of particular interest.
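
    The gray level co-occurrence matrix underlying those texture features can be sketched in a few lines; this is a minimal pure-numpy stand-in for library routines, shown for one displacement and the contrast feature, on a toy 4-level stripe texture.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix for one displacement (dx, dy),
    normalized to probabilities."""
    h, w = img.shape
    P = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def contrast(P):
    """GLCM contrast feature: sum_{i,j} (i - j)^2 P(i, j)."""
    i, j = np.indices(P.shape)
    return np.sum((i - j) ** 2 * P)

# 4-level test texture: vertical stripes alternating between levels 0 and 3.
img = np.tile(np.array([0, 3]), (8, 4))       # shape (8, 8)
P_horiz = glcm(img, dx=1, dy=0, levels=4)     # every horizontal pair differs
```

Here the horizontal displacement gives maximal contrast while the vertical one gives zero, which is exactly the directional sensitivity the classifier exploits; the number of gray levels and the pixel distance are the tunable parameters the paper optimizes.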

  7. Analysis of an interferometric Stokes imaging polarimeter

    NASA Astrophysics Data System (ADS)

    Murali, Sukumar

    Estimation of Stokes vector components from an interferometric fringe encoded image is a novel way of measuring the State Of Polarization (SOP) distribution across a scene. Imaging polarimeters employing interferometric techniques encode SOP information across a scene in a single image in the form of intensity fringes. The lack of moving parts and use of a single image eliminates the problems of conventional polarimetry: vibration, spurious signal generation due to artifacts, beam wander, and the need for registration routines. However, interferometric polarimeters are limited by narrow bandpass and short exposure time operations which decrease the Signal to Noise Ratio (SNR), defined as the ratio of the mean photon count to the standard deviation in the detected image. A simulation environment for designing an Interferometric Stokes Imaging polarimeter (ISIP) and a detector with noise effects is created and presented. Users of this environment are capable of imaging an object with defined SOP through an ISIP onto a detector producing a digitized image output. The simulation also includes bandpass imaging capabilities, control of detector noise, and object brightness levels. The Stokes images are estimated from a fringe encoded image of a scene by means of a reconstructor algorithm. A spatial domain methodology involving the idea of a unit cell and slide approach is applied to the reconstructor model developed using Mueller calculus. The validation of this methodology and effectiveness compared to a discrete approach is demonstrated with suitable examples. The pixel size required to sample the fringes and the minimum unit cell size required for reconstruction are investigated using condition numbers. The importance of the PSF of the fore-optics (telescope) used in imaging the object is investigated and analyzed using a point source imaging example, and a Nyquist criterion is presented. Reconstruction of fringe modulated images in the presence of noise involves choosing an
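
    The Mueller calculus the reconstructor is built on can be illustrated with the simplest element, an ideal linear polarizer acting on a Stokes vector; the angle and input state below are arbitrary examples.

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1,     c,     s,   0],
        [c,   c*c,   c*s,   0],
        [s,   c*s,   s*s,   0],
        [0,     0,     0,   0],
    ])

# Horizontally polarized input beam: S = (I, Q, U, V) = (1, 1, 0, 0).
S_in = np.array([1.0, 1.0, 0.0, 0.0])

# Output Stokes vector after a polarizer at 30 degrees; the intensity
# component S_out[0] follows Malus's law, 0.5 * (1 + cos(2*theta)).
theta = np.pi / 6
S_out = linear_polarizer(theta) @ S_in
```

An interferometric polarimeter model chains such Mueller matrices (retarders, polarizers, the fore-optics PSF) and the reconstructor inverts the resulting linear mapping from scene Stokes parameters to fringe intensities within each unit cell.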

  8. Improving depth resolution in digital breast tomosynthesis by iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Roth, Erin G.; Kraemer, David N.; Sidky, Emil Y.; Reiser, Ingrid S.; Pan, Xiaochuan

    2015-03-01

    Digital breast tomosynthesis (DBT) is currently enjoying tremendous growth in its application to screening for breast cancer. This is because it addresses a major weakness of mammographic projection imaging: a cancer can be hidden by overlapping fibroglandular tissue structures, or the same normal structures can mimic a malignant mass. DBT addresses these issues by acquiring a few projections over a limited-angle scanning arc, which provides some depth resolution. As DBT is a relatively new modality, there is potential to improve its performance significantly with improved image reconstruction algorithms. Previously, we reported a variation of adaptive steepest descent - projection onto convex sets (ASD-POCS) for DBT, which employed a finite differencing filter to enhance edges, improving the visibility of tissue structures and allowing for volume-of-interest reconstruction. In the present work we present a singular value decomposition (SVD) analysis to demonstrate the gain in depth resolution for DBT afforded by use of the finite differencing filter.

  9. Improving sensitivity in nonlinear Raman microspectroscopy imaging and sensing

    PubMed Central

    Arora, Rajan; Petrov, Georgi I.; Liu, Jian; Yakovlev, Vladislav V.

    2011-01-01

    Nonlinear Raman microspectroscopy based on broadband coherent anti-Stokes Raman scattering is an emerging technique for noninvasive, chemically specific, microscopic analysis of tissues and large populations of cells and particles. The sensitivity of this imaging is a critical aspect of a number of the proposed biomedical applications. It is shown that the incident laser power is the major parameter controlling this sensitivity. By carefully optimizing the laser system, high-quality vibrational spectra can be acquired at multi-kHz rates. PMID:21361677

  10. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.

  11. Image distortion analysis using polynomial series expansion.

    PubMed

    Baggenstoss, Paul M

    2004-11-01

    In this paper, we derive a technique for analysis of local distortions which affect data in real-world applications. In the paper, we focus on image data, specifically handwritten characters. Given a reference image and a distorted copy of it, the method is able to efficiently determine the rotations, translations, scaling, and any other distortions that have been applied. Because the method is robust, it is also able to estimate distortions for two unrelated images, thus determining the distortions that would be required to cause the two images to resemble each other. The approach is based on a polynomial series expansion using matrix powers of linear transformation matrices. The technique has applications in pattern recognition in the presence of distortions. PMID:15521492
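
    The paper's method expands distortions in a polynomial series of matrix powers; the baseline linear case it builds on can be illustrated directly. The sketch below estimates an affine transformation (rotation, scale, translation) between corresponding landmark sets by least squares; the landmarks and transform parameters are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(20, 2))          # reference landmarks

theta, scale, t = 0.1, 1.05, np.array([0.2, -0.1])
R = scale * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
pts_d = pts @ R.T + t                           # distorted copies

# Solve for the linear map and translation from the correspondences
X = np.hstack([pts, np.ones((20, 1))])          # homogeneous coordinates
params, *_ = np.linalg.lstsq(X, pts_d, rcond=None)
A_est, t_est = params[:2].T, params[2]

# Read rotation and scale back out of the estimated linear part
angle_est = np.arctan2(A_est[1, 0], A_est[0, 0])
scale_est = np.hypot(A_est[0, 0], A_est[1, 0])
```

Higher-order (non-affine) local distortions are what the polynomial series of matrix powers then adds on top of this linear baseline.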

  12. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach that analytically unifies both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no materials with extreme constitutive properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and creating illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.
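
    "Synthesizing a transfer function" amounts to multiplying the scene's spectrum by a chosen function H in the Fourier domain. The 1-D toy below applies an assumed ideal low-pass H to a two-tone scene; it only illustrates the Fourier mechanics, not any particular cloaking device.

```python
import numpy as np

# Toy 1-D "scene": two spatial-frequency components (5 and 40 cycles)
x = np.linspace(0, 1, 256, endpoint=False)
scene = np.sin(2 * np.pi * 5 * x) + 0.5 * np.sin(2 * np.pi * 40 * x)

F = np.fft.fft(scene)
freqs = np.fft.fftfreq(256, d=1 / 256)

# Synthesized transfer function: ideal low-pass keeping |f| <= 20 cycles
H = (np.abs(freqs) <= 20).astype(float)
out = np.fft.ifft(F * H).real   # only the 5-cycle component survives
```

Different choices of H (band-stop, phase-conjugating, etc.) correspond to the different device functions discussed above.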

  13. Principal Components Analysis In Medical Imaging

    NASA Astrophysics Data System (ADS)

    Weaver, J. B.; Huddleston, A. L.

    1986-06-01

    Principal components analysis, PCA, is basically a data reduction technique. PCA has been used in several problems in diagnostic radiology: processing radioisotope brain scans (Ref. 1), automatic alignment of radionuclide images (Ref. 2), processing MRI images (Ref. 3,4), analyzing first-pass cardiac studies (Ref. 5), correcting for attenuation in bone mineral measurements (Ref. 6), and in dual energy x-ray imaging (Ref. 6,7). This paper will progress as follows: a brief introduction to the mathematics of PCA will be followed by two brief examples of how PCA has been used in the literature. Finally, my own experience with PCA in dual-energy x-ray imaging will be given.
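
    The mathematics of PCA reduces to an eigendecomposition of the data covariance. The sketch below runs PCA on a toy two-channel dataset loosely styled on dual-energy measurements (the channel model and numbers are assumed, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "dual-energy" data: 500 pixels x 2 strongly correlated energy channels
low = rng.normal(100, 10, 500)
high = 0.8 * low + rng.normal(0, 1, 500)
X = np.column_stack([low, high])

Xc = X - X.mean(axis=0)                      # center the data
C = np.cov(Xc, rowvar=False)                 # 2x2 covariance matrix
evals, evecs = np.linalg.eigh(C)             # eigh returns ascending order
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

scores = Xc @ evecs                          # uncorrelated PC scores
explained = evals / evals.sum()              # variance explained per component
```

With highly correlated channels, nearly all variance lands in the first component, which is exactly the data-reduction property exploited in dual-energy imaging.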

  14. Imaging spectroscopic analysis at the Advanced Light Source

    SciTech Connect

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-05-12

    One of the major advances at the high brightness third generation synchrotrons is the dramatic improvement of imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV and infra-red spectroscopic analysis on a spatial scale from a few microns to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS. The following table lists all the currently available microscopes at the ALS. This article will describe some of the microscopes and some of the early applications.

  15. Measuring toothbrush interproximal penetration using image analysis

    NASA Astrophysics Data System (ADS)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.
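
    The core measurement described above is a masked threshold-and-count operation. The sketch below shows the idea on a synthetic grayscale image; the threshold, stain geometry, and region of interest are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic tooth surface: bright "enamel" with a dark stained patch
img = rng.uniform(0.7, 1.0, size=(100, 100))          # clean surface
img[40:60, 40:60] = rng.uniform(0.0, 0.3, (20, 20))   # residual stain

mask = np.zeros_like(img, dtype=bool)   # masked "interproximal" region
mask[30:70, 30:70] = True

stained = (img < 0.5) & mask            # threshold inside the ROI only
stain_fraction = stained.sum() / mask.sum()   # fraction of ROI still stained
```

Reporting `1 - stain_fraction` before vs. after brushing gives the removed-stain area measure; pixel counting like this is what yields the precision gain over visual grading.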

  16. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.
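
    For a concrete point of reference, the sketch below runs plain symmetric FastICA (tanh nonlinearity) on a toy two-source mixture in NumPy. This is the standard ICA baseline, not the paper's abundance-based algorithm; the sources and mixing matrix are assumed for illustration. Note the mixing matrix is deliberately non-orthogonal.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
# Two non-Gaussian "abundance" sources: a square wave and uniform noise
S = np.vstack([np.sign(np.sin(np.linspace(0, 40, n))),
               rng.uniform(-1, 1, n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # non-orthogonal mixing matrix
X = A @ S                                 # observed mixtures

# Whitening: decorrelate and normalize the observations
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / n
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# Symmetric FastICA with tanh nonlinearity
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = G @ Z.T / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W)   # symmetric decorrelation: (WW^T)^(-1/2) W
    W = u @ vt

S_est = W @ Z   # recovered sources, up to permutation and sign
```

Standard FastICA constrains the unmixing rows to be orthogonal in the whitened space, which is precisely the restriction the abstract's method is designed to avoid for spectrally similar materials.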

  17. Novel driver method to improve ordinary CCD frame rate for high-speed imaging diagnosis

    NASA Astrophysics Data System (ADS)

    Luo, Tong-Ding; Li, Bin-Kang; Yang, Shao-Hua; Guo, Ming-An; Yan, Ming

    2016-06-01

    The use of ordinary charge-coupled device (CCD) imagers for the analysis of fast physical phenomena is restricted by the low-speed performance resulting from their long output times. Even though the intensified CCD (ICCD), which couples a CCD with a gated image intensifier, has extended their use to high speed imaging, a deficiency remains: an ICCD can record only one image in a single shot. This paper presents a novel driver method designed to significantly improve the ordinary interline CCD burst frame rate for high-speed photography. This method is based on the use of vertical registers as storage, so that a small number of additional frames comprised of reduced-spatial-resolution images obtained via a specific sampling operation can be buffered. Hence, the interval time of the received series of images is related to the exposure and vertical transfer times only and, thus, the burst frame rate can be increased significantly. A prototype camera based on this method is designed as part of this study, exhibiting a burst rate of up to 250,000 frames per second (fps) and a capacity to record three continuous images. This device exhibits a speed enhancement of approximately 16,000 times compared with the conventional speed, with a spatial resolution reduction of only 1/4.

  18. PIXE analysis and imaging of papyrus documents

    NASA Astrophysics Data System (ADS)

    Lövestam, N. E. Göran; Swietlicki, Erik

    1990-01-01

    The analysis of antique papyrus documents using an external milliprobe is described. Missing characters of text in the documents were made visible by means of PIXE analysis and X-ray imaging of the areas studied. The contrast between the papyrus and the ink was further increased when the information contained in all the elements was taken into account simultaneously using a multivariate technique (partial least-squares regression).

  19. Visualization of Parameter Space for Image Analysis

    PubMed Central

    Pretorius, A. Johannes; Bray, Mark-Anthony P.; Carpenter, Anne E.; Ruddle, Roy A.

    2013-01-01

    Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step - initialization of sampling - and the last step - visual analysis of output. This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler - a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach. PMID:22034361
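
    The "initialization of sampling" step amounts to enumerating combinations of parameter values for batch execution. The sketch below shows a minimal grid sampler; the parameter names and ranges are hypothetical and are not CellProfiler's or Paramorama's actual API.

```python
import itertools

# Hypothetical parameter ranges for one image-analysis pipeline step
param_space = {
    "threshold": [0.3, 0.5, 0.7],
    "min_size": [10, 50],
    "smoothing": [1, 2, 4],
}

names = list(param_space)
combos = [dict(zip(names, values))
          for values in itertools.product(*param_space.values())]
# 3 * 2 * 3 = 18 settings, each to be run once and inspected visually
```

The visual-exploration stage then maps each of these sampled settings to its image output so users can tag high- and low-quality results.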

  20. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  1. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory


    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  2. Similarity measures of full polarimetric SAR images fusion for improved SAR image matching

    NASA Astrophysics Data System (ADS)

    Ding, H.

    2015-06-01

    China's first airborne SAR mapping system (CASMSAR), developed by the Chinese Academy of Surveying and Mapping, can acquire high-resolution, full polarimetric (HH, HV, VH and VV) synthetic aperture radar (SAR) data; it acquires X-band full polarimetric data at a resolution of 0.5 m. However, the speckle inherent in SAR imagery severely degrades visual interpretation and image processing, and undermines the assumption that conjugate points appear similar to each other during matching. Moreover, speckle is multiplicative, and most similarity measures used in SAR image matching are sensitive to it, so matching results obtained with such measures are unreliable and inaccurate. At the same time, each polarimetric channel carries different backscattering information about the same objects, and the four channels together contain a large amount of redundant information that can be exploited to improve matching. We therefore introduce a logarithmic transformation and a stereo matching similarity measure into airborne full polarimetric SAR image matching. First, a logarithmic transformation is applied to all images to convert the multiplicative speckle into additive noise and weaken its influence on the similarity measure. Second, to prevent performance degradation caused by speckle, the measure must be insensitive to additive noise; we therefore adopt a stereo matching similarity measure, Normalized Cross-Correlation (NCC), for full polarimetric SAR image matching. Third, to take advantage of the multi-polarimetric data and preserve the best similarity value, the four measure values calculated between the left and right single-polarization images are fused into a final measure value for matching. The method was tested on CASMSAR data. The results showed that the method delivered an effective
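
    The two key properties used above can be shown in a few lines: NCC is invariant to gain and offset, and a log transform turns multiplicative speckle into additive noise, which NCC tolerates well. The patch size and the gamma speckle model below are assumed for illustration.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)

rng = np.random.default_rng(5)
patch = rng.uniform(1, 2, (16, 16))          # "left" image patch

# NCC is exactly invariant to a gain and offset change
score_gain = ncc(patch, 2.0 * patch + 3.0)

# Multiplicative speckle (unit-mean gamma) becomes additive after log,
# so NCC between the log images remains high
speckled = patch * rng.gamma(shape=100, scale=0.01, size=(16, 16))
score = ncc(np.log(patch), np.log(speckled))
```

Fusing the four per-polarization NCC values (e.g. taking their maximum) then gives the final matching measure described above.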

  3. Imaging effects related to language improvements by rTMS.

    PubMed

    Heiss, Wolf-Dieter

    2016-04-11

    The functional deficit after a focal brain lesion is determined by the localization and the extent of the tissue damage. Since destroyed tissue usually cannot be replaced in the adult human brain, improvement or recovery of neurological deficits can be achieved only by reactivation of functionally disturbed but morphologically preserved areas or by recruitment of alternative pathways within the functional network. The visualization of disturbed interaction in functional networks and of their reorganization in the recovery after focal brain damage is the domain of functional imaging modalities such as positron emission tomography (PET). Longitudinal assessments at rest and during activation tasks during the early and later periods following a stroke can demonstrate recruitment and compensatory mechanisms in the functional network responsible for complete or partial recovery of disturbed functions. Imaging studies have shown that improvements after focal cortical injury are represented over larger cortical territories. It has also been shown that the unaffected hemisphere in some instances actually inhibits the recovery of ipsilateral functional networks and this effect of transcallosal inhibition can be reduced by non-invasive brain stimulation. Non-invasive brain stimulation (NIBS) can modulate the excitability and activity of targeted cortical regions and thereby alter the interaction within pathologically affected functional networks; this kind of intervention might promote the adaptive cortical reorganization of functional networks after stroke. In poststroke aphasia several studies attempted to restore perilesional neuronal activity in the injured left inferior frontal gyrus by applying excitatory high frequency repetitive transcranial magnetic stimulation (rTMS) or intermittent theta burst stimulation (iTBS) or anodal transcranial direct current stimulation (tDCS), but most NIBS studies in poststroke aphasia employed inhibitory low frequency rTMS for

  5. Improved characterization of site contamination through time-lapse complex resistivity imaging

    NASA Astrophysics Data System (ADS)

    Flores Orozco, Adrian; Kemna, Andreas; Cassiani, Giorgio; Binley, Andrew

    2016-04-01

    In the framework of the EU FP7 project ModelPROBE, time-lapse complex resistivity (CR) measurements were conducted at a test site close to Trecate (NW Italy). The objective was to investigate the capability of the CR imaging method to delineate the geometry and dynamics of a subsurface hydrocarbon contaminant plume which resulted from a crude oil spill in 1994. To achieve this, the electrical signal associated with static features (i.e., lithology) must be discriminated from that of dynamic changes in the subsurface, the latter associated with significant seasonal groundwater fluctuations. Previous studies have demonstrated the benefits of the CR method for gaining information that is not accessible with common electrical resistivity tomography. However, field applications are still rare, and neither the analysis of data error for CR time-lapse measurements nor the inversion itself has received enough attention. While the ultimate objective at the site is contaminant characterization, here we address the discrimination of the lithological and hydrological controls on the IP response by considering data collected in an uncontaminated area of the site. We demonstrate that an adequate error description of CR measurements provides images that are free of artifacts and quantitatively superior to those of previous approaches. Based on this approach, differential images computed for time-lapse data exhibit anomalies well correlated with seasonal fluctuations in the groundwater level. The proposed analysis may be useful in characterizing the fate and transport of the hydrocarbon contaminants relevant to the site, which presents areas contaminated with crude oil.

  6. Good relationships between computational image analysis and radiological physics

    SciTech Connect

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-30

    Good relationships between computational image analysis and radiological physics have been constructed, increasing the accuracy of medical diagnostic imaging and radiation therapy. Computational image analysis has been established based on applied mathematics, physics, and engineering. This review paper introduces how computational image analysis is useful in radiation therapy with respect to radiological physics.

  7. Co-registration of ultrasound and frequency-domain photoacoustic radar images and image improvement for tumor detection

    NASA Astrophysics Data System (ADS)

    Dovlo, Edem; Lashkari, Bahman; Choi, Sung soo Sean; Mandelis, Andreas

    2015-03-01

    This paper demonstrates the co-registration of ultrasound (US) and frequency-domain photoacoustic radar (FD-PAR) images, with significant image improvement from applying image normalization, filtering, and amplification techniques. Achieving PA imaging functionality on a commercial ultrasound instrument could accelerate clinical acceptance and use. Experimental results from live animal testing show enhancements in signal-to-noise ratio (SNR), contrast, and spatial resolution. The co-registered image produced from the US and phase PA images provides more information than either image independently.

  8. Frequency domain analysis of knock images

    NASA Astrophysics Data System (ADS)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark triggered flame speed, the time when end gas auto-ignition occurs and the end gas flame speed after auto-ignition. This study presents a frequency domain analysis on the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying fast Fourier transform (FFT) to three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
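
    The processing chain described above (high-pass filter the luminosity, then FFT to find the resonant modes) can be sketched on a single pixel's time trace. The camera frame rate, resonance frequency, and signal model below are all assumed for illustration.

```python
import numpy as np

fs = 20000.0                 # assumed high-speed camera frame rate, Hz
t = np.arange(2048) / fs
# Synthetic luminosity trace: slow combustion background plus a damped
# pressure-wave oscillation at 6 kHz
signal = 0.5 * t + 0.2 * np.exp(-50 * t) * np.sin(2 * np.pi * 6000 * t)

# High-pass: subtract a moving average to isolate the oscillation
kernel = np.ones(64) / 64
background = np.convolve(signal, kernel, mode="same")
osc = signal - background

spectrum = np.abs(np.fft.rfft(osc))
freqs = np.fft.rfftfreq(len(osc), d=1 / fs)
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
```

Applied per pixel and per color channel, the amplitudes at the detected resonant frequencies are what allow the mode shapes to be reconstructed as images.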

  9. Super-resolution analysis for passive microwave images using FIPOCS

    NASA Astrophysics Data System (ADS)

    Wang, Xue; Wu, Jin; Wang, Jin; Adjouadi, Malek

    2013-03-01

    improve application of passive microwave imaging for object detection. In this study, we propose the FIPOCS (Fractal interpolation with Improved Projection onto Convex Sets) technique to enhance resolution. The experimental result shows that the resolution of passive microwave image is improved when utilizing the fractal interpolation to the LR image before applying the IPOCS technique.

  10. Digital imaging analysis to assess scar phenotype.

    PubMed

    Smith, Brian J; Nidey, Nichole; Miller, Steven F; Moreno Uribe, Lina M; Baum, Christian L; Hamilton, Grant S; Wehby, George L; Dunnwald, Martine

    2014-01-01

    In order to understand the link between the genetic background of patients and wound clinical outcomes, it is critical to have a reliable method to assess the phenotypic characteristics of healed wounds. In this study, we present a novel imaging method that provides reproducible, sensitive, and unbiased assessments of postsurgical scarring. We used this approach to investigate the possibility that genetic variants in orofacial clefting genes are associated with suboptimal healing. Red-green-blue digital images of postsurgical scars of 68 patients, following unilateral cleft lip repair, were captured using the 3dMD imaging system. Morphometric and colorimetric data of repaired regions of the philtrum and upper lip were acquired using ImageJ software, and the unaffected contralateral regions were used as patient-specific controls. Repeatability of the method was high with intraclass correlation coefficient score > 0.8. This method detected a very significant difference in all three colors, and for all patients, between the scarred and the contralateral unaffected philtrum (p ranging from 1.20 × 10^-5 to 1.95 × 10^-14). Physicians' clinical outcome ratings from the same images showed high interobserver variability (overall Pearson coefficient = 0.49) as well as low correlation with digital image analysis results. Finally, we identified genetic variants in TGFB3 and ARHGAP29 associated with suboptimal healing outcome.
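
    The statistical design above is a paired comparison: each patient's scar region is compared against their own contralateral control. The sketch below computes a paired t statistic on synthetic per-patient red-channel means; the numbers are invented and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 68   # number of patients, as in the study
# Synthetic per-patient mean red-channel intensity in the control
# (unaffected contralateral) region and the scarred region
control = rng.normal(150, 8, n)
scar = control - rng.normal(12, 3, n)   # assumed: scars measure darker

diff = scar - control                   # patient-specific paired differences
# Paired t statistic: mean difference over its standard error
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
```

Using each patient as their own control removes between-patient variation in skin tone and lighting, which is why the paired design yields such small p values.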

  11. Ultrasonic image analysis for beef tenderness

    NASA Astrophysics Data System (ADS)

    Park, Bosoon; Thane, Brian R.; Whittaker, A. D.

    1993-05-01

    Objective measurement of meat tenderness has been a topic of concern for palatability evaluation. In this study, a real-time ultrasonic B-mode imaging method was used for noninvasively measuring beef palatability attributes such as juiciness, muscle fiber tenderness, connective tissue amount, overall tenderness, flavor intensity, and percent total collagen. A temporal averaging image enhancement method was used for image analysis. Ultrasonic image intensity, fractal dimension, attenuation, and statistical gray-tone spatial-dependence matrix image texture measurements were analyzed. The contrast textural feature was the parameter most correlated with palatability attributes. The longitudinal scanning method was better for juiciness, muscle fiber tenderness, flavor intensity, and percent soluble collagen, whereas the cross-sectional method was better for connective tissue amount and overall tenderness. Multivariate linear regression models were developed as functions of the textural features and image intensity parameters. The coefficients of determination of the regression models were R² = 0.97 for juiciness, R² = 0.88 for percent total collagen, R² = 0.75 for flavor intensity, R² = 0.55 for muscle fiber tenderness, and R² = 0.49 for overall tenderness, respectively.
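
    The contrast feature singled out above comes from the gray-tone spatial-dependence (co-occurrence) matrix. The sketch below computes it in plain NumPy for horizontally adjacent pixel pairs; the quantization level and the two toy textures are assumed for illustration.

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Contrast of the gray-tone spatial-dependence (co-occurrence)
    matrix for horizontally adjacent pixel pairs; img values in [0, 1)."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    rows, cols = np.indices(glcm.shape)
    return float(((rows - cols) ** 2 * glcm).sum())

rng = np.random.default_rng(7)
smooth = np.tile(np.linspace(0, 1, 64, endpoint=False), (64, 1))  # ramp
noisy = rng.uniform(0, 1, (64, 64))                               # speckle

c_smooth = glcm_contrast(smooth)   # adjacent levels nearly equal: low contrast
c_noisy = glcm_contrast(noisy)     # independent adjacent levels: high contrast
```

High co-occurrence contrast indicates large local gray-level transitions, which is the texture property that tracked the palatability attributes.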

  12. Digital imaging analysis to assess scar phenotype

    PubMed Central

    Smith, Brian J.; Nidey, Nichole; Miller, Steven F.; Moreno, Lina M.; Baum, Christian L.; Hamilton, Grant S.; Wehby, George L.; Dunnwald, Martine

    2015-01-01

    In order to understand the link between the genetic background of patients and wound clinical outcomes, it is critical to have a reliable method to assess the phenotypic characteristics of healed wounds. In this study, we present a novel imaging method that provides reproducible, sensitive and unbiased assessments of post-surgical scarring. We used this approach to investigate the possibility that genetic variants in orofacial clefting genes are associated with suboptimal healing. Red-green-blue (RGB) digital images of post-surgical scars of 68 patients, following unilateral cleft lip repair, were captured using the 3dMD image system. Morphometric and colorimetric data of repaired regions of the philtrum and upper lip were acquired using ImageJ software and the unaffected contralateral regions were used as patient-specific controls. Repeatability of the method was high with intraclass correlation coefficient score > 0.8. This method detected a very significant difference in all three colors, and for all patients, between the scarred and the contralateral unaffected philtrum (P ranging from 1.20 × 10^-5 to 1.95 × 10^-14). Physicians’ clinical outcome ratings from the same images showed high inter-observer variability (overall Pearson coefficient = 0.49) as well as low correlation with digital image analysis results. Finally, we identified genetic variants in TGFB3 and ARHGAP29 associated with suboptimal healing outcome. PMID:24635173

  13. Nonlinear Programming shallow tomography improves deep structure imaging

    NASA Astrophysics Data System (ADS)

    Li, J.; Morozov, I.

    2004-05-01

    In areas with strong variations in topography or near-surface lithology, conventional seismic data processing methods do not produce clear images, either shallow or deep. Conventional reflection processing cannot resolve stacking velocities at very shallow depth; however, refraction tomography can be used to obtain the near-surface velocities. We use Nonlinear Programming (NP), with known velocities and depths at points from shallow boreholes and outcrops, as well as the derivative of slowness, as constraint conditions to obtain accurate shallow velocities. We apply this method to a 2D reflection survey shot across Flame Mountain, a typical mountain with a high gas reserve volume in western China, acquired by PetroChina and BGP in the 1990s. The area has highly rugged topography with strong variations of lithology near the surface. Over its hillside the quality of the reflection data is very good, but on the mountain ridge it is poorer. Because of strong noise, only the first breaks are clear in the records, with velocities varying by more than a factor of three at near offsets. Because this region contains a steep cliff and an overthrust fold, it is very difficult to find a standard refraction horizon; therefore, conventional GLI refraction field statics and residual statics do not result in a good image. Our processing approach is: 1) use the Herglotz-Wiechert method to derive a starting velocity model, which is better than a horizontally layered velocity model; 2) using shallow boreholes and geological data, construct smoothness constraints on the velocity field; 3) perform tomographic velocity inversion with the NP algorithm; 4) using the resulting accurate shallow velocities, derive the statics that correct the seismic data for the complex near-surface velocity variations. The result indicates that shallow refraction tomography can greatly improve deep seismic images in complex surface conditions.
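
    Step 4 above, deriving statics from the tomographic velocities, rests on the standard weathering-replacement formula: the static shift is the travel-time difference between the slow weathered layer and the replacement velocity below it. The thickness and velocities below are assumed values for illustration, not from the survey.

```python
# Weathering static correction at one station: time saved by "replacing"
# the slow near-surface layer with the sub-weathering velocity.
h = 35.0       # weathered-layer thickness from tomography, m (assumed)
v_w = 800.0    # weathered-layer velocity, m/s (assumed)
v_r = 2400.0   # replacement (sub-weathering) velocity, m/s (assumed)

static_ms = 1000.0 * h * (1.0 / v_w - 1.0 / v_r)   # static shift in ms
```

Computed per source and receiver station from the tomographic model, these shifts are applied to the traces before stack, removing the near-surface distortion of deep reflections.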

  14. Collimator application for microchannel plate image intensifier resolution improvement

    DOEpatents

    Thomas, Stanley W.

    1996-02-27

    A collimator is included in a microchannel plate image intensifier (MCPI). Collimators can be useful in improving resolution of MCPIs by eliminating the scattered electron problem and by limiting the transverse energy of electrons reaching the screen. Due to its optical absorption, a collimator will also increase the extinction ratio of an intensifier by approximately an order of magnitude. Additionally, the smooth surface of the collimator will permit a higher focusing field to be employed in the MCP-to-collimator region than is currently permitted in the MCP-to-screen region by the relatively rough and fragile aluminum layer covering the screen. Coating the MCP and collimator surfaces with aluminum oxide appears to permit additional significant increases in the field strength, resulting in better resolution.

  16. Point counting on the Macintosh. A semiautomated image analysis technique.

    PubMed

    Gatlin, C L; Schaberg, E S; Jordan, W H; Kuyatt, B L; Smith, W C

    1993-10-01

In image analysis, point counting is used to estimate three-dimensional quantitative parameters from sets of measurements made on two-dimensional images. Point counting is normally conducted either entirely by hand or manually with a planimeter. We developed a semiautomated, Macintosh-based method of point counting. This technique could be useful for any point counting application in which the image can be digitized. We utilized this technique to demonstrate increased vacuolation in white matter tracts of rat brains, but it could be used on many other types of tissue. Volume fractions of vacuoles within the corpus callosum of rat brains were determined by analyzing images of histologic sections. A stereologic grid was constructed using the Claris MacDraw II software. The grid was modified for optimum line density and size in Adobe Photoshop, electronically superimposed onto the images and sampled using version 1.37 of the NIH Image public domain software. This technique was further automated by the creation of a macro (small program) to create the grid, overlay the grid on a predetermined image, threshold the objects of interest and count thresholded objects at intersections of the grid lines. This method is expected to significantly reduce the time required to conduct point counting and to improve the consistency of counts.
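The grid-sampling estimate described above can be sketched in a few lines (an illustrative NumPy version, not the original NIH Image macro; the grid spacing is a hypothetical parameter):

```python
import numpy as np

def point_count_fraction(binary_img, spacing=8):
    """Estimate an area/volume fraction by sampling a regular grid of points.

    binary_img : 2-D boolean array, True where the object (e.g. a vacuole)
                 has been thresholded.
    spacing    : grid pitch in pixels (hypothetical default).
    """
    rows = np.arange(spacing // 2, binary_img.shape[0], spacing)
    cols = np.arange(spacing // 2, binary_img.shape[1], spacing)
    grid = binary_img[np.ix_(rows, cols)]   # sample at grid intersections
    return grid.sum() / grid.size           # fraction of points hitting objects

# Toy example: the left half of the image is "object"
img = np.zeros((64, 64), dtype=bool)
img[:, :32] = True
frac = point_count_fraction(img)            # -> 0.5
```

By the standard point-count principle, the fraction of grid points falling on thresholded objects estimates the area fraction, and hence the volume fraction for isotropic sections.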

  17. Gun bore flaw image matching based on improved SIFT descriptor

    NASA Astrophysics Data System (ADS)

    Zeng, Luan; Xiong, Wei; Zhai, You

    2013-01-01

In order to increase the operation speed and matching ability of the SIFT algorithm, the SIFT descriptor and matching strategy are improved. First, a method of constructing the feature descriptor based on sector areas is proposed. By computing the gradient histograms of location bins partitioned into 6 sector areas, a descriptor with 48 dimensions is constituted. This reduces the dimension of the feature vector and decreases the complexity of constructing the descriptor. Second, a strategy is introduced that partitions the circular region into 6 identical sector areas starting from the dominant orientation. Consequently, the computational complexity is reduced because no rotation operation is needed for the area. The experimental results indicate that, compared with the OpenCV SIFT implementation, the average matching speed of the new method increases by about 55.86%. Matching accuracy is maintained even under some variation of viewpoint, illumination, rotation, scale, and defocus. The new method achieved satisfactory results in gun bore flaw image matching. Keywords: Metrology, Flaw image matching, Gun bore, Feature descriptor
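A minimal sketch of the sector-area descriptor idea described above, assuming 8 orientation bins per sector and magnitude-weighted histograms (the paper's exact binning and weighting may differ):

```python
import numpy as np

def sector_descriptor(mag, ang, center, radius, dominant, n_sectors=6, n_bins=8):
    """48-D descriptor from 6 sector areas of a circular patch (illustrative).

    mag, ang : gradient magnitude and angle (radians) images.
    dominant : dominant orientation of the keypoint; sectors start here, so
               the descriptor needs no explicit patch-rotation operation.
    """
    cy, cx = center
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    inside = ys**2 + xs**2 <= radius**2
    ys, xs = ys[inside], xs[inside]
    m = mag[cy + ys, cx + xs]
    # angles measured relative to the dominant orientation -> rotation invariance
    a = (ang[cy + ys, cx + xs] - dominant) % (2 * np.pi)
    sector = ((np.arctan2(ys, xs) - dominant) % (2 * np.pi)
              / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    bins = (a / (2 * np.pi) * n_bins).astype(int) % n_bins
    desc = np.zeros((n_sectors, n_bins))
    np.add.at(desc, (sector, bins), m)      # magnitude-weighted histograms
    desc = desc.ravel()                     # 6 sectors x 8 bins = 48 dims
    return desc / (np.linalg.norm(desc) + 1e-12)

rng = np.random.default_rng(0)
mag = rng.random((64, 64))
ang = rng.random((64, 64)) * 2 * np.pi
d = sector_descriptor(mag, ang, center=(32, 32), radius=10, dominant=0.3)
```

Because the sector boundaries rotate with the dominant orientation, the patch itself never has to be resampled, which is where the speed gain over a rotated 4x4 grid comes from.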

  18. Bispecific Antibody Pretargeting for Improving Cancer Imaging and Therapy

    SciTech Connect

    Sharkey, Robert M.

    2005-02-04

The main objective of this project was to evaluate pretargeting systems that use a bispecific antibody (bsMAb) to improve the detection and treatment of cancer. A bsMAb has one specificity for a tumor antigen, which is used to bind the tumor, while the other specificity is for a peptide that can be radiolabeled. Pretargeting is the process by which the unlabeled bsMAb is given first and, after sufficient time (1-2 days) for it to localize in the tumor and clear from the blood, a small-molecular-weight radiolabeled peptide is given. According to a dynamic imaging study using a 99mTc-labeled peptide, the radiolabeled peptide localizes in the tumor in less than 1 hour, with > 80% of it clearing from the blood and body within this same time. Tumor/nontumor targeting ratios nearly 50 times better than those with a directly radiolabeled Fab fragment have been observed (Sharkey et al., ''Signal amplification in molecular imaging by a multivalent bispecific nanobody'', submitted). The bsMAbs used in this project have been composed of 3 antibodies that target antigens found in colorectal and pancreatic cancers (CEA, CSAp, and MUC1). For the ''peptide binding moiety'' of the bsMAb, we initially examined an antibody directed to DOTA, but subsequently focused on another antibody directed against a novel compound, HSG (histamine-succinyl-glycine).

  19. An estimation method of MR signal parameters for improved image reconstruction in unilateral scanner

    NASA Astrophysics Data System (ADS)

    Bergman, Elad; Yeredor, Arie; Nevo, Uri

    2013-12-01

Unilateral NMR devices are used in various applications, including non-destructive testing and well logging, but are not used routinely for imaging. This is mainly due to the inhomogeneous magnetic field (B0) in these scanners. This inhomogeneity results in low sensitivity and further forces the use of the slow single-point imaging scan scheme. Improving the measurement sensitivity is therefore an important factor, as it can improve image quality and reduce imaging times. Short imaging times can facilitate the use of this affordable and portable technology for various imaging applications. This work presents a statistical signal-processing method designed to fit the unique characteristics of imaging with a unilateral device. The method improves the imaging capabilities by improving the extraction of image information from the noisy data. This is done by exploiting redundancy in the acquired MR signal and the noise characteristics. Both types of data were incorporated into a Weighted Least Squares estimation approach. The method's performance was evaluated with a series of imaging acquisitions applied to phantoms. Images were extracted from each measurement with the proposed method and compared to the conventional image reconstruction. All measurements showed a significant improvement in image quality based on the MSE criterion with respect to gold-standard reference images. An integration of this method with further improvements may lead to a substantial reduction in imaging times, aiding the use of such scanners in imaging applications.
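The Weighted Least Squares step described above can be sketched as x-hat = (A^T W A)^(-1) A^T W y, with redundant acquisitions weighted by their inverse noise variance; the acquisition model and noise statistics below are synthetic stand-ins, not the scanner's actual signal model:

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0])             # unknown image parameters
A = rng.standard_normal((100, 2))          # redundant acquisition model (assumed known)
sigma = rng.uniform(0.1, 2.0, size=100)    # per-sample noise std (assumed known)
y = A @ x_true + sigma * rng.standard_normal(100)

# Weight each measurement by its inverse noise variance, then solve the
# weighted normal equations.
W = np.diag(1.0 / sigma**2)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

Down-weighting the noisiest acquisitions is what turns the signal redundancy into a sensitivity gain relative to an unweighted average.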

  20. Symmetric subspace learning for image analysis.

    PubMed

    Papachristou, Konstantinos; Tefas, Anastasios; Pitas, Ioannis

    2014-12-01

Subspace learning (SL) is one of the most useful tools for image analysis and recognition. A large number of such techniques have been proposed utilizing a priori knowledge about the data. In this paper, new subspace learning techniques are presented that use symmetry constraints in their objective functions. The rationale behind this idea is to exploit the a priori knowledge that geometrical symmetry appears in several types of data, such as images, objects, faces, and so on. Experiments on artificial, facial expression recognition, face recognition, and object categorization databases highlight the superiority and the robustness of the proposed techniques, in comparison with standard SL techniques.

  1. Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation

    NASA Astrophysics Data System (ADS)

    Sakamoto, M.; Honda, Y.; Kondo, A.

    2016-06-01

Over the last decade, multi-scale image segmentation has attracted particular interest and is practically used for object-based image analysis. In this study, we have addressed issues in multi-scale image segmentation, especially in improving the validity of merging and the variety of derived regions' shapes. Firstly, we introduced constraints on the application of the spectral criterion, which can suppress excessive merging between dissimilar regions. Secondly, we extended the evaluation of the smoothness criterion by modifying the definition of the extent of the object, which was introduced for controlling shape diversity. Thirdly, we developed a new shape criterion called the aspect ratio. This criterion helps to improve the reproducibility of the shapes of objects to be matched to the actual objects of interest. It constrains the aspect ratio of the object's bounding box while keeping the properties controlled by conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we also investigated a technique for quantitative and automatic parameterization in multi-scale image segmentation. This is achieved by comparing the segmentation result with a training area specified in advance, considering the maximization of the average area of derived objects or the evaluation index called the F-measure. Thus, it has been possible to automate parameterization suited to the objectives, especially from the viewpoint of shape reproducibility.
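A minimal sketch of a bounding-box aspect-ratio constraint on region merging, in the spirit of the criterion described above (the paper's exact formulation is not given here, so the `max_ratio` tolerance is hypothetical):

```python
import numpy as np

def aspect_ratio(region_mask):
    """Aspect ratio (>= 1) of the region's bounding box."""
    ys, xs = np.nonzero(region_mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return max(h, w) / min(h, w)

def allow_merge(mask_a, mask_b, max_ratio=3.0):
    """Reject a merge whose union bounding box becomes too elongated
    (max_ratio is a hypothetical tolerance)."""
    return aspect_ratio(mask_a | mask_b) <= max_ratio

a = np.zeros((20, 20), dtype=bool); a[5:10, 2:6] = True    # 5x4 block
b = np.zeros((20, 20), dtype=bool); b[5:10, 14:19] = True  # distant block
# Merging a with the far-away b would produce a 5x17 bounding box,
# so the constraint rejects it while a self-merge passes.
```

A constraint of this form suppresses the stringy, over-elongated regions that spectral similarity alone can produce.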

  2. Autonomous Image Analysis for Future Mars Missions

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Morris, R. L.; Ruzon, M. A.; Bandari, E.; Roush, T. L.

    1999-01-01

To explore high-priority landing sites and to prepare for eventual human exploration, future Mars missions will involve rovers capable of traversing tens of kilometers. However, the current process by which scientists interact with a rover does not scale to such distances. Specifically, numerous command cycles are required to complete even simple tasks, such as pointing the spectrometer at a variety of nearby rocks. In addition, the time required by scientists to interpret image data before new commands can be given and the limited amount of data that can be downlinked during a given command cycle constrain rover mobility and the achievement of science goals. Experience with rover tests on Earth supports these concerns. As a result, traverses to science sites identified in orbital images would require numerous science command cycles over a period of many weeks, months or even years, perhaps exceeding rover design life and other constraints. Autonomous onboard science analysis can address these problems in two ways. First, it will allow the rover to preferentially transmit "interesting" images, defined as those likely to have higher science content. Second, the rover will be able to anticipate future commands. For example, a rover might autonomously acquire and return spectra of "interesting" rocks along with a high-resolution image of those rocks, in addition to returning the context images in which they were detected. Such approaches, coupled with appropriate navigational software, help to address both the data volume and command cycle bottlenecks that limit both rover mobility and science yield. We are developing fast, autonomous algorithms to enable such intelligent on-board decision making by spacecraft. Autonomous algorithms developed to date have the ability to identify rocks and layers in a scene, locate the horizon, and compress multi-spectral image data. We are currently investigating the possibility of reconstructing a 3D surface from a sequence of images.

  3. Improved Signal Chains for Readout of CMOS Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Hancock, Bruce; Cunningham, Thomas

    2009-01-01

An improved generic design has been devised for implementing signal chains involved in readout from complementary metal oxide/semiconductor (CMOS) image sensors and for other readout integrated circuits (ICs) that perform equivalent functions. The design applies to any such IC in which output signal charges from the pixels in a given row are transferred simultaneously into sampling capacitors at the bottoms of the columns, and then voltages representing individual pixel charges are read out in sequence by sequentially turning on column-selecting field-effect transistors (FETs) in synchronism with source-follower- or operational-amplifier-based amplifier circuits. The improved design affords the best features of prior source-follower- and operational-amplifier-based designs while overcoming the major limitations of those designs. The limitations can be summarized as follows: a) For a source-follower-based signal chain, the ohmic voltage drop associated with DC bias current flowing through the column-selection FET causes unacceptable voltage offset, nonlinearity, and reduced small-signal gain. b) For an operational-amplifier-based signal chain, the required bias current and the output noise increase superlinearly with the size of the pixel array because of a corresponding increase in the effective capacitance of the row bus used to couple the sampled column charges to the operational amplifier. The effect of the bus capacitance is to simultaneously slow down the readout circuit and increase noise through the Miller effect.

  4. Guided Wave Annular Array Sensor Design for Improved Tomographic Imaging

    NASA Astrophysics Data System (ADS)

    Koduru, Jaya Prakash; Rose, Joseph L.

    2009-03-01

Guided wave tomography for structural health monitoring is fast emerging as a reliable tool for the detection and monitoring of hotspots in a structure, for any defects arising from corrosion, crack growth, etc. To date, guided wave tomography has been successfully tested on aircraft wings, pipes, pipe elbows, and weld joints. Practically deployed structures are subjected to harsh environments such as exposure to rain and changes in temperature and humidity. A reliable tomography system should take these environmental factors into account to avoid false alarms. The lack of mode control with piezoceramic disk sensors makes them very sensitive to traces of water, leading to false alarms. In this study, we explore the design of annular array sensors to provide mode control for improved structural tomography, in particular addressing the false-alarm potential of water loading. Clearly defined actuation lines in the phase-velocity dispersion-curve space are calculated. A dominant in-plane displacement point is found to provide a solution to the water loading problem. The improvement in the tomographic images with the annular array sensors in the presence of water traces is clearly illustrated with a series of experiments. An annular array design philosophy for other problems in NDE/SHM is also discussed.

  5. Morphological analysis of infrared images for waterjets

    NASA Astrophysics Data System (ADS)

    Gong, Yuxin; Long, Aifang

    2013-03-01

High-speed waterjets have been widely used in industry and investigated as a model of free shearing turbulence. This paper presents an investigation involving the flow visualization of a high-speed water jet; the noise reduction of the raw thermogram using a high-pass morphological filter ? and a median filter; the image enhancement using a white top-hat filter; and the image segmentation using the multiple-thresholding method. The image processing results obtained with the designed morphological filters, ? - top-hat, proved ideal for further quantitative and in-depth analysis and can be used as a new morphological filter bank that may be of general relevance for analogous work.

  6. Atmospheric Imaging Assembly Multithermal Loop Analysis: First Results

    NASA Astrophysics Data System (ADS)

    Schmelz, J. T.; Kimble, J. A.; Jenkins, B. S.; Worley, B. T.; Anderson, D. J.; Pathak, S.; Saar, S. H.

    2010-12-01

The Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory has state-of-the-art spatial resolution and shows the most detailed images of coronal loops ever observed. The series of coronal filters peak at different temperatures, which span the range of active regions. These features represent a significant improvement over earlier coronal imagers and make AIA ideal for multithermal analysis. Here, we targeted a 171 Å coronal loop in AR 11092 observed by AIA on 2010 August 3. Isothermal analysis using the 171-to-193 ratio gave a temperature of log T ≈ 6.1, similar to the results of the Extreme-ultraviolet Imaging Telescope (EIT) and TRACE. Differential emission measure analysis, however, showed that the plasma was multithermal, not isothermal, with the bulk of the emission measure at log T > 6.1. The result from the isothermal analysis, which is the average of the true plasma distribution weighted by the instrument response functions, appears to be deceptively low. These results have potentially serious implications: EIT and TRACE results, which use the same isothermal method, show substantially smaller temperature gradients than predicted by standard models for loops in hydrodynamic equilibrium and have been used as strong evidence in support of footpoint heating models. These implications may have to be re-examined in the wake of new results from AIA.

  7. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing, and display techniques. In addition to automating and increasing the throughput of data-reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for locating and tracking more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie-loop playback, freeze-frame display, and digital image enhancement; 3) multiple leading-edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  8. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox

    PubMed Central

    Ribeiro, Andre Santos; Lacerda, Luis Miguel; Ferreira, Hugo Alexandre

    2015-01-01

Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox was able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of diminishing time waste in data processing and allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph-theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting-state fMRI data, and 10 subjects also with 18F-Altanserin PET data. Results. It was observed both a high inter-hemispheric symmetry

  10. Sensitivity improvement of a molecular imaging technique based on magnetic nanoparticles

    NASA Astrophysics Data System (ADS)

    Ishihara, Yasutoshi; Kuwabara, Tsuyoshi; Wadamori, Naoki

    2011-03-01

    Magnetic particle imaging (MPI) using the nonlinear interaction between internally administered magnetic nanoparticles (MNPs) and electromagnetic waves irradiated from outside of the body has attracted attention for the early diagnosis of diseases such as cancer. In MPI, the local magnetic field distribution is scanned, and the magnetization signal from MNPs inside an object region is detected. However, the signal sensitivity and image resolution are degraded by interference from the magnetization signal generated by MNPs that exist outside of the desired region, owing to nonlinear responses. Earlier, we proposed an image reconstruction method for suppressing the interference component while emphasizing the signal component using the property of the higher harmonic components generated by the MNPs. However, edge areas in the reconstructed image were emphasized excessively owing to the high-pass-filter effect of this method. Here, we propose a new method based on correlation information between the observed signal and a system function. We performed a numerical analysis and found that, although the image was somewhat blurred, the detection sensitivity can clearly be improved without the inverse-matrix operation used in conventional image reconstruction.
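The correlation-based estimate described above can be sketched as projecting the observed signal onto each position's system-function column, which avoids any inverse-matrix operation; the system function below is a random stand-in, not a real MPI harmonic response:

```python
import numpy as np

rng = np.random.default_rng(4)
n_pos, n_freq = 20, 50
# Hypothetical system function: response spectrum for an MNP at each position.
G = rng.standard_normal((n_freq, n_pos))
c = np.zeros(n_pos)
c[7] = 1.0                                  # a point-like MNP distribution
s = G @ c + 0.05 * rng.standard_normal(n_freq)   # noisy observed signal

# Correlation of the observed signal with each system-function column;
# no matrix inversion is performed, only normalized projections.
norms = np.linalg.norm(G, axis=0)
c_corr = (G.T @ s) / (norms * np.linalg.norm(s))
peak = int(np.argmax(c_corr))               # recovered MNP location
```

Compared with inverting the system matrix, this matched-filter-style estimate is cheaper and noise-robust, at the cost of the blurring noted in the abstract.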

  11. Development of a Novel Breast Cancer Detector based on Improved Holography Concave Grating Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Zeng, Lvming; Huang, Zhen

    2011-01-01

Breast cancer can be detected by B-mode ultrasonic imaging, X-ray mammography, CT imaging, and MRI. However, these methods have drawbacks that limit their application to a certain extent. Therefore, a novel high-resolution breast cancer detector (BCD) is developed in this paper. Meanwhile, an improved holography concave grating imaging spectrometer (HCGIS) is designed. In this HCGIS, the holography concave grating is used as the diffraction grating. Additionally, a CCD with a combined image acquisition (IAQ) card and a 3D scan platform are used as the spectral image acquisition component. The BCD consists of a light source unit, light-path unit, check cavity, light-splitting unit, spectrum acquisition and imaging unit, signal processing unit, computer, data analysis software unit, etc. Experimental results show that the spectral range of the novel BCD can reach 300-1000 nm and its wavelength resolution can reach 1 nm; the system uses back-split-light technology and the light-splitting structure of the holography concave grating. Compared with other instruments for breast cancer detection, this BCD has many advantages, such as a more compact volume, simpler algorithms, faster processing speed, higher accuracy, lower cost, and higher resolution. Therefore, this BCD has potential value in the detection of breast disease.

  12. Compact Microscope Imaging System With Intelligent Controls Improved

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

The Compact Microscope Imaging System (CMIS) with intelligent controls is a diagnostic microscope analysis tool for use in space, industrial, medical, and security applications. This compact miniature microscope, which can perform tasks usually reserved for conventional microscopes, has unique advantages in the fields of microscopy, biomedical research, inline process inspection, and space science. Its approach integrates a machine vision technique with an instrumentation and control technique that provides intelligence via the use of adaptive neural networks. The CMIS system was developed at the NASA Glenn Research Center specifically for interface detection used for colloid hard-sphere experiments; biological cell detection for patch clamping, cell movement, and tracking; and detection of anode and cathode defects for laboratory samples using microscope technology.

  13. Scalable histopathological image analysis via active learning.

    PubMed

    Zhu, Yan; Zhang, Shaoting; Liu, Wei; Metaxas, Dimitris N

    2014-01-01

Training an effective and scalable system for medical image analysis usually requires a large amount of labeled data, which incurs a tremendous annotation burden for pathologists. Recent progress in active learning can alleviate this issue, leading to a great reduction in labeling cost without sacrificing much predictive accuracy. However, most existing active learning methods disregard the "structured information" that may exist in medical images (e.g., data from individual patients), and make the simplifying assumption that unlabeled data are independently and identically distributed. Both may not be suitable for real-world medical images. In this paper, we propose a novel batch-mode active learning method which explores and leverages such structured information in annotations of medical images to enforce diversity among the selected data, thereby maximizing the information gain. We formulate the active learning problem as an adaptive submodular function maximization problem subject to a partition matroid constraint, and further present an efficient greedy algorithm that achieves a good solution with a theoretically proven bound. We demonstrate the efficacy of our algorithm on thousands of histopathological images of breast microscopic tissues. PMID:25320821
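Greedy selection under a partition matroid, as described above, can be sketched as follows; the facility-location-style coverage objective and per-patient budgets below are illustrative stand-ins, not the paper's exact submodular function:

```python
import numpy as np

def greedy_matroid(sim, parts, budget_per_part, batch):
    """Greedy batch selection maximizing a facility-location coverage
    objective subject to a partition matroid (<= budget per partition).

    sim   : n x n similarity matrix between unlabeled images.
    parts : partition id per item (e.g., the patient an image came from).
    """
    n = sim.shape[0]
    chosen, used = [], {}
    cover = np.zeros(n)                        # current best coverage per item
    for _ in range(batch):
        best, best_gain = None, 0.0
        for i in range(n):
            if i in chosen or used.get(parts[i], 0) >= budget_per_part:
                continue                       # matroid: partition budget spent
            # marginal gain of adding i to the coverage objective
            gain = np.maximum(cover, sim[i]).sum() - cover.sum()
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        chosen.append(best)
        used[parts[best]] = used.get(parts[best], 0) + 1
        cover = np.maximum(cover, sim[best])
    return chosen

rng = np.random.default_rng(2)
X = rng.random((12, 5))                        # 12 images, toy feature vectors
norms = np.linalg.norm(X, axis=1)
S = (X @ X.T) / np.outer(norms, norms)         # cosine similarities
parts = [i // 4 for i in range(12)]            # 3 "patients", 4 images each
sel = greedy_matroid(S, parts, budget_per_part=1, batch=3)
```

The diminishing marginal gains of the coverage objective are what make the greedy choice near-optimal; the partition budget enforces diversity across patients.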

  14. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.
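Under the linear-model assumption described above, analysis reduces to a small linear system: with known basis functions and a known illuminant, the sensor responses determine the reflectance weights. All spectra below are random illustrative stand-ins, not measured basis functions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_wl = 31                                  # e.g. 400-700 nm in 10 nm steps
B = rng.random((n_wl, 3))                  # 3 surface-reflectance basis functions
E = rng.random(n_wl)                       # illuminant spectral power distribution
R = rng.random((3, n_wl))                  # 3 sensor spectral sensitivities

# Synthesis: a surface is a weighted sum of the basis reflectances,
# and the sensor sees the illuminant times the reflectance.
sigma_true = np.array([0.5, 0.2, 0.3])     # basis weights for one surface
reflectance = B @ sigma_true
responses = R @ (E * reflectance)

# Analysis: with the illuminant known, 3 responses determine 3 weights
# through a 3x3 linear system.
M = R @ (E[:, None] * B)
sigma_hat = np.linalg.solve(M, responses)
```

The low-dimensional basis is what makes the problem well posed: three sensor responses suffice because the high inter-wavelength correlation compresses reflectance into a few weights.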

  15. Tracking of Laboratory Debris Flow Fronts with Image Analysis

    NASA Astrophysics Data System (ADS)

    Queiroz de Oliveira, Gustavo; Kulisch, Helmut; Fischer, Jan-Thomas; Scheidl, Christian; Pudasaini, Shiva P.

    2015-04-01

Image analysis techniques are applied to track the time evolution of rapid debris-flow fronts and their velocities in laboratory experiments. These experiments are part of the project avaflow.org, which intends to develop a GIS-based open-source computational tool to describe a wide spectrum of rapid geophysical mass flows, including avalanches and real two-phase debris flows down complex natural slopes. The laboratory model consists of a large rectangular channel, 1.4 m wide and 10 m long, with adjustable inclination and other flow configurations. The setup allows investigating different two-phase material compositions, including large fluid fractions. The large size makes it possible to transfer the results to large-scale natural events while providing increased measurement accuracy. The images are captured by a high-speed camera, a standard digital camera. The fronts are tracked by the camera to obtain data in debris-flow experiments. The reflectance analysis detects the debris front in every image frame; its presence changes the reflectance at a certain pixel location during the flow. The accuracy of the measurements was improved with a camera calibration procedure. The systematic distortions of the camera lens, one of the great problems in imaging and analysis, are described in terms of radial and tangential parameters. The calibration procedure estimates the optimal values for these parameters, allowing us to obtain physically correct and undistorted image pixels. Then, we map the images onto a physical model geometry, which is the projective photogrammetry, in which the image coordinates are connected with the object-space coordinates of the flow. Finally, the physical model geometry is rewritten in the direct linear transformation form, which allows for the conversion from one coordinate system to another. With our approach, the debris front position can then be estimated by combining the reflectance, calibration and the linear transformation. The consecutive debris front
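The final direct-linear-transformation step can be sketched for the plane-to-plane case (the marked point coordinates below are hypothetical; the full procedure also includes the radial/tangential distortion correction described above):

```python
import numpy as np

def dlt_homography(img_pts, obj_pts):
    """Estimate the 3x3 homography mapping image points to object-space
    points with the direct linear transformation (plane-to-plane case,
    an illustrative stand-in for the full calibration chain)."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, obj_pts):
        # Each correspondence contributes two homogeneous linear equations.
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)               # null-space vector of A
    return H / H[2, 2]

def apply_h(H, pt):
    """Map an image pixel to object-space coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

# Four marked points on the channel floor (hypothetical coordinates)
img_pts = [(10, 10), (300, 12), (305, 200), (8, 205)]     # pixels
obj_pts = [(0, 0), (1.4, 0), (1.4, 1.0), (0, 1.0)]        # metres
H = dlt_homography(img_pts, obj_pts)
```

Once H is known, every detected front pixel can be converted to channel coordinates, from which front positions and velocities follow directly.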

  16. Pain related inflammation analysis using infrared images

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact and radiation-free imaging modality for the assessment of abnormal, painful inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently occurring form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms has been captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms are pre-processed, and areas of interest are extracted for further processing. The investigation of the spread of inflammation is performed along with statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain disease; ii) analysis of the spread of the inflammation related to RA and OA using K-means clustering; and iii) first- and second-order statistical analysis of pre-processed thermograms. The conclusion reflects that, in most of the cases, RA-oriented inflammation affects both knees, whereas inflammation related to OA is present in one knee only. Due to this unilateral spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
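The K-means step (objective ii) can be illustrated with a minimal 1-D Lloyd's algorithm over pixel temperatures; this sketch assumes clustering on temperature values alone and is not the authors' implementation.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50, seed=0):
    """Plain Lloyd's k-means on a 1-D array of pixel temperatures;
    returns cluster centres sorted ascending and per-pixel labels
    (0 = coolest cluster, e.g. normal skin; k-1 = hottest, e.g. inflamed)."""
    values = np.asarray(values, float)
    rng = np.random.default_rng(seed)
    centres = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each pixel to the nearest centre, then recompute means.
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = values[labels == j].mean()
    order = np.argsort(centres)
    remap = np.empty(k, dtype=int)
    remap[order] = np.arange(k)
    return centres[order], remap[labels]
```

Applied to a thermogram's region of interest, the hottest cluster gives a rough segmentation of the inflamed area whose extent can then be compared between knees.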

  17. A chaos-based digital image encryption scheme with an improved diffusion strategy.

    PubMed

    Fu, Chong; Chen, Jun-jie; Zou, Hao; Meng, Wei-hong; Zhan, Yong-feng; Yu, Ya-wen

    2012-01-30

    Chaos-based image ciphers have been widely investigated over the last decade or so to meet the increasing demand for real-time secure image transmission over public networks. In this paper, an improved diffusion strategy is proposed to improve the efficiency of the most widely investigated permutation-diffusion type of image cipher. By using the novel bidirectional diffusion strategy, the spreading process is significantly accelerated, and hence the same level of security can be achieved with fewer overall encryption rounds. Moreover, to further enhance the security of the cryptosystem, a plaintext-related chaotic orbit turbulence mechanism is introduced into the diffusion procedure by perturbing the control parameter of the employed chaotic system according to the cipher pixel. Extensive cryptanalysis has been performed on the proposed scheme using differential analysis, key space analysis, various statistical analyses and key sensitivity analysis. The results indicate that the new scheme has a satisfactory security level with a low computational complexity, which renders it a good candidate for real-time secure image transmission applications.
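The bidirectional diffusion idea can be sketched as below: a logistic-map keystream is mixed into the pixel stream in a forward and then a backward pass, so a change in any plain pixel influences every cipher pixel within a single round. This is a schematic stand-in, not the cipher proposed in the paper; the mixing rule, the parameters `mu` and `x0`, and the omission of the permutation stage and the plaintext-related perturbation are all simplifications.

```python
import numpy as np

def diffuse(pixels, x0=0.3456, mu=3.99):
    """One bidirectional diffusion round (sketch only). A logistic-map
    keystream is add/XOR-mixed into the bytes forward, then backward."""
    out = np.array(pixels, dtype=np.uint8)
    x = x0
    prev = 0
    for i in range(len(out)):               # forward pass
        x = mu * x * (1 - x)                # logistic map iteration
        k = int(x * 256) & 0xFF             # quantize to a key byte
        prev = (int(out[i]) + k) % 256 ^ prev
        out[i] = prev
    prev = 0
    for i in range(len(out) - 1, -1, -1):   # backward pass
        x = mu * x * (1 - x)
        k = int(x * 256) & 0xFF
        prev = (int(out[i]) + k) % 256 ^ prev
        out[i] = prev
    return out
```

Because the backward pass starts from the last pixel, a flip in the final plain pixel still reaches the first cipher pixel, which is the efficiency argument made in the abstract.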

  18. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

    Future CNES earth observation missions will have to deal with an ever-increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit plane encoder (BPE), but only on a mono-spectral basis, and do not exploit the multispectral redundancy of the observed scenes. Recent CNES studies have demonstrated a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as small as 0.5 pixel negates all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system together with simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, both onboard and on the ground, and on the design of the instrument.
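The spectral KLT at the heart of the multispectral scheme can be sketched with an eigendecomposition of the band covariance; the DWT+BPE stages that would follow in the compressor are omitted, and the function names are hypothetical.

```python
import numpy as np

def klt_bands(cube):
    """Spectral Karhunen-Loeve transform (sketch): decorrelate the band
    dimension of an (nbands, npixels) cube before per-band DWT+BPE coding."""
    mean = cube.mean(axis=1, keepdims=True)
    centered = cube - mean
    cov = centered @ centered.T / cube.shape[1]
    w, V = np.linalg.eigh(cov)        # eigenvalues ascending
    V = V[:, ::-1]                    # put the principal component first
    return V.T @ centered, V, mean

def inverse_klt(coeffs, V, mean):
    """Exact inverse of the transform (V is orthogonal)."""
    return V @ coeffs + mean
```

After the KLT the band-to-band covariance of the coefficients is diagonal, which is precisely the redundancy removal that makes the subsequent per-band DWT+BPE more efficient, and why misregistered bands (which break the covariance structure) destroy the gain.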

  19. An improved method for polarimetric image restoration in interferometry

    NASA Astrophysics Data System (ADS)

    Pratley, Luke; Johnston-Hollitt, Melanie

    2016-11-01

    Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually, as if they were independent scalar images. However, here we demonstrate that, for the case of the linear polarization P, this approach fails to properly account for the complex vector nature of the emission, resulting in a process which depends on the axes under which the deconvolution is performed. We present an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dewdney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components, with fewer spurious detections and, owing to the reduced number of iterations, lower computational cost than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys, and in particular that the complex version of SDI CLEAN should be used.
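A minimal complex Högbom CLEAN, operating on P = Q + iU so that peak-finding uses |P| and is therefore invariant under rotation of the (Q, U) axes, might look like the sketch below. This is an illustration, not the Generalized Complex CLEAN implementation; it assumes a small PSF normalized to a peak of 1 at the array centre.

```python
import numpy as np

def complex_hogbom(dirty, psf, gain=0.1, niter=100, thresh=1e-3):
    """Hogbom CLEAN on a complex image P = Q + iU (sketch). Each iteration
    finds the peak of |P|, records gain * peak as a clean component, and
    subtracts the scaled, shifted PSF from the complex residual."""
    res = dirty.astype(complex)
    model = np.zeros_like(res)
    half = np.array(psf.shape) // 2
    for _ in range(niter):
        idx = np.unravel_index(np.argmax(np.abs(res)), res.shape)
        peak = res[idx]
        if abs(peak) < thresh:
            break
        model[idx] += gain * peak
        y0, x0 = idx[0] - half[0], idx[1] - half[1]
        for dy in range(psf.shape[0]):
            for dx in range(psf.shape[1]):
                yy, xx = y0 + dy, x0 + dx
                if 0 <= yy < res.shape[0] and 0 <= xx < res.shape[1]:
                    res[yy, xx] -= gain * peak * psf[dy, dx]
    return model, res
```

Cleaning Q and U jointly through the complex peak |P| is what removes the dependence on the chosen polarization axes that the abstract criticizes in per-Stokes cleaning.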

  20. Machine learning for medical images analysis.

    PubMed

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods.

  1. Discriminating enumeration of subseafloor life using automated fluorescent image analysis

    NASA Astrophysics Data System (ADS)

    Morono, Y.; Terada, T.; Masui, N.; Inagaki, F.

    2008-12-01

    Enumeration of microbial cells in marine subsurface sediments has provided fundamental information for understanding the extent of life and the deep biosphere on Earth. The microbial population has been evaluated manually by direct cell count under the microscope, because automated recognition of cell-derived fluorescent signals has been extremely difficult. Here, we improved the conventional method by removing the non-specific fluorescent background and enumerated the cell population in sediments using a newly developed automated microscopic imaging system. Although SYBR Green I is known to bind specifically to double-stranded DNA (Lunau et al., 2005), we still observed some SYBR-stainable particulate matters (SYBR-SPAMs) in heat-sterilized control sediments (450°C, 6 h), which we assumed to be silicates or mineralized organic matter. Newly developed acid-wash treatments with hydrofluoric acid (HF), followed by image analysis, successfully removed these background objects and yielded artifact-free microscopic images. To obtain statistically meaningful fluorescent images, we constructed a computer-assisted automated cell counting system. Given the comparative data set of cell abundance in acid-washed marine sediments evaluated by SYBR Green I and acridine orange (AO) staining, with and without the image analysis, our protocol could provide statistically meaningful absolute counts of discriminated cell-derived fluorescent signals.
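An automated counting step of the kind described, thresholding followed by connected-component labelling, can be sketched as follows (illustrative only; the threshold and minimum-size parameters are hypothetical):

```python
import numpy as np
from collections import deque

def count_cells(image, thresh, min_pixels=1):
    """Count fluorescent cell signals (sketch): threshold the image and
    count 4-connected components of at least min_pixels pixels."""
    mask = image > thresh
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        # Breadth-first flood fill of one connected bright blob.
        size, q = 0, deque([(sy, sx)])
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            size += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if size >= min_pixels:
            count += 1
    return count
```

A size filter of this sort is one way an automated system can discriminate cell-sized fluorescent signals from residual single-pixel background noise.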

  2. Improving the performance of lysimeters with thermal imaging

    NASA Astrophysics Data System (ADS)

    Voortman, Bernard; Bartholomeus, Ruud; Witte, Jan-Philip

    2014-05-01

    of lysimeters can be monitored and improved with the aid of thermal imaging. For example, estimated ET based on thermal images of moss lysimeters showed a model efficiency (Nash-Sutcliffe) of 0.94 and 0.93 and an RMSE of 0.02 and 0.03 mm for methods 1 and 2, respectively. However, for bare sand the model efficiency was much lower: 0.54 and 0.59, with an RMSE of 0.53 and 0.60 mm for methods 1 and 2, respectively. This poor performance is caused by the large variation in the albedo of bare sand under moist and dry conditions, which affects the energy available for evaporation.

  3. Image analysis of blood platelets adhesion.

    PubMed

    Krízová, P; Rysavá, J; Vanícková, M; Cieslar, P; Dyr, J E

    2003-01-01

    Adhesion of blood platelets is one of the major events in haemostatic and thrombotic processes. We studied the adhesion of blood platelets on fibrinogen and fibrin dimer sorbed on solid support materials (glass, polystyrene). Adhesion was carried out under static and dynamic conditions and measured as the percentage of the surface covered with platelets. Within a range of platelet counts spanning normal and thrombocytopenic blood, we observed a very significant decrease in platelet adhesion on fibrin dimer with bound active thrombin as the platelet count decreased. Our results show the imperative of using platelet-poor blood preparations as control samples in experiments with thrombocytopenic blood. Experiments carried out on adhesive surfaces sorbed on polystyrene showed lower relative inaccuracy than those on glass. The markedly different behaviour of platelets adhering to the same adhesive surface differing only in support material (glass or polystyrene) suggests that adhesion, and especially spreading, of platelets depends on the physical quality of the surface. While on polystyrene there were no significant differences between fibrin dimer and fibrinogen, adhesion measured on the glass support material differed markedly between fibrin dimer and fibrinogen. We compared two methods of thresholding in the image analysis of adhered platelets. Results obtained by image analysis of spread platelets showed higher relative inaccuracy than results obtained by image analysis of platelet centres and aggregates.
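The coverage measurement and one possible thresholding method can be sketched as below. `coverage_percent` and the use of Otsu's method are illustrative assumptions; the abstract does not specify which two thresholding methods were compared.

```python
import numpy as np

def coverage_percent(image, thresh):
    """Platelet adhesion as percentage of surface covered (sketch):
    fraction of pixels brighter than an intensity threshold."""
    return 100.0 * np.mean(image > thresh)

def otsu_threshold(image, bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the image histogram."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # class-0 probability up to each bin
    m = np.cumsum(p * centers)        # class-0 cumulative mean mass
    mt = m[-1]                        # grand mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(between)]
```

The choice of threshold directly drives the coverage figure, which is one reason the abstract's comparison of thresholding methods matters for the reported inaccuracies.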

  4. Image Segmentation By Cluster Analysis Of High Resolution Textured SPOT Images

    NASA Astrophysics Data System (ADS)

    Slimani, M.; Roux, C.; Hillion, A.

    1986-04-01

    Textural analysis is now a commonly used technique in digital image processing. In this paper, we present an application of textural analysis to high resolution SPOT satellite images. The purpose of the methodology is to improve classification results, i.e. image segmentation, in remote sensing. Remote sensing techniques based on high resolution satellite data offer good prospects for the cartography of the littoral environment. Textural information contained in the panchromatic channel of ten-meter resolution is introduced in order to separate different types of structures. The technique we used is based on statistical pattern recognition models and operates in two steps. The first step, feature extraction, is carried out using a stepwise algorithm. Segmentation is then performed by cluster analysis using these extracted features. The texture features are computed over the immediate neighborhood of each pixel using two methods: the co-occurrence matrices method and the grey level difference statistics method. Image segmentation based only on texture features is then performed by pixel classification and finally discussed. In a future paper, we intend to compare the results with aerial data in view of the management of littoral resources.
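The co-occurrence-matrix texture features can be sketched as follows. This is a minimal illustration for a single pixel offset; the choice of contrast and energy as the computed features is an assumption, since the abstract does not list the features used.

```python
import numpy as np

def glcm_features(image, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset, plus two
    classic texture features (contrast and energy) derived from it."""
    g = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[image[y, x], image[y + dy, x + dx]] += 1
    g /= g.sum()                      # normalize to joint probabilities
    i, j = np.indices(g.shape)
    return {"contrast": float(((i - j) ** 2 * g).sum()),
            "energy": float((g ** 2).sum())}
```

Computed over the neighbourhood of each pixel and for several offsets, such features form the vectors that the cluster analysis then segments.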

  5. Analysis of variance in spectroscopic imaging data from human tissues.

    PubMed

    Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit

    2012-01-17

    The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance of the technology, however, which limits both researchers' efforts to improve the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise, or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse data set. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. By estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and recognize the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines for designing statistically valid studies in the spectroscopic analysis of tissue.
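The "portions of explained variation" can be sketched as a one-way eta-squared computation, a simplified stand-in for the comprehensive set of ANOVA models used in the study; the names below are hypothetical.

```python
import numpy as np

def explained_variation(values, groups):
    """One-way ANOVA decomposition (sketch): fraction of the total
    variation in a spectral metric explained by a grouping factor,
    eta-squared = SS_between / SS_total."""
    values = np.asarray(values, float)
    groups = np.asarray(groups)
    grand = values.mean()
    sst = ((values - grand) ** 2).sum()
    ssb = 0.0
    for g in set(groups.tolist()):
        v = values[groups == g]
        ssb += len(v) * (v.mean() - grand) ** 2
    return ssb / sst
```

Running this per spectral metric against factors such as cell type, patient, or measurement session is one way to rank metrics by how much of their variance a factor of interest explains.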

  6. Improving the analysis of slug tests

    USGS Publications Warehouse

    McElwee, C.D.

    2002-01-01

    This paper examines several techniques that have the potential to improve the quality of slug test analysis. These techniques are applicable in the range from low hydraulic conductivities with overdamped responses to high hydraulic conductivities with nonlinear oscillatory responses. Four techniques for improving slug test analysis will be discussed: use of an extended-capability nonlinear model, sensitivity analysis, correction for acceleration and velocity effects, and use of multiple slug tests. The four-parameter nonlinear slug test model used in this work is shown to allow accurate analysis of slug tests with widely differing character. The parameter β represents a correction to the water column length caused primarily by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude. The water column velocity at slug initiation (V0) is an additional model parameter, which would ideally be zero but may not be due to the initiation mechanism. The remaining two model parameters are A (a parameter for nonlinear effects) and K (hydraulic conductivity). Sensitivity analysis shows that in general β and V0 have the lowest sensitivity and K usually has the highest. However, for very high K values the sensitivity to A may surpass the sensitivity to K. Oscillatory slug tests involve higher accelerations and velocities of the water column; thus, the pressure transducer responses are affected by these factors, and the model response must be corrected to allow maximum accuracy in the analysis. Performing multiple slug tests allows some statistical measure of the experimental accuracy and of the reliability of the resulting aquifer parameters. © 2002 Elsevier Science B.V. All rights reserved.
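An oscillatory slug-test response with a nonlinear (velocity-squared) loss term can be illustrated by integrating a simplified equation of motion with RK4. This is not the paper's four-parameter model (the water-column correction and hydraulic-conductivity terms are omitted); the equation form and the parameters `Le` and `A` below are illustrative assumptions only.

```python
import numpy as np

def simulate_slug(w0, Le, A, g=9.81, dt=0.001, t_end=10.0):
    """Integrate the illustrative nonlinear slug-test equation
        Le * w'' + A * |w'| * w' + g * w = 0
    with classical RK4. w(t) is the water-level displacement, Le an
    effective column length, A a nonlinear-loss coefficient."""
    def f(state):
        w, v = state
        return np.array([v, -(g * w + A * abs(v) * v) / Le])
    n = int(t_end / dt)
    ts = np.linspace(0.0, t_end, n + 1)
    ws = np.empty(n + 1)
    s = np.array([w0, 0.0])
    for i in range(n + 1):
        ws[i] = s[0]
        k1 = f(s)
        k2 = f(s + dt / 2 * k1)
        k3 = f(s + dt / 2 * k2)
        k4 = f(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return ts, ws
```

Fitting a model of this general shape to measured heads, with the full parameter set the paper describes, is what the sensitivity analysis then probes parameter by parameter.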

  7. Improved fusing infrared and electro-optic signals for high-resolution night images

    NASA Astrophysics Data System (ADS)

    Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victor

    2012-06-01

    Electro-optic (EO) images exhibit high resolution and low noise levels, whereas it is a challenge to distinguish objects in infrared (IR) images, especially objects with similar temperatures. In earlier work, we proposed a novel framework for IR image enhancement based on information (e.g., edges) from EO images. Our framework superimposed the detected edges of the EO image onto the corresponding transformed IR image. This framework produced better-resolution IR images that help distinguish objects at night. For our IR imaging system, we used the theoretical point spread function (PSF) proposed by Russell C. Hardie et al., which is composed of the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we designed an inverse filter based on the proposed PSF to transform the IR image. In this paper, blending the detected edge map of the EO image with the corresponding transformed IR image and the original IR image is the principal idea for improving the previous framework. The improved framework requires four main steps: (1) inverse filter-based IR image transformation, (2) image edge detection, (3) image registration, and (4) blending of the corresponding images. Simulation results show that the blended IR images have better quality than the superimposed images generated under the previous framework. Based on the same steps, the simulation results also show a blended IR image of better quality when only the original IR image is available.
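Steps (2) and (4), edge detection and blending, can be sketched as below. The paper's edge detector and blending weights are not specified, so the Sobel operator and a fixed `alpha` are assumptions.

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude (edge map) of a 2-D image; borders left 0."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = (patch * kx).sum()
            gy[y, x] = (patch * ky).sum()
    return np.hypot(gx, gy)

def blend(eo_edges, ir, alpha=0.5):
    """Weighted blend of the normalized EO edge map with the (already
    registered) IR image; alpha sets the edge emphasis."""
    e = eo_edges / (eo_edges.max() or 1.0)
    i = ir / (ir.max() or 1.0)
    return alpha * e + (1 - alpha) * i
```

Blending (rather than simply superimposing) lets the original IR intensities survive in the output, which matches the improvement the abstract claims over the earlier framework.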

  8. Improving Intelligence Analysis With Decision Science.

    PubMed

    Dhami, Mandeep K; Mandel, David R; Mellers, Barbara A; Tetlock, Philip E

    2015-11-01

    Intelligence analysis plays a vital role in policy decision making. Key functions of intelligence analysis include accurately forecasting significant events, appropriately characterizing the uncertainties inherent in such forecasts, and effectively communicating those probabilistic forecasts to stakeholders. We review decision research on probabilistic forecasting and uncertainty communication, drawing attention to findings that could be used to reform intelligence processes and contribute to more effective intelligence oversight. We recommend that the intelligence community (IC) regularly and quantitatively monitor its forecasting accuracy to better understand how well it is achieving its functions. We also recommend that the IC use decision science to improve these functions (namely, forecasting and communication of intelligence estimates made under conditions of uncertainty). In the case of forecasting, decision research offers suggestions for improvement that involve interventions on data (e.g., transforming forecasts to debias them) and behavior (e.g., via selection, training, and effective team structuring). In the case of uncertainty communication, the literature suggests that current intelligence procedures, which emphasize the use of verbal probabilities, are ineffective. The IC should, therefore, leverage research that points to ways in which the use of verbal probabilities may be improved, and should explore the use of numerical probabilities wherever feasible.

  9. Reticle defect sizing of optical proximity correction defects using SEM imaging and image analysis techniques

    NASA Astrophysics Data System (ADS)

    Zurbrick, Larry S.; Wang, Lantian; Konicek, Paul; Laird, Ellen R.

    2000-07-01

    Sizing of programmed defects on optical proximity correction (OPC) features is addressed using high resolution scanning electron microscope (SEM) images and image analysis techniques. A comparison and analysis of different sizing methods is made. This paper addresses the issues of OPC defect definition and discusses the experimental measurement results obtained by SEM in combination with image analysis techniques.

  10. Improving GPR image resolution in lossy ground using dispersive migration

    USGS Publications Warehouse

    Oden, C.P.; Powers, M.H.; Wright, D.L.; Olhoeft, G.R.

    2007-01-01

    As a compact wave packet travels through a dispersive medium, it becomes dilated and distorted. As a result, ground-penetrating radar (GPR) surveys over conductive and/or lossy soils often result in poor image resolution. A dispersive migration method is presented that combines an inverse dispersion filter with frequency-domain migration. The method requires a fully characterized GPR system, including the antenna response, which is a function of the local soil properties for ground-coupled antennas. The GPR system response spectrum is used to stabilize the inverse dispersion filter. Dispersive migration restores attenuated spectral components when the signal-to-noise ratio is adequate. Applying the algorithm to simulated data shows that the improvement in spatial resolution is significant when data are acquired with a GPR system having 120 dB or more of dynamic range, and when the medium has a loss tangent of 0.3 or more. Results also show that dispersive migration provides no significant advantage over conventional migration when the loss tangent is less than 0.3, or when using a GPR system with a small dynamic range. © 2007 IEEE.

  11. Multispectral laser imaging for advanced food analysis

    NASA Astrophysics Data System (ADS)

    Senni, L.; Burrascano, P.; Ricci, M.

    2016-07-01

    A hardware-software apparatus for food inspection capable of realizing multispectral NIR laser imaging at four different wavelengths is herein discussed. The system was designed to operate in a through-transmission configuration to detect the presence of unwanted foreign bodies inside samples, whether packed or unpacked. A modified Lock-In technique was employed to counterbalance the significant signal intensity attenuation due to transmission across the sample and to extract the multispectral information more efficiently. The NIR laser wavelengths used to acquire the multispectral images can be varied to deal with different materials and to focus on specific aspects. In the present work the wavelengths were selected after a preliminary analysis to enhance the image contrast between foreign bodies and food in the sample, thus identifying the location and nature of the defects. Experimental results obtained from several specimens, with and without packaging, are presented and the multispectral image processing as well as the achievable spatial resolution of the system are discussed.
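The lock-in detection underlying the apparatus can be illustrated with a basic digital lock-in: multiply the measured signal by quadrature references at the modulation frequency and average. This is the textbook technique only, not the modified variant described in the paper.

```python
import numpy as np

def lockin_amplitude(signal, f_ref, fs):
    """Digital lock-in demodulation (sketch): recover the amplitude of a
    component at f_ref buried in a signal sampled at rate fs, by mixing
    with quadrature references and low-pass filtering (here: averaging)."""
    t = np.arange(len(signal)) / fs
    x = np.mean(signal * np.cos(2 * np.pi * f_ref * t))   # in-phase
    y = np.mean(signal * np.sin(2 * np.pi * f_ref * t))   # quadrature
    return 2.0 * np.hypot(x, y)                           # modulus
```

Because the mixing rejects DC offsets and out-of-band noise, a scheme of this kind can recover the strongly attenuated light transmitted through a thick or packaged sample, which is the problem the modified Lock-In technique addresses.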

  12. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6 month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  13. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
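The constrained optimization can be sketched in a simplified pixel-domain form: a gradient step on a convex edge-preserving (Huber) prior standing in for the non-Gaussian MRF, followed by projection of each pixel back into its quantization cell. The paper performs the projection in the transform domain; doing it per-pixel here is a deliberate simplification, and all names are hypothetical.

```python
import numpy as np

def huber_grad(x, T=1.0):
    """Gradient of a Huber penalty on horizontal and vertical pixel
    differences: quadratic for small differences, linear beyond T."""
    g = np.zeros_like(x)
    for axis in (0, 1):
        d = np.diff(x, axis=axis)
        dg = np.clip(d, -T, T)        # Huber derivative, saturating at +/-T
        pad = [(0, 0), (0, 0)]
        pad[axis] = (0, 1)
        g -= np.pad(dg, pad)          # -rho'(d_i) acting on the left pixel
        pad[axis] = (1, 0)
        g += np.pad(dg, pad)          # +rho'(d_i) acting on the right pixel
    return g

def map_decode(lo, hi, steps=200, lr=0.1):
    """Gradient projection (sketch): descend the smoothness prior, then
    clip each pixel back into its quantization cell [lo, hi]."""
    x = 0.5 * (lo + hi)               # start from the cell centroids
    for _ in range(steps):
        x = np.clip(x - lr * huber_grad(x), lo, hi)
    return x
```

The projection keeps every estimate consistent with the compressed bitstream (each pixel stays inside its cell), while the prior suppresses the blocky artifacts that plain centroid reconstruction exhibits.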

  14. Nursing image: an evolutionary concept analysis.

    PubMed

    Rezaei-Adaryani, Morteza; Salsali, Mahvash; Mohammadi, Eesa

    2012-12-01

    A long-term challenge to the nursing profession is the concept of image. In this study, we used the Rodgers' evolutionary concept analysis approach to analyze the concept of nursing image (NI). The aim of this concept analysis was to clarify the attributes, antecedents, consequences, and implications associated with the concept. We performed an integrative internet-based literature review to retrieve English literature published from 1980-2011. Findings showed that NI is a multidimensional, all-inclusive, paradoxical, dynamic, and complex concept. The media, invisibility, clothing style, nurses' behaviors, gender issues, and professional organizations are the most important antecedents of the concept. We found that NI is pivotal in staff recruitment and nursing shortage, resource allocation to nursing, nurses' job performance, workload, burnout and job dissatisfaction, violence against nurses, public trust, and salaries available to nurses. An in-depth understanding of the NI concept would assist nurses to eliminate negative stereotypes and build a more professional image for the nurse and the profession. PMID:23343236

  15. Simple Low Level Features for Image Analysis

    NASA Astrophysics Data System (ADS)

    Falcoz, Paolo

    As human beings, we perceive the world around us mainly through our eyes, and give what we see the status of “reality”; as such we have historically tried to create ways of recording this reality so we could augment or extend our memory. From early attempts in photography, like the image produced in 1826 by the French inventor Nicéphore Niépce (Figure 2.1), to the latest high definition camcorders, the number of recorded pieces of reality has increased exponentially, posing the problem of managing all that information. Most of the raw video material produced today has lost its memory augmentation function, as it will hardly ever be viewed by any human; pervasive CCTVs are an example. They generate an enormous amount of data each day, but there is not enough “human processing power” to view them. Therefore the need for effective automatic image analysis tools is great, and a lot of effort has been put into it, both by academia and industry. In this chapter, a review of some of the most important image analysis tools is presented.

  16. Mammographic quantitative image analysis and biologic image composition for breast lesion characterization and classification

    SciTech Connect

    Drukker, Karen; Giger, Maryellen L.; Li, Hui; Duewer, Fred; Malkov, Serghei; Joe, Bonnie; Kerlikowske, Karla; Shepherd, John A.; Flowers, Chris I.; Drukteinis, Jennifer S.

    2014-03-15

    Purpose: To investigate whether the biologic image composition of mammographic lesions can improve upon existing mammographic quantitative image analysis (QIA) in estimating the probability of malignancy. Methods: The study population consisted of 45 breast lesions imaged with dual-energy mammography prior to breast biopsy, with final diagnoses of 10 invasive ductal carcinomas, 5 cases of ductal carcinoma in situ, 11 fibroadenomas, and 19 other benign diagnoses. Analysis was threefold: (1) the raw low-energy mammographic images were analyzed with an established in-house QIA method, “QIA alone”; (2) the three-compartment breast (3CB) composition measures of water, lipid, and protein thickness, derived from the dual-energy mammography, were assessed, “3CB alone”; and (3) information from QIA and 3CB was combined, “QIA + 3CB.” Analysis was initiated from radiologist-indicated lesion centers and was otherwise fully automated. Steps of the QIA and 3CB methods were lesion segmentation, characterization, and subsequent classification for malignancy in leave-one-case-out cross-validation. Performance assessment included box plots, Bland-Altman plots, and Receiver Operating Characteristic (ROC) analysis. Results: The area under the ROC curve (AUC) for distinguishing between benign and malignant lesions (invasive and DCIS) was 0.81 (standard error 0.07) for the “QIA alone” method, 0.72 (0.07) for the “3CB alone” method, and 0.86 (0.04) for “QIA + 3CB” combined. The difference in AUC between “QIA + 3CB” and “QIA alone” was 0.043, but this failed to reach statistical significance (95% confidence interval [-0.17 to +0.26]). Conclusions: In this pilot study analyzing the new 3CB imaging modality, knowledge of the composition of breast lesions and their periphery appeared additive in combination with existing mammographic QIA methods for the distinction between different benign and malignant lesion types.
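AUC figures of the kind reported above can be computed with the rank (Mann-Whitney) formulation of the area under the ROC curve; this snippet is illustrative only, not the study's software.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation: the
    probability that a random malignant case (label 1) scores above a
    random benign case (label 0), counting ties as half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

In a leave-one-case-out cross-validation, the held-out classifier outputs collected across all cases would be passed in as `scores` with the biopsy diagnoses as `labels`.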

  17. Digital Transplantation Pathology: Combining Whole Slide Imaging, Multiplex Staining, and Automated Image Analysis

    PubMed Central

    Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J

    2013-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. “-Omics” analysis of tissues, peripheral blood and fluids, and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a “systems” model. Nevertheless, except for immunostaining, no transformative advancements have “modernized” routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the gap between traditional histology and global “-omic” analyses. Included are side-by-side comparisons, objective quantification of biopsy findings, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, these challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785

  18. Bayesian Analysis of Hmi Images and Comparison to Tsi Variations and MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.; Tran, T. V.

    2015-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high-resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from June 2010 to December 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations, and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Ulrich, R.K., Parker, D, Bertello, L. and

  19. Bayesian Analysis Of HMI Solar Image Observables And Comparison To TSI Variations And MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.

    2014-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high-resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from May 2010 to June 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations, and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Ulrich, R.K., Parker, D, Bertello, L. and

  20. Fusion and quality analysis for remote sensing images using contourlet transform

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

    Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting the spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) the image fusion method and ii) quality analysis of the fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. The fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than conventional wavelet-based fusion methods.
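
Transform-domain fusion methods like the contourlet approach above commonly combine source images by keeping, at each position, the coefficient with the larger magnitude. The sketch below shows that select-max rule applied directly to two hypothetical detail images; a real contourlet or wavelet fusion would apply it to transform coefficients, and the input arrays here are illustrative:

```python
def fuse_max_abs(a, b):
    """Pixel-wise 'choose the stronger detail' fusion rule.

    Real contourlet/wavelet fusion applies this in the transform
    domain; here it is applied directly to two detail images.
    """
    return [[x if abs(x) >= abs(y) else y for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

pan_detail = [[5, -9], [0, 1]]    # hypothetical panchromatic details
ms_detail  = [[-2, 3], [4, -1]]   # hypothetical multispectral details
print(fuse_max_abs(pan_detail, ms_detail))  # -> [[5, -9], [4, 1]]
```

The rule favors whichever source carries stronger edge information at each location, which is why fusion preserves the panchromatic image's spatial detail.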

  1. Covariance of Lucky Images: Performance analysis

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Cagigas, Miguel A.; Villó-Pérez, Isidro; Colodro-Conde, Carlos; Ginski, C.; Mugrauer, M.; Seeliger, M.

    2016-09-01

    The covariance of ground-based Lucky Images (COELI) is a robust and easy-to-use algorithm that allows us to detect faint companions surrounding a host star. In this paper we analyze the influence of the number of processed frames, the frame quality, the atmospheric conditions and the detection noise on the companion detectability. This analysis has been carried out using both experimental and computer-simulated imaging data. Although the technique allows the detection of faint companions, the camera detection noise and the use of a limited number of frames limit the minimum detectable companion intensity to around 1000 times fainter than the host star when placed at an angular distance corresponding to the first few Airy rings. The reachable contrast could be even larger when detecting companions with the assistance of an adaptive optics system.

  2. PAMS photo image retrieval prototype alternatives analysis

    SciTech Connect

    Conner, M.L.

    1996-04-30

    Photography and Audiovisual Services uses a system called the Photography and Audiovisual Management System (PAMS) to perform order entry and billing services. The PAMS system utilizes Revelation Technologies database management software, AREV. Work is currently in progress to link the PAMS AREV system to a Microsoft SQL Server database engine to provide photograph indexing and query capabilities. The link between AREV and SQL Server will use a technique called “bonding.” This photograph imaging subsystem will interface to the PAMS system and handle the image capture and retrieval portions of the project. The intent of this alternatives analysis is to examine the software and hardware alternatives available to meet the requirements for this project, and to identify a cost-effective solution.

  3. [Imaging Mass Spectrometry in Histopathologic Analysis].

    PubMed

    Yamazaki, Fumiyoshi; Seto, Mitsutoshi

    2015-04-01

    Matrix-assisted laser desorption/ionization (MALDI)-imaging mass spectrometry (IMS) enables visualization of the distribution of a range of biomolecules by integrating biochemical information from mass spectrometry with positional information from microscopy. IMS can identify a target molecule. In addition, IMS enables global analysis of biomolecules, including unknown molecules, by detecting the mass-to-charge ratio without a predefined target, which makes it possible to identify novel molecules. IMS generates data on the distribution of lipids and small molecules in tissues, which are difficult to visualize with either conventional counter-staining or immunohistochemistry. In this review, we first introduce the principle of imaging mass spectrometry and recent advances in sample preparation methods. Second, we present findings regarding biological samples, especially pathological ones. Finally, we discuss the limitations and problems of the IMS technique and its clinical application, such as in drug development. PMID:26536781

  4. Self-assessed performance improves statistical fusion of image labels

    SciTech Connect

    Bryan, Frederick W. Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-03-15

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance
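
The contrast the abstract draws between simple majority voting and voting weighted by self-assessed confidence can be sketched as follows. This is an illustrative toy, not the authors' statistical fusion algorithm; the function name and weighting scheme are assumptions:

```python
from collections import defaultdict

def fuse_labels(votes, weights=None):
    """Fuse per-pixel labels from several raters.

    votes   -- list of labels, one per rater
    weights -- optional list of rater confidences (e.g. self-assessed,
               in [0, 1]); defaults to simple majority voting
    """
    if weights is None:
        weights = [1.0] * len(votes)
    score = defaultdict(float)
    for label, w in zip(votes, weights):
        score[label] += w
    # Return the label with the highest accumulated weight
    return max(score, key=score.get)

# Majority voting: two raters say background (0), one says foreground (1)
print(fuse_labels([0, 0, 1]))                    # -> 0
# Same votes, but the dissenting rater is highly confident
print(fuse_labels([0, 0, 1], [0.2, 0.3, 0.9]))  # -> 1
```

Weighting lets a confident minority overrule an unsure majority, which is the mechanism by which self-assessment improves on plain voting.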

  5. Self-assessed performance improves statistical fusion of image labels

    PubMed Central

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-01-01

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance

  6. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. Through this paper the authors have tried to explain these tasks, which fall into three categories: image compression, image enhancement and restoration, and measurement extraction. These are illustrated with examples such as signature comparison, counterfeit currency comparison and footwear sole impressions, using the software Canvas and Corel Draw.

  7. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer) or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which features based on wavelet frames, representing the texture content of soil images, are extracted and classified by applying a maximum-likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil texture. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
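
The wavelet-feature idea can be sketched as follows: decompose the image with a filter bank and use subband energies as texture descriptors. The Haar filters and two-level depth below are stand-ins for the paper's actual wavelet frames, and the input sizes must be powers of two for this toy decomposition:

```python
def haar_level(img):
    """One 2-D Haar step: returns (approx, (H, V, D)) subbands."""
    rows = [[(r[i] + r[i + 1]) / 2 for i in range(0, len(r), 2)] for r in img]
    drow = [[(r[i] - r[i + 1]) / 2 for i in range(0, len(r), 2)] for r in img]
    def cols(m, sign):
        return [[(m[j][k] + sign * m[j + 1][k]) / 2 for k in range(len(m[0]))]
                for j in range(0, len(m), 2)]
    return cols(rows, 1), (cols(drow, 1), cols(rows, -1), cols(drow, -1))

def texture_features(img, levels=2):
    """Mean subband energies, a common wavelet texture descriptor."""
    feats, a = [], img
    for _ in range(levels):
        a, details = haar_level(a)
        for d in details:  # horizontal, vertical, diagonal details
            n = sum(len(r) for r in d)
            feats.append(sum(x * x for r in d for x in r) / n)
    return feats

checker = [[1.0, 0.0, 1.0, 0.0],
           [0.0, 1.0, 0.0, 1.0],
           [1.0, 0.0, 1.0, 0.0],
           [0.0, 1.0, 0.0, 1.0]]
# The diagonal subband carries all the checkerboard's energy
print(texture_features(checker))  # -> [0.0, 0.0, 0.25, 0.0, 0.0, 0.0]
```

A maximum-likelihood classifier would then compare such feature vectors against per-class statistics learned from labeled soil images.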

  8. Stem Cell Imaging: Tools to Improve Cell Delivery and Viability

    PubMed Central

    Wang, Junxin; Jokerst, Jesse V.

    2016-01-01

    Stem cell therapy (SCT) has shown very promising preclinical results in a variety of regenerative medicine applications. Nevertheless, the complete utility of this technology remains unrealized. Imaging is a potent tool used in multiple stages of SCT and this review describes the role that imaging plays in cell harvest, cell purification, and cell implantation, as well as a discussion of how imaging can be used to assess outcome in SCT. We close with some perspective on potential growth in the field. PMID:26880997

  9. Error analysis of large aperture static interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zhang, Guo

    2015-12-01

    The Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux and a wide spectral range. It overcomes the contradiction between high flux and high stability, and thus has important value in scientific studies and applications. However, the imaging process of LASIS obeys different error laws from those of traditional imaging spectrometers because of its different imaging style, and its data processing is correspondingly complicated. In order to improve the accuracy of spectrum detection and to serve quantitative analysis and monitoring of topographic surface features, the error laws of LASIS imaging must be studied. In this paper, the LASIS errors are classified as interferogram error, radiometric correction error and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS under combined time and space modulation is experimentally analyzed, together with the errors from the radiometric correction and spectral inversion processes.

  10. Bone feature analysis using image processing techniques.

    PubMed

    Liu, Z Q; Austin, T; Thomas, C D; Clement, J G

    1996-01-01

    In order to establish the correlation between bone structure and age, and to obtain information about age-related bone changes, it is necessary to study microstructural features of human bone. Traditionally, in bone biology and forensic science, the analysis of bone cross-sections has been carried out manually. Such a process is known to be slow, inefficient and prone to human error. Consequently, the results obtained so far have been unreliable. In this paper we present a new approach to quantitative analysis of cross-sections of human bones using digital image processing techniques. We demonstrate that such a system is able to extract various bone features consistently and is capable of providing more reliable data and statistics for bones. As a result, we will be able to correlate features of bone microstructure with age and possibly also with age-related bone diseases such as osteoporosis. The development of knowledge-based computer vision systems for automated bone image analysis can now be considered feasible.

  11. Analysis-Driven Lossy Compression of DNA Microarray Images.

    PubMed

    Hernández-Cabronero, Miguel; Blanes, Ian; Pinho, Armando J; Marcellin, Michael W; Serra-Sagristà, Joan

    2016-02-01

    DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yield only limited compression performance (compression ratios below 2:1), whereas lossy coding methods may introduce unacceptable distortions in the analysis process. This work introduces a novel Relative Quantizer (RQ), which employs non-uniform quantization intervals designed for improved compression while bounding the impact on the DNA microarray analysis. This quantizer constrains the maximum relative error introduced into quantized imagery, devoting higher precision to pixels critical to the analysis process. For suitable parameter choices, the resulting variations in the DNA microarray analysis are less than half of those inherent to the experimental variability. Experimental results reveal that appropriate analysis can still be performed for average compression ratios exceeding 4.5:1.
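
The core idea of a relative quantizer, bins that grow geometrically so the relative (rather than absolute) error is bounded, can be sketched generically. This is not the paper's exact RQ design; the bin layout and midpoint reconstruction are assumptions:

```python
import math

def rq_quantize(x, eps):
    """Map x > 0 to the index of its bin on the multiplicative grid (1+eps)^k.

    Because bins grow geometrically, the relative error of the
    reconstruction is bounded by roughly eps/2, independent of magnitude.
    """
    if x <= 0:
        return 0
    return int(math.floor(math.log(x, 1.0 + eps))) + 1

def rq_dequantize(k, eps):
    if k == 0:
        return 0.0
    # Reconstruct at the geometric midpoint of the bin
    return (1.0 + eps) ** (k - 1) * math.sqrt(1.0 + eps)

eps = 0.1
for x in [1.5, 20.0, 999.0, 65535.0]:
    xr = rq_dequantize(rq_quantize(x, eps), eps)
    assert abs(xr - x) / x <= eps / 2 + 1e-9  # bounded relative error
```

High-intensity microarray spots thus get coarser absolute steps than dim ones, which is what allows compression gains without distorting ratio-based analysis.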

  12. A new imaging technique for reliable migration velocity analysis

    SciTech Connect

    Duquet, B.; Ehinger, A.; Lailly, P.

    1994-12-31

    In the case of severe lateral velocity variations, prestack depth migration is not suitable for migration velocity analysis. The authors therefore propose to replace prestack depth migration with prestack imaging by coupled linearized inversion (PICLI). Results obtained with the Marmousi model show the improvement offered by this method for migration velocity analysis. PICLI involves a huge amount of computation, hence the authors have paid special attention both to the solution of the forward problem and to the optimization algorithm. To simplify the forward problem they make use of paraxial approximations of the wave equation. Efficiency in the optimization algorithm is obtained by an exact calculation of the gradient by means of the adjoint-state technique and by adequate preconditioning. In this way, the above-mentioned improvement is obtained at reasonable cost.

  13. Patient-adaptive lesion metabolism analysis by dynamic PET images.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2012-01-01

    Dynamic PET imaging provides important spatio-temporal information for metabolism analysis of organs and tissues, and provides a valuable reference for clinical diagnosis and pharmacokinetic analysis. Due to the poor statistical properties of the measurement data in low-count dynamic PET acquisition and disturbances from surrounding tissues, identifying small lesions inside the human body is still a challenging issue. The uncertainties in estimating the arterial input function also limit the accuracy and reliability of the metabolism analysis of lesions. Furthermore, patient size and motion during PET acquisition yield mismatches against a general-purpose reconstruction system matrix, which also affect the quantitative accuracy of lesion metabolism analyses. In this paper, we present a dynamic PET metabolism analysis framework that defines a patient-adaptive system matrix to improve lesion metabolism analysis. Both patient size information and potential small lesions are incorporated through simulations of phantoms of different sizes and individual point source responses. The new framework improves the quantitative accuracy of lesion metabolism analysis and makes lesion identification more precise. The requirement for accurate input functions is also reduced. Experiments are conducted on a Monte Carlo simulated data set for quantitative analysis and validation, and on real patient scans to assess clinical potential. PMID:23286175

  14. Mapping Fire Severity Using Imaging Spectroscopy and Kernel Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Cui, M.; Zhang, Y.; Veraverbeke, S.

    2014-12-01

    Improved spatial representation of within-burn heterogeneity after wildfires is paramount to effective land management decisions and more accurate fire emissions estimates. In this work, we demonstrate the feasibility and efficacy of airborne imaging spectroscopy (hyperspectral imagery) for quantifying wildfire burn severity, using kernel based image analysis techniques. Two different airborne hyperspectral datasets, acquired over the 2011 Canyon and 2013 Rim fire in California using the Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) sensor, were used in this study. The Rim Fire, covering parts of the Yosemite National Park, started on August 17, 2013, and was the third largest fire in California's history. The Canyon Fire occurred in the Tehachapi mountains, and started on September 4, 2011. In addition to post-fire data for both fires, half of the Rim fire was also covered with pre-fire images. Fire severity was measured in the field using the Geo Composite Burn Index (GeoCBI). The field data were utilized to train and validate our models, wherein the trained models, in conjunction with imaging spectroscopy data, were used for GeoCBI estimation over wide geographical regions. This work presents an approach for using remotely sensed imagery combined with GeoCBI field data to map fire scars based on a non-linear (kernel based) epsilon-Support Vector Regression (e-SVR), which was used to learn the relationship between spectra and GeoCBI in a kernel-induced feature space. Classification of healthy vegetation versus fire-affected areas based on morphological multi-attribute profiles was also studied. The availability of pre- and post-fire imaging spectroscopy data over the Rim Fire provided a unique opportunity to evaluate the performance of bi-temporal imaging spectroscopy for assessing post-fire effects.
This type of data is currently constrained because of limited airborne acquisitions before a fire, but will become widespread with future spaceborne sensors such as those on

  15. Digital processing to improve image quality in real-time neutron radiography

    NASA Astrophysics Data System (ADS)

    Fujine, Shigenori; Yoneda, Kenji; Kanda, Keiji

    1985-01-01

    Real-time neutron radiography (NTV) has been used for practical applications at the Kyoto University Reactor (KUR). At present, however, the direct image from the TV system is still poor in resolution and low in contrast. In this paper several image improvements, such as a frame summing technique, are demonstrated that are effective in increasing image quality in neutron radiography. Image integration before the A/D converter has a beneficial effect on image quality, and the high quality image reveals details invisible in direct images, such as: small holes by a reversed image, defects in a neutron converter screen through a high quality image, a moving object in a contoured image, a slight difference between two low-contrast images by a subtraction technique, and so on. For real-time application, a contouring operation and an averaging approach can also be utilized effectively.
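
The benefit of frame summing comes from uncorrelated noise averaging out while the signal does not: averaging N frames reduces the noise standard deviation by roughly sqrt(N). A minimal simulation of that effect, with an illustrative 1-D "frame" and noise level:

```python
import random

random.seed(0)  # deterministic demo

def noisy_frame(signal, sigma):
    """Simulate one TV frame: the true signal plus Gaussian readout noise."""
    return [s + random.gauss(0.0, sigma) for s in signal]

def average_frames(frames):
    """Frame summing: average N frames to suppress uncorrelated noise."""
    return [sum(col) / len(frames) for col in zip(*frames)]

signal = [10.0, 50.0, 10.0]   # a bright detail between dark pixels
avg = average_frames([noisy_frame(signal, 5.0) for _ in range(64)])

# Residual noise after averaging 64 frames is ~ 5 / sqrt(64), i.e. about 0.6
rms = (sum((a - b) ** 2 for a, b in zip(avg, signal)) / len(signal)) ** 0.5
print(rms)
```

The same reasoning explains why integration before the A/D converter helps: noise is averaged before quantization can lock it in.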

  16. A novel approach to scatter-free imaging for the improvement of breast cancer Detection

    NASA Astrophysics Data System (ADS)

    Green, F. H.; Veale, M. C.; Wilson, M. D.; Seller, P.; Scuffham, J.; Pani, S.

    2014-12-01

    Compton scattering is one of the main causes of image degradation in X-ray imaging. This is particularly noticeable in mammography, where details of interest have low contrast in comparison to the surrounding tissue. This work shows the feasibility of obtaining scatter-free images by using a quasi-monochromatic X-ray beam and a pixellated spectroscopic detector. It presents a characterisation of the imaging system and quantitative imaging data for a low-contrast test object. An improvement in contrast of 8% was observed compared to images obtained including scattered radiation. Comparison with a conventional setup showed an increase in the image quality factor when scatter was removed.

  17. Improving performance of real-time multispectral imaging system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A real-time multispectral imaging system can be a science-based tool for fecal and ingesta contaminant detection during poultry processing. For the implementation of this imaging system at commercial poultry processing plant, false positive errors must be removed. For doing this, we tested and imp...

  18. Soil Surface Roughness through Image Analysis

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Saa-Requejo, A.; Valencia, J. L.; Moratiel, R.; Paz-Gonzalez, A.; Agro-Environmental Modeling

    2011-12-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depend on several factors, one of them being surface micro-topography, usually quantified through soil surface roughness (SSR). SSR greatly affects surface sealing and runoff generation, yet little information is available about the effect of roughness on the spatial distribution of runoff and on flow concentration. The methods commonly used to measure SSR involve measuring point elevation using a pin roughness meter or laser, both of which are labor intensive and expensive. Lately, a simple and inexpensive technique based on the percentage of shadow in soil surface images has been developed to determine SSR in the field, in order to obtain measurements for widespread application. One of the first steps in this technique is image de-noising and thresholding to estimate the percentage of black pixels in the studied area. In this work, a series of soil surface images has been analyzed applying several wavelet de-noising and thresholding algorithms to study the variation in the percentage of shadows and the shadow size distribution. Funding provided by the Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. AGL2010-21501/AGR and by the Xunta de Galicia through project no. INCITE08PXIB1621 is greatly appreciated.
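
The thresholding step, counting the fraction of shadow pixels as a roughness proxy, is simple to sketch. The de-noising stage and the threshold value are omitted or assumed here; the tiny test images are illustrative:

```python
def shadow_fraction(img, threshold=128):
    """Fraction of pixels darker than `threshold`, a proxy for roughness."""
    flat = [p for row in img for p in row]
    return sum(p < threshold for p in flat) / len(flat)

# A rough surface casts more shadow than a smooth one
smooth = [[200, 210], [190, 205]]
rough  = [[200,  40], [ 60, 205]]
print(shadow_fraction(smooth))  # -> 0.0
print(shadow_fraction(rough))   # -> 0.5
```

In the paper's pipeline, wavelet de-noising would precede this step so that sensor noise is not miscounted as shadow.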

  19. A novel chaotic map and an improved chaos-based image encryption scheme.

    PubMed

    Zhang, Xianhan; Cao, Yang

    2014-01-01

    In this paper, we present a novel approach to creating a new chaotic map and propose an improved image encryption scheme based on it. Compared with traditional classic one-dimensional chaotic maps like the Logistic Map and Tent Map, the newly created chaotic map demonstrates many better chaotic properties for encryption, implied by a much larger maximal Lyapunov exponent. Furthermore, an image encryption method based on the new chaotic map and Arnold's Cat Map is designed and shown to be robust. The simulation results and security analysis indicate that the method not only meets the requirements of image encryption, but also achieves good effectiveness and security, making it usable for general applications.
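
The general pattern behind chaos-based image encryption can be sketched with the classic logistic map (one of the baselines the paper compares against, not its novel map, and without the Arnold's Cat Map permutation stage): iterate the map from a secret initial value to produce a keystream, then XOR it with the pixel bytes. The parameter values below are illustrative:

```python
def logistic_keystream(x0, n, r=3.99):
    """Iterate the logistic map x -> r*x*(1-x), mapping each state to a byte."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) % 256)
    return ks

def chaos_xor(data, key):
    """XOR pixel bytes with a chaotic keystream (encryption == decryption)."""
    return bytes(b ^ k for b, k in zip(data, logistic_keystream(key, len(data))))

pixels = bytes([12, 200, 37, 37, 37, 255])
cipher = chaos_xor(pixels, 0.612)
assert chaos_xor(cipher, 0.612) == pixels   # round-trip with the correct key
assert chaos_xor(cipher, 0.613) != pixels   # tiny key change fails: sensitivity
```

The second assertion illustrates the sensitive dependence on initial conditions (large Lyapunov exponent) that makes such maps attractive for encryption.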

  20. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image-intensified and long-wave infrared imagery to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well candidate IQ features correlate with actual human performance, which is measured by a perception study. The paper demonstrates how monotonic correlation can identify worthy features that could be overlooked by traditional correlation values.
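
The paper's monotonic correlation coefficient is its own construction; as a familiar stand-in, Spearman's rank correlation likewise scores monotonic (not merely linear) association, which is why a nonlinear but order-preserving IQ feature can still score perfectly. A self-contained sketch with illustrative data:

```python
def rankdata(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the ranks = Spearman's rho."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# A nonlinear but strictly monotonic relation still scores rho = 1
iq = [0.1, 0.4, 0.2, 0.9]
perf = [v ** 3 for v in iq]
print(spearman(iq, perf))  # 1.0 up to float rounding
```

A Pearson correlation on the raw values would under-rate this feature, which is the gap the monotonic coefficient is designed to close.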

  1. Computerized PET/CT image analysis in the evaluation of tumour response to therapy

    PubMed Central

    Wang, J; Zhang, H H

    2015-01-01

    Current cancer therapy strategy is mostly population based; however, there are large differences in tumour response among patients. It is therefore important for treating physicians to know individual tumour response. In recent years, many studies proposed the use of computerized positron emission tomography/CT image analysis in the evaluation of tumour response. Results showed that computerized analysis overcame some major limitations of current qualitative and semiquantitative analysis and led to improved accuracy. In this review, we summarize these studies in terms of four steps of the analysis: image registration, tumour segmentation, image feature extraction and response evaluation. Future work is proposed and challenges are described. PMID:25723599

  2. Improving the Performance of Three-Mirror Imaging Systems with Freeform Optics

    NASA Technical Reports Server (NTRS)

    Howard, Joseph M.; Wolbach, Steven

    2013-01-01

    The image quality improvement for three-mirror systems by Freeform Optics is surveyed over various f-number and field specifications. Starting with the Korsch solution, we increase the surface shape degrees of freedom and record the improvements.

  3. Wavelet Analysis of Space Solar Telescope Images

    NASA Astrophysics Data System (ADS)

    Zhu, Xi-An; Jin, Sheng-Zhen; Wang, Jing-Yu; Ning, Shu-Nian

    2003-12-01

    The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing), but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel capacity. Therefore, the data must be compressed before transmission. Wavelet analysis is a technique developed over the last 10 years, with great potential for application. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet coding, used for compressing the SST images. The results show that this coding is adequate for the job.
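    The compression idea behind zerotree coding (keep the coarse approximation, discard insignificant detail coefficients) can be sketched with a one-level Haar transform. This is a toy illustration of the principle, not the EZW coder used for SST:

```python
def haar_1d(signal):
    """One level of the orthonormal Haar transform: averages + details."""
    s2 = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    return approx, detail

def inv_haar_1d(approx, detail):
    s2 = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s2, (a - d) / s2]
    return out

# Small detail coefficients are zeroed (cf. zerotree significance testing),
# and the signal is reconstructed from what remains.
x = [10.0, 10.0, 10.0, 10.2, 50.0, 50.0, 50.1, 50.1]
a, d = haar_1d(x)
d_kept = [v if abs(v) > 0.5 else 0.0 for v in d]
print(inv_haar_1d(a, d_kept))
```

    Most of the energy survives in the approximation coefficients, so the reconstruction error stays small even after dropping every detail coefficient in this example.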

  4. Image interpretation for landforms using expert systems and terrain analysis

    SciTech Connect

    Al-garni, A.M.

    1992-01-01

    Most current research in digital photogrammetry and computer vision concentrates on the early vision process; little research has been done on the late vision process. The late vision process in image interpretation contains descriptive knowledge and heuristic information which requires artificial intelligence (AI) in order for it to be modeled. This dissertation has introduced expert systems, as AI tools, and terrain analysis to the late vision process. This goal has been achieved by selecting and theorizing landforms as features for image interpretation. These features present a wide spectrum of knowledge that can furnish a good foundation for image interpretation processes. In this dissertation an EXpert system for LANdform interpretation using Terrain analysis (EXPLANT) was developed. EXPLANT can interpret the major landforms on earth. The system contains sample military and civilian consultations regarding site analysis and development. A learning mechanism was developed to accommodate necessary improvement for the data base due to the rapidly advancing and dynamically changing technology and knowledge. Many interface facilities and menu-driven screens were developed in the system to aid the users. Finally, the system has been tested and verified to be working properly.

  5. The Scientific Image in Behavior Analysis.

    PubMed

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press. PMID:27606187

  6. Progress on retinal image analysis for age related macular degeneration.

    PubMed

    Kanagasingam, Yogesan; Bhuiyan, Alauddin; Abràmoff, Michael D; Smith, R Theodore; Goldschmidt, Leonard; Wong, Tien Y

    2014-01-01

    Age-related macular degeneration (AMD) is the leading cause of vision loss in people over the age of 50 years in developed countries. The number affected is expected to increase ∼1.5-fold over the next ten years due to the aging of the population. One of the main measures of AMD severity is the analysis of drusen, pigmentary abnormalities, geographic atrophy (GA) and choroidal neovascularization (CNV) from imaging based on color fundus photographs, optical coherence tomography (OCT) and other imaging modalities. Each of these imaging modalities has strengths and weaknesses for extracting individual AMD pathology, and different imaging techniques are used in combination for capturing and/or quantifying different pathologies. Current dry AMD treatments cannot cure or reverse vision loss. However, the Age-Related Eye Disease Study (AREDS) showed that specific anti-oxidant vitamin supplementation reduces the risk of progression from intermediate stages (defined as the presence of either many medium-sized drusen or one or more large drusen) to late AMD, which allows for preventative strategies in properly identified patients. Thus identification of people with early stage AMD is important to design and implement preventative strategies for late AMD, and to determine their cost-effectiveness. A mass screening facility with teleophthalmology or telemedicine in combination with computer-aided analysis for large rural-based communities may identify more individuals suitable for early stage AMD prevention. In this review, we discuss different imaging modalities that are currently being considered or used for screening AMD. In addition, we look into various automated and semi-automated computer-aided grading systems and related retinal image analysis techniques for drusen, geographic atrophy and choroidal neovascularization detection and/or quantification for measurement of AMD severity using these imaging modalities. We also review the existing telemedicine studies which

  7. Improvement of Rocket Engine Plume Analysis Techniques

    NASA Technical Reports Server (NTRS)

    Smith, S. D.

    1982-01-01

    A nozzle/plume flow field code was developed. The RAMP code, chosen as the basic code, is of modular construction and has the following capabilities: a two-phase transonic solution; a two-phase, reacting-gas (chemical equilibrium and reaction kinetics), supersonic inviscid nozzle/plume solution; and operation for inviscid solutions at both high and low altitudes. The following capabilities were added to the code: a direct interface with the JANNAF SPF code; a shock-capturing finite difference numerical operator; a two-phase, equilibrium/frozen boundary layer analysis; a variable oxidizer-to-fuel-ratio transonic solution; an improved two-phase transonic solution; and a two-phase real-gas semiempirical nozzle boundary layer expansion.

  8. Evaluation of improvement of diffuse optical imaging of brain function by high-density probe arrangements and imaging algorithms

    NASA Astrophysics Data System (ADS)

    Sakakibara, Yusuke; Kurihara, Kazuki; Okada, Eiji

    2016-04-01

    Diffuse optical imaging has been applied to measure localized hemodynamic responses to brain activation. One of the serious problems with diffuse optical imaging is the limitation of the spatial resolution caused by the sparse probe arrangement and the broadened spatial sensitivity profile of each probe pair. High-density probe arrangements and an image reconstruction algorithm that accounts for the broadening of the spatial sensitivity can improve the spatial resolution of the image. In this study, diffuse optical imaging of an absorption change in the brain is simulated to evaluate the effect of the high-density probe arrangements and imaging methods. The localization error, equivalent full-width at half-maximum, and circularity of the absorption change in the image obtained by the mapping and reconstruction methods from the data measured by five probe arrangements are compared to quantitatively evaluate the imaging methods and probe arrangements. The simple mapping method is sufficient for measurement-point densities up to the double-density probe arrangement. The image reconstruction method that accounts for the broadening of the spatial sensitivity of the probe pairs can effectively improve the spatial resolution of the image obtained from probe arrangements of quadruple density or higher, in which the distance between neighboring measurement points is 10.6 mm.
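    Two of the evaluation metrics named above are straightforward to compute from a reconstructed 2-D image. A minimal sketch, assuming the "equivalent FWHM" is taken as the diameter of a circle with the same area as the above-half-maximum region (one common convention; the paper's exact definition may differ):

```python
import math

def image_metrics(img, pixel_mm, true_xy):
    """Localization error (mm) and equivalent FWHM (mm) of a
    reconstructed 2-D absorption-change image."""
    total = sum(sum(row) for row in img)
    cx = sum(v * j for row in img for j, v in enumerate(row)) / total
    cy = sum(v * i for i, row in enumerate(img) for v in row) / total
    err = math.hypot(cx * pixel_mm - true_xy[0], cy * pixel_mm - true_xy[1])
    peak = max(max(row) for row in img)
    # Area of the region above half the peak, converted to a circle diameter.
    area = sum(1 for row in img for v in row if v >= peak / 2.0) * pixel_mm ** 2
    eq_fwhm = 2.0 * math.sqrt(area / math.pi)
    return err, eq_fwhm

# Symmetric toy activation centred at (2 mm, 2 mm) on a 1 mm grid.
img = [[0, 0, 0, 0, 0],
       [0, 1, 2, 1, 0],
       [0, 2, 4, 2, 0],
       [0, 1, 2, 1, 0],
       [0, 0, 0, 0, 0]]
err, fwhm = image_metrics(img, pixel_mm=1.0, true_xy=(2.0, 2.0))
print(err, fwhm)
```

    Circularity can be defined analogously by comparing the above-half-maximum region's perimeter and area.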

  9. Linked color imaging application for improving the endoscopic diagnosis accuracy: a pilot study.

    PubMed

    Sun, Xiaotian; Dong, Tenghui; Bi, Yiliang; Min, Min; Shen, Wei; Xu, Yang; Liu, Yan

    2016-01-01

    Endoscopy has been widely used in diagnosing gastrointestinal mucosal lesions. However, there is still a lack of objective endoscopic criteria. Linked color imaging (LCI) is a newly developed endoscopic technique that enhances color contrast. We therefore investigated the clinical application of LCI and further analyzed pixel brightness in the RGB color model. All the lesions were observed by white light endoscopy (WLE), LCI and blue laser imaging (BLI). Matlab software was used to calculate pixel brightness for red (R), green (G) and blue (B). Of the endoscopic images for lesions, LCI had significantly higher R compared with BLI but higher G compared with WLE (all P < 0.05). R/(G + B) was significantly different among the 3 techniques and qualified as a composite LCI marker. Our correlation analysis of endoscopic diagnosis with pathology revealed that LCI was quite consistent with pathological diagnosis (P = 0.000) and the color could predict certain kinds of lesions. The ROC curve demonstrated that at the cutoff of R/(G + B) = 0.646, the area under the curve was 0.646, and the sensitivity and specificity were 0.514 and 0.773, respectively. Taken together, LCI could improve the efficiency and accuracy of diagnosing gastrointestinal mucosal lesions and benefit targeted biopsy. R/(G + B) based on pixel brightness may be introduced as an objective criterion for evaluating endoscopic images. PMID:27641243
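    The composite marker itself is a simple per-image computation. A sketch of the R/(G + B) calculation over mean channel brightness, with made-up pixel values and using the abstract's cutoff only for illustration:

```python
def rgb_ratio(pixels):
    """Mean channel brightness and the composite R/(G + B) marker
    for a list of (R, G, B) pixel tuples from an endoscopic image."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return r, g, b, r / (g + b)

# Illustrative pixels; 0.646 is the cutoff reported in the abstract.
pixels = [(180, 90, 60), (170, 100, 70), (190, 80, 50)]
r, g, b, marker = rgb_ratio(pixels)
print(marker, "above cutoff" if marker > 0.646 else "below cutoff")
```

    In practice the ratio would be computed over a region of interest rather than the whole frame, and the cutoff would be re-derived per dataset from the ROC curve.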

  11. Second Harmonic Imaging improves Echocardiograph Quality on board the International Space Station

    NASA Technical Reports Server (NTRS)

    Garcia, Kathleen; Sargsyan, Ashot; Hamilton, Douglas; Martin, David; Ebert, Douglas; Melton, Shannon; Dulchavsky, Scott

    2008-01-01

    Ultrasound (US) capabilities have been part of the Human Research Facility (HRF) on board the International Space Station (ISS) since 2001. The US equipment on board the ISS includes a first-generation Tissue Harmonic Imaging (THI) option. Harmonic imaging (HI) is the second harmonic response of the tissue to the ultrasound beam and produces robust tissue detail and signal. Since this is a first-generation THI, there are inherent limitations in tissue penetration. As a breakthrough technology, HI extensively advanced the field of ultrasound. In cardiac applications, it drastically improves endocardial border detection and has become a common imaging modality. US images were captured and stored as JPEG stills from the ISS video downlink. US images with and without the harmonic imaging option were randomized and provided to volunteers without medical education or US skills for identification of the endocardial border. The results were processed and analyzed using applicable statistical calculations. The measurements in US images using HI improved measurement consistency and reproducibility among observers when compared to fundamental imaging. HI has been embraced by the imaging community at large as it improves the quality and data validity of US studies, especially in difficult-to-image cases. Even with the limitations of first-generation THI, HI improved the quality and measurability of many of the downlinked images from the ISS and should be an option utilized with cardiac imaging on board the ISS in all future space missions.

  12. 77 FR 27463 - Device Improvements for Pediatric X-Ray Imaging; Public Meeting; Request for Comments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-10

    ... HUMAN SERVICES Food and Drug Administration Device Improvements for Pediatric X-Ray Imaging; Public... ``Pediatric Information for X-ray Imaging Device Premarket Notifications.'' This guidance will apply to x-ray... issues relevant to radiation safety in pediatric x-ray imaging that may benefit from...

  13. Exponential filtering of singular values improves photoacoustic image reconstruction.

    PubMed

    Bhatt, Manish; Gutta, Sreedevi; Yalavarthy, Phaneendra K

    2016-09-01

    Model-based image reconstruction techniques yield better quantitative accuracy in photoacoustic image reconstruction. In this work, an exponential filtering of singular values was proposed for carrying out the image reconstruction in photoacoustic tomography. The results were compared with the widely popular Tikhonov regularization, time reversal, and state-of-the-art least-squares QR-based reconstruction algorithms for three digital phantom cases with varying signal-to-noise ratios of data. It was shown that exponential filtering provides superior photoacoustic images of better quantitative accuracy. Moreover, the proposed filtering approach was observed to be less biased toward the regularization parameter and did not come with any additional computational burden, as it was implemented within the Tikhonov filtering framework. It was also shown that standard Tikhonov filtering becomes an approximation to the proposed exponential filtering. PMID:27607501
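    Both Tikhonov regularization and exponential filtering fit the same SVD filter-factor framework: the regularized solution is x = Σᵢ fᵢ (uᵢᵀb/σᵢ) vᵢ, differing only in the factors fᵢ. A sketch on a toy ill-conditioned system, assuming the exponential factors take the form f = 1 − exp(−σ²/α) (the filter-factor choice consistent with Tikhonov being its approximation; the paper's exact parameterization may differ):

```python
import numpy as np

def filtered_solve(A, b, alpha, kind="exp"):
    """SVD-filtered least-squares solution x = sum_i f_i (u_i.b / s_i) v_i.
    kind='exp' : f = 1 - exp(-s^2/alpha)   (exponential filtering, assumed form)
    kind='tik' : f = s^2 / (s^2 + alpha)   (standard Tikhonov factors)"""
    U, s, Vt = np.linalg.svd(np.asarray(A, float), full_matrices=False)
    if kind == "exp":
        f = 1.0 - np.exp(-s ** 2 / alpha)
    else:
        f = s ** 2 / (s ** 2 + alpha)
    coeffs = f * (U.T @ np.asarray(b, float)) / s
    return Vt.T @ coeffs

# Mildly ill-conditioned toy system; the exact solution is [1, 1].
A = np.array([[1.0, 0.0], [0.0, 1e-3]])
b = np.array([1.0, 1e-3])
print(filtered_solve(A, b, 1e-4, "exp"))
print(filtered_solve(A, b, 1e-4, "tik"))
```

    Both filters pass the large singular value almost unchanged and strongly damp the small one; for σ² ≪ α the two factor formulas agree to first order, which is the sense in which Tikhonov approximates the exponential filter.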

  14. Multimodal Imaging for Improved Diagnosis and Treatment of Cancers

    PubMed Central

    Tempany, Clare; Jayender, Jagadeesan; Kapur, Tina; Bueno, Raphael; Golby, Alexandra; Agar, Nathalie; Jolesz, Ferenc

    2014-01-01

    This article reviews methods for image-guided diagnosis and therapy that increase precision in detection, characterization, and localization of many forms of cancer to achieve optimal target definition and complete resection or ablation. We present a new model of translational clinical image guided therapy research and describe the Advanced Multimodality Image Guided Operating (AMIGO) suite. AMIGO was conceived and designed to allow for the full integration of imaging in cancer diagnosis and treatment. We draw examples from over 500 cases performed on brain, neck, spine, thorax (breast, lung), and pelvis (prostate and gynecologic areas) and describe how they address some of the many challenges of treating brain, prostate and lung tumors. PMID:25204551

  15. Multimodal imaging for improved diagnosis and treatment of cancers.

    PubMed

    Tempany, Clare M C; Jayender, Jagadeesan; Kapur, Tina; Bueno, Raphael; Golby, Alexandra; Agar, Nathalie; Jolesz, Ferenc A

    2015-03-15

    The authors review methods for image-guided diagnosis and therapy that increase precision in the detection, characterization, and localization of many forms of cancer to achieve optimal target definition and complete resection or ablation. A new model of translational, clinical, image-guided therapy research is presented, and the Advanced Multimodality Image-Guided Operating (AMIGO) suite is described. AMIGO was conceived and designed to allow for the full integration of imaging in cancer diagnosis and treatment. Examples are drawn from over 500 procedures performed on brain, neck, spine, thorax (breast, lung), and pelvis (prostate and gynecologic) areas and are used to describe how they address some of the many challenges of treating brain, prostate, and lung tumors. Cancer 2015;121:817-827. © 2014 American Cancer Society. PMID:25204551

  16. Image stretching on a curved surface to improve satellite gridding

    NASA Technical Reports Server (NTRS)

    Ormsby, J. P.

    1975-01-01

    A method for substantially reducing gridding errors due to satellite roll, pitch and yaw is given. A gimbal-mounted curved screen, scaled to 1:7,500,000, is used to stretch the satellite image whereby visible landmarks coincide with a projected map outline. The resulting rms position errors averaged 10.7 km as compared with 25.6 and 34.9 km for two samples of satellite imagery upon which image stretching was not performed.

  17. Ontology modularization to improve semantic medical image annotation.

    PubMed

    Wennerberg, Pinar; Schulz, Klaus; Buitelaar, Paul

    2011-02-01

    Searching for medical images and patient reports is a significant challenge in a clinical setting. The contents of such documents are often not described in sufficient detail, thus making it difficult to utilize the inherent wealth of information contained within them. Semantic image annotation addresses this problem by describing the contents of images and reports using medical ontologies. Medical images and patient reports are then linked to each other through common annotations. Subsequently, search algorithms can more effectively find related sets of documents on the basis of these semantic descriptions. A prerequisite to realizing such a semantic search engine is that the data contained within should have been previously annotated with concepts from medical ontologies. One major challenge in this regard is the size and complexity of medical ontologies as annotation sources. Manual annotation is particularly time-consuming and labor-intensive in a clinical environment. In this article we propose an approach to reducing the size of clinical ontologies for more efficient manual image and text annotation. More precisely, our goal is to identify smaller fragments of a large anatomy ontology that are relevant for annotating medical images from patients suffering from lymphoma. Our work is in the area of ontology modularization, which is a recent and active field of research. We describe our approach, methods and data set in detail and we discuss our results.
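    The core modularization idea, extracting a small fragment of a large ontology around a set of seed concepts, can be sketched as bounded graph traversal over subclass links. This is a crude locality-based sketch, not the authors' published method, and the concept names are hypothetical:

```python
def extract_module(subclass_of, seed_concepts, max_depth=3):
    """Extract a small ontology fragment: all concepts reachable from the
    seed annotations by following subclass/superclass links up to max_depth."""
    # Build an undirected adjacency from (child, parent) pairs.
    adj = {}
    for child, parent in subclass_of:
        adj.setdefault(child, set()).add(parent)
        adj.setdefault(parent, set()).add(child)
    module, frontier = set(seed_concepts), set(seed_concepts)
    for _ in range(max_depth):
        frontier = {n for c in frontier for n in adj.get(c, ())} - module
        module |= frontier
    return module

# Toy anatomy fragment (hypothetical concept names).
edges = [("lymph node", "organ"), ("cervical lymph node", "lymph node"),
         ("heart", "organ"), ("mitral valve", "heart")]
print(sorted(extract_module(edges, {"lymph node"}, max_depth=1)))
```

    Annotators working on lymphoma images would then only see the lymph-node-centred module rather than the full anatomy ontology.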

  18. [An improved motion estimation of medical image series via wavelet transform].

    PubMed

    Zhang, Ying; Rao, Nini; Wang, Gang

    2006-10-01

    The compression of medical image series is very important in telemedicine. The motion estimation plays a key role in the video sequence compression. In this paper, an improved square-diamond search (SDS) algorithm is proposed for the motion estimation of medical image series. The improved SDS algorithm reduces the number of the searched points. This improved SDS algorithm is used in wavelet transformation field to estimate the motion of medical image series. A simulation experiment for digital subtraction angiography (DSA) is made. The experiment results show that the algorithm accuracy is higher than that of other algorithms in the motion estimation of medical image series. PMID:17121333
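    The square-diamond search details are not given in the abstract, but the baseline it accelerates, full-search block matching by sum of absolute differences (SAD), is easy to sketch. Fast schemes such as SDS reach (nearly) the same motion vector while evaluating far fewer candidate displacements:

```python
def sad(ref, cur, bx, by, dx, dy, n):
    """Sum of absolute differences between the n x n block of `cur` at
    (bx, by) and the block of `ref` displaced by (dx, dy)."""
    return sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
               for i in range(n) for j in range(n))

def best_motion_vector(ref, cur, bx, by, n=2, search=2):
    """Exhaustive full search over a (2*search+1)^2 window; fast schemes
    such as square-diamond search approximate this with fewer SAD calls."""
    candidates = [(dx, dy) for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)
                  if 0 <= by + dy <= len(ref) - n
                  and 0 <= bx + dx <= len(ref[0]) - n]
    return min(candidates, key=lambda v: sad(ref, cur, bx, by, v[0], v[1], n))

# A 2x2 bright block shifted one pixel right between consecutive frames.
ref = [[0] * 6 for _ in range(6)]
ref[2][3] = ref[2][4] = ref[3][3] = ref[3][4] = 9
cur = [[0] * 6 for _ in range(6)]
cur[2][2] = cur[2][3] = cur[3][2] = cur[3][3] = 9
print(best_motion_vector(ref, cur, bx=2, by=2))  # → (1, 0)
```

    Running the search on wavelet subbands, as the paper does, applies the same matching to the transform coefficients instead of raw pixels.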

  19. Application of Digital Image Correlation Method to Improve the Accuracy of Aerial Photo Stitching

    NASA Astrophysics Data System (ADS)

    Tung, Shih-Heng; Jhou, You-Liang; Shih, Ming-Hsiang; Hsiao, Han-Wei; Sung, Wen-Pei

    2016-04-01

    Satellite images and traditional aerial photos have long been used in remote sensing. However, these images have some problems: the resolution of satellite images is insufficient, the cost of obtaining traditional aerial photos is relatively high, and traditional flights also carry a human safety risk. These problems limit the application of such images. In recent years, the control technology of unmanned aerial vehicles (UAVs) has developed rapidly, making UAVs widely used for obtaining aerial photos. Compared to satellite images and traditional aerial photos, aerial photos obtained using a UAV have the advantages of higher resolution and lower cost. Because a UAV carries no crew, it is also possible to take aerial photos under unstable weather conditions. Images first have to be orthorectified and their distortion corrected. Then, with the help of image matching techniques and control points, these images can be stitched or used to establish a DEM of the ground surface. These images or DEM data can be used to monitor landslides or estimate landslide volume. For image matching, methods such as the Harris corner detector, SIFT or SURF can be used to extract and match feature points; however, the matching accuracy of these methods is at about the pixel or sub-pixel level. The accuracy of the digital image correlation method (DIC) during image matching can reach about 0.01 pixel. Therefore, this study applies the digital image correlation method to match the extracted feature points, and the stitched images are then inspected to judge the improvement. This study uses aerial photos of a reservoir area, stitched with and without the help of DIC. The results show that the misplacement in the stitched image using DIC to match feature points is significantly reduced. This shows that the use of DIC to match feature points can actually improve the accuracy of
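    At the heart of DIC is a correlation similarity measure between a patch in one image and candidate patches in the other; the ~0.01-pixel accuracy then comes from subpixel interpolation of the correlation surface, which is omitted here. A sketch of the zero-normalized cross-correlation (ZNCC) on flattened patches:

```python
def ncc(a, b):
    """Zero-normalized cross-correlation of two equal-size patches,
    the similarity measure at the heart of digital image correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

# A patch matches a brightness-scaled copy of itself perfectly (NCC = 1)
# but correlates poorly with an unrelated patch.
patch = [10, 40, 30, 20, 50, 60]
scaled = [2 * v + 5 for v in patch]     # simulated illumination change
other = [60, 10, 50, 20, 40, 30]
print(ncc(patch, scaled), ncc(patch, other))
```

    The mean subtraction and normalization make the match invariant to affine brightness changes between overlapping aerial photos, which is why correlation-based matching outperforms raw intensity differencing.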

  20. Analysis of autostereoscopic three-dimensional images using multiview wavelets.

    PubMed

    Saveljev, Vladimir; Palchikova, Irina

    2016-08-10

    We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images. PMID:27534470

  1. Xmipp 3.0: an improved software suite for image processing in electron microscopy.

    PubMed

    de la Rosa-Trevín, J M; Otón, J; Marabini, R; Zaldívar, A; Vargas, J; Carazo, J M; Sorzano, C O S

    2013-11-01

    Xmipp is a specialized software package for image processing in electron microscopy, mainly focused on 3D reconstruction of macromolecules through single-particle analysis. In this article we present Xmipp 3.0, a major release which introduces several improvements and new developments over the previous version. A central improvement is the concept of a project that stores the entire processing workflow from data import to final results. It is now possible to monitor, reproduce and restart all computing tasks as well as graphically explore the complete set of interrelated tasks associated with a given project. Other graphical tools have also been improved, such as data visualization, particle picking and parameter "wizards" that allow the visual selection of some key parameters. Many standard image formats are transparently supported for input/output by all programs. Additionally, results have been standardized, facilitating the interoperation between different Xmipp programs. Finally, as a result of a large code refactoring, the underlying C++ libraries are better suited for future developments and all code has been optimized. Xmipp is an open-source package that is freely available for download from: http://xmipp.cnb.csic.es.

  2. Spectroscopic Imaging with Improved Gradient Modulated Constant Adiabaticity Pulses on High-Field Clinical Scanners

    PubMed Central

    Andronesi, Ovidiu C.; Ramadan, Saadallah; Ratai, Eva-Maria; Jennings, Dominique; Mountford, Carolyn E.; Sorensen, A. Gregory

    2011-01-01

    The purpose of this work was to design and implement constant-adiabaticity gradient-modulated pulses that have improved slice profiles and reduced artifacts for spectroscopic imaging on 3T clinical scanners equipped with standard hardware. The newly proposed pulses were designed with the gradient offset independent adiabaticity (GOIA; Tannus and Garwood, 1997) method, using WURST modulation for the RF and gradient waveforms. The GOIA-WURST pulses were compared with GOIA-HSn (GOIA based on nth-order hyperbolic secant) and FOCI (Frequency Offset Corrected Inversion) pulses of the same bandwidth and duration. Numerical simulations and experimental measurements in phantoms and healthy volunteers are presented. GOIA-WURST pulses provide an improved slice profile with less slice smearing for off-resonance frequencies compared to GOIA-HSn pulses. The peak RF amplitude of GOIA-WURST is much lower (40% less) than FOCI but slightly higher (14.9% more) than GOIA-HSn. The quality of spectra, as shown by the analysis of line shapes, eddy current artifacts, subcutaneous lipid contamination and SNR, is improved for GOIA-WURST. The GOIA-WURST pulse tested in this work shows that reliable spectroscopic imaging can be obtained in a routine clinical setup and might facilitate the use of clinical spectroscopy. PMID:20163975

  3. Improving the accuracy of volumetric segmentation using pre-processing boundary detection and image reconstruction.

    PubMed

    Archibald, Rick; Hu, Jiuxiang; Gelb, Anne; Farin, Gerald

    2004-04-01

    The concentration edge-detection and Gegenbauer image-reconstruction methods were previously shown to improve the quality of segmentation in magnetic resonance imaging. In this study, these methods are utilized as a pre-processing step for the Weibull E-SD field segmentation. It is demonstrated that the combination of the concentration edge-detection and Gegenbauer reconstruction methods improves the accuracy of segmentation for the simulated test data and real magnetic resonance images used in this study. PMID:15376580

  4. Live imaging and analysis of postnatal mouse retinal development

    PubMed Central

    2013-01-01

    Background The explanted, developing rodent retina provides an efficient and accessible preparation for use in gene transfer and pharmacological experimentation. Many of the features of normal development are retained in the explanted retina, including retinal progenitor cell proliferation, heterochronic cell production, interkinetic nuclear migration, and connectivity. To date, live imaging in the developing retina has been reported in non-mammalian and mammalian whole-mount samples. An integrated approach to rodent retinal culture/transfection, live imaging, cell tracking, and analysis in structurally intact explants greatly improves our ability to assess the kinetics of cell production. Results In this report, we describe the assembly and maintenance of an in vitro, CO2-independent, live mouse retinal preparation that is accessible by both upright and inverted, 2-photon or confocal microscopes. The optics of this preparation permit high-quality and multi-channel imaging of retinal cells expressing fluorescent reporters for up to 48h. Tracking of interkinetic nuclear migration within individual cells, and changes in retinal progenitor cell morphology are described. Follow-up, hierarchical cluster screening revealed that several different dependent variable measures can be used to identify and group movement kinetics in experimental and control samples. Conclusions Collectively, these methods provide a robust approach to assay multiple features of rodent retinal development using live imaging. PMID:23758927

  5. Improving signal-to-noise in the direct imaging of exoplanets and circumstellar disks with MLOCI

    NASA Astrophysics Data System (ADS)

    Wahhaj, Zahed; Cieza, Lucas A.; Mawet, Dimitri; Yang, Bin; Canovas, Hector; de Boer, Jozua; Casassus, Simon; Ménard, François; Schreiber, Matthias R.; Liu, Michael C.; Biller, Beth A.; Nielsen, Eric L.; Hayward, Thomas L.

    2015-09-01

    We present a new algorithm designed to improve the signal-to-noise ratio (S/N) of point and extended source detections around bright stars in direct imaging data. One of our innovations is that we insert simulated point sources into the science images, which we then try to recover with maximum S/N. This improves the S/N of real point sources elsewhere in the field. The algorithm, based on the locally optimized combination of images (LOCI) method, is called Matched LOCI or MLOCI. We show with Gemini Planet Imager (GPI) data on HD 135344 B and Near-Infrared Coronagraphic Imager (NICI) data on several stars that the new algorithm can improve the S/N of point source detections by 30-400% over past methods. We also find no increase in false detection rates. No prior knowledge of candidate companion locations is required to use MLOCI. On the other hand, while non-blind applications may yield linear combinations of science images that seem to increase the S/N of true sources by a factor >2, they can also yield false detections at high rates. This is a potential pitfall when trying to confirm marginal detections or to redetect point sources found in previous epochs. These findings are relevant to any method where the coefficients of the linear combination are considered tunable, e.g., LOCI and principal component analysis (PCA). Thus we recommend that false detection rates be analyzed when using these techniques. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (USA), the Science and Technology Facilities Council (UK), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).
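    The LOCI core that MLOCI builds on is a least-squares fit: find the linear combination of reference frames that best reproduces the stellar speckle pattern in the science frame, then subtract it so a faint companion survives in the residual. A toy sketch of that core step (MLOCI's matched-source refinement and subregion optimization are omitted):

```python
import numpy as np

def loci_subtract(science, references):
    """Find the least-squares combination of reference frames matching the
    science frame, and subtract it. `science` is 1-D (a flattened
    subregion); `references` has shape (n_ref, n_pix)."""
    R = np.asarray(references, dtype=float)
    coeffs, *_ = np.linalg.lstsq(R.T, np.asarray(science, float), rcond=None)
    residual = science - coeffs @ R
    return coeffs, residual

# Toy frames: the science frame mixes two reference speckle patterns
# plus a faint "planet" signal in one pixel.
ref1 = np.array([1.0, 2.0, 3.0, 4.0])
ref2 = np.array([4.0, 3.0, 2.0, 1.0])
planet = np.array([0.0, 0.0, 0.5, 0.0])
science = 0.7 * ref1 + 0.3 * ref2 + planet
coeffs, residual = loci_subtract(science, [ref1, ref2])
print(residual)  # speckles largely removed; the planet pixel partly survives
```

    The fit inevitably absorbs part of the planet flux (the residual peak is below 0.5 here), which is exactly the self-subtraction that MLOCI's injected simulated sources are designed to control.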

  6. Improved discrimination among similar agricultural plots using red-and-green-based pseudo-colour imaging

    NASA Astrophysics Data System (ADS)

    Doi, Ryoichi

    2016-04-01

    The effects of a pseudo-colour imaging method were investigated by discriminating among similar agricultural plots in remote sensing images acquired using the Airborne Visible/Infrared Imaging Spectrometer (Indiana, USA) and the Landsat 7 satellite (Fergana, Uzbekistan), and in imagery provided by GoogleEarth (Toyama, Japan). From each dataset, red (R)-green (G)-R-G-blue yellow (RGrgbyB) and RGrgby-1B pseudo-colour images were prepared. From each, cyan, magenta, yellow, key black, L*, a*, and b* derivative grayscale images were generated. In the Airborne Visible/Infrared Imaging Spectrometer image, pixels were selected for corn no tillage (29 pixels), corn minimum tillage (27), and soybean (34) plots. Likewise, in the Landsat 7 image, pixels representing corn (73 pixels), cotton (110), and wheat (112) plots were selected, and in the GoogleEarth image, those representing soybean (118 pixels) and rice (151) were selected. When the 14 derivative grayscale images were used together with an RGB yellow grayscale image, the overall classification accuracy improved from 74 to 94% (Airborne Visible/Infrared Imaging Spectrometer), 64 to 83% (Landsat), or 77 to 90% (GoogleEarth). As an indicator of discriminatory power, the kappa significance improved 10^18-fold (Airborne Visible/Infrared Imaging Spectrometer) or greater. The derivative grayscale images were found to increase the dimensionality and quantity of data. Herein, the details of the increases in dimensionality and quantity are further analysed and discussed.

  7. Stokes vector analysis of adaptive optics images of the retina.

    PubMed

    Song, Hongxin; Zhao, Yanming; Qi, Xiaofeng; Chui, Yuenping Toco; Burns, Stephen A

    2008-01-15

    A high-resolution Stokes vector imaging polarimeter was developed to measure the polarization properties at the cellular level in living human eyes. The application of this cellular level polarimetric technique to in vivo retinal imaging has allowed us to measure depolarization in the retina and to improve the retinal image contrast of retinal structures based on their polarization properties. PMID:18197217
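    A minimal sketch of computing a Stokes vector from analyzer intensity images, using the standard definitions (variable names are illustrative; the polarimeter's actual calibration pipeline is more involved):

    ```python
    import numpy as np

    def stokes_from_intensities(I0, I90, I45, I135, Ircp, Ilcp):
        """Standard Stokes parameters from six analyzer-state intensities."""
        S0 = I0 + I90       # total intensity
        S1 = I0 - I90       # horizontal vs vertical linear polarization
        S2 = I45 - I135     # +45 deg vs -45 deg linear polarization
        S3 = Ircp - Ilcp    # right vs left circular polarization
        return S0, S1, S2, S3

    def degree_of_polarization(S0, S1, S2, S3):
        """DOP = 1 for fully polarized light, 0 for fully depolarized light."""
        return np.sqrt(S1**2 + S2**2 + S3**2) / S0

    # fully horizontally polarized light of unit intensity
    S = stokes_from_intensities(1.0, 0.0, 0.5, 0.5, 0.5, 0.5)
    dop = degree_of_polarization(*S)
    ```

    Applied pixel-wise to registered retinal images, a DOP map of this kind is what reveals local depolarization in the tissue.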

  8. Improvement of temporal resolution in blood concentration imaging using NIR speckle patterns

    NASA Astrophysics Data System (ADS)

    Yokoi, Naomichi; Shimatani, Yuichi; Kyoso, Masaki; Funamizu, Hideki; Aizu, Yoshihisa

    2013-06-01

    In the imaging of blood concentration change using near-infrared bio-speckles, temporal averaging of speckle images is necessary for speckle reduction. To improve the temporal resolution of blood concentration imaging, the use of spatial averaging applied to data measured in rat experiments is investigated. Results show that three frames of temporal averaging combined with (2×2)-pixel spatial averaging is acceptable for achieving a temporal resolution of ten concentration images per second.
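    The combined averaging scheme can be sketched as follows, assuming a stack of speckle frames held in a NumPy array (illustrative names, not the authors' code):

    ```python
    import numpy as np

    def spatiotemporal_average(frames, n_temporal=3, block=2):
        """Average `n_temporal` consecutive frames, then average
        non-overlapping (block x block) pixel neighbourhoods."""
        frames = np.asarray(frames, dtype=float)
        t_avg = frames[:n_temporal].mean(axis=0)
        h, w = t_avg.shape
        h, w = h - h % block, w - w % block   # crop to a multiple of block
        blocks = t_avg[:h, :w].reshape(h // block, block, w // block, block)
        return blocks.mean(axis=(1, 3))

    rng = np.random.default_rng(1)
    speckle = rng.exponential(1.0, (3, 8, 8))   # 3 frames of 8x8 speckle
    smoothed = spatiotemporal_average(speckle)
    ```

    Trading a (2×2) spatial average for some of the temporal frames is what lets the frame rate rise without losing speckle suppression.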

  9. Automated Digital Image Analysis of islet cell mass using Nikon's inverted Eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2013-04-29

    Reliable assessment of islet viability, mass and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intra-observer variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy, or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (P < 0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r² ≤ 0.91) and total islet number (r² = 0.88), which increased to r² = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intra-observer reproducibility compared to the standard manual method (P < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method than with the ADIA method (P < 0.001). The ADIA method also detected small islets between 10 and 50 μm in size. Automated digital image analysis utilizing the Nikon Instruments (Nikon) software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of
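    The IEQ normalization mentioned above expresses each islet's volume relative to a standard islet 150 μm in diameter; a minimal sketch under that standard convention (illustrative only, not the NIS-Elements implementation):

    ```python
    def islet_equivalents(diameters_um, reference_um=150.0):
        """Convert a list of islet diameters (in micrometres) to islet
        equivalents (IEQ): each islet's volume is expressed relative to a
        standard 150-um-diameter islet, i.e. (d / 150)^3 per islet."""
        return sum((d / reference_um) ** 3 for d in diameters_um)

    # one standard 150-um islet counts as exactly 1 IEQ;
    # a 300-um islet has 8x the volume, i.e. 8 IEQ
    ieq = islet_equivalents([150.0, 300.0])
    ```

    Automated image analysis supplies the per-islet diameters (or areas) that feed a conversion of this kind, removing the observer-dependent binning step.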

  10. Satellite image fusion based on principal component analysis and high-pass filtering.

    PubMed

    Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E

    2010-06-01

    This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image. PMID:20508708
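    The HPF injection step described above can be sketched as adding the high-frequency residual of the pan band to each upsampled MS band (a minimal illustration using a box-blur low-pass; the paper's filter choice may differ):

    ```python
    import numpy as np

    def hpf_pansharpen(ms_up, pan, kernel_size=5, gain=1.0):
        """Inject the high-frequency part of the pan band into each band of
        an already-upsampled MS image: MS + gain * (pan - boxblur(pan))."""
        k = np.ones((kernel_size, kernel_size)) / kernel_size**2
        pad = kernel_size // 2
        padded = np.pad(pan, pad, mode='edge')
        low = np.zeros_like(pan)
        for i in range(pan.shape[0]):                 # direct 'same' convolution
            for j in range(pan.shape[1]):
                low[i, j] = (padded[i:i+kernel_size, j:j+kernel_size] * k).sum()
        high = pan - low
        return ms_up + gain * high[..., None]         # broadcast over bands

    pan = np.zeros((16, 16)); pan[8, 8] = 1.0   # one bright pan-band detail
    ms = np.full((16, 16, 3), 0.5)              # flat 3-band MS image
    sharp = hpf_pansharpen(ms, pan)
    ```

    Because only the high-pass residual is injected, the low-frequency (spectral) content of the MS bands is left largely intact, which is the source of the method's lower spectral distortion.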

  11. Improved deadzone modeling for bivariate wavelet shrinkage-based image denoising

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2016-05-01

    Modern image processing performed on-board low Size, Weight, and Power (SWaP) platforms must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition, and georegistration, place a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise generally spreads and the wavelet transform compactly captures high information-bearing image characteristics. In this paper, we improve the modeling fidelity of a previously-developed, computationally-efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, thus making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM), which implements a bivariate wavelet shrinkage denoising algorithm that exploits interscale dependency between wavelet coefficients. We formulate optimization problems for the parameters controlling deadzone size, which leads to improved denoising performance. Two formulations are provided: one with a simple, closed-form solution which we use for numerical result generation, and the second an integral equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public domain imagery, and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate denoising performance improvement when using the enhanced modeling over performance obtained with the baseline SSM model.
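    The baseline SSM shrinkage rule that the paper improves on can be sketched as follows: a child wavelet coefficient is shrunk using the magnitude of the (child, parent) pair, with a deadzone of radius √3·σn²/σ (the standard Sendur-Selesnick form; variable names are illustrative):

    ```python
    import numpy as np

    def bivariate_shrink(w1, w2, sigma_n, sigma):
        """Sendur-Selesnick bivariate shrinkage: shrink a child coefficient w1
        using its parent w2. Pairs whose joint magnitude falls inside the
        deadzone of radius sqrt(3)*sigma_n^2/sigma are set to zero."""
        mag = np.sqrt(w1**2 + w2**2)
        thresh = np.sqrt(3.0) * sigma_n**2 / sigma
        factor = np.maximum(mag - thresh, 0.0) / np.maximum(mag, 1e-12)
        return w1 * factor

    w1 = np.array([0.05, 2.0])   # small (noise-like) and large coefficient
    w2 = np.array([0.05, 2.0])   # corresponding parent coefficients
    out = bivariate_shrink(w1, w2, sigma_n=0.5, sigma=1.0)
    ```

    The paper's contribution is to optimize the parameters controlling this deadzone size rather than to change the shrinkage form itself.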

  12. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    SciTech Connect

    Sidky, Emil Y.; Pan Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-11-15

    Purpose: The authors develop a practical, iterative algorithm for image-reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) Reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.
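    The dual role of image regularity can be illustrated with a toy sketch: one gradient-descent step on the TpV objective followed by projection onto the positivity constraint set (a minimal illustration, not the authors' full reconstruction algorithm, which also enforces a projection-data tolerance):

    ```python
    import numpy as np

    def tpv(image, p=1.0, eps=1e-8):
        """Total p-variation: sum over pixels of the gradient magnitude
        raised to the power p (p=1 gives TV, p=2 the image roughness)."""
        gx = np.diff(image, axis=1, append=image[:, -1:])
        gy = np.diff(image, axis=0, append=image[-1:, :])
        return ((gx**2 + gy**2 + eps) ** (p / 2)).sum()

    def tv_descent_step(image, step=0.1, p=1.0, eps=1e-8):
        """One numerical-gradient descent step on TpV, followed by
        projection onto the convex set {x >= 0} (image positivity)."""
        grad = np.zeros_like(image)
        f0 = tpv(image, p, eps)
        h = 1e-4
        for idx in np.ndindex(image.shape):   # finite differences (toy sizes only)
            image[idx] += h
            grad[idx] = (tpv(image, p, eps) - f0) / h
            image[idx] -= h
        out = image - step * grad
        return np.maximum(out, 0.0)           # positivity projection

    img = np.array([[0.0, 1.0], [1.0, 0.0]])
    smoothed = tv_descent_step(img.copy())
    ```

    In the full algorithm, such regularity-decreasing steps alternate with projections enforcing the estimated data tolerance, selecting the minimum-TpV volume among those consistent with the undersampled data.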

  14. Towards a framework for agent-based image analysis of remote-sensing data

    PubMed Central

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-01-01

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916

  15. Improving the image and quantitative data of magnetic resonance imaging through hardware and physics techniques

    NASA Astrophysics Data System (ADS)

    Kaggie, Joshua D.

    In Chapter 1, an introduction to the basic principles of MRI is given, including the physical principles, basic pulse sequences, and basic hardware. Following the introduction, five different published and as-yet unpublished papers for improving the utility of MRI are presented. Chapter 2 discusses a small rodent imaging system that was developed for a clinical 3 T MRI scanner. The system integrated specialized radiofrequency (RF) coils with an insertable gradient, enabling 100 μm isotropic resolution imaging of the guinea pig cochlea in vivo, doubling the body gradient strength, slew rate, and contrast-to-noise ratio, and resulting in twice the signal-to-noise ratio (SNR) when compared to the smallest conforming birdcage. Chapter 3 discusses a system using BOLD MRI to measure T2* and invasive fiberoptic probes to measure renal oxygenation (pO2). The significance of this experiment is that it demonstrated previously unknown physiological effects on pO2, such as breath-holds that caused an immediate (<1 s) pO2 decrease (~6 mmHg), and bladder pressure that caused pO2 increases (~6 mmHg). Chapter 4 determined the correlation between indicators of renal health and renal fat content. The R² correlation between renal fat content and eGFR, serum cystatin C, urine protein, and BMI was less than 0.03, with a sample size of ~100 subjects, suggesting that renal fat content will not be a useful indicator of renal health. Chapter 5 presents a hardware and pulse sequence technique for acquiring multinuclear 1H and 23Na data within the same pulse sequence. Our system demonstrated a very simple, inexpensive solution to SMI, acquired both nuclei on two 23Na channels using external modifications, and is the first demonstration of radially acquired SMI. Chapter 6 discusses a composite sodium and proton breast array that demonstrated a 2-5x improvement in sodium SNR and similar proton SNR when compared to a large coil with a linear sodium and linear proton channel. This coil is unique in that sodium

  16. Thermal image analysis for detecting facemask leakage

    NASA Astrophysics Data System (ADS)

    Dowdall, Jonathan B.; Pavlidis, Ioannis T.; Levine, James

    2005-03-01

    Due to the modern advent of near-ubiquitous access to rapid international transportation, the epidemiologic trends of highly communicable diseases can be devastating. With the recent emergence of diseases matching this pattern, such as Severe Acute Respiratory Syndrome (SARS), an area of overt concern has been the transmission of infection through respiratory droplets. Approved facemasks are typically effective physical barriers for preventing the spread of viruses through droplets, but breaches in a mask's integrity can lead to an elevated risk of exposure and subsequent infection. Quality control mechanisms in place during the manufacturing process ensure that masks are defect-free when leaving the factory, but there remains little to detect damage caused by transportation or during usage. A system that could monitor masks in real time while they were in use would facilitate a more secure environment for treatment and screening. To fulfill this necessity, we have devised a touchless method to detect mask breaches in real time by utilizing the emissive properties of the mask in the thermal infrared spectrum. Specifically, we use a specialized thermal imaging system to detect minute air leakage in masks based on the principles of heat transfer and thermodynamics. The advantage of this passive modality is that thermal imaging does not require contact with the subject and can provide instant visualization and analysis. These capabilities can prove invaluable for protecting personnel in scenarios with elevated levels of transmission risk such as hospital clinics, border checkpoints, and airports.

  17. Improved Phased Array Imaging of a Model Jet

    NASA Technical Reports Server (NTRS)

    Dougherty, Robert P.; Podboy, Gary G.

    2010-01-01

    An advanced phased array system, OptiNav Array 48, and a new deconvolution algorithm, TIDY, have been used to make octave band images of supersonic and subsonic jet noise produced by the NASA Glenn Small Hot Jet Acoustic Rig (SHJAR). The results are much more detailed than previous jet noise images. Shock cell structures and the production of screech in an underexpanded supersonic jet are observed directly. Some trends are similar to observations using spherical and elliptic mirrors that partially informed the two-source model of jet noise, but the radial distribution of high frequency noise near the nozzle appears to differ from expectations of this model. The beamforming approach has been validated by agreement between the integrated image results and the conventional microphone data.

  18. Improved field free line magnetic particle imaging using saddle coils.

    PubMed

    Erbe, Marlitt; Sattel, Timo F; Buzug, Thorsten M

    2013-12-01

    Magnetic particle imaging (MPI) is a novel tracer-based imaging method detecting the distribution of superparamagnetic iron oxide (SPIO) nanoparticles in vivo in three dimensions and in real time. Conventionally, MPI uses the signal emitted by SPIO tracer material located at a field free point (FFP). To increase the sensitivity of MPI, however, an alternative encoding scheme collecting the particle signal along a field free line (FFL) was proposed. To provide the magnetic fields needed for line imaging in MPI, a scanner setup that is very efficient in terms of electrical power consumption is needed. At the same time, the scanner needs to provide high magnetic field homogeneity along the FFL, as well as parallel to its alignment, to prevent the appearance of artifacts when using the efficient Radon-based reconstruction methods that arise for a line encoding scheme. This work presents a dynamic FFL scanner setup for MPI that outperforms all previously presented setups in electrical power consumption as well as magnetic field quality.

  19. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. 
Projection of QA metrics to a low

  20. Precision Improvement of Photogrammetry by Digital Image Correlation

    NASA Astrophysics Data System (ADS)

    Shih, Ming-Hsiang; Sung, Wen-Pei; Tung, Shih-Heng; Hsiao, Hanwei

    2016-04-01

    The combination of aerial triangulation technology and unmanned aerial vehicles greatly reduces the cost and application threshold of the digital surface model technique. Based on reports in the literature, the measurement error lies between 8 cm and 15 cm in the x-y coordinates and between 10 cm and 20 cm in elevation. This measurement accuracy is already sufficient for geological structure surveys, but it is inadequate for deformation monitoring of slopes and structures. The main factors affecting the accuracy of aerial triangulation are image quality, the measurement accuracy of control points, and image matching accuracy. In terms of image matching, the commonly used techniques are Harris corner detection and the Scale Invariant Feature Transform (SIFT). Their pairing error is on the scale of pixels, usually between 1 and 2 pixels. This study suggests that pairing error is the main source of aerial triangulation error. Therefore, this study proposes applying the Digital Image Correlation (DIC) method instead of the pairing methods mentioned above. The DIC method can provide a pairing accuracy of better than 0.01 pixel, and can thus greatly enhance the accuracy of aerial triangulation to the sub-centimetre level. In this study, the effects of image pairing error on the measurement error of the three-dimensional coordinates of ground points are explored by numerical simulation. It was confirmed that when the image matching error is reduced to 0.01 pixels, the ground three-dimensional coordinate measurement error can be controlled at the millimetre level. Combining the DIC technique with traditional aerial triangulation offers potential for deformation monitoring of slopes and structures, enabling early warning of natural disasters.
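    The sub-pixel accuracy that DIC-style matching provides can be illustrated by the standard three-point parabolic peak interpolation applied to correlation scores around an integer peak (values below are illustrative):

    ```python
    import numpy as np

    def parabolic_subpixel(c_m1, c_0, c_p1):
        """Sub-pixel offset of a correlation peak from three samples at
        offsets (-1, 0, +1), by fitting a parabola through the points.
        Returns a value in (-0.5, 0.5) relative to the integer peak."""
        denom = c_m1 - 2 * c_0 + c_p1
        return 0.0 if denom == 0 else 0.5 * (c_m1 - c_p1) / denom

    # correlation values around an integer peak; the true peak lies
    # slightly towards the right-hand neighbour
    corr = np.array([0.80, 1.00, 0.90])
    dx = parabolic_subpixel(*corr)
    ```

    Refinements of this kind (or iterative gradient-based DIC) are what push the pairing error from the 1-2 pixel range of feature matching down towards 0.01 pixel.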

  1. Roentgen stereophotogrammetric analysis using computer-based image-analysis.

    PubMed

    Ostgaard, S E; Gottlieb, L; Toksvig-Larsen, S; Lebech, A; Talbot, A; Lund, B

    1997-09-01

    The two-dimensional position of markers in radiographs for Roentgen Stereophotogrammetric Analysis (RSA) is usually determined using a measuring table. The purpose of this study was to evaluate the reproducibility and the accuracy of a new RSA system using digitized radiographs and image-processing algorithms to determine the marker position in the radiographs. Four double-RSA examinations of a phantom and 18 RSA examinations from six patients included in different RSA-studies of knee prostheses were used to test the reproducibility and the accuracy of the system. The radiographs were scanned at 600 dpi resolution and 256 gray levels. The center of each of the tantalum-markers in the radiographs was calculated by the computer program from the contour of the marker with the use of an edge-detection software algorithm after the marker was identified on a PC monitor. The study showed that computer-based image analysis can be used in RSA-examinations. The advantages of using image-processing software in RSA are that the marker positions are determined in an objective manner, and that there is no need for a systematic manual identification of all the markers on the radiograph before the actual measurement.

  2. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for areas of the skin of a human foot and face. The full source code of the developed application is provided as an attachment. PMID:26556680

  4. Multiparametric Image Analysis of Lung Branching Morphogenesis

    PubMed Central

    Schnatwinkel, Carsten; Niswander, Lee

    2013-01-01

    BACKGROUND Lung branching morphogenesis is a fundamental developmental process, yet the cellular dynamics that occur during lung development and the molecular mechanisms underlying recent postulated branching modes are poorly understood. RESULTS Here, we implemented a time-lapse video microscopy method to study the cellular behavior and molecular mechanisms of planar bifurcation and domain branching in lung explant- and organotypic cultures. Our analysis revealed morphologically distinct stages that are shaped at least in part by a combination of localized and orientated cell divisions and by local mechanical forces. We also identified myosin light-chain kinase as an important regulator of bud bifurcation, but not domain branching in lung explants. CONCLUSION This live imaging approach provides a method to study cellular behavior during lung branching morphogenesis and suggests the importance of a mechanism primarily based on oriented cell proliferation and mechanical forces in forming and shaping the developing lung airways. PMID:23483685

  5. Molecular imaging of atherosclerosis for improving diagnostic and therapeutic development

    PubMed Central

    Quillard, Thibaut; Libby, Peter

    2012-01-01

    Despite recent progress, cardiovascular and allied metabolic disorders remain a worldwide health challenge. We need to identify new targets for therapy, develop new agents for clinical use, and deploy them in a clinically-effective and cost-effective manner. Molecular imaging of atherosclerotic lesions has become a major experimental tool in the last decade, notably by providing a direct gateway to the processes involved in atherogenesis and its complications. This review summarizes the current status of molecular imaging approaches that target the key processes implicated in plaque formation, development, and disruption, and highlights how the refinement and application of such tools might aid the development and evaluation of novel therapeutics. PMID:22773426

  6. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT)

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-01

    The bone suppression technique based on advanced image processing can suppress the conspicuity of bones on chest radiographs, creating soft tissue images comparable to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that of conventional fluoroscopic images. The tracking error was reduced by half in eight of the nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential means of measuring respiratory displacement of the target. This paper was presented at RSNA 2013 and was carried out at Kanazawa University, JAPAN.

  7. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT).

    PubMed

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-21

    The bone suppression technique based on advanced image processing can suppress the conspicuity of bones on chest radiographs, creating soft tissue images comparable to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that of conventional fluoroscopic images. The tracking error was reduced by half in eight of the nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential means of measuring respiratory displacement of the target.
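    The template-matching step used for target tracking can be sketched with a brute-force normalized cross-correlation search (illustrative code, not the commercial software's implementation):

    ```python
    import numpy as np

    def ncc_match(image, template):
        """Locate `template` in `image` by normalized cross-correlation;
        returns the (row, col) of the best-matching window's top-left
        corner and the NCC score (1.0 = perfect match)."""
        th, tw = template.shape
        t = template - template.mean()
        tnorm = np.sqrt((t**2).sum())
        best, best_pos = -np.inf, (0, 0)
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                w = image[i:i+th, j:j+tw]
                wc = w - w.mean()
                denom = np.sqrt((wc**2).sum()) * tnorm
                score = (wc * t).sum() / denom if denom > 0 else 0.0
                if score > best:
                    best, best_pos = score, (i, j)
        return best_pos, best

    rng = np.random.default_rng(2)
    frame = rng.normal(0, 0.1, (20, 20))
    frame[5:9, 7:11] += np.arange(16).reshape(4, 4)  # distinctive 'nodule'
    pos, score = ncc_match(frame, frame[5:9, 7:11])
    ```

    Suppressing the rib signal removes strong competing structure from the correlation surface, which is why the matching error drops on bone suppression images.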

  8. Improved Crack Type Classification Neural Network based on Square Sub-images of Pavement Surface

    NASA Astrophysics Data System (ADS)

    Lee, Byoung Jik; Lee, Hosin “David”

    The previous neural network based on proximity values was developed using rectangular pavement images. However, the proximity value derived from a rectangular image was biased towards transverse cracking. By sectioning the rectangular image into a set of square sub-images, the neural network based on the proximity value became more robust and consistent in determining a crack type. This paper presents an improved neural network that determines a crack type from a pavement surface image using square sub-images, and compares it with the neural network trained on rectangular pavement images. The advantage of using square sub-images is demonstrated on sample images of transverse cracking, longitudinal cracking, and alligator cracking.
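    The sectioning step described above, splitting a rectangular image into square sub-images, might look like this minimal sketch (tiling convention and names are illustrative assumptions):

```python
def square_subimages(image, size):
    """Split a rectangular image (a list of pixel rows) into
    non-overlapping size x size sub-images, discarding any partial
    tiles at the right and bottom borders."""
    rows, cols = len(image), len(image[0])
    tiles = []
    for r in range(0, rows - size + 1, size):
        for c in range(0, cols - size + 1, size):
            tiles.append([row[c:c + size] for row in image[r:r + size]])
    return tiles
```

    Each square tile would then be fed to the feature extractor (e.g. proximity values) independently, removing the transverse bias of a single wide image.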

  9. High resolution ultraviolet imaging spectrometer for latent image analysis.

    PubMed

    Lyu, Hang; Liao, Ningfang; Li, Hongsong; Wu, Wenmin

    2016-03-21

    In this work, we present a close-range ultraviolet imaging spectrometer with high spatial resolution and reasonably high spectral resolution. As transmissive optical components cause chromatic aberration in the ultraviolet (UV) spectral range, an all-reflective imaging scheme is introduced to improve image quality. The proposed instrument consists of an oscillating mirror, a Cassegrain objective, a Michelson structure, an Offner relay, and a UV-enhanced CCD. The finished spectrometer has a spatial resolution of 29.30 μm on the target plane, covers both the near and middle UV bands, and can obtain approximately 100 wavelength samples over the range of 240-370 nm. The control computer coordinates all the components of the instrument and enables capturing a series of images, which can be reconstructed into an interferogram datacube. The datacube can be converted into a spectrum datacube, which contains spectral information for each pixel with many wavelength samples. A spectral calibration is carried out using a high-pressure mercury discharge lamp. A test run demonstrated that this interferometric configuration can obtain a high-resolution spectrum datacube. A pattern recognition algorithm is introduced to analyze the datacube and distinguish latent traces from the base materials. This design is particularly good at identifying latent traces in the field of forensic imaging.
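    Converting an interferogram into a spectrum, as described above, is at its core a Fourier transform of each pixel's interferogram samples. A toy per-pixel sketch (a plain DFT, not the instrument's actual processing chain, which would include apodization and phase correction):

```python
import cmath

def interferogram_to_spectrum(samples):
    """Recover a spectrum from one pixel's interferogram samples by a
    discrete Fourier transform, returning the magnitudes of the
    positive-frequency bins."""
    n = len(samples)
    return [
        abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)))
        for k in range(n // 2)
    ]
```

    Applying this to every pixel of the interferogram datacube yields the spectrum datacube mentioned in the abstract.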

  10. Improving the Self-Image of the Socially Disabled

    ERIC Educational Resources Information Center

    Matthews, Lillian B.

    1975-01-01

    Reviewing the literature on physical attractiveness' relationship to self-image and social acceptability, the author points out the need for self-care courses as "social therapy," gives a step-by-step procedure to develop such programs for the institutionalized, and tells how to become a teacher in a social therapy program. (AJ)

  11. Indeterminacy and Image Improvement in Snake Infrared ``Vision''

    NASA Astrophysics Data System (ADS)

    van Hemmen, J. Leo

    2006-03-01

    Many snake species have infrared sense organs located on their head that can detect warm-blooded prey even in total darkness. The physical mechanism underlying this sense is that of a pinhole camera. The infrared image is projected onto a sensory 'pit membrane' of small size (of order mm^2). To get a neuronal response the energy flux per unit time has to exceed a minimum threshold; furthermore, the source of this energy, the prey, is moving at a finite speed so the pinhole substituting for a lens has to be rather large (~1 mm). Accordingly the image is totally blurred. We have therefore done two things. First, we have determined the precise optical resolution that a snake can achieve for a given input. Second, in view of known, though still restricted, precision one may ask whether, and how, a snake can reconstruct the original image. The point is that the information needed to reconstruct the original temperature distribution in space is still available. We present an explicit mathematical model [1] allowing even high-quality reconstruction from the low-quality image on the pit membrane and indicate how a neuronal implementation might be realized. Ref: [1] A.B. Sichert, P. Friedel, and J.L. van Hemmen, TU Munich preprint (2005).

  12. Correlation between the signal-to-noise ratio improvement factor (KSNR) and clinical image quality for chest imaging with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Saunderson, J. R.; Beavis, A. W.

    2015-12-01

    This work assessed the appropriateness of the signal-to-noise ratio improvement factor (KSNR) as a metric for the optimisation of computed radiography (CR) of the chest. The results of a previous study in which four experienced image evaluators graded computer simulated chest images using a visual grading analysis scoring (VGAS) scheme to quantify the benefit of using an anti-scatter grid were used for the clinical image quality measurement (number of simulated patients = 80). The KSNR was used to calculate the improvement in physical image quality measured in a physical chest phantom. KSNR correlation with VGAS was assessed as a function of chest region (lung, spine and diaphragm/retrodiaphragm), and as a function of x-ray tube voltage in a given chest region. The correlation of the latter was determined by the Pearson correlation coefficient. VGAS and KSNR image quality metrics demonstrated no correlation in the lung region but did show correlation in the spine and diaphragm/retrodiaphragmatic regions. However, there was no correlation as a function of tube voltage in any region; a Pearson correlation coefficient (R) of -0.93 (p = 0.015) was found for lung, a coefficient (R) of -0.95 (p = 0.46) was found for spine, and a coefficient (R) of -0.85 (p = 0.015) was found for diaphragm. All demonstrate strong negative correlations indicating conflicting results, i.e. KSNR increases with tube voltage but VGAS decreases. Medical physicists should use the KSNR metric with caution when assessing any potential improvement in clinical chest image quality when introducing an anti-scatter grid for CR imaging, especially in the lung region. This metric may also be a limited descriptor of clinical chest image quality as a function of tube voltage when a grid is used routinely.
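    The Pearson correlation coefficient used to compare KSNR against VGAS is standard; a minimal implementation for checking such correlations might be:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series:
    covariance divided by the product of the standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

    A strongly negative R (as reported above for tube voltage) means one metric rises while the other falls, which is exactly the conflict the authors flag.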

  13. Cryptanalysis of "an improvement over an image encryption method based on total shuffling"

    NASA Astrophysics Data System (ADS)

    Akhavan, A.; Samsudin, A.; Akhshani, A.

    2015-09-01

    In the past two decades, several image encryption algorithms based on chaotic systems have been proposed. Many of the proposed algorithms are meant to improve other chaos-based and conventional cryptographic algorithms. However, many of the proposed improvement methods suffer from serious security problems. In this paper, the security of a recently proposed improvement method for a chaos-based image encryption algorithm is analyzed. The results indicate the weakness of the analyzed algorithm against chosen plaintext attacks.

  14. Imaging biomarkers in multiple Sclerosis: From image analysis to population imaging.

    PubMed

    Barillot, Christian; Edan, Gilles; Commowick, Olivier

    2016-10-01

    The production of imaging data in medicine increases more rapidly than the capacity of computing models to extract information from it. The grand challenges of better understanding the brain, offering better care for neurological disorders, and stimulating new drug design will not be achieved without significant advances in computational neuroscience. The road to success is to develop a new, generic, computational methodology and to confront and validate this methodology on relevant diseases with adapted computational infrastructures. This new concept sustains the need to build new research paradigms to better understand the natural history of the pathology at the early phase, and to better aggregate data that will provide the most complete representation of the pathology in order to better correlate imaging with other relevant features such as clinical, biological or genetic data. In this context, one of the major challenges of neuroimaging in clinical neurosciences is to detect quantitative signs of pathological evolution as early as possible to prevent disease progression, evaluate therapeutic protocols or even better understand and model the natural history of a given neurological pathology. Many diseases encompass brain alterations often not visible on conventional MRI sequences, especially in normal appearing brain tissues (NABT). MRI often has low specificity for differentiating between possible pathological changes that could help discriminate between different pathological stages or grades. The objective of medical image analysis procedures is to define new quantitative neuroimaging biomarkers to track the evolution of the pathology at different levels. This paper illustrates this issue in one acute neuro-inflammatory pathology: Multiple Sclerosis (MS). It exhibits the current medical image analysis approaches and explains how this field of research will evolve in the next decade to integrate larger scale of information at the temporal, cellular

  15. Acne image analysis: lesion localization and classification

    NASA Astrophysics Data System (ADS)

    Abas, Fazly Salleh; Kaffenberger, Benjamin; Bikowski, Joseph; Gurcan, Metin N.

    2016-03-01

    Acne is a common skin condition present predominantly in the adolescent population, but may continue into adulthood. Scarring occurs commonly as a sequel to severe inflammatory acne. The presence of acne and resultant scars are more than cosmetic, with a significant potential to alter quality of life and even job prospects. The psychosocial effects of acne and scars can be disturbing and may be a risk factor for serious psychological concerns. Treatment efficacy is generally determined based on an invalidated gestalt by the physician and patient. However, the validated assessment of acne can be challenging and time consuming. Acne can be classified into several morphologies including closed comedones (whiteheads), open comedones (blackheads), papules, pustules, cysts (nodules) and scars. For a validated assessment, the different morphologies need to be counted independently, a method that is far too time consuming considering the limited time available for a consultation. However, it is practical to record and analyze images since dermatologists can validate the severity of acne within seconds after uploading an image. This paper covers the processes of region-of-interest determination using entropy-based filtering and thresholding, as well as acne lesion feature extraction. Feature extraction methods using discrete wavelet frames and the gray-level co-occurrence matrix were presented, and their effectiveness in separating the six major acne lesion classes was discussed. Several classifiers were used to test the extracted features. Correct classification accuracy as high as 85.5% was achieved using the binary classification tree with fourteen principal components as descriptors. Further studies are underway to further improve the algorithm performance and validate it on a larger database.
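    A gray-level co-occurrence matrix, one of the feature extraction methods mentioned above, counts how often pairs of gray levels occur at a fixed spatial offset. A minimal sketch (the horizontal-neighbor offset is an illustrative choice, not necessarily the offset used in the paper):

```python
def glcm(image, levels, dr=0, dc=1):
    """Gray-level co-occurrence matrix for the offset (dr, dc): counts
    how often gray level image[r][c] is followed by image[r+dr][c+dc]."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m
```

    Texture descriptors such as contrast, energy, or homogeneity are then computed from the normalized matrix and fed to the classifiers.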

  16. A framework for joint image-and-shape analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Tannenbaum, Allen; Bouix, Sylvain

    2014-03-01

    Techniques in medical image analysis are often used for comparison or regression on the intensities of images. In general, the domain of the image is a given Cartesian grid. Shape analysis, on the other hand, studies the similarities and differences among spatial objects of arbitrary geometry and topology. Usually, there is no function defined on the domain of shapes. Recently, there has been a growing need for defining and analyzing functions defined on the shape space, and for a coupled analysis of both the shapes and the functions defined on them. Following this direction, in this work we present a coupled analysis for both images and shapes. As a result, statistically significant discrepancies in both the image intensities and the underlying shapes are detected. The method is applied to brain images of schizophrenia patients and heart images of atrial fibrillation patients.

  17. Image quality improvement in megavoltage cone beam CT using an imaging beam line and a sintered pixelated array system

    SciTech Connect

    Breitbach, Elizabeth K.; Maltz, Jonathan S.; Gangadharan, Bijumon; Bani-Hashemi, Ali; Anderson, Carryn M.; Bhatia, Sudershan K.; Stiles, Jared; Edwards, Drake S.; Flynn, Ryan T.

    2011-11-15

    Purpose: To quantify the improvement in megavoltage cone beam computed tomography (MVCBCT) image quality enabled by the combination of a 4.2 MV imaging beam line (IBL) with a carbon electron target and a detector system equipped with a novel sintered pixelated array (SPA) of translucent Gd2O2S ceramic scintillator. Clinical MVCBCT images are traditionally acquired with the same 6 MV treatment beam line (TBL) that is used for cancer treatment, a standard amorphous Si (a-Si) flat panel imager, and the Kodak Lanex Fast-B (LFB) scintillator. The IBL produces a greater fluence of keV-range photons than the TBL, to which the detector response is more optimal, and the SPA is a more efficient scintillator than the LFB. Methods: A prototype IBL + SPA system was installed on a Siemens Oncor linear accelerator equipped with the MVision™ image guided radiation therapy (IGRT) system. A SPA strip consisting of four neighboring tiles and measuring 40 cm by 10.96 cm in the crossplane and inplane directions, respectively, was installed in the flat panel imager. Head- and pelvis-sized phantom images were acquired at doses ranging from 3 to 60 cGy with three MVCBCT configurations: TBL + LFB, IBL + LFB, and IBL + SPA. Phantom image quality at each dose was quantified using the contrast-to-noise ratio (CNR) and modulation transfer function (MTF) metrics. Head and neck, thoracic, and pelvic (prostate) cancer patients were imaged with the three imaging system configurations at multiple doses ranging from 3 to 15 cGy. The systems were assessed qualitatively from the patient image data. Results: For head and neck and pelvis-sized phantom images, imaging doses of 3 cGy or greater, and relative electron densities of 1.09 and 1.48, the CNR average improvement factors for imaging system change of TBL + LFB to IBL + LFB, IBL + LFB to IBL + SPA, and TBL + LFB to IBL + SPA were 1.63 (p < 10^-8), 1.64 (p < 10^-13), 2.66 (p < 10^-9), respectively. For all imaging
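    The CNR metric used for the phantom comparisons above is conventionally the contrast between an object region and the background, normalized by the background noise. A minimal sketch of one common definition (not necessarily the exact formula used in this study):

```python
from statistics import mean, pstdev

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| divided
    by the standard deviation of the background pixel values."""
    return abs(mean(roi) - mean(background)) / pstdev(background)
```

    The improvement factors reported above are ratios of such CNR values between imaging system configurations at matched dose.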

  18. Improving the convergence rate in affine registration of PET and SPECT brain images using histogram equalization.

    PubMed

    Salas-Gonzalez, D; Górriz, J M; Ramírez, J; Padilla, P; Illán, I A

    2013-01-01

    A procedure to improve the convergence rate of affine registration methods for medical brain images when the images differ greatly from the template is presented. The methodology is based on a histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. A sum of squared differences between the source images and the template is used as the objective function, and a Gauss-Newton optimization algorithm is used to find the minimum of the cost function. Using histogram equalization as a preprocessing step improves the convergence rate of the affine registration algorithm for brain images, as we show in this work using SPECT and PET brain images.
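    Histogram equalization, the preprocessing step at the heart of this method, remaps gray levels through the normalized cumulative histogram. A minimal global-equalization sketch (the paper's matching against the template histogram may differ in detail):

```python
def equalize(image, levels=256):
    """Histogram-equalize a grayscale image (rows of ints in
    [0, levels)) by mapping each gray level through the normalized
    cumulative histogram. Assumes at least two distinct gray levels."""
    hist = [0] * levels
    for row in image:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    cdf, acc = [0] * levels, 0
    for g in range(levels):
        acc += hist[g]
        cdf[g] = acc
    # map the lowest occupied level to 0 and the highest to levels-1
    cdf_min = next(c for c in cdf if c > 0)
    def remap(v):
        return round((cdf[v] - cdf_min) / (total - cdf_min) * (levels - 1))
    return [[remap(v) for v in row] for row in image]
```

    After equalization, source and template occupy similar intensity ranges, which is what speeds up the Gauss-Newton iterations.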

  19. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  20. Automated image analysis method to detect and quantify macrovesicular steatosis in human liver hematoxylin and eosin-stained histology images

    PubMed Central

    Nativ, Nir I.; Chen, Alvin I.; Yarmush, Gabriel; Henry, Scot D.; Lefkowitch, Jay H.; Klein, Kenneth M.; Maguire, Timothy J.; Schloss, Rene; Guarrera, James V.; Berthiaume, Francois; Yarmush, Martin L.

    2014-01-01

    Large-droplet macrovesicular steatosis (ld-MaS) in over 30% of the liver graft hepatocytes is a major risk factor in liver transplantation. An accurate assessment of ld-MaS percentage is crucial to determine liver graft transplantability, which is currently based on pathologists’ evaluations of hematoxylin and eosin (H&E) stained liver histology specimens, with the predominant criteria being the lipid droplets’ (LDs) relative size and their propensity to displace the hepatocyte’s nucleus to the cell periphery. Automated image analysis systems aimed at objectively and reproducibly quantifying ld-MaS do not accurately differentiate large LDs from small-droplet macrovesicular steatosis (sd-MaS) and do not take into account LD-mediated nuclear displacement, leading to poor correlation with pathologists’ assessment. Here we present an improved image analysis method that incorporates nuclear displacement as a key image feature to segment and classify ld-MaS from H&E stained liver histology slides. More than 52,000 LDs in 54 digital images from 9 patients were analyzed, and the performance of the proposed method was compared against that of current image analysis methods and the ld-MaS percentage evaluations of two trained pathologists from different centers. We show that combining nuclear displacement and LD size information significantly improves the separation between large and small macrovesicular LDs (specificity=93.7%, sensitivity=99.3%) and the correlation with the pathologists’ ld-MaS percentage assessment (R2=0.97). This performance vastly exceeds that of other automated image analyzers, which typically underestimate or overestimate the pathologists’ ld-MaS score. This work demonstrates the potential of automated ld-MaS analysis in monitoring the steatotic state of livers. The image analysis principles demonstrated here may help standardize ld-MaS scores among centers and ultimately help in the process of determining liver graft transplantability.

  1. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

    Important features of Parkinson's disease (PD) are the degeneration and loss of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activities of the dopamine neurons. The activity ratio of background to corpus striatum is used for diagnosis of PD and Dementia with Lewy Bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of the low uptake. In contrast, MR images can visualize the locations of the corpus striatum. The purpose of this study was to realize a quantitative image analysis of the SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, the image fusion technique was used to fuse SPECT and MR images via an intervening CT image taken by SPECT/CT. The mutual information (MI) between CT and MR images was used for the registration. Six SPECT/CT and four MR scans of phantom materials were taken at different orientations. As a result of the image registrations, 16 of 24 combinations were registered within 1.3 mm. By applying the approach to 32 clinical SPECT/CT and MR cases, all of the cases were registered within 0.86 mm. In conclusion, our registration method has potential for superimposing MR images on SPECT images.
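    Mutual information, the registration criterion named above, measures how much knowing a gray level in one image reduces uncertainty about the corresponding gray level in the other. It can be computed from the joint gray-level histogram; a minimal sketch:

```python
from math import log2

def mutual_information(a, b, levels):
    """Mutual information (in bits) between two equal-size grayscale
    images, computed from their joint gray-level histogram."""
    joint = [[0] * levels for _ in range(levels)]
    n = 0
    for ra, rb in zip(a, b):
        for va, vb in zip(ra, rb):
            joint[va][vb] += 1
            n += 1
    # marginal distributions of each image
    pa = [sum(row) / n for row in joint]
    pb = [sum(joint[i][j] for i in range(levels)) / n for j in range(levels)]
    mi = 0.0
    for i in range(levels):
        for j in range(levels):
            pij = joint[i][j] / n
            if pij > 0:
                mi += pij * log2(pij / (pa[i] * pb[j]))
    return mi
```

    A registration optimizer searches for the transform of the MR image that maximizes this quantity against the CT image.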

  2. A virtual laboratory for medical image analysis.

    PubMed

    Olabarriaga, Sílvia D; Glatard, Tristan; de Boer, Piter T

    2010-07-01

    This paper presents the design, implementation, and usage of a virtual laboratory for medical image analysis. It is fully based on the Dutch grid, which is part of the Enabling Grids for E-sciencE (EGEE) production infrastructure and driven by the gLite middleware. The adopted service-oriented architecture enables decoupling the user-friendly clients running on the user's workstation from the complexity of the grid applications and infrastructure. Data are stored on grid resources and can be browsed/viewed interactively by the user with the Virtual Resource Browser (VBrowser). Data analysis pipelines are described as Scufl workflows and enacted on the grid infrastructure transparently using the MOTEUR workflow management system. VBrowser plug-ins allow for easy experiment monitoring and error detection. Because of the strict compliance to the grid authentication model, all operations are performed on behalf of the user, ensuring basic security and facilitating collaboration across organizations. The system has been operational and in daily use for eight months (as of December 2008), with six users, leading to the submission of 9000 jobs per month on average and the production of several terabytes of data.

  3. Medical Image Analysis by Cognitive Information Systems - a Review.

    PubMed

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

    This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems. The semantic processes are presented as they are applied to different types of medical images. Cognitive information systems were defined on the basis of methods for the semantic analysis and interpretation of information - medical images - applied to the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis was proposed to analyze the meaning of the data. Meaning is included in information, for example in medical images. Medical image analysis is presented and discussed as it is applied to various types of medical images presenting selected human organs with different pathologies. These images were analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis were also defined for decision-supporting tasks. This process is very important, for example, in diagnostic and therapy processes and in the selection of semantic aspects/features from analyzed data sets. These features allow the creation of a new way of analysis. PMID:27526188

  5. Ripening of salami: assessment of colour and aspect evolution using image analysis and multivariate image analysis.

    PubMed

    Fongaro, Lorenzo; Alamprese, Cristina; Casiraghi, Ernestina

    2015-03-01

    During ripening of salami, colour changes occur due to oxidation phenomena involving myoglobin. Moreover, shrinkage due to dehydration results in aspect modifications, mainly ascribable to fat aggregation. The aim of this work was the application of image analysis (IA) and multivariate image analysis (MIA) techniques to the study of colour and aspect changes occurring in salami during ripening. IA results showed that red, green, blue, and intensity parameters decreased due to the development of a global darker colour, while Heterogeneity increased due to fat aggregation. By applying MIA, different salami slice areas corresponding to fat and three different degrees of oxidised meat were identified and quantified. It was thus possible to study the trend of these different areas as a function of ripening, making objective an evaluation usually performed by subjective visual inspection.

  6. Dynamic chest image analysis: model-based pulmonary perfusion analysis with pyramid images

    NASA Astrophysics Data System (ADS)

    Liang, Jianming; Haapanen, Arto; Jaervi, Timo; Kiuru, Aaro J.; Kormano, Martti; Svedstrom, Erkki; Virkki, Raimo

    1998-07-01

    The aim of the study 'Dynamic Chest Image Analysis' is to develop computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected at different phases of the respiratory/cardiac cycles in a short period of time. We have proposed a framework for ventilation study with an explicit ventilation model based on pyramid images. In this paper, we extend the framework to pulmonary perfusion study. A perfusion model and the truncated pyramid are introduced. The perfusion model aims at extracting accurate, geographic perfusion parameters, and the truncated pyramid helps in understanding perfusion at multiple resolutions and speeding up the convergence process in optimization. Three cases are included to illustrate the experimental results.
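    A simple image pyramid of the kind referred to above can be built by repeated 2x2 averaging, each level halving the resolution of the previous one (a common construction; the paper's truncated pyramid additionally stops above the base level):

```python
def pyramid(image, levels):
    """Build an image pyramid: level 0 is the input; each subsequent
    level averages non-overlapping 2x2 blocks of the previous level."""
    out = [image]
    for _ in range(levels - 1):
        prev = out[-1]
        out.append([
            [(prev[r][c] + prev[r][c + 1] +
              prev[r + 1][c] + prev[r + 1][c + 1]) / 4
             for c in range(0, len(prev[0]) - 1, 2)]
            for r in range(0, len(prev) - 1, 2)
        ])
    return out
```

    Fitting the perfusion model first at coarse levels and refining at finer ones is what speeds up the convergence mentioned in the abstract.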

  7. Image Retrieval: Theoretical Analysis and Empirical User Studies on Accessing Information in Images.

    ERIC Educational Resources Information Center

    Ornager, Susanne

    1997-01-01

    Discusses indexing and retrieval for effective searches of digitized images. Reports on an empirical study about criteria for analysis and indexing digitized images, and the different types of user queries done in newspaper image archives in Denmark. Concludes that it is necessary that the indexing represent both a factual and an expressional…

  8. An Improved Recovery Algorithm for Decayed AES Key Schedule Images

    NASA Astrophysics Data System (ADS)

    Tsow, Alex

    A practical algorithm that recovers AES key schedules from decayed memory images is presented. Halderman et al. [1] established this recovery capability, dubbed the cold-boot attack, as a serious vulnerability for several widespread software-based encryption packages. Our algorithm recovers AES-128 key schedules tens of millions of times faster than the original proof-of-concept release. In practice, it enables reliable recovery of key schedules at 70% decay, well over twice the decay capacity of previous methods. The algorithm is generalized to AES-256 and is empirically shown to recover 256-bit key schedules that have suffered 65% decay. When solutions are unique, the algorithm efficiently validates this property and outputs the solution for memory images decayed up to 60%.

  9. Improved sliced velocity map imaging apparatus optimized for H photofragments

    SciTech Connect

    Ryazanov, Mikhail; Reisler, Hanna

    2013-04-14

    Time-sliced velocity map imaging (SVMI), a high-resolution method for measuring kinetic energy distributions of products in scattering and photodissociation reactions, is challenging to implement for atomic hydrogen products. We describe an ion optics design aimed at achieving SVMI of H fragments in a broad range of kinetic energies (KE), from a fraction of an electronvolt to a few electronvolts. In order to enable consistently thin slicing for any imaged KE range, an additional electrostatic lens is introduced in the drift region for radial magnification control without affecting temporal stretching of the ion cloud. Time slices of ~5 ns out of a cloud stretched to ⩾50 ns are used. An accelerator region with variable dimensions (using multiple electrodes) is employed for better optimization of radial and temporal space focusing characteristics at each magnification level. The implemented system was successfully tested by recording images of H fragments from the photodissociation of HBr, H2S, and the CH2OH radical, with kinetic energies ranging from <0.4 eV to >3 eV. It demonstrated KE resolution ≲1%-2%, similar to that obtained in traditional velocity map imaging followed by reconstruction, and to KE resolution achieved previously in SVMI of heavier products. We expect it to perform just as well up to at least 6 eV of kinetic energy. The tests showed that numerical simulations of the electric fields and ion trajectories in the system, used for optimization of the design and operating parameters, provide an accurate and reliable description of all aspects of system performance. This offers the advantage of selecting the best operating conditions in each measurement without the need for additional calibration experiments.

  10. Retinex Image Processing: Improved Fidelity To Direct Visual Observation

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.

    1996-01-01

    Recorded color images differ from direct human viewing by the lack of dynamic range compression and color constancy. Research is summarized which develops the center/surround retinex concept originated by Edwin Land through a single scale design to a multi-scale design with color restoration (MSRCR). The MSRCR synthesizes dynamic range compression, color constancy, and color rendition and, thereby, approaches fidelity to direct observation.
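    Land's center/surround retinex, which the MSRCR above extends, computes the log of the image minus the log of a blurred surround. A much-simplified 1-D sketch, with a box average standing in for the Gaussian surround (a simplification assumed here for brevity, not the MSRCR formulation):

```python
from math import log

def retinex_1d(signal, radius):
    """Single-scale retinex sketch on a 1-D signal of positive values:
    log of each sample minus the log of a local box-average surround."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        surround = sum(signal[lo:hi]) / (hi - lo)
        out.append(log(signal[i]) - log(surround))
    return out
```

    The multi-scale version averages this output over several surround sizes, which is what yields the simultaneous dynamic range compression and color rendition described above.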

  12. A Method to Improve the Accuracy of Particle Diameter Measurements from Shadowgraph Images

    NASA Astrophysics Data System (ADS)

    Erinin, Martin A.; Wang, Dan; Liu, Xinan; Duncan, James H.

    2015-11-01

    A method to improve the accuracy of the measurement of the diameter of particles using shadowgraph images is discussed. To obtain data for analysis, a transparent glass calibration reticle, marked with black circular dots of known diameters, is imaged with a high-resolution digital camera using backlighting separately from both a collimated laser beam and diffuse white light. The diameter and intensity of each dot is measured by fitting an inverse hyperbolic tangent function to the particle image intensity map. Using these calibration measurements, a relationship between the apparent diameter and intensity of the dot and its actual diameter and position relative to the focal plane of the lens is determined. It is found that the intensity decreases and apparent diameter increases/decreases (for collimated/diffuse light) with increasing distance from the focal plane. Using the relationships between the measured properties of each dot and its actual size and position, an experimental calibration method has been developed to increase the particle-diameter-dependent range of distances from the focal plane for which accurate particle diameter measurements can be made. The support of the National Science Foundation under grant OCE0751853 from the Division of Ocean Sciences is gratefully acknowledged.
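    The calibration fit described above can be illustrated with a smoothed-step edge model. The function below is a hypothetical stand-in for the paper's inverse-hyperbolic-tangent fit: it fits a tanh-shaped edge profile to a synthetic radial intensity trace and reports the apparent diameter from the fitted edge radius.

```python
import numpy as np
from scipy.optimize import curve_fit

def edge_profile(r, radius, width, i_in, i_out):
    """Smoothed step from dark particle interior (i_in) to bright background (i_out)."""
    return i_in + (i_out - i_in) * 0.5 * (1.0 + np.tanh((r - radius) / width))

# Synthetic radial intensity trace of a blurred dark dot on a bright background
np.random.seed(0)
r = np.linspace(0.0, 50.0, 200)
intensity = edge_profile(r, 20.0, 3.0, 10.0, 200.0) + np.random.normal(0.0, 2.0, r.size)

# Fit the edge model; the apparent diameter follows from the fitted edge radius,
# and the fitted width captures the defocus blur discussed in the abstract
popt, _ = curve_fit(edge_profile, r, intensity, p0=[18.0, 2.0, 0.0, 255.0])
diameter = 2.0 * popt[0]
```

    In the calibration procedure, the fitted width and peak intensity would then be mapped to distance from the focal plane, correcting the apparent diameter.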

  13. Hyperspectral Image Target Detection Improvement Based on Total Variation.

    PubMed

    Yang, Shuo; Shi, Zhenwei

    2016-05-01

    For hyperspectral target detection, the neighbors of a target pixel are very likely to be target pixels, and those of a background pixel are very likely to be background pixels. In order to utilize this spatial homogeneity or smoothness, based on total variation (TV), we propose a novel supervised target detection algorithm which uses a single target spectrum as the prior knowledge. TV can make the image smooth, and has been widely used in image denoising and restoration. The proposed algorithm uses TV to keep the spatial homogeneity or smoothness of the detection output. Meanwhile, a constraint is used to guarantee that the spectral signature of the target is not suppressed. The final formulated detection model is an ℓ1-norm convex optimization problem. The split Bregman algorithm is used to solve our optimization problem, as it can solve the ℓ1-norm optimization problem efficiently. Two synthetic and two real hyperspectral images are used in the experiments. The experimental results demonstrate that the proposed algorithm outperforms the other algorithms for the experimental data sets. The experimental results also show that even when the target occupies only one pixel, the proposed algorithm can still obtain good results. This is because in such a case, the background is kept smooth, but at the same time, the algorithm allows for sharp edges in the detection output. PMID:27019489
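    The split Bregman method mentioned above alternates a quadratic solve with a soft-thresholding ("shrink") step. A minimal one-dimensional TV-denoising sketch shows that structure (illustrative only; the paper's detection model adds the target-spectrum constraint, which is omitted here):

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding operator used in the split Bregman d-update."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def tv_denoise_1d(f, mu=10.0, lam=1.0, n_iter=100):
    """Split Bregman for min_u (mu/2)||u - f||^2 + ||Du||_1 (1-D total variation)."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)           # forward-difference operator
    A = mu * np.eye(n) + lam * D.T @ D       # normal equations for the u-update
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # quadratic solve
        d = shrink(D @ u + b, 1.0 / lam)                      # l1 shrink step
        b = b + D @ u - d                                     # Bregman update
    return u
```

    The shrink step is what handles the ℓ1 term in closed form; this is the reason split Bregman solves such problems efficiently.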

  14. Representative Image Subsets in Soil Analysis Using the Mars Exploration Rover Microscopic Imager

    NASA Astrophysics Data System (ADS)

    Cabrol, N. A.; Herkenhoff, K. E.; Grin, E. A.

    2009-12-01

    Assessing texture and morphology is a critical step in evaluating the plausible origin and evolution of soil particles. Both are essential to the understanding of martian soils beyond Gusev and Meridiani. In addition to supporting rover operations, what is being learned at both landing sites about soil physical characteristics and properties provides essential keys to model with more precision the nature of martian soils at global scale from the correlation of ground-based and orbital data. Soil and particle studies will improve trafficability predictions for future missions, whether robotic or human, ultimately increasing safety and mission productivity. Data can also be used to assist in pre-mission hardware testing, and during missions to support engineering activities including rover extrication, which makes the characterization of soils at the particle level and their mixing critical. On Mars, this assessment is performed within the constraints of the rover’s instrumentation. The Microscopic Imager allows the identification of particles ≥ 100 µm across. Individual particles of clay, silt and very fine sand are not accessible. Texture is, thus, defined here as the relative proportion of particles ≥ 100 µm. Analytical methods are consistent with standard sedimentologic techniques applied to the study of thin sections and digital images of terrestrial soils. Those have known constraints and biases and are well adapted to the limitations of documenting three-dimensional particles on Mars through the two-dimensional FoV of the MI. Biases and errors are linked to instrument resolution and particle size. Precision improves with increasing size and is unlikely to be consistent in the study of composite soil samples. Here, we address how to obtain a statistically sound and accurate representation of individual particles and soil mixings without analyzing entire MI images. The objectives are to (a) understand the role of particle-size in selecting statistically

  15. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http

  16. Wndchrm – an open source utility for biological image analysis

    PubMed Central

    Shamir, Lior; Orlov, Nikita; Eckley, D Mark; Macura, Tomasz; Johnston, Josiah; Goldberg, Ilya G

    2008-01-01

    Background Biological imaging is an emerging field, covering a wide range of applications in biological and clinical research. However, while machinery for automated experimenting and data acquisition has been developing rapidly in the past years, automated image analysis often introduces a bottleneck in high content screening. Methods Wndchrm is an open source utility for biological image analysis. The software works by first extracting image content descriptors from the raw image, image transforms, and compound image transforms. Then, the most informative features are selected, and the feature vector of each image is used for classification and similarity measurement. Results Wndchrm has been tested using several publicly available biological datasets, and provided results which are favorably comparable to the performance of task-specific algorithms developed for these datasets. The simple user interface allows researchers who are not knowledgeable in computer vision methods and have no background in computer programming to apply image analysis to their data. Conclusion We suggest that wndchrm can be effectively used for a wide range of biological image analysis tasks. Using wndchrm can allow scientists to perform automated biological image analysis while avoiding the costly challenge of implementing computer vision and pattern recognition algorithms. PMID:18611266

  17. Image analysis of chest radiographs. Final report

    SciTech Connect

    Hankinson, J.L.

    1982-06-01

    The report demonstrates the feasibility of using a computer for automated interpretation of chest radiographs for pneumoconiosis. The primary goal of this project was to continue testing and evaluating the prototype system with a larger set of films. After review of the final contract report and a review of the current literature, it was clear that several modifications to the prototype system were needed before the project could continue. These modifications can be divided into two general areas. The first area was in improving the stability of the system and compensating for the diversity of film quality which exists in films obtained in a surveillance program. Since the system was to be tested with a large number of films, it was impractical to be extremely selective of film quality. The second area is in terms of processing time. With a large set of films, total processing time becomes much more significant. An image display was added to the system so that the computer determined lung boundaries could be verified for each film. A film handling system was also added, enabling the system to scan films continuously without attendance.

  18. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  19. Burn injury diagnostic imaging device's accuracy improved by outlier detection and removal

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Lu, Yang; Squiers, John J.; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffery E.

    2015-05-01

    Multispectral imaging (MSI) was implemented to develop a burn diagnostic device that will assist burn surgeons in planning and performing burn debridement surgery by classifying burn tissue. In order to build a burn classification model, training data that accurately represents the burn tissue is needed. Acquiring accurate training data is difficult, in part because the labeling of raw MSI data to the appropriate tissue classes is prone to errors. We hypothesized that these difficulties could be surmounted by removing outliers from the training dataset, leading to an improvement in the classification accuracy. A swine burn model was developed to build an initial MSI training database and study an algorithm's ability to classify clinically important tissues present in a burn injury. Once the ground-truth database was generated from the swine images, we then developed a multi-stage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data from wavelength space, and test accuracy was improved from 63% to 76%. Establishing this simple method of conditioning for the training data improved the accuracy of the algorithm to match the current standard of care in burn injury assessment. Given that there are few burn surgeons and burn care facilities in the United States, this technology is expected to improve the standard of burn care for burn patients with less access to specialized facilities.
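    A simple per-band filter of the kind described can be sketched as follows (a Z-score screen on one labeled tissue class; the multi-stage details and the threshold of 3 are assumptions, not the paper's exact procedure):

```python
import numpy as np

def remove_outliers(samples, z_max=3.0):
    """Drop rows whose Z-score exceeds z_max in any spectral band.

    samples: (n_pixels, n_bands) training matrix for one labeled tissue class.
    """
    mean = samples.mean(axis=0)
    std = samples.std(axis=0)
    std[std == 0] = 1.0                  # guard against constant bands
    z = np.abs((samples - mean) / std)
    return samples[(z <= z_max).all(axis=1)]
```

    Removing such rows shrinks the per-class variance in wavelength space, which is the mechanism the abstract credits for the accuracy gain.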

  20. Wave-Optics Analysis of Pupil Imaging

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Bos, Brent J.

    2006-01-01

    Pupil imaging performance is analyzed from the perspective of physical optics. A multi-plane diffraction model is constructed by propagating the scalar electromagnetic field, surface by surface, along the optical path comprising the pupil imaging optical system. Modeling results are compared with pupil images collected in the laboratory. The experimental setup, although generic for pupil imaging systems in general, has application to the James Webb Space Telescope (JWST) optical system characterization where the pupil images are used as a constraint to the wavefront sensing and control process. Practical design considerations follow from the diffraction modeling which are discussed in the context of the JWST Observatory.
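    Multi-plane scalar diffraction models of this kind are commonly built from the angular spectrum method, propagating the field from surface to surface. A minimal sketch of one propagation step (a standard illustrative technique, not the authors' specific JWST model):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, distance):
    """Propagate a sampled scalar field over `distance` (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * distance) * (arg > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

    Chaining such steps, with an aperture or lens phase applied at each surface, yields a multi-plane model whose output can be compared against laboratory pupil images.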

  1. Improving Sensitivity in Ultrasound Molecular Imaging by Tailoring Contrast Agent Size Distribution: In Vivo Studies

    PubMed Central

    Streeter, Jason E.; Gessner, Ryan; Miles, Iman; Dayton, Paul A.

    2010-01-01

    Molecular imaging with ultrasound relies on microbubble contrast agents (MCAs) selectively adhering to a ligand-specific target. Prior studies have shown that only small quantities of microbubbles are retained at their target sites; therefore, enhancing contrast sensitivity to low concentrations of microbubbles is essential to improve molecular imaging techniques. In order to assess the effect of MCA diameter on imaging sensitivity, perfusion and molecular imaging studies were performed with microbubbles of varying size distributions. To assess signal improvement and MCA circulation time as a function of size and concentration, blood perfusion was imaged in rat kidneys using nontargeted size-sorted MCAs with a Siemens Sequoia ultrasound system (Siemens, Mountain View, CA) in cadence pulse sequencing (CPS) mode. Molecular imaging sensitivity improvements were studied with size-sorted αvβ3-targeted bubbles in both fibrosarcoma and R3230 rat tumor models. In perfusion imaging studies, video intensity and contrast persistence were ≈8 times and ≈3 times greater, respectively, for “sorted 3-micron” MCAs (diameter, 3.3 ± 1.95 μm) when compared to “unsorted” MCAs (diameter, 0.9 ± 0.45 μm) at low concentrations. In targeted experiments, application of sorted 3-micron MCAs resulted in a ≈20 times video intensity increase over unsorted populations. Tailoring size distributions results in substantial imaging sensitivity improvement over unsorted populations, which is essential in maximizing sensitivity to small numbers of MCAs for molecular imaging. PMID:20236606

  2. Advanced image analysis for the preservation of cultural heritage

    NASA Astrophysics Data System (ADS)

    France, Fenella G.; Christens-Barry, William; Toth, Michael B.; Boydston, Kenneth

    2010-02-01

    The Library of Congress' Preservation Research and Testing Division has established an advanced preservation studies scientific program for research and analysis of the diverse range of cultural heritage objects in its collection. Using this system, the Library is currently developing specialized integrated research methodologies for extending preservation analytical capacities through non-destructive hyperspectral imaging of cultural objects. The research program has revealed key information to support preservation specialists, scholars and other institutions. The approach requires close and ongoing collaboration between a range of scientific and cultural heritage personnel - imaging and preservation scientists, art historians, curators, conservators and technology analysts. A research project on the 1791 Pierre L'Enfant Plan of Washington DC was undertaken to implement and advance the image analysis capabilities of the imaging system. Innovative imaging options and analysis techniques allow greater processing and analysis capacities to establish the imaging technique as the first non-invasive analysis and documentation step in all cultural heritage analyses. Mapping spectral responses and organic and inorganic data, semi-microscopic imaging of topography, and creating full-spectrum images have greatly extended this capacity beyond a simple image capture technique. Linking hyperspectral data with other non-destructive analyses has further enhanced the research potential of this image analysis technique.

  3. Improved and robust detection of cell nuclei from four dimensional fluorescence images.

    PubMed

    Bashar, Md Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J

    2014-01-01

    Segmentation-free direct methods are quite efficient for automated nuclei extraction from high dimensional images. A few such methods do exist but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at positions of initial centroids to estimate all nuclei diameters. This procedure continues for processing subsequent images in the sequence. The above mechanism thus ensures proper enhancement through automated estimation of the major parameters, bringing robustness and safeguarding the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended for nuclei volume segmentation. The same optimization technique is applied to final centroid positions of the enhanced image and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image-sequences (each having 271 3D sequential images) corresponding to five different mouse embryos show promising performances of our methods in terms of nuclear detection, segmentation, and tracking.
    A detailed analysis with a sub-sequence of 101 3D images from an embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over previous methods.
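    The multiscale enhancement-and-detection idea can be illustrated with a scale-normalized Laplacian-of-Gaussian filter bank, a generic blob detector (not the authors' adaptive filter; the scale set and peak threshold below are assumed values):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def detect_nuclei_centroids(image, sigmas):
    """Scale-normalized LoG responses, maximized over scale; local peaks = centroids."""
    stack = [-s ** 2 * gaussian_laplace(image.astype(float), s) for s in sigmas]
    response = np.max(stack, axis=0)
    peaks = (response == maximum_filter(response, size=5))
    peaks &= response > 0.1 * response.max()     # discard weak responses
    return np.argwhere(peaks)
```

    In a feedback scheme like the one described, the `sigmas` range would be set from the nuclei diameters estimated in the previous frame.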

  4. Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images

    PubMed Central

    Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.

    2014-01-01

    Segmentation-free direct methods are quite efficient for automated nuclei extraction from high dimensional images. A few such methods do exist but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at positions of initial centroids to estimate all nuclei diameters. This procedure continues for processing subsequent images in the sequence. The above mechanism thus ensures proper enhancement through automated estimation of the major parameters, bringing robustness and safeguarding the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended for nuclei volume segmentation. The same optimization technique is applied to final centroid positions of the enhanced image and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image-sequences (each having 271 3D sequential images) corresponding to five different mouse embryos show promising performances of our methods in terms of nuclear detection, segmentation, and tracking.
    A detailed analysis with a sub-sequence of 101 3D images from an embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over previous methods.

  5. Improving deep subwavelength imaging through terminal interface design of metallo-dielectric multilayered stacks

    NASA Astrophysics Data System (ADS)

    Hu, Jigang; Zhan, Qiwen; Chen, Junxue; Wang, Xiangxian; Lu, Yonghua; Ming, Hai

    2013-01-01

    Symmetric metallo-dielectric multilayered stacks (MDMS) are investigated to improve the spatial resolution of subwavelength imaging operating in the canalization regime. Simulation results reveal that the subwavelength imaging capability is very sensitive to the thickness and material of the MDMS terminal layers. Furthermore, the coupling and decoupling of the Bloch modes in MDMS, between the object and image space, strongly depend on the terminal layer parameters, which can be tuned to achieve the optimal imaging improvement. In contrast to metal-dielectric periodic MDMS, using MDMS with the developed symmetric surface termination, subwavelength imaging with optimal intensity throughput and improved field spatial resolution (˜20.4%) can be obtained. Moreover, an optical singularity, in the form of a Poynting vector saddle point, is found in the free space beyond the lens exit for the two kinds of symmetric MDMS that exhibit improved superresolution imaging performance with 100% energy flux visibility. The improved subwavelength imaging capabilities offered by this termination design method may find potential applications in biological imaging, sensing, and deep subwavelength lithography, among others.

  6. Improved Shear Wave Motion Detection Using Pulse-Inversion Harmonic Imaging With a Phased Array Transducer.

    PubMed

    Pengfei Song; Heng Zhao; Urban, Matthew W; Manduca, Armando; Pislaru, Sorin V; Kinnick, Randall R; Pislaru, Cristina; Greenleaf, James F; Shigao Chen

    2013-12-01

    Ultrasound tissue harmonic imaging is widely used to improve ultrasound B-mode imaging quality thanks to its effectiveness in suppressing imaging artifacts associated with ultrasound reverberation, phase aberration, and clutter noise. In ultrasound shear wave elastography (SWE), because the shear wave motion signal is extracted from the ultrasound signal, these noise sources can significantly deteriorate the shear wave motion tracking process and consequently result in noisy and biased shear wave motion detection. This situation is exacerbated in in vivo SWE applications such as heart, liver, and kidney. This paper, therefore, investigated the possibility of implementing harmonic imaging, specifically pulse-inversion harmonic imaging, in shear wave tracking, with the hypothesis that harmonic imaging can improve shear wave motion detection based on the same principles that apply to general harmonic B-mode imaging. We first designed an experiment with a gelatin phantom covered by an excised piece of pork belly and show that harmonic imaging can significantly improve shear wave motion detection by producing less underestimated shear wave motion and more consistent shear wave speed measurements than fundamental imaging. Then, a transthoracic heart experiment on a freshly sacrificed pig showed that harmonic imaging could robustly track the shear wave motion and give consistent shear wave speed measurements of the left ventricular myocardium while fundamental imaging could not. Finally, an in vivo transthoracic study of seven healthy volunteers showed that the proposed harmonic imaging tracking sequence could provide consistent estimates of the left ventricular myocardium stiffness in end-diastole with a general success rate of 80% and a success rate of 93.3% when excluding the subject with Body Mass Index higher than 25. These promising results indicate that pulse-inversion harmonic imaging can significantly improve shear wave motion tracking and thus potentially

  7. Improvement of retinal blood vessel detection using morphological component analysis.

    PubMed

    Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza

    2015-03-01

    Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is producing false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet Wavelet Transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. An accuracy of 0.9523 and 0.9590 has been respectively achieved on the DRIVE and STARE datasets, which is not only greater than most methods, but is also superior to the second human observer's performance. The results show that the proposed method can achieve improved detection in abnormal retinal images and decrease false positive vessels in pathological regions compared to other methods. Also, the robustness of the method in the presence of noise is shown via experimental results.

  8. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. It can visualize the oral region in 3D at high resolution. CBCT jaw images carry information for the assessment of bone quality that is often used for pre-operative implant planning. We propose a comparison method based on normalized histograms (NH) of the inter-dental septum and premolar teeth regions. The NH characteristics of normal and abnormal bone conditions are then compared and analyzed. Four test parameters are proposed, i.e., the difference between teeth and bone average intensity (s), the ratio between bone and teeth average intensity (n) of the NH, the difference between teeth and bone peak values (Δp) of the NH, and the ratio between the teeth and bone NH ranges (r). The results showed that n, s, and Δp have potential as classification parameters of dental calcium density.
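    The four test parameters can be computed directly from two regions of interest. A sketch, assuming the ROIs are given as arrays of pixel intensities (the bin count and the sign conventions are illustrative choices, not taken from the paper):

```python
import numpy as np

def nh_parameters(teeth_roi, bone_roi, n_bins=64):
    """Four test parameters from normalized histograms (NH) of two CBCT ROIs."""
    s = teeth_roi.mean() - bone_roi.mean()        # teeth/bone intensity difference
    n = bone_roi.mean() / teeth_roi.mean()        # bone/teeth intensity ratio
    th, _ = np.histogram(teeth_roi, bins=n_bins, density=True)
    bh, _ = np.histogram(bone_roi, bins=n_bins, density=True)
    dp = th.max() - bh.max()                      # NH peak-value difference
    r = np.ptp(teeth_roi) / np.ptp(bone_roi)      # NH range ratio
    return s, n, dp, r
```

    For normal bone, teeth enamel should be brighter than septal bone, so s > 0 and n < 1; shifts in these values across patients are what the classification would exploit.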

  9. A new reconstruction strategy for image improvement in pinhole SPECT.

    PubMed

    Zeniya, Tsutomu; Watabe, Hiroshi; Aoi, Toshiyuki; Kim, Kyeong Min; Teramoto, Noboru; Hayashi, Takuya; Sohlberg, Antti; Kudo, Hiroyuki; Iida, Hidehiro

    2004-08-01

    Pinhole single-photon emission computed tomography (SPECT) is able to provide information on the biodistribution of several radioligands in small laboratory animals, but has limitations associated with non-uniform spatial resolution or axial blurring. We have hypothesised that this blurring is due to incompleteness of the projection data acquired by a single circular pinhole orbit, and have evaluated a new strategy for accurate image reconstruction with better spatial resolution uniformity. A pinhole SPECT system using two circular orbits and a dedicated three-dimensional ordered subsets expectation maximisation (3D-OSEM) reconstruction method were developed. In this system, not the camera but the object rotates, and the two orbits are at 90 degrees and 45 degrees relative to the object's axis. This system satisfies Tuy's condition, and is thus able to provide complete data for 3D pinhole SPECT reconstruction within the whole field of view (FOV). To evaluate this system, a series of experiments was carried out using a multiple-disk phantom filled with 99mTc solution. The feasibility of the proposed method for small animal imaging was tested with a mouse bone study using 99mTc-hydroxymethylene diphosphonate. Feldkamp's filtered back-projection (FBP) method and the 3D-OSEM method were applied to these data sets, and the visual and statistical properties were examined. Axial blurring, which was still visible at the edge of the FOV even after applying the conventional 3D-OSEM instead of FBP for single-orbit data, was not visible after application of 3D-OSEM using two-orbit data. 3D-OSEM using two-orbit data dramatically reduced the resolution non-uniformity and statistical noise, and also demonstrated considerably better image quality in the mouse scan. This system may be of use in quantitative assessment of bio-physiological functions in small animals.
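    The 3D-OSEM reconstruction referred to above follows the standard ordered-subsets EM update, shown here in a toy algebraic form (a generic sketch with an assumed subset partition and a dense system matrix, not the authors' pinhole projector):

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=20):
    """Ordered-subsets EM: one multiplicative EM update per projection subset."""
    x = np.ones(A.shape[1])                       # nonnegative initial estimate
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            proj = np.maximum(As @ x, 1e-12)      # forward projection
            sens = np.maximum(As.T @ np.ones(idx.size), 1e-12)
            x = x / sens * (As.T @ (ys / proj))   # multiplicative update
    return x
```

    In the two-orbit system, the rows of the (implicit) system matrix would cover both circular orbits, which is what restores data completeness in Tuy's sense.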

  10. Analysis of Anechoic Chamber Testing of the Hurricane Imaging Radiometer

    NASA Technical Reports Server (NTRS)

    Fenigstein, David; Ruf, Chris; James, Mark; Simmons, David; Miller, Timothy; Buckley, Courtney

    2010-01-01

    The Hurricane Imaging Radiometer System (HIRAD) is a new airborne passive microwave remote sensor developed to observe hurricanes. HIRAD incorporates synthetic thinned array radiometry technology, which uses Fourier synthesis to reconstruct images from an array of correlated antenna elements. The HIRAD system response to a point emitter has been measured in an anechoic chamber. With these data, a Fourier inversion image reconstruction algorithm has been developed. Performance analysis of the apparatus is presented, along with an overview of the image reconstruction algorithm.

  11. Image analysis to estimate mulch residue in soil.

    PubMed

    Moreno, Carmen; Mancebo, Ignacio; Saa, Antonio; Moreno, Marta M

    2014-01-01

    Mulching is used to improve the condition of agricultural soils by covering the soil with different materials, mainly black polyethylene (PE). However, problems derived from its use are how to remove it from the field and, in the case of it remaining in the soil, its possible effects on the soil. One possible solution is to use biodegradable plastic (BD) or paper (PP) as mulch, which could present an alternative, reducing nonrecyclable waste and decreasing the associated environmental pollution. Determination of mulch residues in the ground is one of the basic requirements to estimate the potential of each material to degrade. This study has the goal of evaluating the residue of several mulch materials over a crop campaign in Central Spain through image analysis. Color images were acquired under similar lighting conditions at the experimental field. Different thresholding methods were applied to binarize the histogram values of the image saturation plane in order to obtain the best contrast between soil and mulch. Then the percentage of white pixels (i.e., soil area) was used to calculate the mulch deterioration. A comparison of thresholding methods and of the different mulch materials, based on the percentage of bare soil area obtained, is presented.
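The pipeline described (threshold the saturation plane, then count above-threshold pixels) can be sketched with Otsu's method as one representative thresholding choice. The synthetic image, and the assumption that bright pixels correspond to bare soil, are illustrative, not the study's data:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                   # class-0 probability
    mu = np.cumsum(prob * np.arange(256))     # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def bare_soil_fraction(saturation):
    """Binarize an 8-bit saturation plane; above-threshold pixels count as soil
    (whether soil or mulch is the brighter class depends on the materials)."""
    t = otsu_threshold(saturation)
    return (saturation > t).mean()

# synthetic bimodal 'saturation plane': 70% dark mulch, 30% bright soil
rng = np.random.default_rng(0)
img = np.concatenate([rng.integers(10, 60, 7000),
                      rng.integers(180, 240, 3000)]).astype(np.uint8)
frac = bare_soil_fraction(img.reshape(100, 100))
```

Comparing thresholding methods, as in the study, would amount to swapping `otsu_threshold` for alternatives and comparing the resulting bare-soil fractions.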

  12. Image Analysis to Estimate Mulch Residue in Soil

    PubMed Central

    Moreno, Carmen; Mancebo, Ignacio; Saa, Antonio; Moreno, Marta M.

    2014-01-01

    Mulching is used to improve the condition of agricultural soils by covering the soil with different materials, mainly black polyethylene (PE). However, problems derived from its use are how to remove it from the field and, in the case of it remaining in the soil, its possible effects on the soil. One possible solution is to use biodegradable plastic (BD) or paper (PP) as mulch, which could present an alternative, reducing nonrecyclable waste and decreasing the associated environmental pollution. Determination of mulch residues in the ground is one of the basic requirements to estimate the potential of each material to degrade. This study has the goal of evaluating the residue of several mulch materials over a crop campaign in Central Spain through image analysis. Color images were acquired under similar lighting conditions at the experimental field. Different thresholding methods were applied to binarize the histogram values of the image saturation plane in order to obtain the best contrast between soil and mulch. Then the percentage of white pixels (i.e., soil area) was used to calculate the mulch deterioration. A comparison of thresholding methods and of the different mulch materials, based on the percentage of bare soil area obtained, is presented. PMID:25309953

  13. Combining 2D synchrosqueezed wave packet transform with optimization for crystal image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Wirth, Benedikt; Yang, Haizhao

    2016-04-01

    We develop a variational optimization method for crystal analysis in atomic resolution images, which uses information from a 2D synchrosqueezed transform (SST) as input. The synchrosqueezed transform is applied to extract initial information from atomic crystal images: crystal defects, rotations and the gradient of elastic deformation. The deformation gradient estimate is then improved outside the identified defect region via a variational approach, to obtain more robust results agreeing better with the physical constraints. The variational model is optimized by a nonlinear projected conjugate gradient method. Example images from both computer simulations and imaging experiments are analyzed, with results demonstrating the effectiveness of the proposed method.

  14. Imaging properties and its improvements of scanning/imaging x-ray microscope

    NASA Astrophysics Data System (ADS)

    Takeuchi, Akihisa; Uesugi, Kentaro; Suzuki, Yoshio

    2016-01-01

    A scanning/imaging X-ray microscope (SIXM) system has been developed at SPring-8. The SIXM consists of a scanning X-ray microscope with a one-dimensional (1D) X-ray focusing device and an imaging (full-field) X-ray microscope with a 1D X-ray objective. The motivation of the SIXM system is to realize quantitative and highly sensitive multimodal 3D X-ray tomography by taking advantage of both the scanning X-ray microscope using a multi-pixel detector and the imaging X-ray microscope. The data acquisition process for a 2D image is completely different in the horizontal and vertical directions: a 1D signal is obtained by linear scanning, while the other dimension is obtained with the imaging optics. This condition has caused a serious problem for the imaging properties, in that the image quality in the vertical direction has been much worse than that in the horizontal direction. In this paper, two approaches to solve this problem are presented. One is to introduce a Fourier transform method for phase retrieval from one phase-derivative image, and the other is to develop and employ a 1D diffuser to produce an asymmetrical coherent illumination.

  15. Image processing in remote sensing data analysis - The state of the art

    NASA Technical Reports Server (NTRS)

    Rosenfeld, A.

    1983-01-01

    Image analysis techniques applicable to remote sensing data and covering image models, feature detection, segmentation and classification, texture analysis, and matching are studied. Model types for characterizing images examined include random-field, mosaic, and facet models. Edge and corner detection as well as global extraction of linear features are discussed. Pixel clustering and classification are covered in addition to the regional approach to segmentation. Autocorrelation, second-order gray level probability density, and the use of primitive element statistics are discussed in relation to texture analysis. Finally, reducing the cost of (sub)image matching methods (e.g., pixelwise comparison of gray levels and normalized cross-correlation between two images) as well as improving match sharpness is considered.

  16. Slide Set: reproducible image analysis and batch processing with ImageJ

    PubMed Central

    Nanes, Benjamin A.

    2015-01-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets that are common in biology. This paper introduces Slide Set, a framework for reproducible image analysis and batch processing with ImageJ. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution. PMID:26554504

  17. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.

  18. Improvements of 3-D image quality in integral display by reducing distortion errors

    NASA Astrophysics Data System (ADS)

    Kawakita, Masahiro; Sasaki, Hisayuki; Arai, Jun; Okano, Fumio; Suehiro, Koya; Haino, Yasuyuki; Yoshimura, Makoto; Sato, Masahito

    2008-02-01

    An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electrically compensated the distortion in the elemental images for an EHR projection-type integral 3-D system. As a result, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.

  19. Calculating Contrast Stretching Variables in Order to Improve Dental Radiology Image Quality

    NASA Astrophysics Data System (ADS)

    Widodo, Haris B.; Soelaiman, Arief; Ramadhani, Yogi; Supriyanti, Retno

    2016-01-01

    Teeth are part of the body's digestive tract, serving to soften food so that it can be digested easily. One branch of science that is instrumental in the treatment and diagnosis of teeth is dental radiology. However, in reality many dental radiology images have low resolution, which hinders making a perfect diagnosis of dental disease. This research aims to improve low-resolution dental radiology images using image processing techniques. This paper discusses the use of the contrast stretching method to improve dental radiology image quality, especially the calculation of the variables of the contrast stretching method. The results showed that the contrast stretching method is promising for improving image quality in a simple but efficient way.
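Contrast stretching linearly remaps a narrow occupied gray-level band onto the full display range. The percentile parameters below are a common generic choice, not the variables computed in the paper:

```python
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Linearly map the [low, high] percentile range onto the full 0-255 range,
    clipping values outside it."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(float) - lo) * 255.0 / max(hi - lo, 1e-9)
    return np.clip(out, 0, 255).astype(np.uint8)

# a low-contrast 8-bit 'radiograph' occupying only a narrow gray band (100-140)
img = np.linspace(100, 140, 256).astype(np.uint8).reshape(16, 16)
stretched = contrast_stretch(img)
```

After stretching, the occupied band spans the whole 0-255 range, which is what visually sharpens a washed-out radiograph.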

  20. Lipid nanoparticle vectorization of indocyanine green improves fluorescence imaging for tumor diagnosis and lymph node resection.

    PubMed

    Navarro, Fabrice P; Berger, Michel; Guillermet, Stéphanie; Josserand, Véronique; Guyon, Laurent; Neumann, Emmanuelle; Vinet, Françoise; Texier, Isabelle

    2012-10-01

    Fluorescence imaging is opening a new era in image-guided surgery and other medical applications. The only FDA-approved contrast agent in the near infrared is indocyanine green (ICG), which, despite its low toxicity, displays poor chemical and optical properties for long-term and sensitive imaging applications in humans. Lipid nanoparticles are investigated for improving ICG optical properties and in vivo fluorescence imaging sensitivity. 30-nm-diameter lipid nanoparticles (LNP) are loaded with ICG. Their characterization and use for tumor and lymph node imaging are described. The nano-formulation improves the dye's optical properties (6 times brighter) and chemical stability (>6 months at 4 degrees C in aqueous buffer). More importantly, LNP vectorization allows sensitive and prolonged (>1 day) labeling of tumors and lymph nodes, which has not previously been reported. Composed of ingredients approved for human use, this novel nanometric ICG formulation is foreseen to rapidly expand the field of clinical fluorescence imaging applications.

  1. Optical implementation of improved resolution with intermediate-view reconstruction technique based on integral imaging

    NASA Astrophysics Data System (ADS)

    Lee, Keong-Jin; Lee, Sang-Tae; Oh, Yong-Seok; Hong, Suk-Pyo; Kim, Chang-Keun; Kim, Eun-Soo

    2008-02-01

    To overcome the viewing resolution limit defined by the Nyquist sampling theorem for a given lenslet pitch, the Moving Array-Lens Technique (MALT) was developed for 3-D integral imaging. Although MALT is an effective method for improving the resolution of integral imaging, it cannot be applied to a real-time 3-D integral imaging display system because of its mechanical movement. In this paper, we propose an integral imaging display that uses a computational pick-up method based on the Intermediate-View Reconstruction Technique (IVRT) instead of optical moving pickup. Optical experiments show that the proposed system can provide resolution-improved 3-D integral images by using the elemental images (EIs) generated by the IVRT.

  2. Method of improving image sharpness for annular-illumination scanning electron microscopes

    NASA Astrophysics Data System (ADS)

    Enyama, Momoyo; Hamada, Koichi; Fukuda, Muneyuki; Kazumi, Hideyuki

    2016-06-01

    Annular illumination is effective in enhancing the depth of focus for scanning electron microscopes (SEMs). However, owing to high side lobes of the point-spread function (PSF), annular illumination results in poor image sharpness. The conventional deconvolution method, which converts the PSF to a delta function, can improve image sharpness, but results in artifacts due to noise amplification. In this paper, we propose an image processing method that can reduce the deterioration of image sharpness. With this method, the PSF under annular illumination is converted to that under standard illumination. Through simulations, we verified that the image sharpness of SEM images under annular illumination with the proposed method can be improved without noise amplification.
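In the Fourier domain, converting the annular-illumination PSF to a standard-illumination PSF (rather than to a delta function, as in plain deconvolution) amounts to a regularized ratio of the two transfer functions. A minimal sketch under assumed synthetic PSFs; the real SEM PSFs and the authors' exact processing are not reproduced here:

```python
import numpy as np

def psf_conversion_filter(psf_src, psf_dst, eps=1e-3):
    """Fourier-domain filter mapping images blurred by psf_src to images
    blurred by psf_dst. The Wiener-style regularization (eps) limits the
    noise amplification that a plain deconvolution to a delta would cause."""
    H_src = np.fft.fft2(np.fft.ifftshift(psf_src))
    H_dst = np.fft.fft2(np.fft.ifftshift(psf_dst))
    return H_dst * np.conj(H_src) / (np.abs(H_src) ** 2 + eps)

def apply_filter(img, H):
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# hypothetical PSFs: a ring-like annular-illumination PSF with side lobes,
# and a compact Gaussian standing in for the standard-illumination PSF
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(x, y)
psf_std = np.exp(-r**2 / (2 * 2.0**2)); psf_std /= psf_std.sum()
psf_ann = np.exp(-((r - 4.0)**2) / 2);  psf_ann /= psf_ann.sum()

H = psf_conversion_filter(psf_ann, psf_std)

# a point source blurred by the annular PSF, then re-shaped toward psf_std
delta = np.zeros((n, n)); delta[n // 2, n // 2] = 1.0
blurred = apply_filter(delta, np.fft.fft2(np.fft.ifftshift(psf_ann)))
restored = apply_filter(blurred, H)
```

The restored point response concentrates back into a compact central peak, illustrating how side-lobe energy is redistributed without the full amplification of an inverse filter.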

  3. Improving image quality in compressed ultrafast photography with a space- and intensity-constrained reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.

    2016-03-01

    The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.

  4. Improved forest change detection with terrain illumination corrected landsat images

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An illumination correction algorithm has been developed to improve the accuracy of forest change detection from Landsat reflectance data. This algorithm is based on an empirical rotation model and was tested on the Landsat imagery pair over Cherokee National Forest, Tennessee, Uinta-Wasatch-Cache N...

  5. Image analysis of neuropsychological test responses

    NASA Astrophysics Data System (ADS)

    Smith, Stephen L.; Hiller, Darren L.

    1996-04-01

    This paper reports recent advances in the development of an automated approach to neuropsychological testing. High performance image analysis algorithms have been developed as part of a convenient and non-invasive computer-based system to provide an objective assessment of patient responses to figure-copying tests. Tests of this type are important in determining the neurological function of patients following stroke through evaluation of their visuo-spatial performance. Many conventional neuropsychological tests suffer from the serious drawback that subjective judgement on the part of the tester is required in the measurement of the patient's response which leads to a qualitative neuropsychological assessment that can be both inconsistent and inaccurate. Results for this automated approach are presented for three clinical populations: patients suffering right hemisphere stroke are compared with adults with no known neurological disorder and a population comprising normal school children of 11 years is presented to demonstrate the sensitivity of the technique. As well as providing a more reliable and consistent diagnosis this technique is sufficiently sensitive to monitor a patient's progress over a period of time and will provide the neuropsychologist with a practical means of evaluating the effectiveness of therapy or medication administered as part of a rehabilitation program.

  6. SINFAC - SYSTEMS IMPROVED NUMERICAL FLUIDS ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Costello, F. A.

    1994-01-01

    The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 revision of SINDA, a general thermal analyzer program. The purpose of the additional routines is to allow for the modeling of active heat transfer loops. The modeler can simulate the steady-state and pseudo-transient operations of 16 different heat transfer loop components including radiators, evaporators, condensers, mechanical pumps, reservoirs and many types of valves and fittings. In addition, the program contains a property analysis routine that can be used to compute the thermodynamic properties of 20 different refrigerants. SINFAC can simulate the response to transient boundary conditions. SINFAC was first developed as a method for computing the steady-state performance of two-phase systems. It was then modified using CNFRWD, SINDA's explicit time-integration scheme, to accommodate transient thermal models. However, SINFAC cannot simulate pressure drops due to time-dependent fluid acceleration, transient boil-out, or transient fill-up, except in the accumulator. SINFAC also requires the user to be familiar with SINDA. The solution procedure used by SINFAC is similar to that which an engineer would use to solve a system manually. The solution to a system requires the determination of all of the outlet conditions of each component, such as the flow rate, pressure, and enthalpy. To obtain these values, the user first estimates the inlet conditions to the first component of the system, then computes the outlet conditions from the data supplied by the manufacturer of the first component. The user then estimates the temperature at the outlet of the second component and computes the corresponding flow resistance of the second component. With the flow resistance of the second component, the user computes the conditions downstream, namely the inlet conditions of the third component. The computations proceed in this manner for the rest of the system, back to the first component.

  7. Roles of universal three-dimensional image analysis devices that assist surgical operations.

    PubMed

    Sakamoto, Tsuyoshi

    2014-04-01

    The circumstances surrounding medical image analysis have undergone rapid evolution. In this situation, it can be said that the "imaging" obtained through medical imaging modalities and the "analysis" that we employ have become amalgamated. Recently, the distance between "imaging" and "analysis" has become closer in the image analysis of any organ system, as if the two have become integrated. The history of medical image analysis started with the appearance of the computer. The invention of multi-planar reconstruction (MPR), used with the helical scan, had a significant impact and became the basis for recent image analysis. Subsequently, curved MPR (CPR) and other methods were developed, and 3D diagnostic imaging and image analysis of the human body started on a full scale. Volume rendering: the development of a new rendering algorithm and significant improvements in memory and CPUs contributed to the development of "volume rendering," which allows 3D views with retained internal information. A new value was created by this development: computed tomography (CT) images that had previously been used only for "diagnosis" became "applicable to treatment." In the past, before the development of volume rendering, a clinician had to mentally reconstruct an image reconfigured for diagnosis into a 3D image, but these developments have allowed the depiction of a 3D image on a monitor. Current technology: currently, in Japan, estimation of the liver volume and the perfusion areas of the portal vein and hepatic vein is vigorously being adopted during preoperative planning for hepatectomy. This circumstance seems to have been brought about by the substantial improvement of these basic techniques and by upgrading of the user interface, allowing doctors easy manipulation by themselves. The following describes the specific techniques. Future of post-processing technology: it is expected, in terms of the role of image analysis, for better or worse

  8. Recent Advances in Image Assisted Neurosurgical Procedures: Improved Navigational Accuracy and Patient Safety

    ScienceCinema

    Olivi, Alessandro, M.D.

    2016-07-12

    Neurosurgical procedures require precise planning and intraoperative support. Recent advances in image guided technology have provided neurosurgeons with improved navigational support for more effective and safer procedures. A number of exemplary cases will be presented.

  9. Recent Advances in Image Assisted Neurosurgical Procedures: Improved Navigational Accuracy and Patient Safety

    SciTech Connect

    Olivi, Alessandro, M.D.

    2010-08-28

    Neurosurgical procedures require precise planning and intraoperative support. Recent advances in image guided technology have provided neurosurgeons with improved navigational support for more effective and safer procedures. A number of exemplary cases will be presented.

  10. Improved Speed and Functionality of a 580-GHz Imaging Radar

    NASA Technical Reports Server (NTRS)

    Dengler, Robert; Cooper, Ken; Chattopadhyay, Goutam; Siegel, Peter; Schlecht, Erich; Mehdi, Imran; Skalare, Anders; Gill, John

    2010-01-01

    With this high-resolution imaging radar system, coherent illumination in the 576-to-589-GHz range and phase-sensitive detection are implemented in an all-solid-state design based on Schottky diode sensors and sources. By employing the frequency-modulated, continuous-wave (FMCW) radar technique, centimeter-scale range resolution has been achieved while using fractional bandwidths of less than 3 percent. The high operating frequencies also permit centimeter-scale cross-range resolution at several-meter standoff distances without large apertures. Scanning of a single-pixel transceiver enables targets to be rapidly mapped in three dimensions, so that the technology can be applied to the detection of concealed objects on persons.
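The centimeter-scale range resolution follows directly from the FMCW bandwidth via dR = c/(2B); a quick check against the figures quoted above:

```python
# FMCW radar: range resolution is set by the swept bandwidth, dR = c / (2B)
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    return C / (2.0 * bandwidth_hz)

# the 576-to-589-GHz sweep gives 13 GHz of bandwidth
B = 589e9 - 576e9
dr_cm = range_resolution_m(B) * 100.0   # ~1.15 cm: centimeter-scale
frac_bw = B / 582.5e9                   # ~2.2%: under 3 percent fractional bandwidth
```

This is why operating at sub-terahertz frequencies is attractive: a bandwidth that is tiny in fractional terms still yields centimeter range resolution.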

  11. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS): Sensor improvements for 1994 and 1995

    NASA Technical Reports Server (NTRS)

    Sarture, C. M.; Chrien, T. G.; Green, R. O.; Eastwood, M. L.; Raney, J. J.; Hernandez, M. A.

    1995-01-01

    AVIRIS is a NASA-sponsored Earth remote sensing imaging spectrometer designed, built and operated by the Jet Propulsion Laboratory (JPL). While AVIRIS has been operational since 1989, major improvements have been completed in most of the sensor subsystems during the winter maintenance cycles. As a consequence of these efforts, the capability of AVIRIS to reliably acquire and deliver consistently high-quality, calibrated imaging spectrometer data continues to improve annually, and is now significantly better than in 1989. Improvements to AVIRIS prior to 1994 have been described previously. This paper details recent and planned improvements to AVIRIS in the sensor task.

  12. Improved classification and visualization of healthy and pathological hard dental tissues by modeling specular reflections in NIR hyperspectral images

    NASA Astrophysics Data System (ADS)

    Usenik, Peter; Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Despite major improvements in dental healthcare and technology, dental caries remains one of the most prevalent chronic diseases of modern society. The initial stages of dental caries are characterized by demineralization of enamel crystals, commonly known as white spots, which are difficult to diagnose. Near-infrared (NIR) hyperspectral imaging is a new promising technique for early detection of demineralization which can classify healthy and pathological dental tissues. However, due to non-ideal illumination of the tooth surface the hyperspectral images can exhibit specular reflections, in particular around the edges and the ridges of the teeth. These reflections significantly affect the performance of automated classification and visualization methods. A cross-polarized imaging setup can effectively remove the specular reflections; however, due to its complexity and other imaging setup limitations, it cannot always be used. In this paper, we propose an alternative approach based on modeling the specular reflections of hard dental tissues, which significantly improves the classification accuracy in the presence of specular reflections. The method was evaluated on five extracted human teeth with a corresponding gold standard for six different healthy and pathological hard dental tissues: enamel, dentin, calculus, dentin caries, enamel caries and demineralized regions. Principal component analysis (PCA) was used for multivariate local modeling of healthy and pathological dental tissues. The classification was performed by employing multiple discriminant analysis. Based on the obtained results, we believe the proposed method can be considered an effective alternative to complex cross-polarized imaging setups.

  13. Hyperspectral image analysis for CARS, SRS, and Raman data

    PubMed Central

    Karuna, Arnica; Borri, Paola; Langbein, Wolfgang

    2015-01-01

    In this work, we have significantly enhanced the capabilities of the hyperspectral image analysis (HIA) first developed by Masia et al. [1] The HIA introduced a method to factorize hyperspectral data into the product of component concentrations and spectra for quantitative analysis of the chemical composition of the sample. The enhancements shown here comprise (1) a spatial weighting to reduce the spatial variation of the spectral error, which improves the retrieval of chemical components with significant local but small global concentrations; (2) a new selection criterion for the spectra used when applying sparse sampling [2] to speed up sequential hyperspectral imaging; and (3) a filter for outliers in the data using singular value decomposition, suited, e.g., to suppressing motion artifacts. We demonstrate the enhancements on coherent anti-Stokes Raman scattering, stimulated Raman scattering, and spontaneous Raman data. We provide the HIA software as an executable for public use. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27478301
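The SVD outlier filter (enhancement 3) can be sketched as a low-rank projection: spectra are stacked as rows, and components beyond the chemical rank, where isolated spikes and motion artifacts tend to concentrate, are discarded. The synthetic two-component cube below is illustrative, not the paper's data or its exact algorithm:

```python
import numpy as np

def svd_filter(cube, rank):
    """Project a hyperspectral cube (ny, nx, nbands) onto its top singular
    components; artifacts outside the low-rank spectral subspace are suppressed."""
    ny, nx, nb = cube.shape
    X = cube.reshape(-1, nb)                   # one spectrum per row
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return X_lr.reshape(ny, nx, nb)

# synthetic cube: mixtures of two spectra, plus one corrupted pixel
rng = np.random.default_rng(1)
spectra = rng.random((2, 50))                  # two hypothetical component spectra
conc = rng.random((20, 20, 2))                 # per-pixel concentrations
clean = conc @ spectra
cube = clean.copy()
cube[5, 5] += 2.0 * rng.random(50)             # spike, e.g. a motion artifact

filtered = svd_filter(cube, rank=2)
```

Because the clean data span only two spectral components, a rank-2 projection leaves ordinary pixels nearly untouched while attenuating the spike at the corrupted pixel.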

  14. Monitoring of historical frescoes by timed infrared imaging analysis

    NASA Astrophysics Data System (ADS)

    Cadelano, G.; Bison, P.; Bortolin, A.; Ferrarini, G.; Peron, F.; Girotto, M.; Volinia, M.

    2015-03-01

    The subflorescence and efflorescence phenomena are widely acknowledged as major causes of permanent damage to fresco wall paintings. They are related to the occurrence of cycles of dry/wet conditions inside the walls. It is therefore essential to identify the presence of water on the decorated surfaces and inside the walls. Nondestructive testing in industrial applications has confirmed that active infrared thermography with continuous timed image acquisition can improve the outcomes of thermal analysis aimed at moisture identification. In spite of that, these techniques have not yet been used extensively on a regular basis in cultural heritage investigations. This paper illustrates an application of these principles to evaluate the decay of fresco mural paintings in a medieval chapel located in north-west Italy. One important feature of this study is the use of a robotic system called aIRview that can automatically acquire and process thermal images. Multiple accurate thermal views of the inside walls of the building were produced in a survey that lasted several days. Signal processing algorithms based on Fast Fourier Transform analysis were applied to the acquired data in order to formulate trustworthy hypotheses about the deterioration mechanisms.
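Per-pixel Fast Fourier Transform analysis of a timed thermal image stack can be sketched as follows; the synthetic daily temperature cycle and the one-frame-per-hour rate are illustrative assumptions, not details of the aIRview survey:

```python
import numpy as np

def dominant_cycle_map(frames, dt_hours):
    """Per-pixel FFT over a timed stack of thermal images (t, ny, nx); returns
    the period in hours of the strongest non-DC component at each pixel,
    e.g. to spot pixels driven by daily wet/dry cycles."""
    n = frames.shape[0]
    spec = np.abs(np.fft.rfft(frames, axis=0))
    spec[0] = 0                              # drop the DC (mean temperature) term
    k = spec.argmax(axis=0)                  # dominant frequency bin per pixel
    freqs = np.fft.rfftfreq(n, d=dt_hours)   # cycles per hour
    return 1.0 / freqs[k]

# synthetic 4-day survey: one 8x8 frame per hour, 24-hour thermal cycle
t = np.arange(96)
frames = 20 + 2 * np.sin(2 * np.pi * t / 24.0)[:, None, None] * np.ones((96, 8, 8))
periods = dominant_cycle_map(frames, dt_hours=1.0)
```

Pixels whose dominant period deviates from the expected daily cycle, or whose cycle amplitude is anomalous, are candidates for moisture-related deterioration.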

  15. Image analysis for estimating the weight of live animals

    NASA Astrophysics Data System (ADS)

    Schofield, C. P.; Marchant, John A.

    1991-02-01

    Many components of animal production have been automated, for example weighing, feeding, identification and yield recording for cattle, pigs, poultry and fish. However, some of these tasks still require a considerable degree of human input, and more effective automation could lead to better husbandry. For example, if the weight of pigs could be monitored more often without increasing labour input, then this information could be used to measure growth rates and control fat levels, allowing accurate prediction of market dates and optimum carcass quality to be achieved with improved welfare at minimum cost. Some aspects of animal production have defied automation; for example, attending to the well-being of housed animals is the preserve of the expert stockman. He gathers visual data about the animals in his charge (in plainer words, goes and looks at their condition and behaviour) and processes this data to draw conclusions and take actions. Automatically collecting data on well-being implies that the animals are not disturbed from their normal environment, otherwise false conclusions will be drawn. Computer image analysis could provide the data required without the need to disturb the animals. This paper describes new work at the Institute of Engineering Research which uses image analysis to estimate the weight of pigs as a starting point for the wider range of applications which have been identified. In particular a technique has been developed to

  16. Creating the “desired mindset”: Philip Morris’s efforts to improve its corporate image among women

    PubMed Central

    McDaniel, Patricia A.; Malone, Ruth E.

    2009-01-01

    Through analysis of tobacco company documents, we explored how and why Philip Morris sought to enhance its corporate image among American women. Philip Morris regarded women as an influential group. To improve its image among women, while keeping tobacco off their organizational agendas, the company sponsored women’s groups and programs. It also sought to appeal to women it defined as “active moms” by advertising its commitment to domestic violence victims. It was more successful in securing women’s organizations as allies than in reaching active moms. Increasing tobacco’s visibility as a global women’s health issue may require addressing industry influence. PMID:19851947

  17. Improving the scalability of hyperspectral imaging applications on heterogeneous platforms using adaptive run-time data compression

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Plaza, Javier; Paz, Abel

    2010-10-01

    Latest generation remote sensing instruments (called hyperspectral imagers) are now able to generate hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. In previous work, we have reported that the scalability of parallel processing algorithms dealing with these high-dimensional data volumes is affected by the amount of data to be exchanged through the communication network of the system. However, large messages are common in hyperspectral imaging applications since processing algorithms are pixel-based, and each pixel vector to be exchanged through the communication network is made up of hundreds of spectral values. Thus, decreasing the amount of data to be exchanged could improve the scalability and parallel performance. In this paper, we propose a new framework based on intelligent utilization of wavelet-based data compression techniques for improving the scalability of a standard hyperspectral image processing chain on heterogeneous networks of workstations. This type of parallel platform is quickly becoming a standard in hyperspectral image processing due to the distributed nature of collected hyperspectral data as well as its flexibility and low cost. Our experimental results indicate that adaptive lossy compression can lead to improvements in the scalability of the hyperspectral processing chain without sacrificing analysis accuracy, even at sub-pixel precision levels.
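The compression idea above — shrinking each spectral pixel vector before it crosses the network — can be illustrated with a one-level Haar wavelet transform that keeps only the largest-magnitude coefficients. This is a simplified stand-in for the paper's adaptive wavelet scheme; the function names and the fixed keep fraction are assumptions.

```python
import numpy as np

def haar_compress(vec, keep_fraction=0.25):
    """One-level Haar transform of an even-length spectral pixel vector,
    retaining only the largest-magnitude coefficients (lossy)."""
    v = np.asarray(vec, dtype=float)
    avg = (v[0::2] + v[1::2]) / np.sqrt(2)   # approximation coefficients
    dif = (v[0::2] - v[1::2]) / np.sqrt(2)   # detail coefficients
    coeffs = np.concatenate([avg, dif])
    k = max(1, int(keep_fraction * coeffs.size))
    idx = np.argsort(np.abs(coeffs))[-k:]    # indices of retained coefficients
    return idx, coeffs[idx], v.size

def haar_decompress(idx, vals, n):
    """Inverse of haar_compress: scatter retained coefficients, invert."""
    coeffs = np.zeros(n)
    coeffs[idx] = vals
    avg, dif = coeffs[:n // 2], coeffs[n // 2:]
    v = np.empty(n)
    v[0::2] = (avg + dif) / np.sqrt(2)
    v[1::2] = (avg - dif) / np.sqrt(2)
    return v
```

Only the index/value pairs need to be sent over the network; the receiving node reconstructs an approximate pixel vector, trading a controlled loss for smaller messages.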

  18. Improvement of Shear Wave Motion Detection Using Harmonic Imaging in Healthy Human Liver.

    PubMed

    Amador, Carolina; Song, Pengfei; Meixner, Duane D; Chen, Shigao; Urban, Matthew W

    2016-05-01

    Quantification of liver elasticity is a major application of shear wave elasticity imaging (SWEI) for non-invasive assessment of liver fibrosis stages. SWEI measurements can be highly affected by ultrasound image quality. Ultrasound harmonic imaging has yielded significant improvements in ultrasound image quality as well as in SWEI measurements, as previously illustrated in cardiac SWEI. The purpose of this study was to evaluate liver shear wave particle displacement detection and shear wave velocity (SWV) measurements with fundamental and filter-based harmonic ultrasound imaging. In a cohort of 17 patients with no history of liver disease, a 2.9-fold increase in maximum shear wave displacement, a 0.11 m/s decrease in the overall interquartile range and median SWV, and a 17.6% increase in the success rate of SWV measurements were obtained when filter-based harmonic imaging was used instead of fundamental imaging.
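The two summary statistics compared in the study — the median and interquartile range of repeated SWV measurements — can be computed as follows. The function name and input layout are illustrative assumptions, not the study's analysis code.

```python
import numpy as np

def swv_summary(swv_samples):
    """Median and interquartile range (IQR) of repeated shear wave
    velocity (SWV) measurements, in m/s."""
    v = np.asarray(swv_samples, dtype=float)
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    return med, q3 - q1
```

A lower IQR across repeated acquisitions indicates more consistent SWV estimates, which is the sense in which harmonic imaging improved the measurements here.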

  20. Improving Public Perception of Behavior Analysis.

    PubMed

    Freedman, David H

    2016-05-01

    The potential impact of behavior analysis is limited by the public's dim awareness of the field. The mass media rarely cover behavior analysis, other than to echo inaccurate negative stereotypes about control and punishment. The media instead play up appealing but less-evidence-based approaches to problems, a key example being the touting of dubious diets over behavioral approaches to losing excess weight. These sorts of claims distort or skirt scientific evidence, undercutting the fidelity of behavior analysis to scientific rigor. Strategies for better connecting behavior analysis with the public might include reframing the field's techniques and principles in friendlier, more resonant form; pushing direct outcome comparisons between behavior analysis and its rivals in simple terms; and playing up the "warm and fuzzy" side of behavior analysis. PMID:27606184