Science.gov

Sample records for improving image analysis

  1. [Decomposition of Interference Hyperspectral Images Using Improved Morphological Component Analysis].

    PubMed

    Wen, Jia; Zhao, Jun-suo; Wang, Cai-ling; Xia, Yu-li

    2016-01-01

Because of the special imaging principle of interference hyperspectral image data, there are many vertical interference stripes in every frame. The stripes' positions are fixed and their pixel values are very high, and horizontal displacements also exist in the background between frames. These special characteristics destroy the regular structure of the original interference hyperspectral image data, so the direct application of compressive sensing theory and traditional compression algorithms cannot achieve the desired effect. Since the interference stripe signals and the background signals have different characteristics, the orthogonal bases that sparsely represent them also differ. Following this idea, morphological component analysis (MCA) is adopted in this paper to separate the interference stripe signals from the background signals. Because the huge volume of interference hyperspectral image data leads to slow iterative convergence and low computational efficiency in the traditional MCA algorithm, an improved MCA algorithm is also proposed that exploits the characteristics of the data. The iterative convergence condition is improved: the iteration terminates when the error between the separated image signals and the original image signals is almost unchanged. In addition, based on the idea that an orthogonal basis sparsely represents its corresponding signal but not other signals, an adaptive threshold-update scheme is proposed to accelerate the traditional MCA algorithm: the projection coefficients of the image signals onto the different orthogonal bases are calculated and compared to obtain the minimum and maximum threshold values, and their average is chosen as the optimal threshold for the adaptive update. The experimental results prove that...
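The two modifications described in the abstract, the residual-based stopping rule and the min/max-midpoint adaptive threshold, can be sketched on a 1-D toy signal. This is an illustrative reconstruction, not the authors' code: the identity basis stands in for the spike-like stripes, an orthonormal DCT basis stands in for the smooth background, and the 0.1 scale applied to the threshold midpoint is an assumed tuning constant.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; rows are basis vectors."""
    k = np.arange(n)
    B = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    B[0, :] *= np.sqrt(1.0 / n)
    B[1:, :] *= np.sqrt(2.0 / n)
    return B

def soft(x, t):
    """Soft-thresholding (sparsity-promoting shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mca_separate(y, n_iter=100, tol=1e-6):
    """MCA with the abstract's two improvements: an adaptive threshold
    (average of the min and max peak projection onto the two bases) and
    early stopping once the residual error stops changing."""
    n = len(y)
    D = dct_matrix(n)
    spikes = np.zeros(n)
    smooth = np.zeros(n)
    prev_err = np.inf
    for _ in range(n_iter):
        # Adaptive update: midpoint of smallest/largest peak projection.
        peaks = [np.abs(y - smooth).max(), np.abs(D @ (y - spikes)).max()]
        t = 0.5 * (min(peaks) + max(peaks)) * 0.1  # 0.1 is an assumed scale
        spikes = soft(y - smooth, t)
        smooth = D.T @ soft(D @ (y - spikes), t)
        err = np.linalg.norm(y - spikes - smooth)
        if abs(prev_err - err) < tol:  # improved convergence condition
            break
        prev_err = err
    return spikes, smooth
```

On a test signal made of a cosine background plus two large spikes, the spike component concentrates at the spike positions while the DCT component absorbs the background.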

  2. Core analysis and CT imaging improve shale completions

    SciTech Connect

Blauch, M.E.; Venditto, J.J.; Rothman, E.; Hyde, P.

    1992-11-16

    To improve hydraulic fracturing efficiency in Devonian shales, core analysis and computerized tomography (CT) can provide data for orienting perforations, determining fracture direction, and selecting deviated well trajectories. This article reports on technology tested in a West Virginia well for improving the economics of developing Devonian shale and other low permeability gas reservoirs. With slight production increase per well, Columbia Natural Resources Inc. (CNR) has determined that marginal gas well payout time can be shortened enough to encourage additional drilling. For eight wells completed by CNR in 1992, the absolute open flow (AOF) averaged 116 Mcfd before stimulation. After stimulation using long-standing fracture stimulation procedures, the AOF averaged 500 Mcfd.

  3. Textural Analysis of Hyperspectral Images for Improving Contaminant Detection Accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Previous studies demonstrated a hyperspectral imaging system has a potential for poultry fecal contaminant detection by measuring reflectance intensity. The simple image ratio at 565 and 517-nm images with optimal thresholding was able to detect fecal contaminants on broiler carcasses with high acc...

  4. Textural Analysis of Hyperspectral Images for Improving Detection Accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Detection of fecal contamination is crucial for food safety to protect consumers from food pathogens. Previous studies demonstrated a hyperspectral imaging system has a potential for poultry fecal contaminant detection by measuring reflectance intensity. The simple image ratio with optimal thresho...

  5. Improving Resolution and Depth of Astronomical Observations via Modern Mathematical Methods for Image Analysis

    NASA Astrophysics Data System (ADS)

    Castellano, M.; Ottaviani, D.; Fontana, A.; Merlin, E.; Pilo, S.; Falcone, M.

    2015-09-01

In the past years, modern mathematical methods for image analysis have led to a revolution in many fields, from computer vision to scientific imaging. However, some recently developed image processing techniques successfully exploited in other sectors have rarely, if ever, been tried on astronomical observations. We present here tests of two classes of variational image enhancement techniques, "structure-texture decomposition" and "super-resolution", showing that they are effective in improving the quality of observations. Structure-texture decomposition makes it possible to recover faint sources previously hidden by the background noise, effectively increasing the depth of available observations. Super-resolution yields a higher-resolution, better-sampled image from a set of low-resolution frames, thus mitigating the problems in data analysis that arise from differences in resolution/sampling between instruments, as in the case of the EUCLID VIS and NIR imagers.

  6. The influence of bonding agents in improving interactions in composite propellants determined using image analysis.

    PubMed

    Dostanić, J; Husović, T V; Usćumlić, G; Heinemann, R J; Mijin, D

    2008-12-01

    Binder-oxidizer interactions in rocket composite propellants can be improved using adequate bonding agents. In the present work, the effectiveness of different 1,3,5-trisubstituted isocyanurates was determined by stereo and metallographic microscopy and using the software package Image-Pro Plus. The chemical analysis of samples was performed by a scanning electron microscope equipped for energy dispersive spectrometry. PMID:19094035

  7. Improvement and error analysis of quantitative information extraction in diffraction-enhanced imaging

    NASA Astrophysics Data System (ADS)

    Yang, Hao; Xuan, Rui-Jiao; Hu, Chun-Hong; Duan, Jing-Hao

    2014-04-01

Diffraction-enhanced imaging (DEI) is a powerful phase-sensitive technique that provides higher spatial resolution and superior contrast of weakly absorbing objects compared with conventional radiography. It derives contrast from the X-ray absorption, refraction, and ultra-small-angle X-ray scattering (USAXS) properties of an object. The separation of the different contrast contributions from images is an important issue for the potential application of DEI. In this paper, an improved DEI (IDEI) method is proposed based on Gaussian curve fitting of the rocking curve (RC). Using only three input images, the IDEI method can accurately separate the absorption, refraction, and USAXS contrasts produced by the object. The IDEI method can therefore be viewed as an improvement on the extended DEI (EDEI) method; it circumvents the limitations of the EDEI method because it does not impose a Taylor approximation on the RC. Additionally, an analysis of the IDEI model errors is performed to further investigate the factors that lead to image artifacts, and finally validation studies are conducted using computer simulations and synchrotron experimental data.
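The principle behind fitting a Gaussian RC from only three input images can be illustrated with a short sketch (a schematic of the idea, not the authors' equations): in log space a Gaussian is a quadratic, so three intensity samples at known analyzer angles determine the amplitude, center shift, and width, which correspond to the absorption, refraction, and USAXS contrasts, respectively.

```python
import numpy as np

def gaussian_rc_params(thetas, intensities):
    """Fit A * exp(-(theta - theta0)**2 / (2 * sigma**2)) through three
    samples. In log space this is a quadratic a*theta^2 + b*theta + c,
    so three points fix it exactly via a 3x3 linear solve."""
    a, b, c = np.linalg.solve(
        np.vander(np.asarray(thetas, float), 3),  # columns: th^2, th, 1
        np.log(intensities))
    sigma2 = -1.0 / (2.0 * a)                 # curvature -> width (USAXS)
    theta0 = b * sigma2                       # linear term -> shift (refraction)
    A = np.exp(c + theta0**2 / (2.0 * sigma2))  # amplitude (absorption)
    return A, theta0, np.sqrt(sigma2)
```

Sampling a known Gaussian at three angles recovers its parameters to machine precision.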

  8. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

The focus is on examining how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is exploited by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely to physical surfaces in the scene. The technique is largely independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented for a complex urban scene containing various man-made and natural features. This scene poses a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.

  9. Analysis of Scattering Components from Fully Polarimetric SAR Images for Improving Accuracies of Urban Density Estimation

    NASA Astrophysics Data System (ADS)

    Susaki, J.

    2016-06-01

    In this paper, we analyze probability density functions (PDFs) of scatterings derived from fully polarimetric synthetic aperture radar (SAR) images for improving the accuracies of estimated urban density. We have reported a method for estimating urban density that uses an index Tv+c obtained by normalizing the sum of volume and helix scatterings Pv+c. Validation results showed that estimated urban densities have a high correlation with building-to-land ratios (Kajimoto and Susaki, 2013b; Susaki et al., 2014). While the method is found to be effective for estimating urban density, it is not clear why Tv+c is more effective than indices derived from other scatterings, such as surface or double-bounce scatterings, observed in urban areas. In this research, we focus on PDFs of scatterings derived from fully polarimetric SAR images in terms of scattering normalization. First, we introduce a theoretical PDF that assumes that image pixels have scatterers showing random backscattering. We then generate PDFs of scatterings derived from observations of concrete blocks with different orientation angles, and from a satellite-based fully polarimetric SAR image. The analysis of the PDFs and the derived statistics reveals that the curves of the PDFs of Pv+c are the most similar to the normal distribution among all the scatterings derived from fully polarimetric SAR images. It was found that Tv+c works most effectively because of its similarity to the normal distribution.

  10. An improved panoramic digital image correlation method for vascular strain analysis and material characterization.

    PubMed

    Genovese, K; Lee, Y-U; Lee, A Y; Humphrey, J D

    2013-11-01

    The full potential of computational models of arterial wall mechanics has yet to be realized primarily because of a lack of data sufficient to quantify regional mechanical properties, especially in genetic, pharmacological, and surgical mouse models that can provide significant new information on the time course of adaptive or maladaptive changes as well as disease progression. The goal of this work is twofold: first, to present modifications to a recently developed panoramic-digital image correlation (p-DIC) system that significantly increase the rate of data acquisition, overall accuracy in specimen reconstruction, and thus full-field strain analysis, and the axial measurement domain for in vitro mechanical tests on excised mouse arteries and, second, to present a new method of data analysis that similarly increases the accuracy in image reconstruction while reducing the associated computational time. The utility of these advances is illustrated by presenting the first full-field strain measurements at multiple distending pressures and axial elongations for a suprarenal mouse aorta before and after exposure to elastase. Such data promise to enable improved inverse characterization of regional material properties using established computational methods. PMID:23290821

  11. The Comet Assay: Automated Imaging Methods for Improved Analysis and Reproducibility.

    PubMed

    Braafladt, Signe; Reipa, Vytas; Atha, Donald H

    2016-01-01

    Sources of variability in the comet assay include variations in the protocol used to process the cells, the microscope imaging system and the software used in the computerized analysis of the images. Here we focus on the effect of variations in the microscope imaging system and software analysis using fixed preparations of cells and a single cell processing protocol. To determine the effect of the microscope imaging and analysis on the measured percentage of damaged DNA (% DNA in tail), we used preparations of mammalian cells treated with etoposide or electrochemically induced DNA damage conditions and varied the settings of the automated microscope, camera, and commercial image analysis software. Manual image analysis revealed measurement variations in percent DNA in tail as high as 40% due to microscope focus, camera exposure time and the software image intensity threshold level. Automated image analysis reduced these variations as much as three-fold, but only within a narrow range of focus and exposure settings. The magnitude of variation, observed using both analysis methods, was highly dependent on the overall extent of DNA damage in the particular sample. Mitigating these sources of variability with optimal instrument settings facilitates an accurate evaluation of cell biological variability. PMID:27581626
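A toy calculation shows why the software's intensity threshold matters so much. The numbers below are hypothetical: a bright comet head and a dim tail are represented by flat pixel blocks, and raising the threshold discards the dim tail pixels first, driving the measured % DNA in tail down.

```python
import numpy as np

def percent_dna_in_tail(head, tail, threshold):
    """% DNA in tail = tail signal / (head + tail signal), counting only
    pixels above the analysis software's intensity threshold."""
    head_sig = head[head > threshold].sum()
    tail_sig = tail[tail > threshold].sum()
    total = head_sig + tail_sig
    return 100.0 * tail_sig / total if total > 0 else 0.0

# Hypothetical comet: 50 bright head pixels, 100 dim tail pixels.
head = np.full(50, 200.0)
tail = np.full(100, 30.0)
```

With a low threshold the tail contributes fully; a threshold above the tail intensity removes it entirely, so the same image yields very different damage readings.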

  12. The Comet Assay: Automated Imaging Methods for Improved Analysis and Reproducibility

    PubMed Central

    Braafladt, Signe; Reipa, Vytas; Atha, Donald H.

    2016-01-01

    Sources of variability in the comet assay include variations in the protocol used to process the cells, the microscope imaging system and the software used in the computerized analysis of the images. Here we focus on the effect of variations in the microscope imaging system and software analysis using fixed preparations of cells and a single cell processing protocol. To determine the effect of the microscope imaging and analysis on the measured percentage of damaged DNA (% DNA in tail), we used preparations of mammalian cells treated with etoposide or electrochemically induced DNA damage conditions and varied the settings of the automated microscope, camera, and commercial image analysis software. Manual image analysis revealed measurement variations in percent DNA in tail as high as 40% due to microscope focus, camera exposure time and the software image intensity threshold level. Automated image analysis reduced these variations as much as three-fold, but only within a narrow range of focus and exposure settings. The magnitude of variation, observed using both analysis methods, was highly dependent on the overall extent of DNA damage in the particular sample. Mitigating these sources of variability with optimal instrument settings facilitates an accurate evaluation of cell biological variability. PMID:27581626

  13. Image quality analysis and improvement of Ladar reflective tomography for space object recognition

    NASA Astrophysics Data System (ADS)

    Wang, Jin-cheng; Zhou, Shi-wei; Shi, Liang; Hu, Yi-Hua; Wang, Yong

    2016-01-01

Some problems in the application of Ladar reflective tomography for space object recognition are studied in this work. An analytic target model is adopted to investigate the image reconstruction properties under a limited relative-angle range, which are useful for verifying the target shape from an incomplete image, analyzing the shadowing effect of the target, and designing satellite payloads to resist recognition via the reflective tomography approach. We propose an iterative maximum-likelihood method based on Bayesian theory, which can effectively compress the pulse width and greatly improve the image resolution of an incoherent LRT system without loss of signal-to-noise ratio.

  14. Basics of image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral imaging technology has emerged as a powerful tool for quality and safety inspection of food and agricultural products and in precision agriculture over the past decade. Image analysis is a critical step in implementing hyperspectral imaging technology; it is aimed to improve the qualit...

  15. Theoretical Analysis of the Sensitivity and Speed Improvement of ISIS over a Comparable Traditional Hyperspectral Imager

    SciTech Connect

    Brian R. Stallard; Stephen M. Gentry

    1998-09-01

The analysis presented herein predicts that, under signal-independent noise-limited conditions, an Information-efficient Spectral Imaging Sensor (ISIS) style hyperspectral imaging system design can obtain significant signal-to-noise ratio (SNR) and speed increases relative to a comparable traditional hyperspectral imaging (HSI) instrument. Factors of forty are reasonable for a single vector, and factors of eight are reasonable for a five-vector measurement. These advantages can be traded against other system parameters in an overall sensor design, enabling a variety of applications that would otherwise be impossible within the constraints of the traditional HSI design.

  16. Non-rigid registration and non-local principle component analysis to improve electron microscopy spectrum images

    NASA Astrophysics Data System (ADS)

    Yankovich, Andrew B.; Zhang, Chenyu; Oh, Albert; Slater, Thomas J. A.; Azough, Feridoon; Freer, Robert; Haigh, Sarah J.; Willett, Rebecca; Voyles, Paul M.

    2016-09-01

Image registration and non-local Poisson principal component analysis (PCA) denoising improve the quality of characteristic X-ray (EDS) spectrum imaging of Ca-stabilized Nd2/3TiO3 acquired at atomic resolution in a scanning transmission electron microscope. Image registration based on the simultaneously acquired high-angle annular dark field image significantly outperforms acquisition with a long pixel dwell time or drift correction using a reference image. Non-local Poisson PCA denoising reduces noise more strongly than conventional weighted PCA while preserving atomic structure more faithfully. The reliability of, and optimal internal parameters for, non-local Poisson PCA denoising of EDS spectrum images are assessed using tests on phantom data.

  17. The utility of texture analysis to improve per-pixel classification for CBERS02's CCD image

    NASA Astrophysics Data System (ADS)

    Peng, Guangxiong; He, Yuhua; Li, Jing; Chen, Yunhao; Hu, Deyong

    2006-10-01

The maximum likelihood classification (MLC) is one of the most popular methods in remote sensing image classification. Because MLC is based only on the spectra of objects, it cannot correctly distinguish objects that share the same spectrum and therefore cannot meet accuracy requirements. In this paper, we take an area of Langfang in Hebei province, China, as an example and discuss a method that combines the texture of the panchromatic image with spectral information to improve the accuracy of information extraction from CBERS02 CCD imagery. First, the textures of the panchromatic image (CCD5) are analyzed using gray-level co-occurrence matrices (GLCM) and statistical indices. The optimal texture window size for angular second moment, contrast, entropy, and correlation is then obtained from the variation coefficient of each texture measure for each thematic class; the chosen window size is the one at which the variation coefficient starts to stabilize while having the smallest value. The output images generated by texture analysis are used as additional bands together with the multi-spectral bands (CCD1-4) in classification, so that objects with identical spectra can be distinguished. Finally, the accuracy is compared with that of classification based on spectrum only. The results indicate that objects with the same spectrum are distinguished by using texture analysis in image classification, and that the spectral/textural combination improves classification accuracy over spectrum alone.
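The window-selection idea, computing a texture measure over many sample windows of each size and choosing the size at which the variation coefficient (std/mean) is smallest and stable, can be sketched as follows. This is a schematic re-implementation, not the authors' code; the gray-level count, offset, and number of sample windows are assumed parameters.

```python
import numpy as np

def glcm(window, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset,
    for an image with values in [0, 1)."""
    q = np.minimum((window * levels).astype(int), levels - 1)  # quantize
    P = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[q[i, j], q[i + dy, j + dx]] += 1
    return P / P.sum()

def contrast(P):
    """GLCM contrast: sum of (i - j)^2 * P(i, j)."""
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()

def pick_window(image, sizes, samples=25, rng=None):
    """Choose the window size whose texture measure has the smallest
    coefficient of variation over randomly sampled windows."""
    rng = rng or np.random.default_rng(0)
    cvs = []
    for s in sizes:
        vals = []
        for _ in range(samples):
            r = rng.integers(0, image.shape[0] - s)
            c = rng.integers(0, image.shape[1] - s)
            vals.append(contrast(glcm(image[r:r + s, c:c + s])))
        vals = np.array(vals)
        cvs.append(vals.std() / vals.mean())
    return sizes[int(np.argmin(cvs))], cvs
```

On a pure-noise image the variation coefficient shrinks as the window grows, because larger windows average more pixel pairs.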

  18. Improving cervical region of interest by eliminating vaginal walls and cotton-swabs for automated image analysis

    NASA Astrophysics Data System (ADS)

    Venkataraman, Sankar; Li, Wenjing

    2008-03-01

Image analysis for automated diagnosis of cervical cancer has attained high prominence in the last decade. Automated image analysis at all levels requires a basic segmentation of the region of interest (ROI) within a given image. The precision of the diagnosis is often reflected by the precision in detecting the initial region of interest, especially when features outside the ROI mimic those within it. The work described here discusses algorithms used to improve the cervical region of interest as part of automated cervical image diagnosis. A vital visual aid in diagnosing cervical cancer is the aceto-whitening of the cervix after the application of acetic acid. Color and texture are used to segment acetowhite regions within the cervical ROI. Vaginal walls and cotton swabs sometimes mimic these essential features, leading to several false positives. The work presented here focuses on detecting in-focus vaginal wall boundaries and then extrapolating them to exclude vaginal walls from the cervical ROI. In addition, a marker-controlled watershed segmentation used to remove cotton swabs from the cervical ROI is discussed. A dataset comprising 50 high-resolution images of the cervix, acquired 60 seconds after acetic acid application, was used to test the algorithms. Of the 50 images, 27 benefited from a new cervical ROI. Significant improvement in overall diagnosis was observed in these images, as false positives caused by features outside the actual ROI mimicking the acetowhite region were eliminated.

  19. Improved Dynamic Analysis method for quantitative PIXE and SXRF element imaging of complex materials

    NASA Astrophysics Data System (ADS)

    Ryan, C. G.; Laird, J. S.; Fisher, L. A.; Kirkham, R.; Moorhead, G. F.

    2015-11-01

    The Dynamic Analysis (DA) method in the GeoPIXE software provides a rapid tool to project quantitative element images from PIXE and SXRF imaging event data both for off-line analysis and in real-time embedded in a data acquisition system. Initially, it assumes uniform sample composition, background shape and constant model X-ray relative intensities. A number of image correction methods can be applied in GeoPIXE to correct images to account for chemical concentration gradients, differential absorption effects, and to correct images for pileup effects. A new method, applied in a second pass, uses an end-member phase decomposition obtained from the first pass, and DA matrices determined for each end-member, to re-process the event data with each pixel treated as an admixture of end-member terms. This paper describes the new method and demonstrates through examples and Monte-Carlo simulations how it better tracks spatially complex composition and background shape while still benefitting from the speed of DA.

  20. Improved triangular prism methods for fractal analysis of remotely sensed images

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Fung, Tung; Leung, Yee

    2016-05-01

Feature extraction has been a major area of research in remote sensing, and the fractal feature is a natural characterization of complex objects across scales. Extending the modified triangular prism (MTP) method, we systematically discuss three factors closely related to the estimation of fractal dimensions of remotely sensed images, namely the (F1) number of steps, (F2) step size, and (F3) estimation accuracy of the facets' areas of the triangular prisms. Unlike existing improved algorithms that consider these factors separately, we take all factors into account simultaneously to construct three new algorithms, namely the modified eight-pixel algorithm, the four-corner MTP, and the moving-average MTP. Numerical experiments based on 4000 generated images show their superior performance over existing algorithms: our algorithms not only overcome the limitation on image size suffered by existing algorithms but also obtain similar average fractal dimensions with a smaller standard deviation, only 50% of that of existing algorithms for images with high fractal dimensions. In real-life application, our algorithms are more likely to obtain fractal dimensions within the theoretical range. Thus, the fractal nature uncovered by our algorithms is more reasonable in quantifying the complexity of remotely sensed images. Despite the similar performance of these three new algorithms, the moving-average MTP can mitigate the sensitivity of the MTP to noise and extreme values. Based on the numerical and real-life case studies, we examine the effect of the three factors (F1)-(F3) and demonstrate that they can be considered simultaneously to improve the performance of the MTP method.
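A minimal triangular-prism estimator illustrates the method the paper builds on: at each step size the image surface is tiled with prisms whose apex is the average of the four corner elevations, the total facet area A(s) is computed, and D = 2 - slope of log A(s) versus log s. This is a baseline MTP sketch, not the paper's improved algorithms; the choice of steps is an assumed parameter.

```python
import numpy as np

def tri_area(p, q, r):
    """Area of a 3-D triangle from its vertices."""
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def prism_surface(z, s):
    """Total triangular-prism facet area over the grid at step size s."""
    n = (z.shape[0] - 1) // s * s  # largest extent divisible by s
    total = 0.0
    for i in range(0, n, s):
        for j in range(0, n, s):
            a = np.array([i, j, z[i, j]], float)
            b = np.array([i, j + s, z[i, j + s]], float)
            c = np.array([i + s, j + s, z[i + s, j + s]], float)
            d = np.array([i + s, j, z[i + s, j]], float)
            e = np.array([i + s / 2.0, j + s / 2.0,
                          (a[2] + b[2] + c[2] + d[2]) / 4.0])  # apex = mean
            total += (tri_area(a, b, e) + tri_area(b, c, e)
                      + tri_area(c, d, e) + tri_area(d, a, e))
    return total

def fractal_dimension(z, steps=(1, 2, 4, 8)):
    """MTP estimate: A(s) ~ s^(2 - D), so D = 2 - slope(log A, log s)."""
    areas = [prism_surface(z, s) for s in steps]
    slope = np.polyfit(np.log(steps), np.log(areas), 1)[0]
    return 2.0 - slope
```

A perfectly flat surface gives D = 2, while rough white-noise elevations push the estimate toward 3.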

  1. Improved factor analysis of dynamic PET images to estimate arterial input function and tissue curves

    NASA Astrophysics Data System (ADS)

    Boutchko, Rostyslav; Mitra, Debasis; Pan, Hui; Jagust, William; Gullberg, Grant T.

    2015-03-01

Factor analysis of dynamic structures (FADS) is a methodology for extracting time-activity curves (TACs) corresponding to different tissue types from noisy dynamic images. The challenges of FADS include long computation times and sensitivity to the initial guess, which can result in convergence to local minima far from the true solution. We propose a method of accelerating and stabilizing the application of FADS to sequences of dynamic PET images by adding a preliminary cluster analysis of the time-activity curves of individual voxels. We treat the temporal variation of individual voxel concentrations as a set of time series and use a partial clustering analysis to identify the types of voxel TACs that are most functionally distinct from each other. These TACs provide a good initial guess of the temporal factors for subsequent FADS processing. Applying this approach to a set of single slices of dynamic 11C-PIB images of the brain allows identification of the arterial input function and two different tissue TACs that are likely to correspond to specific and non-specific tracer-binding tissue types. These results enable direct classification of tissues based on their pharmacokinetic properties in dynamic PET without relying on a compartment-based kinetic model, identifying a reference region, or using any external method of estimating the arterial input function, as some techniques require.
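The preliminary clustering step can be sketched with a plain k-means on voxel TACs (an illustration of the idea, not the authors' partial-clustering code; the farthest-point seeding and the synthetic TAC shapes below are assumptions):

```python
import numpy as np

def cluster_init_factors(tacs, k, n_iter=100):
    """Cluster per-voxel time-activity curves; the cluster means serve
    as initial temporal factors for subsequent FADS iterations.
    Seeds are chosen by farthest-point traversal so the initial factors
    are maximally distinct from each other."""
    centers = [tacs[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(tacs - c, axis=1) for c in centers],
                   axis=0)
        centers.append(tacs[int(d.argmax())])  # most distinct remaining TAC
    centers = np.array(centers)
    labels = np.zeros(len(tacs), dtype=int)
    for _ in range(n_iter):
        # Assign each voxel TAC to its nearest current factor estimate.
        d = np.linalg.norm(tacs[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([tacs[labels == j].mean(axis=0)
                        if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Hypothetical TAC shapes: early bolus (arterial input), rising uptake
# (specific binding), and a flat low curve (non-specific binding).
t = np.linspace(0.0, 1.0, 20)
aif = np.exp(-((t - 0.1) / 0.05) ** 2)
spec = 1.0 - np.exp(-5.0 * t)
nonspec = np.full_like(t, 0.2)
```

With three well-separated TAC families the clustering recovers one factor per family, which is exactly the initial guess FADS needs.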

  2. Analysis of explosion in enclosure based on improved method of images

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Guo, J.; Yao, X.; Chen, G.; Zhu, X.

    2016-05-01

The aim of this paper is to present an improved method for calculating the pressure loading on walls during a confined explosion. When an explosion occurs inside an enclosure, reflected shock waves produce multiple pressure peaks at a given wall location, especially at the corners, and the resulting confined blast loading may cause more serious structural damage because of the multiple shock reflections. The method of images (MOI), first applied by Chan to trace shock-wave paths using mirror-reflection theory, simplifies internal explosion loading calculations. Here, an improved method of images is proposed that accounts for wall openings and oblique reflections, which cannot be handled by the standard MOI. The approach, validated against experimental data, provides a simple and quick way to calculate the loading of a confined explosion. The results show that the peak overpressure tends to decline as the measurement point moves away from the center, and increases sharply as it approaches the enclosure corners; the specific impulse increases from the center to the corners. The improved method predicts pressure-time history and impulse with an accuracy comparable to that of three-dimensional AUTODYN code predictions.
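The basic MOI bookkeeping, before the paper's improvements for openings and oblique reflections, can be sketched in one dimension: a source between two rigid walls is unfolded into a lattice of mirror sources, and each image contributes a delayed, geometrically attenuated pulse at the receiver. The Gaussian pulse shape, 1/r decay, and all numerical values are illustrative assumptions.

```python
import numpy as np

def image_sources(x0, L, n_refl):
    """Mirror positions of a source at x0 between rigid walls at 0 and L.
    Reflection order n contributes sources at 2nL + x0 and 2nL - x0."""
    xs = []
    for n in range(-n_refl, n_refl + 1):
        xs += [2 * n * L + x0, 2 * n * L - x0]
    return sorted(set(xs))

def pressure_history(x0, xr, L, t, c=340.0, p0=1.0, n_refl=4, width=1e-4):
    """Pressure-time history at receiver xr: every image source adds a
    Gaussian pulse delayed by r/c and attenuated as 1/r."""
    p = np.zeros_like(t)
    for xs in image_sources(x0, L, n_refl):
        r = abs(xr - xs)
        if r > 0:
            p += (p0 / r) * np.exp(-((t - r / c) / width) ** 2)
    return p
```

For a centered charge the earliest and strongest peak is the direct arrival; later peaks come from the image sources, exactly the multiple pressure peaks the abstract describes.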

  3. An Improved Method for Liver Diseases Detection by Ultrasound Image Analysis

    PubMed Central

    Owjimehr, Mehri; Danyali, Habibollah; Helfroush, Mohammad Sadegh

    2015-01-01

Ultrasound imaging is a popular and noninvasive tool frequently used in the diagnosis of liver diseases. A system to characterize normal, fatty, and heterogeneous liver, using textural analysis of liver ultrasound images, is proposed in this paper. The proposed approach is able to select the optimum regions of interest of the liver images. These optimum regions of interest are analyzed by a two-level wavelet packet transform to extract statistical features, namely the median, standard deviation, and interquartile range. Discrimination between heterogeneous, fatty, and normal livers is performed hierarchically in the classification stage: first focal and diffuse livers are classified, and then fatty livers are distinguished from normal ones. Support vector machine (SVM) and k-nearest neighbor classifiers were used to classify the images into the three groups, and their performance was compared. The SVM classifier outperformed the k-nearest neighbor classifier, attaining an overall accuracy of 97.9%, with sensitivities of 100%, 100%, and 95.1% for the heterogeneous, fatty, and normal classes, respectively. The accuracy obtained by the proposed computer-aided diagnostic system is quite promising and suggests that it can be used in a clinical environment to support radiologists and experts in interpreting liver diseases. PMID:25709938

  4. Improving Three-Dimensional (3D) Range Gated Reconstruction Through Time-of-Flight (TOF) Imaging Analysis

    NASA Astrophysics Data System (ADS)

    Chua, S. Y.; Wang, X.; Guo, N.; Tan, C. S.; Chai, T. Y.; Seet, G. L.

    2016-04-01

This paper presents an experimental investigation of the TOF imaging profile, which strongly influences the quality of reconstruction needed to accomplish accurate range sensing. Our analysis shows that the recorded reflected-intensity profile deviates from the commonly assumed Gaussian model and can be perceived as a mixture of noise and the actual reflected signal. A noise-weighted average range calculation is therefore proposed to alleviate the influence of noise, based on the signal detection threshold and system noise. Our experimental results show that this alternative range solution is more accurate than the conventional weighted-average method and serves as a paraxial correction that improves range reconstruction in a 3D gated imaging system.
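The proposed correction can be sketched as follows (a schematic of the idea, not the authors' exact formula; the noise floor and pulse model are assumed): the conventional centroid weights every sample by its raw intensity, so a constant noise background drags the estimate toward the middle of the range window, whereas subtracting the noise floor and gating out sub-threshold samples restores the true pulse position.

```python
import numpy as np

def weighted_average_range(t, intensity):
    """Conventional intensity-weighted (centroid) range estimate."""
    return (t * intensity).sum() / intensity.sum()

def noise_weighted_range(t, intensity, noise_floor):
    """Subtract the estimated system-noise floor and gate out samples
    below it before taking the centroid, so background noise no longer
    drags the estimate toward the window centre."""
    w = np.clip(intensity - noise_floor, 0.0, None)
    return (t * w).sum() / w.sum()

# Assumed test scene: a Gaussian return at range-time 2.0 sitting on a
# constant noise background of 0.2 inside a [0, 10] gating window.
t = np.linspace(0.0, 10.0, 1001)
intensity = np.exp(-((t - 2.0) / 0.3) ** 2) + 0.2
```

The plain centroid lands far from the pulse, while the noise-weighted estimate recovers it.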

  5. Image analysis techniques: Used to quantify and improve the precision of coatings testing results

    SciTech Connect

    Duncan, D.J.; Whetten, A.R.

    1993-12-31

Coating evaluations often specify tests that measure performance characteristics rather than coating physical properties, and the results of such evaluations are often very subjective. A new tool, Digital Video Image Analysis (DVIA), is successfully being used for two automotive evaluations: the cyclic (scab) corrosion test and the gravelometer (chip) test. An experimental design was carried out to evaluate variability and interactions among the instrumental factors. This analysis method has proved to be an order of magnitude more sensitive and reproducible than the current evaluations. Coating characteristics that previously had no way to be expressed can now be described and measured; for example, DVIA chip evaluations can differentiate how much damage was done to the topcoat, to the primer, and even to the metal. DVIA, with or without magnification, has the capability to become the quantitative measuring tool for several other coating evaluations, such as T-bends, wedge bends, acid etch analysis, coating defects, and observing cure and defect formation or elimination over time.

  6. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration, the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged and classification probabilities are recomputed. The main contribution of this work lies in estimating the DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results than previously proposed methods.
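
    A toy 1-D sketch of the hierarchical step-wise merging loop (not the HSegClas implementation; the dissimilarity criterion below is a simple mean difference rather than the paper's statistical, classification, and geometrical DC):

```python
def merge_regions(values, dissim, stop_at):
    """Greedy hierarchical step-wise optimization sketch: repeatedly merge
    the pair of adjacent regions with the smallest dissimilarity until
    `stop_at` regions remain (1-D regions for illustration)."""
    regions = [[v] for v in values]
    while len(regions) > stop_at:
        # find the adjacent pair with the minimal dissimilarity criterion
        i = min(range(len(regions) - 1),
                key=lambda k: dissim(regions[k], regions[k + 1]))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions

mean = lambda r: sum(r) / len(r)
dc = lambda a, b: abs(mean(a) - mean(b))   # toy dissimilarity criterion
print(merge_regions([1, 1.1, 5, 5.2, 9], dc, stop_at=3))
```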

  7. Improved structure, function and compatibility for CellProfiler: modular high-throughput image analysis software

    PubMed Central

    Kamentsky, Lee; Jones, Thouis R.; Fraser, Adam; Bray, Mark-Anthony; Logan, David J.; Madden, Katherine L.; Ljosa, Vebjorn; Rueden, Curtis; Eliceiri, Kevin W.; Carpenter, Anne E.

    2011-01-01

    Summary: There is a strong and growing need in the biology research community for accurate, automated image analysis. Here, we describe CellProfiler 2.0, which has been engineered to meet the needs of its growing user base. It is more robust and user friendly, with new algorithms and features to facilitate high-throughput work. ImageJ plugins can now be run within a CellProfiler pipeline. Availability and Implementation: CellProfiler 2.0 is free and open source, available at http://www.cellprofiler.org under the GPL v. 2 license. It is available as a packaged application for Macintosh OS X and Microsoft Windows and can be compiled for Linux. Contact: anne@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21349861

  8. Jeffries Matusita based mixed-measure for improved spectral matching in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Padma, S.; Sanjeevi, S.

    2014-10-01

    This paper proposes a novel hyperspectral matching technique that integrates the Jeffries-Matusita measure (JM) and the Spectral Angle Mapper (SAM) algorithm. The deterministic Spectral Angle Mapper and the stochastic Jeffries-Matusita measure are orthogonally projected using the sine and tangent functions to increase their spectral discrimination ability. The developed JM-SAM algorithm is implemented to effectively discriminate the land-cover classes and cover types in hyperspectral images acquired by the PROBA/CHRIS and EO-1 Hyperion sensors. The reference spectra for the different land-cover classes were derived from each of these images. The performance of the proposed measure is compared with that of the individual SAM and JM approaches. From the values of the relative spectral discriminatory probability (RSDPB) and the relative spectral discriminatory entropy (RSDE), it is inferred that the hybrid JM-SAM approach yields higher spectral discriminability than the SAM and JM measures alone. Moreover, using the improved JM-SAM algorithm for supervised classification of the images results in 92.9% and 91.47% accuracy, compared to 73.13%, 79.41%, and 85.69% for the minimum-distance, SAM, and JM measures. It is also inferred that the increased spectral discriminability of the JM-SAM measure is contributed mainly by the JM distance. Further, the proposed JM-SAM measure is compatible with the varying spectral resolutions of PROBA/CHRIS (62 bands) and Hyperion (242 bands).
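
    A minimal sketch of the ingredients, assuming univariate Gaussian class statistics for the JM term and a tangent-combined variant of the hybrid measure (the exact projection used in the paper may differ):

```python
import math

def sam(a, b):
    """Spectral Angle Mapper: angle (radians) between two spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def jm(mu1, var1, mu2, var2):
    """Jeffries-Matusita distance between two univariate Gaussian classes,
    via the Bhattacharyya distance; ranges over [0, 2]."""
    b = ((mu1 - mu2) ** 2) / (4 * (var1 + var2)) \
        + 0.5 * math.log((var1 + var2) / (2 * math.sqrt(var1 * var2)))
    return 2.0 * (1.0 - math.exp(-b))

def jm_sam_tan(a, b, mu1, var1, mu2, var2):
    """Hybrid measure: JM distance scaled by tan(SAM) (assumed variant)."""
    return jm(mu1, var1, mu2, var2) * math.tan(sam(a, b))
```

The tangent scaling stretches small angular differences, which is the intuition behind combining a deterministic angle with a stochastic separability measure.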

  9. Improved texture analysis for automatic detection of tuberculosis (TB) on chest radiographs with bone suppression images

    NASA Astrophysics Data System (ADS)

    Maduskar, Pragnya; Hogeweg, Laurens; Philipsen, Rick; Schalekamp, Steven; van Ginneken, Bram

    2013-03-01

    Computer aided detection (CAD) of tuberculosis (TB) on chest radiographs (CXR) is challenging due to overlapping structures. Suppression of normal structures can reduce overprojection effects and can enhance the appearance of diffuse parenchymal abnormalities. In this work, we compare two CAD systems to detect textural abnormalities in chest radiographs of TB suspects. One CAD system was trained and tested on the original CXR and the other was trained and tested on bone suppression images (BSI). BSI were created using commercially available software (ClearRead 2.4, Riverain Medical). The CAD system is trained with 431 normal and 434 abnormal images with manually outlined abnormal regions. A subtlety rating (1-3) is assigned to each abnormal region, where 3 refers to obvious and 1 refers to subtle abnormalities. Performance is evaluated on normal and abnormal regions from an independent dataset of 900 images. These contain in total 454 normal and 1127 abnormal regions, which are divided into 3 subtlety categories containing 280, 527 and 320 abnormal regions, respectively. For normal regions, original/BSI CAD has an average abnormality score of 0.094+/-0.027/0.085+/-0.032 (p = 5.6×10^-19). For abnormal regions, the subtlety 1, 2, 3 categories have average abnormality scores for original/BSI of 0.155+/-0.073/0.156+/-0.089 (p = 0.73), 0.194+/-0.086/0.207+/-0.101 (p = 5.7×10^-7), and 0.225+/-0.119/0.247+/-0.117 (p = 4.4×10^-7), respectively. Thus for normal regions, CAD scores slightly decrease when using BSI instead of the original images, and for abnormal regions the scores increase slightly. We therefore conclude that the use of bone suppression results in slightly but significantly improved automated detection of textural abnormalities in chest radiographs.

  10. An Improved Method for Measuring Quantitative Resistance to the Wheat Pathogen Zymoseptoria tritici Using High-Throughput Automated Image Analysis.

    PubMed

    Stewart, Ethan L; Hagerty, Christina H; Mikaberidze, Alexey; Mundt, Christopher C; Zhong, Ziming; McDonald, Bruce A

    2016-07-01

    Zymoseptoria tritici causes Septoria tritici blotch (STB) on wheat. An improved method of quantifying STB symptoms was developed based on automated analysis of diseased leaf images made using a flatbed scanner. Naturally infected leaves (n = 949) sampled from fungicide-treated field plots comprising 39 wheat cultivars grown in Switzerland and 9 recombinant inbred lines (RIL) grown in Oregon were included in these analyses. Measures of quantitative resistance were percent leaf area covered by lesions, pycnidia size and gray value, and pycnidia density per leaf and lesion. These measures were obtained automatically with a batch-processing macro utilizing the image-processing software ImageJ. All phenotypes in both locations showed a continuous distribution, as expected for a quantitative trait. The trait distributions at both sites were largely overlapping even though the field and host environments were quite different. Cultivars and RILs could be assigned to two or more statistically different groups for each measured phenotype. Traditional visual assessments of field resistance were highly correlated with quantitative resistance measures based on image analysis for the Oregon RILs. These results show that automated image analysis provides a promising tool for assessing quantitative resistance to Z. tritici under field conditions. PMID:27050574
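
    The percent-leaf-area measurement can be imitated with a simple thresholding sketch (illustrative only; the actual ImageJ batch macro's segmentation is more involved, and the gray-value cutoff below is an assumption):

```python
def percent_lesion_area(gray, threshold):
    """Percent of leaf pixels whose gray value falls below `threshold`
    (darker pixels assumed to be lesion; the cutoff is illustrative)."""
    flat = [v for row in gray for v in row]
    lesion = sum(1 for v in flat if v < threshold)
    return 100.0 * lesion / len(flat)

leaf = [
    [200, 210, 90, 80],
    [205, 95, 85, 220],
]
print(percent_lesion_area(leaf, threshold=128))  # 4 of 8 pixels -> 50.0
```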

  11. Improved spatial regression analysis of diffusion tensor imaging for lesion detection during longitudinal progression of multiple sclerosis in individual subjects

    NASA Astrophysics Data System (ADS)

    Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui

    2016-03-01

    Subject-specific longitudinal DTI study is vital for investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at the individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD (iSPREAD) method by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method are improved substantially by adopting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI.
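
    The edge-preserving smoothing step can be sketched with a 1-D Perona-Malik diffusion, a standard form of nonlinear anisotropic diffusion (the paper's 3-D filter and parameter choices are not reproduced here):

```python
import math

def perona_malik_1d(signal, kappa=1.0, dt=0.2, steps=10):
    """One-dimensional Perona-Malik diffusion: smooths within regions while
    preserving large jumps, since the conductance g decays with gradient size."""
    u = list(signal)
    for _ in range(steps):
        nxt = u[:]
        for i in range(1, len(u) - 1):
            ge = u[i + 1] - u[i]              # east gradient
            gw = u[i - 1] - u[i]              # west gradient
            ce = math.exp(-(ge / kappa) ** 2)  # conductance, tiny at edges
            cw = math.exp(-(gw / kappa) ** 2)
            nxt[i] = u[i] + dt * (ce * ge + cw * gw)
        u = nxt
    return u

# Noise on the plateaus is smoothed, while the step at index 2/3 survives:
noisy_step = [0.0, 0.1, 0.0, 10.0, 10.1, 10.0]
print(perona_malik_1d(noisy_step))
```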

  12. Improved spatial regression analysis of diffusion tensor imaging for lesion detection during longitudinal progression of multiple sclerosis in individual subjects.

    PubMed

    Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui

    2016-03-21

    Subject-specific longitudinal DTI study is vital for investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at the individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD (iSPREAD) method by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method are improved substantially by adopting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI. PMID:26948513

  13. Improvements and artifact analysis in conductivity images using multiple internal electrodes.

    PubMed

    Farooq, Adnan; Tehrani, Joubin Nasehi; McEwan, Alistair Lee; Woo, Eung Je; Oh, Tong In

    2014-06-01

    Electrical impedance tomography is an attractive functional imaging method, but it is currently limited in resolution and sensitivity by the complexity of the inverse problem and the safety limits on introduced current. Recently, internal electrodes have been proposed for some clinical situations such as intensive care or RF ablation. Since internal electrodes are invasive, this paper addresses whether using one or more of them provides a measurable benefit. Internal electrodes should be able to reduce the effect of insulating boundaries such as fat and bone and provide improved internal sensitivity. We found there was a measurable benefit with increased numbers of internal electrodes in saline tanks of cylindrical and complex shapes with up to two insulating boundary gel layers modeling fat and muscle. The internal electrodes provide increased sensitivity to internal changes, thereby increasing the amplitude response and improving resolution. However, they also present the additional challenge of increased sensitivity to position and modeling errors. In comparison with previous work that used point sources for the internal electrodes, we found that it is important to use a detailed mesh of the internal electrodes, with these voxels assigned the conductivity of the internal electrode and its associated holder. A study of different internal electrode materials found that it is optimal to use a conductivity similar to the background. In the tank with a complex shape, the additional internal electrodes provided more robustness in a ventilation model of the lungs via air-filled balloons. PMID:24845453

  14. An improved parameter estimation scheme for image modification detection based on DCT coefficient analysis.

    PubMed

    Yu, Liyang; Han, Qi; Niu, Xiamu; Yiu, S M; Fang, Junbin; Zhang, Ye

    2016-02-01

    Most of the existing image modification detection methods based on DCT coefficient analysis model the distribution of DCT coefficients as a mixture of a modified and an unchanged component. To separate the two components, two parameters, the primary quantization step, Q1, and the portion of the modified region, α, have to be estimated, and more accurate estimations of α and Q1 lead to better detection and localization results. Existing methods estimate α and Q1 in a completely blind manner, without considering the characteristics of the mixture model and the constraints to which α should conform. In this paper, we propose a more effective scheme for estimating α and Q1, based on the observations that the curves on the surface of the likelihood function corresponding to the mixture model are largely smooth, and that α can take values only in a discrete set. We conduct extensive experiments to evaluate the proposed method, and the experimental results confirm its efficacy. PMID:26804669
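
    The α-estimation idea can be sketched as a likelihood grid search over a discrete set of α values (a toy illustration with assumed Gaussian components; the paper's DCT mixture model and the accompanying Q1 search are not reproduced):

```python
import math

def gauss(x, mu, sigma):
    """Gaussian density, used here as a stand-in mixture component."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def estimate_alpha(data, comp_mod, comp_orig, alpha_grid):
    """Maximum-likelihood choice of the mixture weight alpha over a discrete
    grid, exploiting the observation that alpha can take values only in a
    discrete set rather than being searched blindly."""
    best, best_ll = None, float("-inf")
    for a in alpha_grid:
        ll = sum(math.log(a * comp_mod(x) + (1 - a) * comp_orig(x)) for x in data)
        if ll > best_ll:
            best, best_ll = a, ll
    return best

# toy example: three samples from the "modified" component, one from the other
mod = lambda x: gauss(x, 0.0, 0.1)
orig = lambda x: gauss(x, 1.0, 0.1)
print(estimate_alpha([0.0, 0.0, 0.0, 1.0], mod, orig, [0.0, 0.25, 0.5, 0.75, 1.0]))  # prints 0.75
```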

  15. Robust biological parametric mapping: an improved technique for multimodal brain image analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.

    2011-03-01

    Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
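
    Robust regression of the kind described can be sketched with iteratively reweighted least squares using Huber weights (a generic sketch, not the paper's software; the tuning constant 1.345 is the conventional Huber default):

```python
def huber_line_fit(xs, ys, delta=1.345, iters=20):
    """Fit y = a + b*x by iteratively reweighted least squares with Huber
    weights, so outliers are down-weighted instead of driving the fit."""
    n = len(xs)
    w = [1.0] * n
    a = b = 0.0
    for _ in range(iters):
        # weighted least squares with the current weights
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = sw * sxx - sx * sx
        a = (sxx * sy - sx * sxy) / det
        b = (sw * sxy - sx * sy) / det
        # robust residual scale from the median absolute residual
        res = [y - (a + b * x) for x, y in zip(xs, ys)]
        s = sorted(abs(r) for r in res)[n // 2] / 0.6745 or 1.0
        # Huber weights: 1 inside delta*s, shrinking outside
        w = [1.0 if abs(r) <= delta * s else delta * s / abs(r) for r in res]
    return a, b

# One gross outlier barely perturbs the robust fit of y = 1 + 2x:
print(huber_line_fit([0, 1, 2, 3, 4], [1, 3, 100, 7, 9]))
```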

  16. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

  17. Improvement of CAT scanned images

    NASA Technical Reports Server (NTRS)

    Roberts, E., Jr.

    1980-01-01

    Digital enhancement procedure improves definition of images. Tomogram is generated from large number of X-ray beams. Beams are collimated and small in diameter. Scanning device passes beams sequentially through human subject at many different angles. Battery of transducers opposite subject senses attenuated signals. Signals are transmitted to computer where they are used in construction of image on transverse plane through body.

  18. Improved defect analysis of Gallium Arsenide solar cells using image enhancement

    NASA Technical Reports Server (NTRS)

    Kilmer, Louis C.; Honsberg, Christiana; Barnett, Allen M.; Phillips, James E.

    1989-01-01

    A new technique has been developed to capture, digitize, and enhance the image of light emission from a forward biased direct bandgap solar cell. Since the forward biased light emission from a direct bandgap solar cell has been shown to display both qualitative and quantitative information about the solar cell's performance and its defects, signal processing techniques can be applied to the light emission images to identify and analyze shunt diodes. Shunt diodes are of particular importance because they have been found to be the type of defect which is likely to cause failure in a GaAs solar cell. The presence of a shunt diode can be detected from the light emission by using a photodetector to measure the quantity of light emitted at various current densities. However, to analyze how the shunt diodes affect the quality of the solar cell the pattern of the light emission must be studied. With the use of image enhancement routines, the light emission can be studied at low light emission levels where shunt diode effects are dominant.

  19. Improved structures of maximally decimated directional filter banks for spatial image analysis.

    PubMed

    Park, Sang-Il; Smith, Mark J T; Mersereau, Russell M

    2004-11-01

    This paper introduces an improved structure for directional filter banks (DFBs) that preserves the visual information in the subband domain. The new structure achieves this outcome while preserving both the efficient polyphase implementation and the exact reconstruction property. The paper outlines a step-by-step framework in which to examine the DFB, and within this framework discusses how, through the insertion of post-sampling matrices, visual distortions can be removed. In addition to the efficient tree structure, attention is given to the form and design of efficient linear phase filters. Most notably, linear phase IIR prototype filters are presented, together with the design details. These filters can enable the DFB to have more than a three-fold improvement in complexity reduction over quadrature mirror filters (QMFs). PMID:15540452

  20. Applying Chemical Imaging Analysis to Improve Our Understanding of Cold Cloud Formation

    NASA Astrophysics Data System (ADS)

    Laskin, A.; Knopf, D. A.; Wang, B.; Alpert, P. A.; Roedel, T.; Gilles, M. K.; Moffet, R.; Tivanski, A.

    2012-12-01

    The impact that atmospheric ice nucleation has on the global radiation budget is one of the least understood problems in atmospheric sciences. This is in part due to the incomplete understanding of the various ice nucleation pathways that lead to ice crystal formation from pre-existing aerosol particles. Studies investigating the ice nucleation propensity of laboratory generated particles indicate that individual particle types are highly selective in their ice nucleating efficiency. This description of heterogeneous ice nucleation presents a challenge when applied to the atmosphere, which contains a complex mixture of particles. Here, we employ a combination of micro-spectroscopic and optical single particle analytical methods to relate particle physical and chemical properties with observed water uptake and ice nucleation. Field-collected particles from urban environments impacted by anthropogenic and marine emissions and aging processes are investigated. Single particle characterization is provided by computer controlled scanning electron microscopy with energy dispersive analysis of X-rays (CCSEM/EDX) and scanning transmission X-ray microscopy with near edge X-ray absorption fine structure spectroscopy (STXM/NEXAFS). A particle-on-substrate approach coupled to a vapor controlled cooling-stage and a microscope system is applied to determine the onsets of water uptake and ice nucleation, including immersion freezing and deposition ice nucleation, as a function of temperature (T) as low as 200 K and relative humidity (RH) up to water saturation. We observe for urban aerosol particles that for T > 230 K the oxidation level affects initial water uptake and that subsequent immersion freezing depends on particle mixing state, e.g. the presence of insoluble particles. For T < 230 K the particles initiate deposition ice nucleation well below the homogeneous freezing limit. Particles collected throughout one day for similar meteorological conditions show very similar

  1. Improving image segmentation performance and quantitative analysis via a computer-aided grading methodology for optical coherence tomography retinal image analysis

    NASA Astrophysics Data System (ADS)

    Cabrera Debuc, Delia; Salinas, Harry M.; Ranganathan, Sudarshan; Tátrai, Erika; Gao, Wei; Shen, Meixiao; Wang, Jianhua; Somfai, Gábor M.; Puliafito, Carmen A.

    2010-07-01

    We demonstrate quantitative analysis and error correction of optical coherence tomography (OCT) retinal images by using a custom-built, computer-aided grading methodology. A total of 60 Stratus OCT (Carl Zeiss Meditec, Dublin, California) B-scans collected from ten normal healthy eyes are analyzed by two independent graders. The average retinal thickness per macular region is compared with the automated Stratus OCT results. Intergrader and intragrader reproducibility is calculated by Bland-Altman plots of the mean difference between both gradings and by Pearson correlation coefficients. In addition, the correlation between Stratus OCT and our methodology-derived thickness is also presented. The mean thickness difference between Stratus OCT and our methodology is 6.53 μm and 26.71 μm when using the inner segment/outer segment (IS/OS) junction and outer segment/retinal pigment epithelium (OS/RPE) junction as the outer retinal border, respectively. Overall, the median of the thickness differences as a percentage of the mean thickness is less than 1% and 2% for the intragrader and intergrader reproducibility test, respectively. The measurement accuracy range of the OCT retinal image analysis (OCTRIMA) algorithm is between 0.27 and 1.47 μm and 0.6 and 1.76 μm for the intragrader and intergrader reproducibility tests, respectively. Pearson correlation coefficients demonstrate R2>0.98 for all Early Treatment Diabetic Retinopathy Study (ETDRS) regions. Our methodology facilitates a more robust and localized quantification of the retinal structure in normal healthy controls and patients with clinically significant intraretinal features.
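
    The Bland-Altman reproducibility statistics used here reduce to a short computation (a generic sketch; the grader readings below are made-up values, not data from the study):

```python
import math

def bland_altman(g1, g2):
    """Bias (mean difference) and 95% limits of agreement between two
    graders' measurements of the same scans."""
    diffs = [a - b for a, b in zip(g1, g2)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

# hypothetical retinal thickness readings (micrometers) from two graders
print(bland_altman([251.0, 249.5, 252.3, 250.1], [250.2, 249.9, 251.6, 249.4]))
```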

  2. Enhancement of galaxy images for improved classification

    NASA Astrophysics Data System (ADS)

    Jenkinson, John; Grigoryan, Artyom M.; Agaian, Sos S.

    2015-03-01

    In this paper, the classification accuracy of galaxy images is demonstrated to be improved by enhancing the images. Galaxy images often contain faint regions of intensity similar to stars and the image background, resulting in data loss during background subtraction and galaxy segmentation. Enhancement darkens these faint regions relative to their original intensities, enabling them to be distinguished from other objects in the image and from the image background. The heap transform is employed for the purpose of enhancement. Segmentation then produces a galaxy image that closely resembles the structure of the original, and one that is suitable for further processing and classification. Six morphological feature descriptors are applied to the segmented images after a preprocessing stage and used to extract the galaxy image structure for use in training the classifier. The support vector machine learning algorithm performs training and validation on the original and enhanced data, and a comparison between the classification accuracy of each data set is included. Principal component analysis is used to compress the data sets for classification visualization and for a comparison between the reduced and original feature spaces. Future directions for this research include galaxy image enhancement by various methods, and classification performed with the use of a sparse dictionary. Both future directions are introduced.
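
    As a stand-in for the enhancement step (the heap transform itself is not reproduced here), a simple gamma stretch with gamma > 1 likewise darkens faint regions relative to bright ones:

```python
def gamma_enhance(img, gamma=2.0, max_val=255):
    """Toy contrast enhancement: gamma > 1 pushes faint values down
    relative to bright ones, an assumed stand-in for the heap-transform
    enhancement used in the paper."""
    return [[max_val * (v / max_val) ** gamma for v in row] for row in img]

# a faint pixel (64) is darkened while a saturated pixel (255) is unchanged
print(gamma_enhance([[64, 255]]))
```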

  3. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm.

    PubMed

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-01-01

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to exhaustively search the rotation angle, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the properties of severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to be suitable for such situations. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching the rotation angle in a single gyrator transform. Since gyrator transform is the foundation of image encryption in gyrator transform domains, the research on the method of searching the rotation angle in a single gyrator transform is useful for further study on the security of such image encryption algorithms. PMID:26251910
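
    A minimal plain PSO over a 1-D interval (illustrative only; the paper's improvements for the gyrator transform's severe local convergence, and its encryption-based objective function, are not reproduced):

```python
import random

def pso_search(f, lo, hi, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization minimizing f over [lo, hi];
    a stand-in for searching the gyrator rotation angle."""
    rng = random.Random(seed)            # fixed seed for reproducibility
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pb, pbv = xs[:], [f(x) for x in xs]  # personal best positions/values
    g = pb[pbv.index(min(pbv))]          # global best position
    for _ in range(iters):
        for i in range(n):
            # velocity update: inertia + cognitive + social terms
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pb[i] - xs[i])
                     + c2 * rng.random() * (g - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clamp to the interval
            v = f(xs[i])
            if v < pbv[i]:
                pb[i], pbv[i] = xs[i], v
                if v < f(g):
                    g = xs[i]
    return g

# toy objective standing in for the angle-recovery cost function
print(pso_search(lambda t: (t - 1.234) ** 2, 0.0, 3.1416))
```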

  4. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm

    PubMed Central

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-01-01

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to exhaustively search the rotation angle, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the properties of severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to be suitable for such situations. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching the rotation angle in a single gyrator transform. Since gyrator transform is the foundation of image encryption in gyrator transform domains, the research on the method of searching the rotation angle in a single gyrator transform is useful for further study on the security of such image encryption algorithms. PMID:26251910

  5. An improved image analysis method for cell counting lends credibility to the prognostic significance of T cells in colorectal cancer.

    PubMed

    Väyrynen, Juha P; Vornanen, Juha O; Sajanti, Sara; Böhm, Jan P; Tuomisto, Anne; Mäkinen, Markus J

    2012-05-01

    Numerous immunohistochemically detectable proteins, such as immune cell surface (CD) proteins, vascular endothelial growth factor, and matrix metalloproteinases, have been proposed as potential prognostic markers in colorectal cancer (CRC) and other malignancies. However, the lack of reproducibility has been a major problem in validating the clinical use of such markers, and this has been attributed to insufficiently robust methods used in immunohistochemical staining or its assessment. In this study, we assessed how computer-assisted image analysis might contribute to the reliable assessment of positive area percentage and immune cell density in CRC specimens, and subsequently, we applied the computer-assisted cell counting method in assessing the prognostic value of T cell infiltration in CRC. The computer-assisted analysis methods were based on separating hematoxylin and diaminobenzidine color layers and then applying a brightness threshold using open source image analysis software ImageJ. We found that computer-based analysis results in a more reproducible assessment of the immune positive area percentage than visual semiquantitative estimation. Computer-assisted immune cell counting was rapid to perform and accurate (Pearson r > 0.96 with exact manual cell counts). Moreover, the computer-assisted determination of peritumoral and stromal T cell density had independent prognostic value. Our results suggest that computer-assisted image analysis, utilizing freely available image analysis software, provides a valuable alternative to semiquantitative assessment of immunohistochemical results in cancer research, as well as in clinical practice. The advantages of using computer-assisted analysis include objectivity, accuracy, reproducibility, and time efficiency. This study supports the prognostic value of assessing T cell infiltration in CRC. PMID:22527018
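
    The cell-counting core can be sketched as connected-component counting on a thresholded binary mask (a generic sketch, not the ImageJ-based pipeline; 4-connectivity is an assumption):

```python
def count_cells(mask):
    """Count connected foreground regions (4-connectivity) in a binary
    mask, e.g. positively stained cells after color separation and
    brightness thresholding."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]          # flood-fill this component
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# three separate stained regions in a toy 3x3 mask
print(count_cells([[1, 0, 1],
                   [1, 0, 0],
                   [0, 0, 1]]))  # prints 3
```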

  6. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  7. Using texture analysis to improve per-pixel classification of very high resolution images for mapping plastic greenhouses

    NASA Astrophysics Data System (ADS)

    Agüera, Francisco; Aguilar, Fernando J.; Aguilar, Manuel A.

    The area occupied by plastic-covered greenhouses has undergone rapid growth in recent years, currently exceeding 500,000 ha worldwide. Due to the vast amount of inputs (water, fertilisers, fuel, etc.) required, and the output of different agricultural wastes (vegetable, plastic, chemical, etc.), the environmental impact of this type of production system can be serious if not accompanied by sound and sustainable territorial planning. For this, the new generation of satellites that provide very high resolution imagery, such as QuickBird and IKONOS, can be useful. In this study, one QuickBird and one IKONOS satellite image were used to cover the same area under similar circumstances. The aim of this work was an exhaustive comparison of QuickBird vs. IKONOS images in land-cover detection. In terms of plastic greenhouse mapping, comparative tests were designed and implemented, each with separate objectives. Firstly, Maximum Likelihood Classification (MLC) was applied using five different approaches combining R, G, B, NIR, and panchromatic bands. The band combinations used significantly influenced some of the classification quality indices used in this work. Furthermore, the classification quality of the QuickBird image was higher in all cases than that of the IKONOS image. Secondly, texture features derived from the panchromatic images at different window sizes and with different grey levels were added as a fifth band to the R, G, B, NIR images to carry out the MLC. The inclusion of texture information did not improve the classification quality. For classifications with texture information, the best accuracies were found in both images for the mean and angular second moment texture parameters. The optimum window size for these texture parameters was 3×3 for IKONOS images, while for QuickBird images it depended on the quality index studied, but was around 15×15. With regard to the grey level, the optimum was 128. Thus, the
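
A minimal sketch of how an angular second moment (ASM) texture band can be computed and stacked as a fifth classification band, as the abstract describes; the window size, grey-level quantization, and function names are illustrative assumptions.

```python
import numpy as np

def glcm_asm(window, levels=8):
    """Angular second moment of a grey-level co-occurrence matrix built from
    horizontal neighbour pairs inside one analysis window."""
    q = np.clip((window.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()
    return float((p ** 2).sum())

def texture_band(image, win=3, levels=8):
    """Slide a win x win window over a grey image and emit an ASM texture
    band that can be stacked with R, G, B, NIR before maximum-likelihood
    classification (3x3 was the optimum window for the IKONOS image)."""
    h, w = image.shape
    r = win // 2
    padded = np.pad(image, r, mode="edge")
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = glcm_asm(padded[i:i + win, j:j + win], levels)
    return out
```

Uniform windows give ASM = 1 (maximal homogeneity), while textured windows give values near 0, which is the contrast the classifier exploits.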

  8. Potential use of combining the diffusion equation with the free Schrödinger equation to improve Optical Coherence Tomography image analysis

    NASA Astrophysics Data System (ADS)

    Cabrera Fernandez, Delia; Salinas, Harry M.; Somfai, Gabor; Puliafito, Carmen A.

    2006-03-01

    Optical coherence tomography (OCT) is a rapidly emerging medical imaging technology. In ophthalmology, OCT is a powerful tool because it enables visualization of the cross-sectional structure of the retina and anterior eye with higher resolutions than any other non-invasive imaging modality. Furthermore, OCT image information can be quantitatively analyzed, enabling objective assessment of features such as macular edema and diabetic retinopathy. We present specific improvements in the quantitative analysis of the OCT system, obtained by combining the diffusion equation with the free Schrödinger equation. In this formulation, important features of the image can be extracted by extending the analysis from the real axis to the complex domain. Experimental results indicate that our proposed novel approach performs well in speckle noise removal, enhancement, and segmentation of the various cellular layers of the retina using the OCT system.
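
Combining a diffusion equation with a free Schrödinger term is commonly realized as complex diffusion; the sketch below is a generic complex-diffusion iteration in numpy, not the authors' exact formulation, and theta, dt, and the step count are assumed values.

```python
import numpy as np

def complex_diffusion(image, theta=np.pi / 30, dt=0.2, steps=30):
    """Evolve du/dt = exp(i*theta) * Laplacian(u): for small theta the real
    part behaves like linear diffusion (speckle smoothing), while the
    imaginary part approximates a scaled edge map (the Schrodinger-like
    component), which can support layer segmentation."""
    u = image.astype(complex)
    d = np.exp(1j * theta)
    for _ in range(steps):
        # 4-neighbour Laplacian with periodic boundaries via np.roll
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = u + dt * d * lap
    return u  # u.real: denoised image, u.imag: edge-sensitive component
```

The explicit scheme is stable for dt <= 0.25 with unit grid spacing; the image mean is preserved because the Laplacian sums to zero under periodic boundaries.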

  9. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    MATHPAC image-analysis library is collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  10. CovAmCoh-analysis: a method to improve the interpretation of high resolution repeat pass SAR images of urban areas

    NASA Astrophysics Data System (ADS)

    Schulz, Karsten; Boldt, Markus; Thiele, Antje

    2009-09-01

    The main advantages of SAR (Synthetic Aperture Radar) are the availability of data under nearly all weather conditions and its independence from natural illumination. Data can be gathered on demand and exploited to extract the needed information. However, due to the side-looking imaging geometry, SAR images are difficult to interpret, and there is a need to support human interpreters with image analysis algorithms. In this paper a method is described to improve and simplify the interpretation of high resolution repeat pass SAR images. Modern spaceborne SAR sensors provide imagery with high spatial resolution and the same imaging geometry at an equidistant time interval. These repeat pass orbits are, e.g., used for interferometric evaluation. The information contained in a repeat pass image pair is visualized by the introduced method so that some basic features can be directly extracted from a color representation of three deduced features. The CoV (Coefficient of Variation), the amplitude and the coherence are calculated and jointly evaluated. The combined evaluation of these features can be used to identify regions dominated by volume scatterers (e.g. leafed vegetation), rough surfaces (e.g. grass, gravel) and smooth surfaces (e.g. streets, parking lots). Additionally, the coherence between the two images includes information about changes between the acquisitions. The potential of the CovAmCoh-Analysis is demonstrated and discussed by the evaluation of a TerraSAR-X image pair of the Frankfurt airport. The method shows a simple way to improve intuitive interpretation by the human interpreter and is used to improve the classification of some basic urban features.
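
The three CovAmCoh features can be estimated from two co-registered complex images roughly as follows; block-wise (rather than sliding-window) estimation, the block size, and the RGB channel ordering are simplifying assumptions.

```python
import numpy as np

def block_view(x, b):
    """Non-overlapping b x b blocks, a stand-in for the sliding estimation
    windows used in practice."""
    h, w = x.shape
    return x[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b).swapaxes(1, 2)

def cov_am_coh(s1, s2, b=4):
    """Coefficient of variation, amplitude and repeat-pass coherence from two
    co-registered complex SAR images, plus a false-colour composite."""
    a1 = np.abs(s1)
    amp = block_view(a1, b).mean(axis=(2, 3))
    cov = block_view(a1, b).std(axis=(2, 3)) / (amp + 1e-12)
    num = np.abs(block_view(s1 * np.conj(s2), b).mean(axis=(2, 3)))
    den = np.sqrt(block_view(np.abs(s1) ** 2, b).mean(axis=(2, 3)) *
                  block_view(np.abs(s2) ** 2, b).mean(axis=(2, 3)))
    coh = num / (den + 1e-12)
    # Stack as an R=CoV, G=amplitude, B=coherence false-colour composite.
    rgb = np.stack([c / (c.max() + 1e-12) for c in (cov, amp, coh)], axis=-1)
    return cov, amp, coh, rgb
```

Identical acquisitions give coherence near 1, while decorrelated scatterers (changes, vegetation) pull it toward 0, which is the change cue the abstract describes.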

  11. Electronic image analysis

    NASA Astrophysics Data System (ADS)

    Gahm, J.; Grosskopf, R.; Jaeger, H.; Trautwein, F.

    1980-12-01

    An electronic system for image analysis was developed on the basis of low and medium cost integrated circuits. The printed circuit boards were designed, using the principles of modern digital electronics and data processing. The system consists of modules for automatic, semiautomatic and visual image analysis. They can be used for microscopical and macroscopical observations. Photographs can be evaluated, too. The automatic version is controlled by software modules adapted to various applications. The result is a system for image analysis suitable for many different measurement problems. The features contained in large image areas can be measured. For automatic routine analysis controlled by processing calculators the necessary software and hardware modules are available.

  12. Principal component analysis with pre-normalization improves the signal-to-noise ratio and image quality in positron emission tomography studies of amyloid deposits in Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Razifar, Pasha; Engler, Henry; Blomquist, Gunnar; Ringheim, Anna; Estrada, Sergio; Långström, Bengt; Bergström, Mats

    2009-06-01

    This study introduces a new approach for the application of principal component analysis (PCA) with pre-normalization on dynamic positron emission tomography (PET) images. These images are generated using the amyloid imaging agent N-methyl [11C]2-(4'-methylaminophenyl)-6-hydroxy-benzothiazole ([11C]PIB) in patients with Alzheimer's disease (AD) and healthy volunteers (HVs). The aim was to introduce a method which, by using the whole dataset and without assuming a specific kinetic model, could generate images with improved signal-to-noise and detect, extract and illustrate changes in kinetic behavior between different regions in the brain. Eight AD patients and eight HVs from a previously published study with [11C]PIB were used. The approach includes enhancement of brain regions where the kinetics of the radiotracer are different from what is seen in the reference region, pre-normalization for differences in noise levels and removal of negative values. This is followed by slice-wise application of PCA (SW-PCA) on the dynamic PET images. Results obtained using the new approach were compared with results obtained using reference Patlak and summed images. The new approach generated images with good quality in which cortical brain regions in AD patients showed high uptake, compared to cerebellum and white matter. Cortical structures in HVs showed low uptake as expected and in good agreement with data generated using kinetic modeling. The introduced approach generated images with enhanced contrast and improved signal-to-noise ratio (SNR) and discrimination power (DP) compared to summed images and parametric images. This method is expected to be an important clinical tool in the diagnosis and differential diagnosis of dementia.
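
The pre-normalization and slice-wise PCA steps might be sketched as below; the per-frame noise estimate and the unfolding convention are assumptions, since the paper's normalization is based on reference-region noise levels.

```python
import numpy as np

def sw_pca(frames, noise_sd=None):
    """Slice-wise PCA on one dynamic PET slice: frames has shape (T, H, W).
    Each frame is pre-normalized by an estimate of its noise level, the
    frames are unfolded to (T, H*W), and the first principal component is
    folded back into an image."""
    t, h, w = frames.shape
    x = frames.reshape(t, -1).astype(float)
    if noise_sd is None:                      # crude per-frame noise proxy
        noise_sd = x.std(axis=1) + 1e-12
    x = x / noise_sd[:, None]                 # pre-normalization
    x = x - x.mean(axis=1, keepdims=True)     # centre each frame (variable)
    # PCA via SVD on the (pixels x frames) data matrix.
    u, s, vt = np.linalg.svd(x.T, full_matrices=False)
    pc1_image = (u[:, 0] * s[0]).reshape(h, w)
    return pc1_image
```

Regions whose time-activity curves differ from the rest of the slice receive distinct PC1 scores, which is how the method enhances kinetic contrast without a kinetic model.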

  13. Improvements to a Grating-Based Spectral Imaging Microscope and Its Application to Reflectance Analysis of Blue Pen Inks.

    PubMed

    McMillan, Leilani C; Miller, Kathleen P; Webb, Michael R

    2015-08-01

    A modified design of a chromatically resolved optical microscope (CROMoscope), a grating-based spectral imaging microscope, is described. By altering the geometry and adding a beam splitter, a twisting aberration that was present in the first version of the CROMoscope has been removed. Wavelength adjustment has been automated to decrease analysis time. Performance of the new design in transmission-absorption spectroscopy has been evaluated and found to be generally similar to the performance of the previous design. Spectral bandpass was found to be dependent on the sizes of the apertures, and the smallest measured spectral bandpass was 1.8 nm with 1.0 mm diameter apertures. Wavelength was found to be highly linear with the sine of the grating angle (R² = 0.9999995), and wavelength repeatability was found to be much better than the spectral bandpass. Reflectance spectral imaging with a CROMoscope is reported for the first time, and this reflectance spectral imaging was applied to blue ink samples on white paper. As a proof of concept, linear discriminant analysis was used to classify the inks by brand. In a leave-one-out cross-validation, 97.6% of samples were correctly classified. PMID:26162719

  14. Histopathological Image Analysis: A Review

    PubMed Central

    Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent

    2010-01-01

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging in complementing the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe. PMID:20671804

  15. Suggestions for automatic quantitation of endoscopic image analysis to improve detection of small intestinal pathology in celiac disease patients.

    PubMed

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-10-01

    Although many groups have attempted to develop an automated computerized method to detect pathology of the small intestinal mucosa caused by celiac disease, these efforts have thus far failed. This is due in part to the occult presence of the disease: when pathological evidence of celiac disease exists in the small bowel, it is often visually patchy and subtle. Due to the presence of extraneous substances such as air bubbles and opaque fluids, the use of computerized automation methods has only been partially successful in detecting the hallmarks of the disease in the small intestine: villous atrophy, fissuring, and a mottled appearance. By using a variety of computerized techniques and assigning a weight or vote to each technique, it is possible to improve the detection of abnormal regions which are indicative of celiac disease, and of treatment progress in diagnosed patients. Herein a paradigm is suggested for improving the efficacy of automated methods for measuring celiac disease manifestation in the small intestinal mucosa. The suggestions are applicable to both standard and videocapsule endoscopic imaging, since both methods could potentially benefit from computerized quantitation to improve celiac disease diagnosis. PMID:25976612
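
The weighted-vote paradigm could look like the following sketch; the detectors, weights, and thresholds are purely illustrative stand-ins for measures of villous atrophy, fissuring, and mottling, not values from the paper.

```python
import numpy as np

def variance_score(patch):
    """Mottled-texture proxy: local grey-level variance."""
    return patch.var()

def edge_score(patch):
    """Fissuring proxy: mean gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    return np.hypot(gx, gy).mean()

def weighted_vote(patch, detectors, weights, thresholds):
    """Each detector casts a vote when its score exceeds its own threshold;
    votes are combined with per-detector weights, as the paradigm suggests."""
    return sum(w * (f(patch) > t)
               for f, w, t in zip(detectors, weights, thresholds))

def is_abnormal(patch, decision=0.5):
    """Flag an image region as indicative of pathology (illustrative
    weights/thresholds)."""
    detectors = (variance_score, edge_score)
    weights = (0.6, 0.4)
    thresholds = (50.0, 5.0)
    return weighted_vote(patch, detectors, weights, thresholds) >= decision
```

In practice each detector's weight would be learned from annotated mucosal images rather than fixed by hand.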

  16. Improved scanning laser fundus imaging using polarimetry

    NASA Astrophysics Data System (ADS)

    Bueno, Juan M.; Hunter, Jennifer J.; Cookson, Christopher J.; Kisilak, Marsha L.; Campbell, Melanie C. W.

    2007-05-01

    We present a polarimetric technique to improve fundus images that notably simplifies and extends a previous procedure [Opt. Lett. 27, 830 (2002)]. A generator of varying polarization states was incorporated into the illumination path of a confocal scanning laser ophthalmoscope. A series of four images, corresponding to independent incoming polarization states, was recorded. From these images, the spatially resolved elements of the top row of the Mueller matrix were computed. From these elements, images with the highest and lowest quality (according to different image quality metrics) were constructed, some of which provided improved visualization of fundus structures of clinical importance (vessels and optic nerve head). The metric values were better for these constructed images than for the initially recorded images, and better than for averaged images. Entropy was the metric most sensitive to differences in image quality. Improved visualization of features could aid in the detection, localization, and tracking of ocular disease and may be applicable in other biomedical imaging.
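
Recovering the top row of the Mueller matrix from four intensity images amounts to solving a 4x4 linear system per pixel; the particular set of incoming Stokes states below is an assumption, as the abstract does not specify which four states were generated.

```python
import numpy as np

# Stokes vectors of four independent incoming polarization states:
# horizontal, vertical, +45 deg linear, right circular (an assumed set).
S_IN = np.array([[1.0,  1.0, 0.0, 0.0],
                 [1.0, -1.0, 0.0, 0.0],
                 [1.0,  0.0, 1.0, 0.0],
                 [1.0,  0.0, 0.0, 1.0]])

def mueller_top_row(intensity_images):
    """Recover the spatially resolved top row (m00..m03) of the Mueller
    matrix from four intensity images, one per incoming state: at each pixel
    the detected intensity is I_k = sum_j m0j * S_IN[k, j], a 4x4 linear
    system solved for all pixels at once."""
    imgs = np.asarray(intensity_images, dtype=float)   # (4, H, W)
    k, h, w = imgs.shape
    m = np.linalg.solve(S_IN, imgs.reshape(k, -1))     # (4, H*W)
    return m.reshape(4, h, w)
```

Any four states whose Stokes vectors are linearly independent would work; the constructed best/worst-quality images are then linear combinations of these four element maps.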

  17. Improving automatic analysis of the electrocardiogram acquired during magnetic resonance imaging using magnetic field gradient artefact suppression.

    PubMed

    Abächerli, Roger; Hornaff, Sven; Leber, Remo; Schmid, Hans-Jakob; Felblinger, Jacques

    2006-10-01

    The electrocardiogram (ECG) used for patient monitoring during magnetic resonance imaging (MRI) unfortunately suffers from severe artefacts. These artefacts are due to the special environment of the MRI. Modeling helped in finding solutions for the suppression of these artefacts superimposed on the ECG signal. After we validated the linear and time-invariant model for the magnetic field gradient artefact generation, we applied offline and online filters for their suppression. Wiener filtering (offline) helped in generating reference annotations of the ECG beats. In online filtering, the least-mean-square filter suppressed the magnetic field gradient artefacts before the acquired ECG signal was input to the arrhythmia algorithm. Comparing the results of two runs (one using online filtering and one without) to our reference annotations, we found a marked improvement in the arrhythmia module's performance, enabling reliable patient monitoring and MRI synchronization based on the ECG signal. PMID:17015063
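
A least-mean-square artefact canceller of the kind described might be sketched as follows, assuming the gradient waveform is available as a reference input; the tap count and step size are illustrative, and a normalized (NLMS) update is used for robustness.

```python
import numpy as np

def nlms_cancel(reference, contaminated, taps=8, mu=0.05):
    """Normalized LMS adaptive filter: estimates the gradient artefact as a
    filtered version of the reference waveform and subtracts it, so the ECG
    remains in the error signal fed to the arrhythmia algorithm."""
    w = np.zeros(taps)
    out = np.zeros(len(contaminated))
    for n in range(taps, len(contaminated)):
        x = reference[n - taps + 1:n + 1][::-1]   # most recent sample first
        y = w @ x                                 # artefact estimate
        e = contaminated[n] - y                   # error = cleaned ECG sample
        w = w + mu * e * x / (x @ x + 1e-8)       # NLMS weight update
        out[n] = e
    return out
```

Because the reference is uncorrelated with the ECG, the filter converges to the artefact path and the ECG passes through largely untouched.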

  18. Interferogram conditioning for improved Fourier analysis and application to X-ray phase imaging by grating interferometry.

    PubMed

    Montaux-Lambert, Antoine; Mercère, Pascal; Primot, Jérôme

    2015-11-01

    An interferogram conditioning procedure, for subsequent phase retrieval by Fourier demodulation, is presented here as a fast iterative approach aiming at fulfilling the classical boundary conditions imposed by Fourier transform techniques. Interference fringe patterns with typical edge discontinuities were simulated in order to reveal the edge artifacts that classically appear in traditional Fourier analysis, and were consecutively used to demonstrate the correction efficiency of the proposed conditioning technique. Optimization of the algorithm parameters is also presented and discussed. Finally, the procedure was applied to grating-based interferometric measurements performed in the hard X-ray regime. The proposed algorithm enables nearly edge-artifact-free retrieval of the phase derivatives. A similar enhancement of the retrieved absorption and fringe visibility images is also achieved. PMID:26561119
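
Classical Fourier demodulation of a fringe pattern (the step whose boundary artifacts the conditioning procedure addresses) can be sketched in 1-D as follows; the carrier frequency and band limits are illustrative.

```python
import numpy as np

def fourier_demodulate(fringes, carrier):
    """Takeda-style Fourier demodulation: isolate the +carrier sideband,
    inverse-transform, and subtract the carrier ramp to recover the phase.
    Edge discontinuities leak across the whole spectrum, which is why the
    paper's conditioning step matters for non-periodic interferograms."""
    n = fringes.size
    spec = np.fft.fft(fringes)
    freqs = np.fft.fftfreq(n, d=1.0 / n)        # integer frequency bins
    mask = (freqs > carrier / 2) & (freqs < 3 * carrier / 2)
    analytic = np.fft.ifft(spec * mask)         # keep only the +1 order
    phase = np.unwrap(np.angle(analytic))
    return phase - 2 * np.pi * carrier * np.arange(n) / n
```

For a grating interferometer this recovered phase is proportional to the differential phase (the derivative of the wavefront), as in the X-ray application described.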

  19. Multiresponse imaging system design for improved resolution

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.; Rahman, Zia-Ur; Reichenbach, Stephen E.

    1991-01-01

    Multiresponse imaging is a process that acquires A images, each with a different optical response, and reassembles them into a single image with an improved resolution that can approach 1/√A times the photodetector-array sampling lattice. Our goals are to optimize the performance of this process in terms of the resolution and fidelity of the restored image and to assess the amount of information required to do so. The theoretical approach is based on the extension of both image restoration and rate-distortion theories from their traditional realm of signal processing to image processing which includes image gathering and display.

  20. Tissue Doppler Imaging Combined with Advanced 12-Lead ECG Analysis Might Improve Early Diagnosis of Hypertrophic Cardiomyopathy in Childhood

    NASA Technical Reports Server (NTRS)

    Femlund, E.; Schlegel, T.; Liuba, P.

    2011-01-01

    Optimization of early diagnosis of childhood hypertrophic cardiomyopathy (HCM) is essential in lowering the risk of HCM complications. Standard echocardiography (ECHO) has been shown to be less sensitive in this regard. In this study, we sought to assess whether spatial QRS-T angle deviation, which has been shown to predict HCM in adults with high sensitivity, and myocardial Tissue Doppler Imaging (TDI) could be additional tools in the early diagnosis of HCM in childhood. Methods: Children and adolescents with familial HCM (n=10, median age 16, range 5-27 years), and without obvious hypertrophy but with heredity for HCM (n=12, median age 16, range 4-25 years, HCM or sudden death with autopsy-verified HCM in ≥1 first-degree relative, HCM-risk) were additionally investigated with TDI and advanced 12-lead ECG analysis using Cardiax® (IMED Co Ltd, Budapest, Hungary and Houston). The spatial QRS-T angle (SA) was derived from the Kors regression-related transformation. Healthy age-matched controls (n=21) were also studied. All participants underwent thorough clinical examination. Results: The spatial QRS-T angle (Figure, Panel A) and septal E/Ea ratio (Figure, Panel B) were most increased in the HCM group as compared to the HCM-risk and control groups (p < 0.05). Of note, these two variables showed a trend toward higher levels in the HCM-risk group than in the control group (p=0.05 for E/Ea and p=0.06 for QRS-T by ANOVA). In a logistic regression model, increased SA and septal E/Ea ratio significantly predicted both the disease (Chi-square in HCM group: 9 and 5, respectively, p < 0.05 for both) and the risk for HCM (Chi-square in HCM-risk group: 5 and 4, respectively, p < 0.05 for both), with a further increased level of predictability when these two variables were combined (Chi-square 10 in HCM group and 7 in HCM-risk group, p < 0.01 for both).
Conclusions: In this small cohort, Tissue Doppler Imaging and spatial mean QRS-T angle

  1. Quantitative multi-image analysis for biomedical Raman spectroscopic imaging.

    PubMed

    Hedegaard, Martin A B; Bergholt, Mads S; Stevens, Molly M

    2016-05-01

    Imaging by Raman spectroscopy enables unparalleled label-free insights into cell and tissue composition at the molecular level. With established approaches limited to single image analysis, there are currently no general guidelines or consensus on how to quantify biochemical components across multiple Raman images. Here, we describe a broadly applicable methodology for the combination of multiple Raman images into a single image for analysis. This is achieved by removing image specific background interference, unfolding the series of Raman images into a single dataset, and normalisation of each Raman spectrum to render comparable Raman images. Multivariate image analysis is finally applied to derive the contributing 'pure' biochemical spectra for relative quantification. We present our methodology using four independently measured Raman images of control cells and four images of cells treated with strontium ions from substituted bioactive glass. We show that the relative biochemical distribution per area of the cells can be quantified. In addition, using k-means clustering, we are able to discriminate between the two cell types over multiple Raman images. This study shows a streamlined quantitative multi-image analysis tool for improving cell/tissue characterisation and opens new avenues in biomedical Raman spectroscopic imaging. PMID:26833935
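
The described pipeline (background removal, unfolding, normalisation, clustering) might be sketched as below; the per-spectrum minimum-subtraction background estimate and the tiny deterministic k-means stand-in are simplifying assumptions.

```python
import numpy as np

def combine_raman_images(images):
    """Combine several Raman images, each of shape (H, W, n_wavenumbers):
    subtract a crude per-spectrum background estimate, unfold each image to a
    list of spectra, and vector-normalize every spectrum so the images
    become directly comparable."""
    spectra, source = [], []
    for idx, img in enumerate(images):
        s = img.reshape(-1, img.shape[-1]).astype(float)
        s = s - s.min(axis=1, keepdims=True)              # background offset
        s = s / (np.linalg.norm(s, axis=1, keepdims=True) + 1e-12)
        spectra.append(s)
        source.append(np.full(len(s), idx))
    return np.vstack(spectra), np.concatenate(source)

def kmeans(data, k=2, iters=25):
    """Tiny deterministic k-means with farthest-point initialization
    (a stand-in for a library implementation)."""
    centers = [data[0]]
    for _ in range(k - 1):
        dist = np.min([((data - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(data[int(np.argmax(dist))])
    centers = np.stack(centers)
    assign = np.zeros(len(data), dtype=int)
    for _ in range(iters):
        d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = data[assign == j].mean(0)
    return assign
```

After this normalisation, clustering across all pooled spectra can discriminate cell types over multiple images, as reported in the study.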

  2. Image Analysis of Foods.

    PubMed

    Russ, John C

    2015-09-01

    The structure of foods, both natural and processed, is controlled by many variables ranging from biology to chemistry and mechanical forces. The structure in turn controls many of the properties of the food, including consumer acceptance, taste, mouthfeel, appearance, and nutrition. Imaging provides an important tool for measuring the structure of foods. This includes 2-dimensional (2D) images of surfaces and sections, for example, viewed in a microscope, as well as 3-dimensional (3D) images of internal structure as may be produced by confocal microscopy, computed tomography, or magnetic resonance imaging. The use of images also guides robotics for harvesting and sorting. Processing of images may be needed to calibrate colors, reduce noise, enhance detail, and delineate structure and dimensions. Measurement of structural information such as volume fraction and internal surface areas, as well as the analysis of object size, location, and shape in both 2- and 3-dimensional images, is illustrated and described, with primary references and examples from a wide range of applications. PMID:26270611

  3. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks), that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four-step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  4. Improved real-time imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Lambert, James L. (Inventor); Chao, Tien-Hsin (Inventor); Yu, Jeffrey W. (Inventor); Cheng, Li-Jen (Inventor)

    1993-01-01

    An improved AOTF-based imaging spectrometer that offers several advantages over prior-art AOTF imaging spectrometers is presented. The ability to electronically set the bandpass wavelength provides observational flexibility. Various improvements in optical architecture provide simplified magnification variability, improved image resolution and light throughput efficiency, and reduced sensitivity to ambient light. Two embodiments of the invention are: (1) operation in the visible/near-infrared domain over the wavelength range 0.48 to 0.76 microns; and (2) an infrared configuration which operates in the wavelength range of 1.2 to 2.5 microns.

  5. Improved Interactive Medical-Imaging System

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Twombly, Ian A.; Senger, Steven

    2003-01-01

    An improved computational-simulation system for interactive medical imaging has been invented. The system displays high-resolution, three-dimensional-appearing images of anatomical objects based on data acquired by such techniques as computed tomography (CT) and magnetic-resonance imaging (MRI). The system enables users to manipulate the data to obtain a variety of views: for example, to display cross sections in specified planes or to rotate images about specified axes. Relative to prior such systems, this system offers enhanced capabilities for synthesizing images of surgical cuts and for collaboration by users at multiple, remote computing sites.

  6. Advanced endoscopic imaging to improve adenoma detection

    PubMed Central

    Neumann, Helmut; Nägel, Andreas; Buda, Andrea

    2015-01-01

    Advanced endoscopic imaging is revolutionizing our way on how to diagnose and treat colorectal lesions. Within recent years a variety of modern endoscopic imaging techniques was introduced to improve adenoma detection rates. Those include high-definition imaging, dye-less chromoendoscopy techniques and novel, highly flexible endoscopes, some of them equipped with balloons or multiple lenses in order to improve adenoma detection rates. In this review we will focus on the newest developments in the field of colonoscopic imaging to improve adenoma detection rates. Described techniques include high-definition imaging, optical chromoendoscopy techniques, virtual chromoendoscopy techniques, the Third Eye Retroscope and other retroviewing devices, the G-EYE endoscope and the Full Spectrum Endoscopy-system. PMID:25789092

  7. Can coffee improve image guidance?

    NASA Astrophysics Data System (ADS)

    Wirz, Raul; Lathrop, Ray A.; Godage, Isuru S.; Burgner-Kahrs, Jessica; Russell, Paul T.; Webster, Robert J.

    2015-03-01

    Anecdotally, surgeons sometimes observe large errors when using image guidance in endonasal surgery. We hypothesize that one contributing factor is the possibility that operating room personnel might accidentally bump the optically tracked rigid body attached to the patient after registration has been performed. In this paper we explore the registration error at the skull base that can be induced by simulated bumping of the rigid body, and find that large errors can occur when simulated bumps are applied to the rigid body. To address this, we propose a new fixation method for the rigid body based on granular jamming (i.e. using particles like ground coffee). Our results show that our granular jamming fixation prototype reduces registration error by 28%-68% (depending on bump direction) in comparison to a standard Brainlab reference headband.

  8. Picosecond Imaging Circuit Analysis

    NASA Astrophysics Data System (ADS)

    Kash, Jeffrey A.

    1998-03-01

    With ever-increasing complexity, probing the internal operation of a silicon IC becomes more challenging, and present methods of internal probing are becoming obsolete. We have discovered that a very weak picosecond pulse of light is emitted by each FET in a CMOS circuit whenever the circuit changes logic state. This pulsed emission can be simultaneously imaged and time resolved, using a technique we have named Picosecond Imaging Circuit Analysis (PICA). With a suitable imaging detector, PICA allows time-resolved measurement on thousands of devices simultaneously. Computer videos made from measurements on real ICs will be shown. These videos, along with a more quantitative evaluation of the light emission, permit the complete operation of an IC to be measured in a non-invasive way with picosecond time resolution.

  9. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  10. Digital Image Analysis of Cereals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Image analysis is the extraction of meaningful information from images, mainly digital images by means of digital processing techniques. The field was established in the 1950s and coincides with the advent of computer technology, as image analysis is profoundly reliant on computer processing. As t...

  11. Quantitative analysis of the improvement in omnidirectional maritime surveillance and tracking due to real-time image enhancement

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.

    2011-05-01

    Tracking targets in a panoramic image is in many senses the inverse of tracking targets with a narrow field of view camera on a pan-tilt pedestal. For a narrow field of view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background, and those areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward-staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground-truth dataset was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques including level-sets, Kalman filters and particle filters were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea-glint, low-contrast visibility and sea clutter, such as white caps, is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
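
One of the trackers mentioned, a Kalman filter with a constant-velocity motion model, can be sketched as follows; the process and measurement noise covariances are illustrative tuning values, not parameters from the study.

```python
import numpy as np

def track_constant_velocity(measurements, dt=1.0, q=0.01, r=9.0):
    """Linear Kalman filter with a constant-velocity model; the state is
    [x, y, vx, vy] and the measurements are noisy (x, y) target positions
    extracted from the panoramic image."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # state transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # position measurement
    Q = q * np.eye(4)                               # process noise
    R = r * np.eye(2)                               # measurement noise
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4) * 100.0
    estimates = []
    for z in measurements:
        x = F @ x                                   # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                         # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        x = x + K @ (z - H @ x)                     # update
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return np.array(estimates)
```

With a well-matched model, the filtered track is considerably smoother and more accurate than the raw detections, which is the behaviour the objective robustness comparison measures.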

  12. Radar image analysis utilizing junctive image metamorphosis

    NASA Astrophysics Data System (ADS)

    Krueger, Peter G.; Gouge, Sally B.; Gouge, Jim O.

    1998-09-01

    A feasibility study was initiated to investigate the ability of algorithms developed for medical sonogram image analysis to be trained for extraction of cartographic information from synthetic aperture radar imagery. BioComputer Research Inc. has applied proprietary `junctive image metamorphosis' algorithms to cancer cell recognition and identification in ultrasound prostate images. These algorithms have been shown to support automatic radar image feature detection and identification. Training set images were used to develop determinants for representative point, line and area features, which were used on test images to identify and localize the features of interest. The software is computationally conservative, operating on a PC platform in real time. The algorithms are robust, having the potential to be trained for feature recognition on any digital imagery, not just imagery formed from reflected energy, such as sonograms and radar images. Applications include land mass characterization, feature identification, target recognition, and change detection.

  13. Improvement of image quality by polarization mixing

    NASA Astrophysics Data System (ADS)

    Kasahara, Ryosuke; Itoh, Izumi; Hirai, Hideaki

    2014-03-01

    Information about the polarization of light is valuable because it carries information about the light source illuminating an object, the illumination angle, and the object material. However, polarization information depends strongly on the direction of the light source, and it is difficult to use a polarization image with various recognition algorithms outdoors because the angle of the sun varies. We propose an image enhancement method for utilizing polarization information in such situations where the light source is not fixed, taking two approaches. First, we compute an image that combines a polarization image with the corresponding brightness image. Depending on the angle of the light source, the polarization carries no information about some scenes, so it is difficult to rely on polarization alone for applications such as object detection; in the combined image, the brightness image complements the missing scene information. The second approach is to find features that depend less on the direction of the light source. We propose a method for extracting scene features based on a calculation of the reflection model including polarization effects. A polarization camera with micro-polarizers on each pixel of the image sensor was built and used for capturing images. We discuss examples that demonstrate the improved visibility achieved by applying the proposed method, e.g., to lane markers on wet roads.

  14. Improved digital breast tomosynthesis images using automated ultrasound

    PubMed Central

    Zhang, Xing; Yuan, Jie; Du, Sidan; Kripfgans, Oliver D.; Wang, Xueding; Carson, Paul L.; Liu, Xiaojun

    2014-01-01

    Purpose: Digital breast tomosynthesis (DBT) offers poor image quality along the depth direction. This paper presents a new method that considerably improves DBT image quality by incorporating a priori information from automated ultrasound (AUS) images. Methods: DBT and AUS images of a complex breast-mimicking phantom are acquired by a DBT/AUS dual-modality system. The AUS images are taken in the same geometry as the DBT images, and the gradient information of the in-slice AUS images is incorporated into a new loss functional during the DBT reconstruction process. The additional data lead to new iterative update equations, obtained by solving the optimization problem with the gradient descent method. Both visual comparison and quantitative analysis are employed to evaluate the improvement in DBT images. Normalized line profiles of lesions are obtained to compare the edges of the DBT and AUS-corrected DBT images. Additionally, image quality metrics such as signal difference to noise ratio (SDNR) and artifact spread function (ASF) are calculated to quantify the effectiveness of the proposed method. Results: In traditional DBT image reconstructions, serious artifacts can be found along the depth (Z) direction, resulting in the blurring of lesion edges in the off-focus planes parallel to the detector. By applying the proposed method, the quality of the reconstructed DBT images is greatly improved. Visually, the AUS-corrected DBT images have much clearer borders in both in-focus and off-focus planes, fewer Z direction artifacts and reduced overlapping effect compared to the conventional DBT images. Quantitatively, the corrected DBT images have better ASF, indicating a great reduction in Z direction artifacts as well as better Z resolution. The sharper line profiles along the Y direction show enhancement of the edges. In addition, noise is reduced, as evidenced by the clearly improved SDNR values. Conclusions: The proposed method provides great improvement in the quality of DBT images.

  15. Contrast improvement of terahertz images of thin histopathologic sections

    PubMed Central

    Formanek, Florian; Brun, Marc-Aurèle; Yasuda, Akio

    2011-01-01

    We present terahertz images of 10 μm thick histopathologic sections obtained in reflection geometry with a time-domain spectrometer, and demonstrate improved contrast for sections measured in paraffin with water. Automated segmentation is applied to the complex refractive index data to generate clustered terahertz images distinguishing cancer from healthy tissues. The degree of classification of pixels is then evaluated using registered visible microscope images. Principal component analysis and propagation simulations are employed to investigate the origin and the gain of image contrast. PMID:21326635

  16. IMAGE ANALYSIS ALGORITHMS FOR DUAL MODE IMAGING SYSTEMS

    SciTech Connect

    Robinson, Sean M.; Jarman, Kenneth D.; Miller, Erin A.; Misner, Alex C.; Myjak, Mitchell J.; Pitts, W. Karl; Seifert, Allen; Seifert, Carolyn E.; Woodring, Mitchell L.

    2010-06-11

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes where information barriers are mandatory. However, if a balance can be struck between sufficient information barriers and feature extraction to verify or identify objects of interest, imaging may significantly advance verification efforts. This paper describes the development of combined active (conventional) radiography and passive (auto) radiography techniques for imaging sensitive items, assuming that comparison images cannot be furnished. Three image analysis algorithms are presented, each of which reduces full image information to non-sensitive feature information and ultimately is intended to provide only a yes/no response verifying features present in the image. These algorithms are evaluated both on their technical performance in image analysis and on their application with or without an explicitly constructed information barrier. The first algorithm reduces images to non-invertible pixel intensity histograms, retaining only summary information about the image that can be used in template comparisons. This one-way transform is sufficient to discriminate between different image structures (in terms of area and density) without revealing unnecessary specificity. The second algorithm estimates the attenuation cross-section of objects of known shape based on transition characteristics around the edge of the object’s image. The third algorithm compares the radiography image with the passive image to discriminate dense, radioactive material from point sources or inactive dense material. By comparing two images and reporting only a single statistic from the combination thereof, this algorithm can operate entirely behind an information barrier stage. Together with knowledge of the radiography system, these algorithms used in combination can improve verification capability in inspection regimes.
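
    The first algorithm's one-way histogram transform can be illustrated as follows. This is a hedged sketch, not the authors' code: the bin count, distance metric (L1) and acceptance threshold are assumptions, and beta-distributed noise stands in for actual radiographs.

```python
import numpy as np

def histogram_signature(image, bins=16, lo=0.0, hi=1.0):
    """Reduce an image to a coarse, non-invertible intensity histogram."""
    h, _ = np.histogram(image, bins=bins, range=(lo, hi))
    return h / h.sum()

def matches_template(image, template_hist, threshold=0.2):
    """Yes/no verification: L1 distance between histogram signatures."""
    d = np.abs(histogram_signature(image) - template_hist).sum()
    return d < threshold

# Stand-in "radiographs": same intensity statistics pass the template,
# different statistics fail, yet the signature cannot rebuild the image.
rng = np.random.default_rng(1)
reference = rng.beta(2, 5, (128, 128))
template = histogram_signature(reference)
same_class = rng.beta(2, 5, (128, 128))
different = rng.beta(5, 2, (128, 128))
```

Only the 16-number signature and the final boolean would ever cross the information barrier; the full image stays behind it.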

  17. Multiscale Analysis of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C. A.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also cursed us with a greater volume of more complex data than previous missions. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; there is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so they could be used to quantify the image processing done by the observer's eyes and brain. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-D wavelet transform and related transforms with EIT, LASCO and TRACE images. This work was supported by NASA contract NAS5-00220.
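
    As a minimal example of the multiscale methods discussed, one level of the 2-D Haar wavelet transform can be written directly in numpy. This is an illustrative sketch, not the authors' pipeline; the Haar basis is chosen only because it is the simplest wavelet.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform.

    Returns the approximation (LL) and detail (LH, HL, HH) subbands;
    the detail subbands localize structure at the current scale.
    """
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

# A smooth blob: almost all energy lands in the approximation subband,
# which is what makes multiscale decompositions useful for feature work.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 200.0)
ll, lh, hl, hh = haar2d(img)
```

Recursing on `ll` gives the coarser scales; thresholding the detail subbands at each level is one simple way to quantify structure instead of byte-scaling by eye.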

  18. Improving dermoscopy image classification using color constancy.

    PubMed

    Barata, Catarina; Celebi, M Emre; Marques, Jorge S

    2015-05-01

    Robustness is one of the most important characteristics of computer-aided diagnosis systems designed for dermoscopy images. However, it is difficult to ensure this characteristic if the systems operate with multisource images acquired under different setups. Changes in the illumination and acquisition devices alter the color of images and often reduce the performance of the systems. Thus, it is important to normalize the colors of dermoscopy images before training and testing any system. In this paper, we investigate four color constancy algorithms: Gray World, max-RGB, Shades of Gray, and General Gray World. Our results show that color constancy improves the classification of multisource images, increasing the sensitivity of a bag-of-features system from 71.0% to 79.7% and the specificity from 55.2% to 76% using only 1-D RGB histograms as features. PMID:25073179
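
    The Shades of Gray algorithm investigated here generalizes Gray World (p = 1) and max-RGB (p approaching infinity) through a Minkowski p-norm. The following is a minimal numpy sketch; the norm order and the unit-norm illuminant normalization are illustrative choices, not taken from the paper.

```python
import numpy as np

def shades_of_gray(img, p=6):
    """Shades of Gray color constancy via the Minkowski p-norm.

    The per-channel illuminant estimate is divided out so that a scene
    under a colored cast is mapped back toward canonical gray light.
    """
    img = img.astype(float)
    illum = np.power(np.mean(np.power(img, p), axis=(0, 1)), 1.0 / p)
    illum = illum / np.sqrt(np.sum(illum**2))   # unit-norm illuminant
    corrected = img / (illum * np.sqrt(3))      # gray light has e = 1/sqrt(3)
    return np.clip(corrected, 0.0, 1.0)

# A neutral gray scene under a reddish cast should come back near-neutral.
rng = np.random.default_rng(2)
gray = rng.uniform(0.3, 0.6, (32, 32, 1)) * np.ones((1, 1, 3))
cast = gray * np.array([1.0, 0.7, 0.5])         # simulated reddish illuminant
restored = shades_of_gray(cast)
```

Normalizing colors this way before feature extraction is what makes a bag-of-features classifier less sensitive to the acquisition setup.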

  19. Improving photoacoustic imaging contrast of brachytherapy seeds

    NASA Astrophysics Data System (ADS)

    Pan, Leo; Baghani, Ali; Rohling, Robert; Abolmaesumi, Purang; Salcudean, Septimiu; Tang, Shuo

    2013-03-01

    Prostate brachytherapy is a form of radiotherapy for treating prostate cancer in which the radiation sources are seeds inserted into the prostate. Accurate localization of seeds during prostate brachytherapy is essential to the success of intraoperative treatment planning. The current standard modality for intraoperative seed localization is transrectal ultrasound, which, however, suffers in image quality due to several factors such as speckle, shadowing, and off-axis seed orientation. Photoacoustic imaging, based on the photoacoustic effect, is an emerging imaging modality. Its contrast-generating mechanism is optical absorption, which is fundamentally different from conventional B-mode ultrasound, which depicts changes in acoustic impedance. A photoacoustic imaging system was developed using a commercial ultrasound system. To improve imaging contrast and depth penetration, an absorption-enhancing coating is applied to the seeds. In comparison to bare seeds, approximately 18.5 dB increase in signal-to-noise ratio as well as a doubling of imaging depth are achieved. Our results demonstrate that coating the seeds can further improve their discernibility.

  20. Computational efficiency improvements for image colorization

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Sharma, Gaurav; Aly, Hussein

    2013-03-01

    We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
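
    The first innovation, solving the propagation system iteratively without building the sparse matrix, can be sketched with a Jacobi-style relaxation: each free pixel is repeatedly replaced by the similarity-weighted average of its neighbours while scribbled pixels stay clamped. This is an illustrative stand-in for the authors' solver; the 4-neighbourhood, Gaussian affinity, iteration count and wrap-around boundaries are assumptions chosen for brevity.

```python
import numpy as np

def colorize(gray, scribble, mask, iters=3000, sigma=0.05):
    """Matrix-free Jacobi iteration for scribble-based color propagation."""
    shifts = ((1, 0), (-1, 0), (0, 1), (0, -1))
    # Affinity to each 4-neighbour from grayscale similarity (precomputed).
    weights = [np.exp(-((gray - np.roll(gray, s, axis=(0, 1))) ** 2)
                      / (2 * sigma ** 2)) for s in shifts]
    wsum = sum(weights)
    u = scribble.copy()
    for _ in range(iters):
        acc = sum(w * np.roll(u, s, axis=(0, 1))
                  for w, s in zip(weights, shifts))
        u = np.where(mask, scribble, acc / wsum)   # keep scribbles fixed
    return u

# Two flat gray regions, one chrominance scribble per region:
# each scribble's value floods its own region and stops at the edge.
gray = np.zeros((20, 20)); gray[:, 10:] = 1.0
scribble = np.zeros_like(gray)
mask = np.zeros_like(gray, dtype=bool)
scribble[10, 2], mask[10, 2] = 1.0, True
scribble[10, 17], mask[10, 17] = -1.0, True
chroma = colorize(gray, scribble, mask)
```

The per-iteration work is a handful of shifted array operations, so memory stays at a few image-sized buffers instead of an N-by-N sparse matrix.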

  1. Improving high resolution retinal image quality using speckle illumination HiLo imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-01-01

    Retinal image quality from flood illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as those of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood illumination microscopes and produce pseudo-confocal images with significantly improved image quality. In this work, we adapted the HiLo technique to a flood AO ophthalmoscope and performed AO imaging in both (physical) model and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively using spatial spectral analysis. PMID:25136486

  2. Improved Guided Image Fusion for Magnetic Resonance and Computed Tomography Imaging

    PubMed Central

    Jameel, Amina

    2014-01-01

    Improved guided image fusion for magnetic resonance and computed tomography imaging is proposed. The existing guided filtering scheme uses a Gaussian filter and two-level weight maps, which limit its performance on noisy images. Modifications to the filter (based on a linear minimum mean square error estimator) and to the weight maps (with different levels) are proposed to overcome these limitations. Simulation results based on visual and quantitative analysis show the significance of the proposed scheme. PMID:24695586

  3. Improving Performance During Image-Guided Procedures

    PubMed Central

    Duncan, James R.; Tabriz, David

    2015-01-01

    Objective Image-guided procedures have become a mainstay of modern health care. This article reviews how human operators process imaging data and use it to plan procedures and make intraprocedural decisions. Methods A series of models from human factors research, communication theory, and organizational learning were applied to the human-machine interface that occupies the center stage during image-guided procedures. Results Together, these models suggest several opportunities for improving performance as follows: 1. Performance will depend not only on the operator’s skill but also on the knowledge embedded in the imaging technology, available tools, and existing protocols. 2. Voluntary movements consist of planning and execution phases. Performance subscores should be developed that assess quality and efficiency during each phase. For procedures involving ionizing radiation (fluoroscopy and computed tomography), radiation metrics can be used to assess performance. 3. At a basic level, these procedures consist of advancing a tool to a specific location within a patient and using the tool. Paradigms from mapping and navigation should be applied to image-guided procedures. 4. Recording the content of the imaging system allows one to reconstruct the stimulus/response cycles that occur during image-guided procedures. Conclusions When compared with traditional “open” procedures, the technology used during image-guided procedures places an imaging system and long thin tools between the operator and the patient. Taking a step back and reexamining how information flows through an imaging system and how actions are conveyed through human-machine interfaces suggest that much can be learned from studying system failures. In the same way that flight data recorders revolutionized accident investigations in aviation, much could be learned from recording video data during image-guided procedures. PMID:24921628

  4. Statistical image analysis of longitudinal RAVENS images

    PubMed Central

    Lee, Seonjoo; Zipunnikov, Vadim; Reich, Daniel S.; Pham, Dzung L.

    2015-01-01

    Regional analysis of volumes examined in normalized space (RAVENS) images are transformation-based images used in the study of brain morphometry. In this paper, RAVENS images are analyzed using a longitudinal variant of voxel-based morphometry (VBM) and longitudinal functional principal component analysis (LFPCA) for high-dimensional images. We demonstrate that the latter overcomes the limitations of standard longitudinal VBM analyses, which do not separate registration errors from other longitudinal changes and baseline patterns. This is especially important in contexts where longitudinal changes are only a small fraction of the overall observed variability, which is typical in normal aging and many chronic diseases. Our simulation study shows that LFPCA effectively separates registration error from baseline and longitudinal signals of interest by decomposing RAVENS images measured at multiple visits into three components: a subject-specific imaging random intercept that quantifies the cross-sectional variability, a subject-specific imaging slope that quantifies the irreversible changes over multiple visits, and a subject-visit specific imaging deviation. We describe strategies to identify baseline/longitudinal variation and registration errors combined with covariates of interest. Our analysis suggests that specific regional brain atrophy and ventricular enlargement are associated with multiple sclerosis (MS) disease progression. PMID:26539071

  5. Do photographic images of pain improve communication during pain consultations?

    PubMed Central

    Padfield, Deborah; Zakrzewska, Joanna M; de C Williams, Amanda C

    2015-01-01

    BACKGROUND: Visual images may facilitate the communication of pain during consultations. OBJECTIVES: To assess whether photographic images of pain enrich the content and/or process of pain consultation by comparing patients’ and clinicians’ ratings of the consultation experience. METHODS: Photographic images of pain previously co-created by patients with a photographer were provided to new patients attending pain clinic consultations. Seventeen patients selected and used images that best expressed their pain and were compared with 21 patients who were not shown images. Ten clinicians conducted assessments in each condition. After consultation, patients and clinicians completed ratings of aspects of communication and, when images were used, how they influenced the consultation. RESULTS: The majority of both patients and clinicians reported that images enhanced the consultation. Ratings of communication were generally high, with no differences between those with and without images (with the exception of confidence in treatment plan, which was rated more highly in the image group). However, patients’ and clinicians’ ratings of communication were inversely related only in consultations with images. Methodological shortcomings may underlie the present findings of no difference. It is also possible that using images raised patients’ and clinicians’ expectations and encouraged emotional disclosure, in response to which clinicians were dissatisfied with their performance. CONCLUSIONS: Using images in clinical encounters did not have a negative impact on the consultation, nor did it improve communication or satisfaction. These findings will inform future analysis of behaviour in the video-recorded consultations. PMID:25996763

  6. Improving the Remedial Student's Self Image.

    ERIC Educational Resources Information Center

    Glazier, Teresa Ferster

    Teachers at Western Illinois University help improve the self-image of students in college remedial English courses in the following ways: (1) they try to alleviate students' feelings of failure through such methods as stressing the acceptability of spoken dialects but pointing out the practical need for writing in Standard English, and…

  7. Improving Synthetic Aperture Image by Image Compounding in Beamforming Process

    NASA Astrophysics Data System (ADS)

    Martínez-Graullera, Oscar; Higuti, Ricardo T.; Martín, Carlos J.; Ullate, Luis G.; Romero, David; Parrilla, Montserrat

    2011-06-01

    In this work, signal processing techniques are used to improve the quality of images based on multi-element synthetic aperture techniques. Using several apodization functions to obtain different side-lobe distributions, a polarity function and a threshold criterion are used to develop an image compounding technique. The spatial diversity is increased using an additional array, which generates complementary information about the defects, improving the results of the proposed algorithm and producing images with high resolution and contrast. The inspection of isotropic plate-like structures using linear arrays and Lamb waves is presented. Experimental results are shown for a 1-mm-thick isotropic aluminum plate with artificial defects, using linear arrays formed by 30 piezoelectric elements with the low-dispersion symmetric mode S0 at a frequency of 330 kHz.

  8. Image improvement and three-dimensional reconstruction using holographic image processing

    NASA Technical Reports Server (NTRS)

    Stroke, G. W.; Halioua, M.; Thon, F.; Willasch, D. H.

    1977-01-01

    Holographic computing principles make possible image improvement and synthesis in many cases of current scientific and engineering interest. Examples are given for the improvement of resolution in electron microscopy and 3-D reconstruction in electron microscopy and X-ray crystallography, following an analysis of optical versus digital computing in such applications.

  9. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
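
    As an illustration of the off-line toolbox mentioned above (filtering, correlation, thresholding), a pupil position can be estimated to sub-pixel accuracy by thresholding the dark pupil and taking its centroid: averaging over many pixels is one reason off-line analysis can beat a coarse real-time estimate. The threshold value and the synthetic eye image below are assumptions for illustration.

```python
import numpy as np

def pupil_center(frame, dark_threshold=0.2):
    """Estimate pupil position as the centroid of below-threshold pixels."""
    ys, xs = np.nonzero(frame < dark_threshold)
    return xs.mean(), ys.mean()   # sub-pixel via averaging

# Synthetic eye image: bright background with a dark pupil disk at (40, 25).
y, x = np.mgrid[0:64, 0:64]
frame = np.ones((64, 64))
frame[(x - 40.0) ** 2 + (y - 25.0) ** 2 < 8 ** 2] = 0.05
cx, cy = pupil_center(frame)
```

In practice the threshold would be set from the image histogram, and the same centroid idea extends to tracking retinal blood-vessel features after correlation-based alignment.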

  10. Imaging Arrays With Improved Transmit Power Capability

    PubMed Central

    Zipparo, Michael J.; Bing, Kristin F.; Nightingale, Kathy R.

    2010-01-01

    Bonded multilayer ceramics and composites incorporating low-loss piezoceramics have been applied to arrays for ultrasound imaging to improve acoustic transmit power levels and to reduce internal heating. Commercially available hard PZT from multiple vendors has been characterized for microstructure, ability to be processed, and electroacoustic properties. Multilayers using the best materials demonstrate the tradeoffs compared with the softer PZT-5H typically used for imaging arrays. Three-layer PZT-4 composites exhibit an effective dielectric constant that is three times that of single-layer PZT-5H, a 50% higher mechanical Q, a 30% lower acoustic impedance, and only a 10% lower coupling coefficient. Application of low-loss multilayers to linear phased and large curved arrays results in equivalent or better element performance. A three-layer PZT-4 composite array achieved the same transmit intensity at 40% lower transmit voltage and with a 35% lower face temperature increase than the PZT-5 control. Although B-mode images show similar quality, acoustic radiation force impulse (ARFI) images show increased displacement for a given drive voltage. An increased failure rate for the multilayers following extended operation indicates that further development of the bond process will be necessary. In conclusion, bonded multilayer ceramics and composites allow additional design freedom to optimize arrays and improve the overall performance for increased acoustic output while maintaining image quality. PMID:20875996

  11. Improved image guidance of coronary stent deployment

    NASA Astrophysics Data System (ADS)

    Close, Robert A.; Abbey, Craig K.; Whiting, James S.

    2000-04-01

    Accurate placement and expansion of coronary stents is hindered by the fact that most stents are only slightly radiopaque and hence difficult to see in typical coronary x-rays. We propose a new technique for improved image guidance of multiple coronary stent deployments using layer decomposition of cine x-ray images of stented coronary arteries. Layer decomposition models the cone-beam x-ray projections through the chest as a set of superposed layers moving with translation, rotation, and scaling. Radiopaque markers affixed to the guidewire or delivery balloon provide a trackable feature so that the correct vessel motion can be measured for layer decomposition. In addition to the time-averaged layer image, we also derive a background-subtracted image sequence that removes moving background structures. Layer decomposition of contrast-free vessels can be used to guide placement of multiple stents and to assess uniformity of stent expansion. Layer decomposition of contrast-filled vessels can be used to measure residual stenosis to determine the adequacy of stent expansion. We demonstrate that layer decomposition of a clinical cine x-ray image sequence greatly improves the visibility of a previously deployed stent. We show that layer decomposition of contrast-filled vessels removes background structures and reduces noise.

  12. Method of improving a digital image

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur (Inventor); Jobson, Daniel J. (Inventor); Woodell, Glenn A. (Inventor)

    1999-01-01

    A method of improving a digital image is provided. The image is initially represented by digital data indexed to represent positions on a display. The digital data are indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value in accordance with R_i(x,y) = sum_n W_n [log I_i(x,y) - log(F_n(x,y) * I_i(x,y))], where S is the number of unique spectral bands included in said digital data (i = 1, ..., S), W_n is a weighting factor and * denotes the convolution operator. Each surround function F_n(x,y) is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band is filtered with a common function and then presented to a display device. For color images, a novel color restoration step is added to give the image true-to-life color that closely matches human observation.
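
    The per-band adjustment described in this abstract has the general form of a multiscale retinex: the log of each pixel minus the log of its surround-weighted average, summed over scales. The numpy sketch below is a hedged illustration of that form only, not the patented implementation; the Gaussian surrounds, equal weights and scale values are assumptions.

```python
import numpy as np

def blur(img, sigma):
    """Separable Gaussian surround (edge-padded 1-D convolutions)."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    conv = lambda a: np.convolve(np.pad(a, radius, mode='edge'), k, mode='valid')
    return np.apply_along_axis(conv, 1, np.apply_along_axis(conv, 0, img))

def multiscale_retinex(band, sigmas=(5, 25), eps=1e-6):
    """R(x,y) = sum_n W_n [log I(x,y) - log(F_n * I)(x,y)], equal W_n."""
    w = 1.0 / len(sigmas)
    logI = np.log(band + eps)
    return sum(w * (logI - np.log(blur(band, s) + eps)) for s in sigmas)

# A uniform scene maps to ~0 everywhere; a small bright feature stands out
# against its surround, which is the dynamic-range-compression effect.
flat = np.full((64, 64), 0.5)
scene = np.full((64, 64), 0.2)
scene[32, 32] = 0.9
out_flat = multiscale_retinex(flat)
out = multiscale_retinex(scene)
```

For a color image the same adjustment would be applied per spectral band, followed by the rescaling and color restoration steps the abstract mentions.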

  13. Scale-Specific Multifractal Medical Image Analysis

    PubMed Central

    Braverman, Boris

    2013-01-01

    Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value. PMID:24023588
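
    The integral-image speedup for box counting works because the sum over any box reduces to four table look-ups, independent of box size. A minimal sketch, assuming a square binary mask and grid-aligned boxes for clarity:

```python
import numpy as np

def integral_image(mask):
    """Summed-area table: S[i, j] = count of set pixels above-left of (i, j)."""
    return np.pad(mask.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def box_count(mask, size):
    """Count occupied size-by-size boxes with O(1) work per box."""
    S = integral_image(mask)
    n = mask.shape[0]
    count = 0
    for i in range(0, n, size):
        for j in range(0, n, size):
            total = (S[i + size, j + size] - S[i, j + size]
                     - S[i + size, j] + S[i, j])
            count += total > 0
    return count

# A straight line should yield a box-counting dimension near 1.
diag = np.eye(64, dtype=bool)
d = np.log(box_count(diag, 1) / box_count(diag, 8)) / np.log(8)
```

Because the table is built once, sweeping over many box sizes (as needed for scale-dependent dimensions) costs only the look-ups, not repeated pixel scans.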

  14. Reflections on ultrasound image analysis.

    PubMed

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably in twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research due to the real-time acquisition capability of ultrasound and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps more fundamentally changed. From roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and thus were better suited to the earlier eras of medical image analysis which were dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis which may both have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time. PMID:27503078

  15. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    PubMed Central

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-01-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images, but also by detection sensitivity. As the probe size is reduced to below 1 µm, for example, a low signal in each pixel limits lateral resolution due to counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure. PMID:24912432
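
    Pan-sharpening in the spirit described, injecting high-frequency detail from the high-resolution electron image into the upsampled chemical image, can be sketched as a simple component-substitution scheme. This is a simplified stand-in for the algorithm actually used; the block-average low-pass and nearest-neighbour upsampling are assumptions.

```python
import numpy as np

def block_mean(img, factor):
    """Low-pass by block averaging (models the coarse SIMS sampling)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def pan_sharpen(low_res, pan):
    """Fuse: upsampled chemistry + high-frequency detail from the pan image."""
    factor = pan.shape[0] // low_res.shape[0]
    up = np.kron(low_res, np.ones((factor, factor)))
    detail = pan - np.kron(block_mean(pan, factor), np.ones((factor, factor)))
    return up + detail

# Stand-ins: a high-res "SEM" image and a low-res "SIMS" map whose intensity
# differs from the SEM's (chemical contrast), fused at 4x resolution gain.
rng = np.random.default_rng(4)
pan = rng.uniform(0.0, 1.0, (64, 64))
chem = 2.0 * block_mean(pan, 4)
fused = pan_sharpen(chem, pan)
```

The fused image keeps the chemical image's block averages (chemical specificity) while borrowing the electron image's fine spatial structure.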

  16. An improved SIFT algorithm based on KFDA in image registration

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Yang, Lijuan; Huo, Jinfeng

    2016-03-01

    As a kind of stable feature matching algorithm, SIFT has been widely used in many fields. To further improve the robustness of the SIFT algorithm, an improved SIFT algorithm with Kernel Fisher Discriminant Analysis (KFDA-SIFT) is presented for image registration. The algorithm applies KFDA to the SIFT descriptors to obtain a feature extraction matrix, uses the new descriptors to perform feature matching, and finally applies RANSAC to purify the matches. Experiments show that the presented algorithm is robust to image changes in scale, illumination, perspective, expression and small pose variations, with higher matching accuracy.
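    The matching step that precedes RANSAC purification is commonly done with Lowe's ratio test on descriptor distances; a minimal sketch follows (the paper does not specify its matching criterion, so this is an assumed standard approach, and `ratio_test_matches` is a hypothetical name):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep a candidate match only when the nearest descriptor in desc_b
    is clearly closer than the second nearest (Lowe's ratio test); this
    rejects ambiguous matches before RANSAC purification."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = sorted((euclidean(d, e), j) for j, e in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

RANSAC then fits a geometric model (e.g., a homography) to the surviving matches and discards the remaining outliers.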

  17. Improved Chen-Smith image coder

    NASA Astrophysics Data System (ADS)

    Rubino, Eduardo M.; de Queiroz, Ricardo L.; Malvar, Henrique S.

    1995-04-01

    A new transform coder based on the zonal sampling strategy, which outperforms the JPEG baseline coder with comparable computational complexity, is presented. The primary transform used is the 8- x 8-pixel-block discrete cosine transform, although it can be replaced by other transforms, such as the lapped orthogonal transform, without any change in the algorithm. This coder is originally based on the Chen-Smith coder; therefore we call it an improved Chen-Smith (ICS) coder. However, because many new features were incorporated in this improved version, it largely outperforms its predecessor. Key approaches in the ICS coder, such as a new quantizer design, arithmetic coders, noninteger bit-rate allocation, decimalized variance maps, distance-based block classification, and human visual sensitivity weighting, are essential for its high performance. Image compression programs were developed and applied to several test images. The results show that the ICS performs substantially better than the JPEG coder.

  18. [An improved medical image fusion algorithm and quality evaluation].

    PubMed

    Chen, Meiling; Tao, Ling; Qian, Zhiyu

    2009-08-01

    Medical image fusion is of great value in medical image analysis and diagnosis. In this paper, the conventional wavelet fusion method is improved and a new medical image fusion algorithm is presented, in which the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply both the conventional and the improved wavelet-based fusion algorithms to fuse two images of the human body and evaluate the fusion results with a quality evaluation method. Experimental results show that the algorithm effectively retains the detail of the original images and enhances their edge and texture features. The new algorithm outperforms the conventional fusion algorithm based on the wavelet transform. PMID:19813594
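    A minimal sketch of wavelet-domain fusion, under simplifying assumptions: a one-level 2-D Haar transform stands in for the paper's wavelet, approximation bands are averaged, and the regional-edge-intensity rule is reduced to picking the larger-magnitude detail coefficient. All function names are hypothetical:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) bands."""
    a = (x[0::2] + x[1::2]) / 2.0   # row averages
    d = (x[0::2] - x[1::2]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.zeros((ll.shape[0], ll.shape[1] * 2)); d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.zeros((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img1, img2):
    """Average the low-frequency band; keep the stronger detail
    coefficient (a crude stand-in for regional edge intensity)."""
    c1, c2 = haar2(img1), haar2(img2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(c1[1:], c2[1:])]
    return ihaar2(ll, *details)
```

Fusing an image with itself reconstructs it exactly, which is a useful sanity check for the transform pair.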

  19. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  20. Oncological image analysis: medical and molecular image analysis

    NASA Astrophysics Data System (ADS)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  1. Perceived Image Quality Improvements from the Application of Image Deconvolution to Retinal Images from an Adaptive Optics Fundus Imager

    NASA Astrophysics Data System (ADS)

    Soliz, P.; Nemeth, S. C.; Erry, G. R. G.; Otten, L. J.; Yang, S. Y.

    Aim: The objective of this project was to apply an image restoration methodology based on wavefront measurements obtained with a Shack-Hartmann sensor and to evaluate the restored image quality against medical criteria. Methods: Implementing an adaptive optics (AO) technique, a fundus imager was used to achieve low-order correction to images of the retina. The high-order correction was provided by deconvolution. A Shack-Hartmann wavefront sensor measures aberrations. The wavefront measurement is the basis for activating a deformable mirror. Image restoration to remove remaining aberrations is achieved by direct deconvolution using the point spread function (PSF) or a blind deconvolution. The PSF is estimated using measured wavefront aberrations. Direct application of classical deconvolution methods such as inverse filtering, Wiener filtering or iterative blind deconvolution (IBD) to the AO retinal images obtained from the adaptive optical imaging system is not satisfactory because of the very large image size, difficulty in modeling the system noise, and inaccuracy in PSF estimation. Our approach combines direct and blind deconvolution to exploit available system information, avoid non-convergence, and avoid time-consuming iterative processes. Results: The deconvolution was applied to human subject data and the resulting restored images were compared by a trained ophthalmic researcher. Qualitative analysis showed significant improvements. Neovascularization can be visualized with the adaptive optics device that cannot be resolved with the standard fundus camera. The individual nerve fiber bundles are easily resolved, as are melanin structures in the choroid. Conclusion: This project demonstrated that computer-enhanced, adaptive optics images have greater detail of anatomical and pathological structures.

  2. Improved Scanners for Microscopic Hyperspectral Imaging

    NASA Technical Reports Server (NTRS)

    Mao, Chengye

    2009-01-01

    Improved scanners to be incorporated into hyperspectral microscope-based imaging systems have been invented. Heretofore, in microscopic imaging, including spectral imaging, it has been customary to either move the specimen relative to the optical assembly that includes the microscope or else move the entire assembly relative to the specimen. It becomes extremely difficult to control such scanning when submicron translation increments are required, because the high magnification of the microscope enlarges all movements in the specimen image on the focal plane. To overcome this difficulty, in a system based on this invention, no attempt would be made to move either the specimen or the optical assembly. Instead, an objective lens would be moved within the assembly so as to cause translation of the image at the focal plane: the effect would be equivalent to scanning in the focal plane. The upper part of the figure depicts a generic proposed microscope-based hyperspectral imaging system incorporating the invention. The optical assembly of this system would include an objective lens (normally, a microscope objective lens) and a charge-coupled-device (CCD) camera. The objective lens would be mounted on a servomotor-driven translation stage, which would be capable of moving the lens in precisely controlled increments, relative to the camera, parallel to the focal-plane scan axis. The output of the CCD camera would be digitized and fed to a frame grabber in a computer. The computer would store the frame-grabber output for subsequent viewing and/or processing of images. The computer would contain a position-control interface board, through which it would control the servomotor. There are several versions of the invention. An essential feature common to all versions is that the stationary optical subassembly containing the camera would also contain a spatial window, at the focal plane of the objective lens, that would pass only a selected portion of the image. 
In one version

  3. Hyperspectral image analysis. A tutorial.

    PubMed

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-10-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics such as hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing are covered, and guidelines are given and discussed. Owing to the broad character of current applications and the vast number of multivariate methods available, this paper focuses on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics using near-infrared hyperspectral imaging and Partial Least Squares Discriminant Analysis. The reader is thus guided through each step and shown how to adapt these strategies to his or her own case. PMID:26481986
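    The multivariate exploratory step typically begins by unfolding the (rows x columns x bands) cube into a pixel-by-band matrix and computing PCA score images. A minimal sketch under that assumption (the function name `hsi_pca_scores` is hypothetical, and PCA is computed via SVD of the mean-centered matrix):

```python
import numpy as np

def hsi_pca_scores(cube, n_components=2):
    """Unfold an (rows, cols, bands) hyperspectral cube into a
    (rows*cols, bands) matrix, mean-center it, and return the first
    PCA score images, refolded to the spatial grid."""
    r, c, b = cube.shape
    X = cube.reshape(r * c, b).astype(float)
    X = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    return scores.reshape(r, c, n_components)
```

Pixels with distinct spectra (e.g., different plastic types) separate along the leading score images, which is what makes the scores useful for exploratory inspection before building a PLS-DA classifier.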

  4. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

    This paper presents an improved imaging algorithm for bridge crack detection that optimizes the eight-direction Sobel edge detection operator, making the positioning of edge points more accurate than without the optimization and effectively reducing false edge information, so as to facilitate follow-up processing. In calculating the crack geometry characteristics, we use skeleton extraction to measure the length of a single crack. To calculate the crack area, we construct an area template by a logical bitwise AND operation on the crack image. Experiments show that the differences between the crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and it can provide more valid data for proper planning and appropriate performance of bridge maintenance and rehabilitation processes.
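    An eight-direction Sobel response can be sketched as the maximum absolute correlation with rotated 3x3 kernels; since the kernels for opposite directions are negations of each other, four kernels suffice for the magnitude. This is a generic illustration, not the paper's optimized operator, and `sobel8` is a hypothetical name:

```python
import numpy as np

# Sobel-style kernels for 0, 45, 90 and 135 degrees; the remaining four
# directions are their negatives, so four kernels cover all eight.
K0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)   # vertical edges
K45 = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float)
K90 = K0.T                                                   # horizontal edges
K135 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], float)

def conv3(img, k):
    """3x3 correlation with edge padding."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def sobel8(img):
    """Maximum absolute response over the eight Sobel directions."""
    return np.max([np.abs(conv3(img, k)) for k in (K0, K45, K90, K135)],
                  axis=0)
```

Taking the maximum over directions makes the detector respond to edges of any orientation, which suits the irregular geometry of cracks.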

  5. Improved Imaging Resolution in Desorption Electrospray Ionization Mass Spectrometry

    SciTech Connect

    Kertesz, Vilmos; Van Berkel, Gary J

    2008-01-01

    Imaging resolution of desorption electrospray ionization mass spectrometry (DESI-MS) was investigated using printed patterns on paper and thin-layer chromatography (TLC) plate surfaces. Resolution approaching 40 µm was achieved with a typical DESI-MS setup, which is approximately 5 times better than the best resolution reported previously. This improvement was accomplished with careful control of operational parameters (particularly spray tip-to-surface distance, solvent flow rate, and spacing of lane scans). Also, an appropriately strong analyte/surface interaction and a uniform surface texture on a size scale no larger than the desired imaging resolution were required to achieve this resolution. Overall, conditions providing the smallest possible effective desorption/ionization area in the DESI impact plume region and minimizing analyte redistribution on the surface during analysis led to the improved DESI-MS imaging resolution.

  6. Improved wheal detection from skin prick test images

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan

    2014-03-01

    The skin prick test is a commonly used method for diagnosis of allergic diseases (e.g., pollen allergy, food allergy, etc.) in allergy clinics. The results of this test are erythema and a wheal provoked on the skin where the test is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. Finally, we perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments, performed on images acquired from 36 different patients, demonstrate the effectiveness of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
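    The color transformation and the chroma PCA step can be sketched as follows, assuming the standard BT.601 YCbCr conversion (the paper does not state which variant it uses) and projecting each pixel's (Cb, Cr) pair onto the first principal component; `chroma_pca_contrast` is a hypothetical name:

```python
import numpy as np

def rgb_to_cbcr(rgb):
    """ITU-R BT.601 conversion; the luminance channel is discarded to
    suppress illumination variation across the test region."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def chroma_pca_contrast(rgb):
    """Project (Cb, Cr) pixel pairs onto their first principal
    component to obtain a single contrast-enhanced plane."""
    cb, cr = rgb_to_cbcr(rgb.astype(float))
    X = np.stack([cb.ravel(), cr.ravel()], axis=1)
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[0]).reshape(cb.shape)
```

The resulting plane maximizes chroma variance, so a reddish wheal stands out against surrounding skin regardless of uneven lighting, and can then be segmented with thresholding and morphological operations.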

  7. Breast cancer histopathology image analysis: a review.

    PubMed

    Veta, Mitko; Pluim, Josien P W; van Diest, Paul J; Viergever, Max A

    2014-05-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization, and which aim at replacing the optical microscope as the primary tool used by pathologists. Breast cancer is the most prevalent form of cancer among women, and image analysis methods that target this disease have huge potential to reduce the workload in a typical pathology lab and to improve the quality of the interpretation. This paper is meant as an introduction for nonexperts. It starts with an overview of the tissue preparation, staining and slide digitization processes, followed by a discussion of the different image processing techniques and applications, ranging from analysis of tissue staining to computer-aided diagnosis and prognosis of breast cancer patients. PMID:24759275

  8. Automatic identification of ROI in figure images toward improving hybrid (text and image) biomedical document retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Rahman, Md Mahmudur; Govindaraju, Venu; Thoma, George R.

    2011-01-01

    Biomedical images are often referenced for clinical decision support (CDS), educational purposes, and research. They appear in specialized databases or in biomedical publications and are not meaningfully retrievable using primarily text-based retrieval systems. The task of automatically finding the images in an article that are most useful for determining relevance to a clinical situation is quite challenging. One approach is to automatically annotate images extracted from scientific publications with respect to their usefulness for CDS. As an important step toward this goal, we proposed figure image analysis for localizing pointers (arrows, symbols) to extract regions of interest (ROI) that can then be used to obtain meaningful local image content. Content-based image retrieval (CBIR) techniques can then associate local image ROIs with biomedical concepts identified in figure captions for improved hybrid (text and image) retrieval of biomedical articles. In this work we present methods that make our previous Markov random field (MRF)-based approach for pointer recognition and ROI extraction more robust. These include the use of Active Shape Models (ASM) to overcome problems in recognizing distorted pointer shapes, and a region segmentation method for ROI extraction. We measure the performance of our methods on two criteria: (i) effectiveness in recognizing pointers in images, and (ii) improved document retrieval through use of extracted ROIs. Evaluation on three test sets shows 87% accuracy on the first criterion. Further, the quality of document retrieval using local visual features and text is shown to be better than using visual features alone.

  9. Depth-based selective image reconstruction using spatiotemporal image analysis

    NASA Astrophysics Data System (ADS)

    Haga, Tetsuji; Sumi, Kazuhiko; Hashimoto, Manabu; Seki, Akinobu

    1999-03-01

    In industrial plants, a remote monitoring system that eliminates the need for physical tour inspection is often considered desirable. However, the image sequence from a mobile inspection robot is hard to interpret because the objects of interest are often partially occluded by obstacles such as pillars or fences. Our aim is to improve the image sequence so as to increase the efficiency and reliability of remote visual inspection. We propose a new depth-based image processing technique that removes needless objects from the foreground and electronically recovers the occluded background. Our algorithm is based on spatiotemporal analysis, which enables fine and dense depth estimation, depth-based precise segmentation, and accurate interpolation. We apply this technique to a real-time sequence from the mobile inspection robot. The resulting image sequence is satisfactory in that the operator can make correct visual inspections with less fatigue.

  10. Using Image Analysis to Build Reading Comprehension

    ERIC Educational Resources Information Center

    Brown, Sarah Drake; Swope, John

    2010-01-01

    Content area reading remains a primary concern of history educators. In order to better prepare students for encounters with text, the authors propose the use of two image analysis strategies tied with a historical theme to heighten student interest in historical content and provide a basis for improved reading comprehension.

  11. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. The toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
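    The integral-image structure the library exploits is easy to sketch: a summed-area table with an extra zero row and column lets any rectangle sum be computed from four lookups. This is the standard technique, shown here in Python for brevity rather than the library's C:

```python
def integral_image(img):
    """Summed-area table with a leading zero row/column, so that the
    sum over any rectangle needs only four table lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom][left:right] in O(1) time."""
    return (ii[bottom][right] - ii[top][right]
            - ii[bottom][left] + ii[top][left])
```

After the one-time O(hw) table construction, every subwindow sum costs the same regardless of its size, which is what makes real-time texture descriptors feasible on constrained hardware.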

  12. Improved estimation of parametric images of cerebral glucose metabolic rate from dynamic FDG-PET using volume-wise principal component analysis

    NASA Astrophysics Data System (ADS)

    Dai, Xiaoqian; Tian, Jie; Chen, Zhe

    2010-03-01

    Parametric images can represent both the spatial distribution and the quantification of the biological and physiological parameters of tracer kinetics. The linear least squares (LLS) method is a well-established linear regression method for generating parametric images by fitting compartment models with good computational efficiency. However, bias exists in LLS-based parameter estimates, owing to the noise present in tissue time activity curves (TTACs) that propagates as correlated error in the LLS linearized equations. To address this problem, a volume-wise principal component analysis (PCA) based method is proposed. In this method, the dynamic PET data are first pre-transformed to standardize the noise variance, since PCA is a data-driven technique and cannot itself separate signal from noise. Second, volume-wise PCA is applied to the PET data. The signal is mostly represented by the first few principal components (PCs), and the noise is left in the subsequent PCs. The noise-reduced data are then obtained from the first few PCs by applying 'inverse PCA', and are transformed back according to the pre-transformation used in the first step to maintain the scale of the original data set. Finally, the new data set is used to generate parametric images using the linear least squares (LLS) estimation method. Compared with other noise-removal methods, the proposed method achieves high statistical reliability in the generated parametric images. The effectiveness of the method is demonstrated both with computer simulation and with a clinical dynamic FDG PET study.
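    The truncate-and-invert PCA step can be sketched as follows, with simplifications: frames are stacked as a (frames x voxels) matrix, mean-centering stands in for the paper's noise-standardizing pre-transformation, and `pca_denoise` is a hypothetical name:

```python
import numpy as np

def pca_denoise(frames, n_keep=2):
    """Stack dynamic frames as (n_frames, n_voxels), mean-center over
    frames, keep only the first principal components, and invert
    ('inverse PCA') to obtain a noise-reduced data set."""
    F = frames.reshape(frames.shape[0], -1).astype(float)
    mean = F.mean(axis=0)
    U, S, Vt = np.linalg.svd(F - mean, full_matrices=False)
    S = S.copy()
    S[n_keep:] = 0.0                      # drop noise-dominated PCs
    denoised = (U * S) @ Vt + mean
    return denoised.reshape(frames.shape)
```

Because tracer kinetics are temporally smooth, the signal concentrates in the first few components while voxel-wise noise spreads over all of them, so truncation suppresses noise with little signal loss.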

  13. Image Analysis in Surgical Pathology.

    PubMed

    Lloyd, Mark C; Monaco, James P; Bui, Marilyn M

    2016-06-01

    Digitization of glass slides of surgical pathology samples facilitates a number of value-added capabilities beyond what a pathologist could previously do with a microscope. Image analysis is one of the most fundamental opportunities to leverage the advantages that digital pathology provides. The ability to quantify aspects of a digital image is an extraordinary opportunity to collect data with exquisite accuracy and reliability. In this review, we describe the history of image analysis in pathology and the present state of technology processes as well as examples of research and clinical use. PMID:27241112

  14. Principal component analysis of scintimammographic images.

    PubMed

    Bonifazzi, Claudio; Cinti, Maria Nerina; Vincentis, Giuseppe De; Finos, Livio; Muzzioli, Valerio; Betti, Margherita; Nico, Lanconelli; Tartari, Agostino; Pani, Roberto

    2006-01-01

    The recent development of new gamma imagers based on scintillation arrays with high spatial resolution has strongly improved the possibility of detecting sub-centimeter cancers in scintimammography. However, Compton scattering contamination remains the main drawback, since it limits the sensitivity of tumor detection. Principal component analysis (PCA), recently introduced in scintimammographic imaging, is a data reduction technique able to represent the radiation emitted from the chest and from healthy and damaged breast tissues as separate images. From these images a scintimammogram can be obtained in which the Compton contamination is "removed". In the present paper we compared the PCA-reconstructed images with the conventional scintimammographic images resulting from the photopeak (Ph) energy window. Data from a clinical trial were used. For both kinds of images the tumor presence was quantified by evaluating Student's t statistic for independent samples as a measure of the signal-to-noise ratio (SNR). Owing to the absence of Compton scattering, the PCA-reconstructed images show better noise suppression and allow more reliable diagnostics than the images obtained from the photopeak energy window, reducing the tendency to produce false positives. PMID:17646004

  15. Improvement of the detection rate in digital watermarked images against image degradation caused by image processing

    NASA Astrophysics Data System (ADS)

    Nishio, Masato; Ando, Yutaka; Tsukamoto, Nobuhiro; Kawashima, Hironao; Nakamura, Shinya

    2004-04-01

    In the current environment of medical information disclosure, general-purpose image formats such as JPEG/BMP, which do not require special software for viewing, are suitable for carrying and managing medical image information individually. These formats have no way of carrying patient and study information. We have therefore developed two kinds of ID-embedding methods: a bit-swapping method for embedding an alteration-detection ID, and a data-imposing method in the transform domain using the Discrete Cosine Transform (DCT) for embedding an original-image-source ID. We then applied these two digital watermark methods to images from four modalities (chest X-ray, head CT, abdomen CT, bone scintigraphy). However, there were some cases where the watermarked ID could not be detected correctly owing to image degradation caused by image processing. In this study, we improved the detection rate in digitally watermarked images using several techniques: an error correction method, a majority correction method, and a scramble location method. We applied these techniques to watermarked images subjected to image processing (smoothing) and evaluated their effectiveness. As a result, the majority correction method is effective in improving the detection rate in digitally watermarked images against image degradation.
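    The majority correction idea can be sketched generically: each ID bit is embedded several times at different locations, and extraction takes a majority vote over the copies so that degradation of a minority of them is tolerated. This is an illustrative layout, not the paper's actual embedding scheme; both function names are hypothetical:

```python
def embed_repeated(bits, copies=3):
    """Embed each ID bit `copies` times; copy j of bit i lands at
    position i + j * len(bits)."""
    return bits * copies

def majority_correct(extracted, copies=3):
    """Recover each ID bit by majority vote over its copies."""
    n = len(extracted) // copies
    out = []
    for i in range(n):
        votes = [extracted[i + j * n] for j in range(copies)]
        out.append(1 if sum(votes) * 2 > copies else 0)
    return out
```

With three copies, any single corrupted copy per bit is outvoted; more copies trade payload capacity for robustness against stronger degradation.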

  16. Figure content analysis for improved biomedical article retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Apostolova, Emilia; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2009-01-01

    Biomedical images are invaluable in medical education and in establishing clinical diagnosis. Clinical decision support (CDS) can be improved by combining biomedical text with automatically annotated images extracted from relevant biomedical publications. In a previous study on the feasibility of automatically classifying images by combining figure captions and image content, we reported 76.6% accuracy in determining usefulness for finding clinical evidence using supervised machine learning. Image content extraction is traditionally applied to entire images or to pre-determined image regions. Figure images in articles vary greatly, which limits the benefit of whole-image extraction for CDS to gross categorization. However, text annotations and pointers on the images indicate regions of interest (ROI) that are then referenced in the caption or in the discussion in the article text. We previously reported 72.02% accuracy in localizing text and symbols, but did not take advantage of the image regions they reference. In this work we combine article text analysis and figure image analysis to localize pointers (arrows, symbols) and extract the ROIs they indicate; these can then be used to extract meaningful image content and associate it with the identified biomedical concepts for improved (text and image) content-based retrieval of biomedical articles. Biomedical concepts are identified using the National Library of Medicine's Unified Medical Language System (UMLS) Metathesaurus. Our methods achieve an average precision and recall of 92.3% and 75.3%, respectively, in identifying pointing symbols in images from a randomly selected image subset made available through the ImageCLEF 2008 campaign.

  17. Image analysis for DNA sequencing

    NASA Astrophysics Data System (ADS)

    Palaniappan, Kannappan; Huang, Thomas S.

    1991-07-01

    There is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA in projects such as the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automating the manual autoradiograph reading step through image analysis, in order to reduce the time required to obtain sequence data and to reduce transcription errors. Various adaptive image enhancement, segmentation and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image, such as noise, background signal, or presence of edges. Once the two-dimensional data are converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band, which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.
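    The band-location step on a one-dimensional lane profile can be sketched as thresholded local-maximum detection (a deliberate simplification of the waveform analysis described above; `detect_bands` is a hypothetical name):

```python
def detect_bands(profile, threshold):
    """Return indices of local maxima above `threshold` in a 1-D lane
    profile; each retained maximum corresponds to one band, i.e. one
    nucleotide in the reconstructed sequence."""
    peaks = []
    for i in range(1, len(profile) - 1):
        if (profile[i] > threshold
                and profile[i] >= profile[i - 1]
                and profile[i] > profile[i + 1]):
            peaks.append(i)
    return peaks
```

In practice the threshold would adapt to local background and noise estimates, in line with the adaptive methods the abstract describes, rather than being a single global constant.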

  18. Improved MCA-TV algorithm for interference hyperspectral image decomposition

    NASA Astrophysics Data System (ADS)

    Wen, Jia; Zhao, Junsuo; Wang, Cailing

    2015-12-01

    Interference hyperspectral imaging, which captures both the spectral and the spatial information of observed targets, is a powerful technology in the field of remote sensing. Owing to the special imaging principle, there are many position-fixed interference fringes in each frame of the interference hyperspectral image (IHI) data. This characteristic degrades the results of compressive sensing theory and of traditional compression algorithms applied to IHI data. Exploiting this characteristic of IHI data, morphological component analysis (MCA) is adopted to separate the interference fringe layers and the background layers of LSMIS (Large Spatially Modulated Interference Spectral Image) data, and an improved combined MCA and Total Variation (TV) algorithm is proposed in this paper. An update scheme for the threshold in traditional MCA is proposed, and the traditional TV algorithm is also improved according to the unidirectional characteristic of the interference fringes in IHI data. Experimental results show that the proposed improved MCA-TV (IMT) algorithm obtains better results than traditional MCA and also reaches the convergence conditions much faster.
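    The MCA separation principle can be illustrated with a heavily simplified 1-D toy: alternating soft-thresholding under two dictionaries (the spatial domain for sparse, high-valued fringes; the Fourier domain for the smooth background) with a linearly decreasing threshold. This is a generic sketch, not the paper's IMT algorithm, and the threshold schedule and Fourier scaling are assumptions:

```python
import numpy as np

def mca_separate(x, n_iter=50):
    """Toy 1-D MCA: the fringe layer is assumed sparse in the spatial
    domain, the background sparse in the Fourier domain; components are
    updated by soft-thresholding residuals with a decreasing threshold."""
    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    fringes = np.zeros_like(x, dtype=float)
    background = np.zeros_like(x, dtype=float)
    t_max = np.abs(x).max()
    for i in range(n_iter):
        t = t_max * (1.0 - i / n_iter)          # decreasing threshold
        # fringe update: threshold the residual in the spatial domain
        fringes = soft(x - background, t)
        # background update: threshold the residual in the Fourier domain
        c = np.fft.fft(x - fringes)
        c = soft(c.real, t * len(x)) + 1j * soft(c.imag, t * len(x))
        background = np.real(np.fft.ifft(c))
    return fringes, background
```

Each dictionary sparsely represents only its own component, so thresholding in that dictionary retains that component and rejects the other, which is the core MCA assumption stated in the abstract.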

  19. Anmap: Image and data analysis

    NASA Astrophysics Data System (ADS)

    Alexander, Paul; Waldram, Elizabeth; Titterington, David; Rees, Nick

    2014-11-01

    Anmap analyses and processes images and spectral data. Originally written for use in radio astronomy, much of its functionality is applicable to other disciplines; additional algorithms and analysis procedures allow direct use in, for example, NMR imaging and spectroscopy. Anmap emphasizes the analysis of data to extract quantitative results for comparison with theoretical models and/or other experimental data. To achieve this, Anmap provides a wide range of tools for analysis, fitting and modelling (including standard image and data processing algorithms). It also provides a powerful environment for users to develop their own analysis/processing tools either by combining existing algorithms and facilities with the very powerful command (scripting) language or by writing new routines in FORTRAN that integrate seamlessly with the rest of Anmap.

  20. Analysis of image quality based on perceptual preference

    NASA Astrophysics Data System (ADS)

    Xue, Liqin; Hua, Yuning; Zhao, Guangzhou; Qi, Yaping

    2007-11-01

    This paper deals with image quality analysis that takes into account the psychological factors involved in assessment. The attributes of the image quality requirement were partitioned according to visual perception characteristics, and image quality preferences were obtained by the factor analysis method. The features of image quality that support subjective preference were identified, and image adequacy is shown to be the top requirement for display image quality improvement. The approach will benefit research on subjective quantitative assessment methods for image quality.

  1. Image analysis and quantitative morphology.

    PubMed

    Mandarim-de-Lacerda, Carlos Alberto; Fernandes-Santos, Caroline; Aguila, Marcia Barbosa

    2010-01-01

    Quantitative studies are increasingly found in the literature, particularly in the fields of development/evolution, pathology, and neurosciences. Image digitalization converts tissue images into a numeric form by dividing them into very small regions termed picture elements or pixels. Image analysis allows automatic morphometry of digitalized images, and stereology aims to understand the structural inner three-dimensional arrangement based on the analysis of slices showing two-dimensional information. To quantify morphological structures in an unbiased and reproducible manner, appropriate isotropic and uniform random sampling of sections, and updated stereological tools are needed. Through the correct use of stereology, a quantitative study can be performed with little effort; efficiency in stereology means as little counting as possible (little work), low cost (section preparation), but still good accuracy. This short text provides a background guide for non-expert morphologists. PMID:19960334

  2. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  3. Multispectral Imaging Broadens Cellular Analysis

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  4. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed.
    Program summary
    Title of program: HAWGC
    Catalogue identifier: ADXG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computers: Mobile Intel Pentium III, AMD Duron
    Installations: No installation necessary (executable file together with necessary files for LabVIEW Run-time engine)
    Operating systems or monitors under which the program has been tested: Windows ME/2000/XP
    Programming language used: LabVIEW 7.0
    Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for
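    The statistics listed above are straightforward to compute; the following sketch (a hypothetical NumPy reimplementation, not the LabVIEW HAWGC code) shows the greyscale conversion with selectable coefficients and the histogram statistics with optional lower/upper thresholds:

```python
import numpy as np

def to_grey(rgb, w=(0.299, 0.587, 0.114)):
    # RGB -> greyscale with selectable conversion coefficients
    # (defaults are the common ITU-R BT.601 luma weights).
    return np.tensordot(np.asarray(rgb, float), np.asarray(w), axes=([-1], [0]))

def histogram_stats(img, lo=None, hi=None):
    # Flatten, optionally apply lower/upper thresholds (e.g. to remove a
    # constant background signal), then summarize the intensity distribution.
    v = np.asarray(img, float).ravel()
    if lo is not None:
        v = v[v >= lo]
    if hi is not None:
        v = v[v <= hi]
    mean, std = v.mean(), v.std()
    z = (v - mean) / std
    return {
        "mean": mean, "std": std, "var": v.var(),
        "min": v.min(), "max": v.max(),
        "median": float(np.median(v)),
        # mode here assumes non-negative integer intensities
        "mode": float(np.bincount(v.astype(int)).argmax()),
        "skewness": float((z ** 3).mean()),
        "kurtosis": float((z ** 4).mean()) - 3.0,   # excess kurtosis
    }
```

For example, `histogram_stats(img, lo=8)` reproduces the thresholding use case: pixels below the lower threshold are excluded before the moments are computed.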

  5. Improving Accuracy of Image Classification Using GIS

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Prasad, T. S.; Bala Manikavelu, P. M.; Vijayan, D.

    The remote sensing signal that reaches a sensor on board the satellite is a complex aggregation of signals (in an agricultural field, for example) from the soil (with all its variations, such as colour, texture, particle size, clay content, organic and nutrient content, inorganic content, water content, etc.), the plant (height, architecture, leaf area index, mean canopy inclination, etc.), canopy closure status and atmospheric effects, and from this we want to extract, say, the characteristics of vegetation. If the sensor on board the satellite makes measurements in n bands (a measurement vector n of n*1 dimension) and the number of classes in an image is c (a class vector f of c*1 dimension), then under linear mixture modelling the pixel classification problem can be written as n = M*f + e, where M is the transformation matrix of n*c dimension and e represents the error vector (noise). The problem is to estimate f by inverting the above equation, and the possible solutions to this problem are many. Thus, recovering the individual classes from satellite data is an ill-posed inverse problem for which a unique solution is not feasible, and this limits the obtainable classification accuracy. Maximum Likelihood (ML) is the constraint most often used in such a situation, but it suffers from the handicaps of an assumed Gaussian distribution and the assumed randomness of pixels (in fact there is high auto-correlation among the pixels of a specific class, and still higher auto-correlation among the pixels in sub-classes, where homogeneity is high). Because of this, achieving very high accuracy in the classification of remote sensing images is not a straightforward proposition. With the availability of GIS for the area under study, (i) a priori probabilities for different classes can be assigned to the ML classifier in more realistic terms, and (ii) the purity of training sets for different thematic classes can be better ascertained. To what extent this can improve the accuracy of classification with the ML classifier is examined.
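    The linear mixture model described above (measurements = mixing matrix x class fractions + noise) can be inverted by ordinary least squares in the idealized noiseless, full-column-rank case; a minimal sketch with a made-up mixing matrix:

```python
import numpy as np

# Hypothetical numbers: 4 spectral bands, 3 classes. The mixing matrix is
# made up for illustration; in practice its columns are class signatures.
M = np.array([[0.9, 0.2, 0.1],
              [0.6, 0.5, 0.2],
              [0.3, 0.7, 0.4],
              [0.1, 0.4, 0.8]])        # bands x classes
f_true = np.array([0.5, 0.3, 0.2])     # class fractions in one pixel
n = M @ f_true                         # noiseless pixel measurement

# Least-squares inversion via the pseudo-inverse. With noise and
# near-collinear class signatures the inversion becomes ill-posed,
# which is why the ML and GIS-prior constraints above are needed.
f_hat = np.linalg.pinv(M) @ n
```

With noise added to `n`, `f_hat` degrades quickly when columns of `M` are similar, which is exactly the ill-posedness the abstract describes.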

  6. Hybrid µCT-FMT imaging and image analysis

    PubMed Central

    Zafarnia, Sara; Babler, Anne; Jahnen-Dechent, Willi; Lammers, Twan; Lederle, Wiltrud; Kiessling, Fabian

    2015-01-01

    Fluorescence-mediated tomography (FMT) enables longitudinal and quantitative determination of the fluorescence distribution in vivo and can be used to assess the biodistribution of novel probes and to assess disease progression using established molecular probes or reporter genes. The combination with an anatomical modality, e.g., micro computed tomography (µCT), is beneficial for image analysis and for fluorescence reconstruction. We describe a protocol for multimodal µCT-FMT imaging including the image processing steps necessary to extract quantitative measurements. After preparing the mice and performing the imaging, the multimodal data sets are registered. Subsequently, an improved fluorescence reconstruction is performed, which takes into account the shape of the mouse. For quantitative analysis, organ segmentations are generated based on the anatomical data using our interactive segmentation tool. Finally, the biodistribution curves are generated using a batch-processing feature. We show the applicability of the method by assessing the biodistribution of a well-known probe that binds to bones and joints. PMID:26066033

  7. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    SciTech Connect

    STOYANOVA,R.S.; OCHS,M.F.; BROWN,T.R.; ROONEY,W.D.; LI,X.; LEE,J.H.; SPRINGER,C.S.

    1999-05-22

    Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques, in which each pixel is independently fit to an exponential recovery and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They interpret the three images as spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content.
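    The decomposition can be sketched generically (a NumPy sketch of PCA over an image series via a mean-removed SVD; not the authors' code):

```python
import numpy as np

def pca_images(stack):
    # PCA of a series of images (e.g. inversion-recovery frames): each
    # frame becomes one row, the mean image is removed, and the SVD
    # yields component images plus per-frame loadings.
    n_frames = stack.shape[0]
    X = stack.reshape(n_frames, -1).astype(float)
    X -= X.mean(axis=0)                             # remove the mean image
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt.reshape(-1, *stack.shape[1:])   # component images
    weights = U * s                                 # per-frame loadings
    return components, weights, s
```

If the series is driven by a small number of distinct recovery curves, the singular values drop sharply after that number of components, which is what makes the segmentation interpretation possible.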

  8. Improving resolution of optical coherence tomography for imaging of microstructures

    NASA Astrophysics Data System (ADS)

    Shen, Kai; Lu, Hui; Wang, James H.; Wang, Michael R.

    2015-03-01

    Multi-frame superresolution technique has been used to improve the lateral resolution of spectral domain optical coherence tomography (SD-OCT) for imaging of 3D microstructures. By adjusting the voltages applied to the x and y galvanometer scanners in the measurement arm, small lateral imaging positional shifts have been introduced among different C-scans. Utilizing the extracted x-y plane en face image frames from these specially offset C-scan image sets at the same axial position, we have reconstructed the lateral high resolution image by the efficient multi-frame superresolution technique. To further improve the image quality, we applied the latest K-SVD and bilateral total variation denoising algorithms to the raw SD-OCT lateral images before and along with the superresolution processing, respectively. The performance of the SD-OCT of improved lateral resolution is demonstrated by 3D imaging a microstructure fabricated by photolithography and a double-layer microfluidic device.

  9. An improved piecewise linear chaotic map based image encryption algorithm.

    PubMed

    Hu, Yuping; Zhu, Congxu; Wang, Zhijian

    2014-01-01

    An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model was proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Owing to the chaotic system's sensitivity to initial key values and system parameters, and to its ergodicity, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are processed not in index order but alternately from the beginning and the end, and cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme achieves good encryption results and that its key space is large enough to resist brute-force attacks. PMID:24592159
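    The diffusion stage can be sketched as follows (a simplified illustration: the standard PWLCM recurrence generates a keystream that is XOR-ed with the pixels; the paper's permutation, alternating pixel order, and cipher feedback are omitted, and the key values below are arbitrary):

```python
import numpy as np

def pwlcm(x, p):
    # Standard piecewise linear chaotic map on (0, 1) with parameter
    # p in (0, 0.5); the upper half mirrors the lower half.
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)

def keystream(x0, p, n):
    # Iterate the map and quantize each state to one byte.
    out = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = pwlcm(x, p)
        out[i] = int(x * 256) % 256
    return out

def xor_diffuse(pixels, x0=0.271828, p=0.31):
    # Diffusion stage only: XOR the pixels with the chaotic keystream.
    # XOR is its own inverse, so the same call decrypts.
    ks = keystream(x0, p, pixels.size)
    return np.bitwise_xor(pixels.astype(np.uint8), ks.reshape(pixels.shape))
```

Because the map is sensitive to `x0` and `p`, a slightly different key produces an entirely different keystream, which is the property the security analysis relies on.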

  11. Improvement of passive THz camera images

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is an emerging technology with the potential to change our lives, offering attractive applications in fields such as security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an unexplored, or most importantly, an unexploited region of the electromagnetic spectrum, owing to the difficulty of generating and detecting them. Recent advances in hardware have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials; however, automated processing of THz images can be challenging. The THz frequency band is especially suited to imaging through clothing because the radiation has no harmful ionizing effects and is therefore safe for human beings. Strong technological development in this band has produced a few interesting devices, yet even though THz camera development is advancing, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a challenging and pressing topic, and digital THz image processing is a promising, cost-effective route for demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs - hidden under popular types of clothing.

  12. Method for improving visualization of infrared images

    NASA Astrophysics Data System (ADS)

    Cimbalista, Mario

    2014-05-01

    Thermography differs in an extremely important way from other electronic image-converting systems, such as X-rays or ultrasound: the infrared camera operator usually spends hour after hour looking only at infrared images, sometimes several intermittent hours a day if not six or more continuous hours. This operational characteristic has a very important impact on yield, precision, errors and misinterpretation of infrared image content. Despite great hardware development over the last fifty years, quality infrared thermography still lacks a solution to these problems. The human eye has not evolved to see infrared radiation, nor does the brain have the capability to understand and decode infrared information. Chemical processes inside the human eye and the distribution of functional cells, as well as the cognitive-perceptual impact of images, play a crucial role in the perception, detection, and other steps of dealing with infrared images. The system presented here, called ThermoScala and patented in the USA, solves this problem using a coding process applicable to an original infrared image, generated from any value matrix by any kind of infrared camera, to make it much more suitable for human use, causing a substantial difference in the way the retina and the brain process the resulting images. The result is a much less exhausting way to see, identify and interpret infrared images generated by any infrared camera that uses this conversion process.

  13. Research on super-resolution image reconstruction based on an improved POCS algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

    Super-resolution image reconstruction (SRIR) can improve the resolution of blurred images, addressing insufficient spatial resolution, excessive noise, and low image quality. First, we introduce the image degradation model to show that super-resolution reconstruction is, in essence, an ill-posed inverse problem in mathematics. Second, we analyse the causes of blurring in the optical imaging process - light diffraction and small-angle scattering are the main ones - and propose an image point-spread-function estimation method together with an improved projection onto convex sets (POCS) algorithm; by analysing the changes in the time and frequency domains during reconstruction, we show that the improved POCS algorithm, based on prior knowledge, restores and approaches the high-frequency content of the original scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images and then use these images to reconstruct three-dimensional slice images. Comparison with the original method shows that the improved POCS algorithm suppresses noise and enhances image resolution, indicating that the algorithm is effective. This study of super-resolution image reconstruction by the improved POCS algorithm is shown to be an effective method with broad application prospects, for example in CT medical image processing and SRCT analysis of microstructure evolution during ceramic sintering.
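    The core POCS data-consistency projection can be sketched for integer shifts and plain decimation (a toy model: the PSF blur, sub-pixel motion, and amplitude constraints of a full POCS reconstruction are omitted):

```python
import numpy as np

def degrade(x, shift, factor=2):
    # Forward model for one low-res frame: integer scene shift followed
    # by decimation (the optical blur of a real system is omitted).
    return np.roll(x, shift, axis=(0, 1))[::factor, ::factor]

def pocs_sr(frames, shifts, shape, factor=2, n_iter=5):
    # Start from zeros and repeatedly project onto each frame's
    # data-consistency set: back-project the low-res residual onto the
    # high-res pixels that the frame actually observes.
    x = np.zeros(shape)
    for _ in range(n_iter):
        for y, s in zip(frames, shifts):
            r = y - degrade(x, s, factor)          # low-res residual
            up = np.zeros(shape)
            up[::factor, ::factor] = r             # place residual at samples
            x += np.roll(up, (-s[0], -s[1]), axis=(0, 1))
    return x
```

When the shift set covers all sub-pixel phases (e.g. four frames at shifts (0,0), (0,1), (1,0), (1,1) for 2x decimation), the projections are onto disjoint pixel sets and the iteration recovers the high-resolution image exactly in this noiseless toy case.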

  14. Improved cancer diagnostics by different image processing techniques on OCT images

    NASA Astrophysics Data System (ADS)

    Kanawade, Rajesh; Lengenfelder, Benjamin; Marini Menezes, Tassiana; Hohmann, Martin; Kopfinger, Stefan; Hohmann, Tim; Grabiec, Urszula; Klämpfl, Florian; Gonzales Menezes, Jean; Waldner, Maximilian; Schmidt, Michael

    2015-07-01

    Optical coherence tomography (OCT) is a promising non-invasive, high-resolution imaging modality which can be used for cancer diagnosis and its therapeutic assessment. However, speckle noise makes detection of cancer boundaries and image segmentation problematic and unreliable. Therefore, to improve the image analysis for precise cancer border detection, the performance of different image processing algorithms such as mean, median, hybrid median filters and the rotational kernel transformation (RKT) is investigated for this task. This is done on OCT images acquired ex vivo from human cancerous mucosa and in vitro from cultivated tumours applied to organotypic hippocampal slice cultures. The preliminary results confirm that the border between healthy and cancerous lesions can be identified precisely; the obtained results are verified with fluorescence microscopy. This research can improve cancer diagnosis and the detection of borders between healthy and cancerous tissue. Thus, it could also reduce the number of biopsies required during screening endoscopy by providing better guidance to the physician.

  15. IMPROVED BACKGROUND SUBTRACTION FOR THE SLOAN DIGITAL SKY SURVEY IMAGES

    SciTech Connect

    Blanton, Michael R.; Kazin, Eyal; Muna, Demitri; Weaver, Benjamin A.; Price-Whelan, Adrian

    2011-07-15

    We describe a procedure for background subtracting Sloan Digital Sky Survey (SDSS) imaging that improves the resulting detection and photometry of large galaxies on the sky. Within each SDSS drift scan run, we mask out detected sources and then fit a smooth function to the variation of the sky background. This procedure has been applied to all SDSS-III Data Release 8 images, and the results are available as part of that data set. We have tested the effect of our background subtraction on the photometry of large galaxies by inserting fake galaxies into the raw pixels, reanalyzing the data, and measuring them after background subtraction. Our technique results in no size-dependent bias in galaxy fluxes up to half-light radii r50 ≈ 100 arcsec; in contrast, for galaxies of that size the standard SDSS photometric catalog underestimates fluxes by about 1.5 mag. Our results represent a substantial improvement over the standard SDSS catalog results and should form the basis of any analysis of nearby galaxies using the SDSS imaging data.
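    The mask-and-fit idea can be sketched as follows (a simplified stand-in: SDSS fits smooth functions per drift-scan run after masking detected sources; here a crude sigma-clip mask and a low-order 2-D polynomial keep the example short):

```python
import numpy as np

def subtract_sky(img, n_sigma=3.0, order=2):
    # Mask bright "source" pixels, fit a smooth 2-D polynomial sky model
    # to the remaining pixels, and subtract it from the whole image.
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x = (xx / nx).ravel()
    y = (yy / ny).ravel()
    v = img.astype(float).ravel()
    # crude source mask: sigma-clip pixels far from the median
    sky = np.abs(v - np.median(v)) < n_sigma * v.std()
    # polynomial design matrix up to total degree `order`
    terms = [x ** i * y ** j for i in range(order + 1)
             for j in range(order + 1 - i)]
    A = np.stack(terms, axis=1)
    coef, *_ = np.linalg.lstsq(A[sky], v[sky], rcond=None)
    return img - (A @ coef).reshape(ny, nx)
```

Because the sky model is fit only to unmasked pixels, the flux of a bright source survives the subtraction instead of being absorbed into the background, which is the point of the masking step.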

  16. An improved fusion algorithm for infrared and visible images based on multi-scale transform

    NASA Astrophysics Data System (ADS)

    Li, He; Liu, Lei; Huang, Wei; Yue, Chao

    2016-01-01

    In this paper, an improved fusion algorithm for infrared and visible images based on multi-scale transform is proposed. First, the Morphology-Hat transform is applied to the infrared image and the visible image separately. Then the two images are decomposed into high-frequency and low-frequency components by the contourlet transform (CT). The fusion strategy for the high-frequency images is based on the mean gradient, and the fusion strategy for the low-frequency images is based on Principal Component Analysis (PCA). Finally, the fused image is obtained by the inverse contourlet transform (ICT). The experiments and results demonstrate that the proposed method significantly improves image fusion performance, preserving salient target information and high contrast while retaining rich detail.
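    The two fusion rules can be sketched directly (a simplified illustration that applies them to whole images, omitting the Morphology-Hat preprocessing and the contourlet decomposition; the per-pixel gradient magnitude below is a simple stand-in for the paper's mean-gradient measure):

```python
import numpy as np

def pca_fuse(a, b):
    # Low-frequency rule: weight the two sources by the leading
    # eigenvector of their 2x2 covariance matrix.
    X = np.stack([a.ravel(), b.ravel()])
    cov = np.cov(X)
    w = np.linalg.eigh(cov)[1][:, -1]       # leading eigenvector
    w = np.abs(w) / np.abs(w).sum()         # normalize to positive weights
    return w[0] * a + w[1] * b

def gradient_fuse(a, b):
    # High-frequency rule: per pixel, keep the source whose local
    # gradient magnitude is larger.
    ga = np.hypot(*np.gradient(a))
    gb = np.hypot(*np.gradient(b))
    return np.where(ga >= gb, a, b)
```

In the full algorithm these rules would be applied to the low-frequency and high-frequency contourlet subbands respectively before the inverse transform.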

  17. A computational image analysis glossary for biologists.

    PubMed

    Roeder, Adrienne H K; Cunha, Alexandre; Burl, Michael C; Meyerowitz, Elliot M

    2012-09-01

    Recent advances in biological imaging have resulted in an explosion in the quality and quantity of images obtained in a digital format. Developmental biologists are increasingly acquiring beautiful and complex images, thus creating vast image datasets. In the past, patterns in image data have been detected by the human eye. Larger datasets, however, necessitate high-throughput objective analysis tools to computationally extract quantitative information from the images. These tools have been developed in collaborations between biologists, computer scientists, mathematicians and physicists. In this Primer we present a glossary of image analysis terms to aid biologists and briefly discuss the importance of robust image analysis in developmental studies. PMID:22872081

  18. Institutional Image: How to Define, Improve, Market It.

    ERIC Educational Resources Information Center

    Topor, Robert S.

    Advice for colleges on how to identify, develop, and communicate a positive image for the institution is offered in this handbook. The use of market research techniques to measure image is discussed along with advice on how to improve an image so that it contributes to a unified marketing plan. The first objective is to create and communicate some…

  19. Improving image segmentation by learning region affinities

    SciTech Connect

    Prasad, Lakshman; Yang, Xingwei; Latecki, Longin J

    2010-11-03

    We utilize the context information of other regions in hierarchical image segmentation to learn new region affinities. It is well known that a single choice of quantization of an image space is highly unlikely to be a common optimal quantization level for all categories; each level of quantization has its own benefits. Therefore, we utilize the hierarchical information among different quantizations as well as the spatial proximity of their regions. The proposed affinity learning takes into account higher-order relations among image regions, both local and long-range, making it robust to instabilities and errors in the original pairwise region affinities. Once the learnt affinities are obtained, we use a standard image segmentation algorithm to get the final segmentation. Moreover, the learnt affinities can be naturally utilized in interactive segmentation. Experimental results on the Berkeley Segmentation Dataset and the MSRC Object Recognition Dataset are comparable to, and in some aspects better than, state-of-the-art methods.

  20. Image Inpainting Methods Evaluation and Improvement

    PubMed Central

    Vreja, Raluca; Brad, Remus

    2014-01-01

    With the growth of digital image processing and film archiving, the need for assisted or unsupervised restoration has driven the development of a series of methods and techniques. Among them, image inpainting is perhaps the most impressive and useful. Beyond methods based on partial differential equations or texture synthesis, many hybrid techniques have been proposed recently. The need for an analytical comparison, besides the visual one, motivated the studies presented in this paper. Starting with an overview of the domain, five methods were evaluated on a common benchmark by measuring the PSNR. Conclusions regarding the performance of the investigated algorithms are presented, categorizing them as a function of the restored image structure. Based on these experiments, we propose an adaptation of Oliveira's and Hadhoud's algorithms, which performs well on images with natural defects. PMID:25136700
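    The benchmark metric can be written down directly (the standard PSNR definition; the peak value of 255 assumes 8-bit images):

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    diff = np.asarray(original, float) - np.asarray(restored, float)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Inpainting methods are then ranked by computing `psnr(ground_truth, inpainted)` over the benchmark set; higher values indicate a closer reconstruction.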

  2. Target identification by image analysis.

    PubMed

    Fetz, V; Prochnow, H; Brönstrup, M; Sasse, F

    2016-05-01

    Covering: 1997 to the end of 2015Each biologically active compound induces phenotypic changes in target cells that are characteristic for its mode of action. These phenotypic alterations can be directly observed under the microscope or made visible by labelling structural elements or selected proteins of the cells with dyes. A comparison of the cellular phenotype induced by a compound of interest with the phenotypes of reference compounds with known cellular targets allows predicting its mode of action. While this approach has been successfully applied to the characterization of natural products based on a visual inspection of images, recent studies used automated microscopy and analysis software to increase speed and to reduce subjective interpretation. In this review, we give a general outline of the workflow for manual and automated image analysis, and we highlight natural products whose bacterial and eucaryotic targets could be identified through such approaches. PMID:26777141

  3. An improved algorithm of mask image dodging for aerial image

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi

    2011-12-01

    The Mask image dodging technique based on the Fourier transform is a good algorithm for removing uneven luminance within a single image. At present, the difference method and the ratio method are the methods in common use, but each has its own defects: the difference method preserves the brightness uniformity of the whole image but is deficient in local contrast, while the ratio method handles local contrast better but sometimes makes the dark areas of the original image too bright. To remove the defects of both methods effectively, this paper proposes a balanced solution based on a study of the two. Experiments show that the scheme combines the advantages of the difference method and the ratio method while avoiding the deficiencies of both algorithms.
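    The two methods under discussion can be sketched with a Fourier low-pass background estimate standing in for the Mask operation (a minimal illustration, not the paper's implementation):

```python
import numpy as np

def _background(img, n_low=2):
    # Low-pass illumination estimate via the FFT; the Mask method uses a
    # Fourier-domain mask, for which this corner mask is a minimal stand-in.
    c = np.fft.fft2(img)
    keep = np.zeros(img.shape, dtype=bool)
    keep[:n_low, :n_low] = keep[:n_low, -n_low:] = True
    keep[-n_low:, :n_low] = keep[-n_low:, -n_low:] = True
    return np.real(np.fft.ifft2(np.where(keep, c, 0)))

def dodge_difference(img):
    # Difference method: subtract the background, restore the mean level.
    bg = _background(img)
    return img - bg + bg.mean()

def dodge_ratio(img):
    # Ratio method: divide by the background, rescale by its mean.
    bg = _background(img)
    return img / np.maximum(bg, 1e-6) * bg.mean()
```

The trade-off the abstract describes is visible here: subtraction leaves local contrast untouched in absolute terms, while division rescales it relative to the local illumination and can over-brighten dark regions.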

  4. Functional magnetic resonance imaging of awake monkeys: some approaches for improving imaging quality

    PubMed Central

    Chen, Gang; Wang, Feng; Dillenburger, Barbara C.; Friedman, Robert M.; Chen, Li M.; Gore, John C.; Avison, Malcolm J.; Roe, Anna W.

    2011-01-01

    Functional magnetic resonance imaging (fMRI), at high magnetic field strength can suffer from serious degradation of image quality because of motion and physiological noise, as well as spatial distortions and signal losses due to susceptibility effects. Overcoming such limitations is essential for sensitive detection and reliable interpretation of fMRI data. These issues are particularly problematic in studies of awake animals. As part of our initial efforts to study functional brain activations in awake, behaving monkeys using fMRI at 4.7T, we have developed acquisition and analysis procedures to improve image quality with encouraging results. We evaluated the influence of two main variables on image quality. First, we show how important the level of behavioral training is for obtaining good data stability and high temporal signal-to-noise ratios. In initial sessions, our typical scan session lasted 1.5 hours, partitioned into short (<10 minutes) runs. During reward periods and breaks between runs, the monkey exhibited movements resulting in considerable image misregistrations. After a few months of extensive behavioral training, we were able to increase the length of individual runs and the total length of each session. The monkey learned to wait until the end of a block for fluid reward, resulting in longer periods of continuous acquisition. Each additional 60 training sessions extended the duration of each session by 60 minutes, culminating, after about 140 training sessions, in sessions that last about four hours. As a result, the average translational movement decreased from over 500 μm to less than 80 μm, a displacement close to that observed in anesthetized monkeys scanned in a 7 T horizontal scanner. Another major source of distortion at high fields arises from susceptibility variations. To reduce such artifacts, we used segmented gradient-echo echo-planar imaging (EPI) sequences. Increasing the number of segments significantly decreased susceptibility

  5. Planning applications in image analysis

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  6. Grid computing in image analysis

    PubMed Central

    2011-01-01

    Diagnostic surgical pathology, or tissue-based diagnosis, still remains the most reliable and specific diagnostic medical procedure. The development of whole slide scanners permits the creation of virtual slides and work with so-called virtual microscopes. In addition to interactive work on virtual slides, approaches have been reported that introduce automated virtual microscopy, which is composed of several tools focusing on quite different tasks. These include evaluation of image quality and image standardization, analysis of potentially useful thresholds for object detection and identification (segmentation), dynamic segmentation procedures, adjustable magnification to optimize feature extraction, and texture analysis including image transformation and evaluation of elementary primitives. Grid technology seems to possess all features needed to efficiently target and control the specific tasks of image information and detection in order to obtain a detailed and accurate diagnosis. Grid technology is based upon so-called nodes that are linked together and share certain communication rules using open standards. Their number and functionality can vary according to the needs of a specific user at a given point in time. When implementing automated virtual microscopy with Grid technology, all five different Grid functions have to be taken into account, namely 1) computation services, 2) data services, 3) application services, 4) information services, and 5) knowledge services. Although all mandatory tools of automated virtual microscopy can be implemented in a closed or standardized open system, Grid technology offers a new dimension to acquire, detect, classify, and distribute medical image information, and to assure quality in tissue-based diagnosis. PMID:21516880

  7. Quantitative image analysis of celiac disease

    PubMed Central

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-01-01

    We outline the quantitative techniques currently used for analysis of celiac disease. Image processing techniques can be useful to statistically analyze the pixel data of endoscopic images acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions for focus in the development of methodology for diagnosis and treatment of this disease are suggested. It is evident that there are yet broad areas where there is potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients. PMID:25759524

  8. Advanced machine vision inspection of x-ray images for improved productivity

    SciTech Connect

    Novini, A.

    1995-12-31

    The imaging medium has been, for the most part, x-ray sensitive photographic film or film coupled to scintillation screens. Electronic imaging through television technology has provided a cost-effective alternative in many applications for the past three to four decades. Typically, film provides higher resolution, higher interscene dynamic range, and the mechanical flexibility suitable for imaging complex-shaped objects. However, this is offset by the labor-intensive task of x-ray photography and processing, which eliminates the ability to image in real time. Electronic imaging is typically achieved through an x-ray source, an x-ray to visible light converter, and a television (TV) camera. The images can be displayed on a TV monitor for operator viewing or go to a computer system for capture, image processing, and automatic analysis. Although the present state of the art for electronic x-ray imaging does not quite produce the quality possible through film, it provides a level of flexibility and overall improved productivity not achievable with film. Further, electronic imaging is improving in image quality as time goes on, and it is expected to match or surpass film in the next decade. For many industrial applications, the present state of the art in electronic imaging provides more than adequate results. This type of imaging also allows for automatic analysis when practical. The first step in the automatic analysis chain is to obtain the best possible image. This requires an in-depth understanding of x-ray imaging physics and techniques by the system designer. As previously mentioned, these images may go through some enhancement steps to further improve the detectability of defects. Almost always, image spatial resolution of 512 × 512 or greater is used, with gray level information content of 8 bits (256 gray levels) or more for preservation of image quality. There are many methods available for analyzing the images.

  9. A linear mixture analysis-based compression for hyperspectral image analysis

    SciTech Connect

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly, target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. In some occasions, it even improves analysis performance. Airborne visible infrared imaging spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.
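    The abundance-fraction generation above can be sketched with a sum-to-one constrained least-squares unmixing step. This is a simplified illustration: the paper's fully constrained method additionally enforces nonnegativity, and the function and variable names here are our own.

```python
import numpy as np

def scls_abundances(E, x):
    """Sum-to-one constrained least-squares abundance estimate.

    E : (bands, p) matrix whose columns are endmember (target) spectra.
    x : (bands,) pixel spectrum.
    Returns the (p,) abundance vector constrained to sum to one.
    """
    G = np.linalg.inv(E.T @ E)
    a_ls = G @ E.T @ x                           # unconstrained least squares
    ones = np.ones(E.shape[1])
    lam = (1.0 - ones @ a_ls) / (ones @ G @ ones)
    return a_ls + lam * (G @ ones)               # Lagrange correction: sum(a) == 1
```

    Encoding the resulting fractional images, rather than the raw gray levels, is what preserves the target information at high compression ratios.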

  10. Technique for improving solid state mosaic images

    NASA Technical Reports Server (NTRS)

    Saboe, J. M.

    1969-01-01

    Method identifies and corrects mosaic image faults in solid state visual displays and opto-electronic presentation systems. Composite video signals containing faults due to defective sensing elements are corrected by a memory unit that contains the stored fault pattern and supplies the appropriate fault word to the blanking circuit.
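    A software analogue of the stored fault pattern is a boolean fault map consulted at readout. The sketch below is our illustration, not the original hardware memory unit: instead of supplying a stored fault word to a blanking circuit, it substitutes each flagged pixel with the mean of its valid neighbors.

```python
import numpy as np

def correct_faults(frame, fault_mask):
    """Replace pixels flagged in fault_mask (True = defective sensing element)
    with the mean of their valid 8-connected neighbors."""
    out = frame.astype(float).copy()
    H, W = frame.shape
    for i, j in zip(*np.nonzero(fault_mask)):
        neighbors = [out[a, b]
                     for a in range(max(0, i - 1), min(H, i + 2))
                     for b in range(max(0, j - 1), min(W, j + 2))
                     if (a, b) != (i, j) and not fault_mask[a, b]]
        if neighbors:
            out[i, j] = np.mean(neighbors)
    return out
```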

  11. Improving quantitative neutron radiography through image restoration

    NASA Astrophysics Data System (ADS)

    Hussey, D. S.; Coakley, K. J.; Baltic, E.; Jacobson, D. L.

    2013-11-01

    Commonly in neutron imaging experiments, the interpretation of the point spread function (PSF) is limited to describing the achievable spatial resolution in an image. In this article it is shown that for various PSF models, the resulting blurring due to the PSF affects the quantification of the neutron transmission of an object and that the effect is separate from the scattered neutron field from the sample. The effect is observed in several neutron imaging detector configurations using different neutron scintillators and light sensors. In the context of estimation of optical densities with an algorithm that assumes a parallel beam, the effect of blurring fractionates the neutron signal spatially and introduces an effective background that scales with the area of the detector illuminated by neutrons. Examples are provided that demonstrate that the illuminated field of view can alter the observed neutron transmission for nearly purely absorbing objects. It is found that by accurately modeling the PSF, image restoration methods can yield more accurate estimates of the neutron attenuation by an object.
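    Once the PSF is modeled, restoration can proceed by iterative deconvolution. The sketch below uses Richardson-Lucy with circular (FFT) convolution as one common choice; the article does not commit to this particular algorithm, so treat it as an illustrative assumption.

```python
import numpy as np

def fft_conv(img, psf):
    """Circular 2-D convolution via FFT; psf is a full-size array with its
    origin at index (0, 0)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def richardson_lucy(blurred, psf, iters=30):
    """Richardson-Lucy deconvolution of a nonnegative image given a
    normalized PSF (sum == 1)."""
    est = np.full_like(blurred, blurred.mean())
    # adjoint (correlation) kernel for circular convolution: h[(-n) mod N]
    psf_mirror = np.roll(np.flip(psf), 1, axis=(0, 1))
    for _ in range(iters):
        ratio = blurred / np.maximum(fft_conv(est, psf), 1e-12)
        est = est * fft_conv(ratio, psf_mirror)
    return est
```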

  12. Image enhancement algorithm based on improved lateral inhibition network

    NASA Astrophysics Data System (ADS)

    Yun, Haijiao; Wu, Zhiyong; Wang, Guanjun; Tong, Gang; Yang, Hua

    2016-05-01

    There is often substantial noise and blurred detail in images captured by cameras. To solve this problem, we propose a novel image enhancement algorithm combined with an improved lateral inhibition network. First, we built a mathematical model of a lateral inhibition network in conjunction with biological visual perception; this model helps to enhance contrast and improve edge definition in images. Second, we proposed that the adaptive lateral inhibition coefficient follow an exponential distribution, making the model more flexible and more universal. Finally, we added median filtering and a compensation measure factor to build a framework with high-pass filtering functionality, eliminating image noise and improving edge contrast, thus addressing blurred image edges. Our experimental results show that our algorithm is able to eliminate noise and blurring and enhance the detail of visible and infrared images.
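    The core lateral-inhibition operation, in which each output pixel is its input minus inhibition contributed by its neighbors, can be sketched as follows. The 4-neighborhood and the fixed coefficient are simplifying assumptions; the paper's model uses an adaptive, exponentially distributed coefficient.

```python
import numpy as np

def lateral_inhibition(img, k=0.8):
    """Subtract a weighted local mean from each pixel, sharpening edges.

    k is the total inhibition strength contributed by the 4-neighborhood
    (circular boundary handling via np.roll keeps the sketch short)."""
    img = img.astype(float)
    neigh = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
             np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return img - k * neigh
```

    On a flat region the response is uniformly attenuated, while across an edge the neighbor term differs from the center, so edge contrast is boosted.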

  13. Scanning probe image wizard: A toolbox for automated scanning probe microscopy data analysis

    NASA Astrophysics Data System (ADS)

    Stirling, Julian; Woolley, Richard A. J.; Moriarty, Philip

    2013-11-01

    We describe SPIW (scanning probe image wizard), a new image processing toolbox for SPM (scanning probe microscope) images. SPIW can be used to automate many aspects of SPM data analysis, even for images with surface contamination and step edges present. Specialised routines are available for images with atomic or molecular resolution to improve image visualisation and generate statistical data on surface structure.

  14. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved the standard bat algorithm, adding elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
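    For context, the exhaustive search that swarm metaheuristics replace can be written down directly for two thresholds using Otsu's between-class variance criterion; it is this nested search that scales exponentially as thresholds are added. The code is a generic illustration, not the paper's bat algorithm.

```python
import numpy as np

def otsu_two_thresholds(pixels, levels=256):
    """Exhaustively maximize between-class variance for two thresholds."""
    hist = np.bincount(pixels, minlength=levels).astype(float)
    p = hist / hist.sum()
    g = np.arange(levels)
    best, best_t = -1.0, (0, 0)
    for t1 in range(1, levels - 1):
        for t2 in range(t1 + 1, levels):
            var = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, levels)):
                w = p[lo:hi].sum()
                if w > 0:
                    mu = (p[lo:hi] * g[lo:hi]).sum() / w
                    var += w * mu * mu   # equivalent to between-class variance:
                                         # the global-mean term is constant
            if var > best:
                best, best_t = var, (t1, t2)
    return best_t
```

    With three thresholds the pair loop becomes a triple loop, and so on, which is why metaheuristic search quickly becomes attractive.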

  15. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework

    PubMed Central

    Guo, Qing; Dong, Fangmin; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    2016-01-01

    A contourlet domain image denoising framework based on a novel Improved Rotating Kernel Transformation (IRKT) is proposed, in which the difference between subbands in the contourlet domain is taken into account. In detail: (1) the novel IRKT is proposed to calculate the direction statistic of the image; the validity of the IRKT is verified by comparing the extracted edge information with that of a state-of-the-art edge detection algorithm. (2) The direction statistic represents the difference between subbands and is introduced into threshold-function based contourlet domain denoising approaches in the form of weights, yielding the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. The denoising results on conventional test images and Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet based thresholding denoising algorithms, especially for the medical images. PMID:27148597
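    The weighting idea can be illustrated with plain soft-thresholding, where a per-subband weight (e.g. derived from a direction statistic) scales the threshold before shrinkage. This is a generic sketch of threshold-function denoising, not the IRKT statistic itself.

```python
import numpy as np

def weighted_soft_threshold(coeffs, base_thresh, weight=1.0):
    """Shrink transform-domain coefficients toward zero.

    weight adapts the effective threshold per subband; coefficients whose
    magnitude falls below weight * base_thresh are set to zero."""
    t = weight * base_thresh
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```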

  16. Improved bat algorithm applied to multilevel image thresholding.

    PubMed

    Alihodzic, Adis; Tuba, Milan

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We improved the standard bat algorithm, adding elements from differential evolution and from the artificial bee colony algorithm. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733

  17. Study on the improvement of overall optical image quality via digital image processing

    NASA Astrophysics Data System (ADS)

    Tsai, Cheng-Mu; Fang, Yi Chin; Lin, Yu Chin

    2008-12-01

    This paper studies the effects of improving overall optical image quality via Digital Image Processing (DIP) and compares the processed optical image with the unprocessed one. Seen from the optical system, image quality is strongly influenced by chromatic and monochromatic aberration. However, complete image capture systems, such as cellphones and digital cameras, include not only the basic optical system but also many other components, such as the electronic circuit system and the transducer system, whose quality can directly affect the image quality of the whole picture. Therefore, in this work Digital Image Processing technology is utilized to improve the overall image. It is shown via experiments that the system modulation transfer function (MTF) based on the proposed DIP technology, applied to a comparatively poor optical system, can be comparable to, and possibly even superior to, the system MTF derived from a good optical system.

  18. Breast tomosynthesis imaging configuration analysis.

    PubMed

    Rayford, Cleveland E; Zhou, Weihua; Chen, Ying

    2013-01-01

    Traditional two-dimensional (2D) X-ray mammography is the most commonly used method for breast cancer diagnosis. Recently, a three-dimensional (3D) Digital Breast Tomosynthesis (DBT) system has been developed, which is likely to challenge current mammography technology. The DBT system provides detailed 3D information, giving physicians increased anatomical detail while reducing the chance of false negative screening. In this research, two reconstruction algorithms, Back Projection (BP) and Shift-And-Add (SAA), were used to investigate and compare the view angle (VA) and the number of projection images (N) with parallel imaging configurations. In addition, to better determine which method produced better-quality imaging, Modulation Transfer Function (MTF) analyses were conducted with both algorithms, ultimately producing results that support improved breast cancer detection. Research studies find evidence that early detection of the disease is the best way to conquer breast cancer, and earlier detection increases the life span of the affected person. PMID:23900440
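    The SAA reconstruction in particular is simple to state: shift each projection by an amount determined by the plane being reconstructed, then average, which brings that plane into focus. The integer-shift version below is a simplified sketch; real systems derive sub-pixel shifts from the acquisition geometry.

```python
import numpy as np

def shift_and_add(projections, shifts):
    """Reconstruct one plane: undo each projection's plane-dependent shift,
    then average.

    projections : list of 2-D arrays (one per gantry position).
    shifts      : per-projection integer pixel shift along x for this plane."""
    acc = np.zeros_like(projections[0], dtype=float)
    for proj, s in zip(projections, shifts):
        acc += np.roll(proj, -s, axis=1)
    return acc / len(projections)
```

    Structures lying in the chosen plane add coherently, while structures at other depths are smeared out across the average.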

  19. Improved image quality for x-ray CT imaging of gel dosimeters

    SciTech Connect

    Kakakhel, M. B.; Kairn, T.; Kenny, J.; Trapp, J. V.

    2011-09-15

    Purpose: This study provides a simple method for improving the precision of x-ray computed tomography (CT) scans of irradiated polymer gel dosimeters. The noise affecting CT scans of irradiated gels has been an impediment to the use of clinical CT scanners for gel dosimetry studies. Methods: In this study, it is shown that multiple scans of a single PAGAT gel dosimeter can be used to extrapolate a "zero-scan" image which displays a similar level of precision to an image obtained by averaging multiple CT images, without the compromised dose measurement resulting from the exposure of the gel to radiation from the CT scanner. Results: When extrapolating the zero-scan image, it is shown that exponential and simple linear fits to the relationship between Hounsfield unit and scan number, for each pixel in the image, provide an accurate indication of gel density. Conclusions: It is expected that this work will be utilized in the analysis of three-dimensional gel volumes irradiated using complex radiotherapy treatments.
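    The linear variant of the zero-scan extrapolation reduces to a per-pixel straight-line fit of Hounsfield unit against scan number, evaluated at scan zero. A minimal sketch, assuming the repeated scans are stacked into a 3-D array:

```python
import numpy as np

def zero_scan_image(stack):
    """stack: (n_scans, H, W) array of repeated CT scans of the same gel.

    Fit HU = a * scan_number + b independently for each pixel and return the
    intercept image b, i.e. the extrapolated 'zero-scan' image."""
    n, H, W = stack.shape
    scan_numbers = np.arange(1, n + 1)
    coeffs = np.polyfit(scan_numbers, stack.reshape(n, -1), deg=1)
    return coeffs[1].reshape(H, W)   # intercepts, evaluated at scan 0
```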

  20. Automated image analysis of uterine cervical images

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical Cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated by 40 human subjects' data and demonstrates high correlation with experts' annotations.

  1. Imaging system design for improved information capacity

    NASA Technical Reports Server (NTRS)

    Fales, C. L.; Huck, F. O.; Samms, R. W.

    1984-01-01

    Shannon's theory of information for communication channels is used to assess the performance of line-scan and sensor-array imaging systems and to optimize the design trade-offs involving sensitivity, spatial response, and sampling intervals. Formulations and computational evaluations account for spatial responses typical of line-scan and sensor-array mechanisms, lens diffraction and transmittance shading, defocus blur, and square and hexagonal sampling lattices.

  2. Improved Calibration Shows Images True Colors

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.

  3. Difference Image Analysis of Galactic Microlensing. I. Data Analysis

    SciTech Connect

    Alcock, C.; Allsman, R. A.; Alves, D.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K.

    1999-08-20

    This is a preliminary report on the application of Difference Image Analysis (DIA) to Galactic bulge images. The aim of this analysis is to increase the sensitivity of microlensing detection. We discuss how the DIA technique simplifies the process of discovering microlensing events by detecting only objects that have variable flux. We illustrate how the DIA technique is not limited to detection of so-called ''pixel lensing'' events but can also be used to improve photometry for classical microlensing events by removing the effects of blending. We present a method whereby DIA can be used to reveal the true unblended colors, positions, and light curves of microlensing events. We discuss the need for a technique to obtain accurate microlensing timescales from blended sources and present a possible solution to this problem using the existing Hubble Space Telescope color-magnitude diagrams of the Galactic bulge and LMC. The use of such a solution with both classical and pixel microlensing searches is discussed. We show that one of the major causes of systematic noise in DIA is differential refraction. A technique for removing this systematic effect by registering images to a common air mass is presented. Improvements to commonly used image differencing techniques are discussed. (c) 1999 The American Astronomical Society.
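    At its core, once frames are registered, difference imaging amounts to photometrically matching a reference to each science frame and subtracting, so that only variable-flux objects survive. The global scale-and-offset match below is a simplified sketch; production pipelines also fit a spatially varying convolution kernel to match the PSF.

```python
import numpy as np

def difference_image(science, reference):
    """Match the reference to the science frame with a global flux scale and
    background offset (least squares), then subtract; variable sources
    remain in the residual."""
    A = np.stack([reference.ravel(), np.ones(reference.size)], axis=1)
    scale, offset = np.linalg.lstsq(A, science.ravel(), rcond=None)[0]
    return science - (scale * reference + offset)
```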

  4. Imaging quality full chip verification for yield improvement

    NASA Astrophysics Data System (ADS)

    Yang, Qing; Zhou, CongShu; Quek, ShyueFong; Lu, Mark; Foong, YeeMei; Qiu, JianHong; Pandey, Taksh; Dover, Russell

    2013-04-01

    Basic image intensity parameters, like the maximum and minimum intensity values (Imax and Imin), image logarithm slope (ILS), normalized image logarithm slope (NILS), and mask error enhancement factor (MEEF), are well known indexes of photolithography imaging quality. For full chip verification, hotspot detection is typically based on threshold values for line pinching or bridging. For image intensity parameters it is generally harder to quantify an absolute value defining where the process limit will occur, and at which process stage: lithography, etch, or post-CMP. However, it is easy to conclude that hot spots captured by image intensity parameters are more susceptible to process variation and very likely to impact yield. In addition, these image intensity hot spots can be missed by resist model verification, because the resist model is normally calibrated with wafer data on a single resist plane and is an empirical model that fits the resist critical dimension using mathematical algorithms combined with optical calculation. Also, at the resolution enhancement technology (RET) development stage, a full chip imaging quality check is a method to qualify the RET solution, such as Optical Proximity Correction (OPC) performance. Adding full chip verification using image intensity parameters is also not as costly as adding one more resist model simulation. From a foundry yield improvement and cost saving perspective, it is valuable to quantify imaging quality to find design hot spots and correctly define the inline process control margin. This paper studies the correlation between image intensity parameters and process weakness or catastrophic hard failures at different process stages. It also demonstrates how an OPC solution can improve full chip image intensity parameters. Rigorous 3D resist profile simulation across the full height of the resist stack was also performed to identify a correlation with the image intensity parameters.
A methodology of post-OPC full
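    For reference, the ILS and NILS metrics named above can be computed numerically from an aerial-image intensity cut. This sketch assumes a 1-D profile and is not tied to any particular lithography simulator:

```python
import numpy as np

def ils(intensity, dx):
    """Image log slope d(ln I)/dx along a 1-D aerial-image cut."""
    return np.gradient(np.log(intensity), dx)

def nils(intensity, dx, linewidth):
    """Normalized image log slope: ILS scaled by the feature linewidth."""
    return linewidth * ils(intensity, dx)
```

    A steeper log slope at the printing threshold means a more robust image; low-NILS locations are exactly the hot spots this kind of full chip check flags.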

  5. Contrast and harmonic imaging improves accuracy and efficiency of novice readers for dobutamine stress echocardiography

    NASA Technical Reports Server (NTRS)

    Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; Klein, Allan L.; Thomas, James D.

    2002-01-01

    BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation and, therefore, improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance the accuracy and efficiency of the inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHz) and harmonic (1.7 and 3.5 MHz) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously as a bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader with the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader.
The use

  6. Failure Analysis for Improved Reliability

    NASA Technical Reports Server (NTRS)

    Sood, Bhanu

    2016-01-01

    Outline: Section 1 - What is reliability and root cause? Section 2 - Overview of failure mechanisms. Section 3 - Failure analysis techniques (1. Non destructive analysis techniques, 2. Destructive Analysis, 3. Materials Characterization). Section 4 - Summary and Closure

  7. Neural network ultrasound image analysis

    NASA Astrophysics Data System (ADS)

    Schneider, Alexander C.; Brown, David G.; Pastel, Mary S.

    1993-09-01

    Neural network based analysis of ultrasound image data was carried out on liver scans of normal subjects and those diagnosed with diffuse liver disease. In a previous study, ultrasound images from a group of normal volunteers, Gaucher's disease patients, and hepatitis patients were obtained by Garra et al., who used classical statistical methods to distinguish among these three classes. In the present work, neural network classifiers were employed with the same image features found useful in the previous study. Both standard backpropagation neural networks and a recently developed biologically inspired network called Dystal were used. Classification performance as measured by the area under a receiver operating characteristic curve was generally excellent for the backpropagation networks and was roughly comparable to that of the classical statistical discriminators tested on the same data set and documented in the earlier study. Performance of the Dystal network was significantly inferior; however, this may be due to the choice of network parameters. Potential methods for enhancing network performance were identified.

  8. Improved vector quantization scheme for grayscale image compression

    NASA Astrophysics Data System (ADS)

    Hu, Y.-C.; Chen, W.-L.; Lo, C.-C.; Chuang, J.-C.

    2012-06-01

    This paper proposes an improved image coding scheme based on vector quantization (VQ). It is well known that the quality of a VQ-compressed image is poor when a small-sized codebook is used. To solve this problem, the mean value of the image block is taken as an alternative block encoding rule to improve image quality in the proposed scheme. To cut down the storage cost of compressed codes, a two-stage lossless coding approach combining linear prediction and Huffman coding is employed. The results show that the proposed scheme achieves better image quality than vector quantization while keeping bit rates low.
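    The alternative block encoding rule can be sketched as: if a block is nearly uniform, store only its mean; otherwise store the index of the nearest codeword. The block size, the variance test, and the names below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def encode_block(block, codebook, var_threshold=10.0):
    """Return ('mean', value) for near-uniform blocks, else ('vq', index)
    of the closest codeword in Euclidean distance.

    codebook: (n_codewords, block_size) array of flattened codewords."""
    if block.var() < var_threshold:
        return ('mean', float(block.mean()))
    v = block.ravel()
    dists = np.sum((codebook - v) ** 2, axis=1)
    return ('vq', int(np.argmin(dists)))
```

    Smooth blocks, which a small codebook reproduces worst, are thus encoded by their mean, while textured blocks still go through the codebook.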

  9. Improving the Quality of Imaging in the Emergency Department.

    PubMed

    Blackmore, C Craig; Castro, Alexandra

    2015-12-01

    Imaging is critical for the care of emergency department (ED) patients. However, much of the imaging performed for acute care today is overutilization, creating substantial cost without significant benefit. Further, the value of imaging is not easily defined, as imaging only affects outcomes indirectly, through interaction with treatment. Improving the quality, including appropriateness, of emergency imaging requires understanding of how imaging contributes to patient care. The six-tier efficacy hierarchy of Fryback and Thornbury enables understanding of the value of imaging on multiple levels, ranging from technical efficacy to medical decision-making and higher-level patient and societal outcomes. The imaging efficacy hierarchy also allows definition of imaging quality through the Institute of Medicine (IOM)'s quality domains of safety, effectiveness, patient-centeredness, timeliness, efficiency, and equitability and provides a foundation for quality improvement. In this article, the authors elucidate the Fryback and Thornbury framework to define the value of imaging in the ED and to relate emergency imaging to the IOM quality domains. PMID:26568040

  10. Improved Imaging With Laser-Induced Eddy Currents

    NASA Technical Reports Server (NTRS)

    Chern, Engmin J.

    1993-01-01

    System tests specimen of material nondestructively by laser-induced eddy-current imaging improved by changing method of processing of eddy-current signal. Changes in impedance of eddy-current coil measured in absolute instead of relative units.

  11. Improved biliary detection and diagnosis through intelligent machine analysis.

    PubMed

    Logeswaran, Rajasvaran

    2012-09-01

    This paper reports on work undertaken to improve automated detection of bile ducts in magnetic resonance cholangiopancreatography (MRCP) images, with the objective of conducting preliminary classification of the images for diagnosis. The proposed I-BDeDIMA (Improved Biliary Detection and Diagnosis through Intelligent Machine Analysis) scheme is a multi-stage framework consisting of successive phases of image normalization, denoising, structure identification, object labeling, feature selection, and disease classification. A combination of multiresolution wavelets, dynamic intensity thresholding, segment-based region growing, region elimination, statistical analysis, and neural networks is used in this framework to achieve good structure detection and preliminary diagnosis. Tests conducted on over 200 clinical images with known diagnoses have shown promising results of over 90% accuracy. The scheme outperforms related work in the literature, making it a viable framework for computer-aided diagnosis of biliary diseases. PMID:21194781

  12. Photoacoustic imaging with rotational compounding for improved signal detection

    NASA Astrophysics Data System (ADS)

    Forbrich, A.; Heinmiller, A.; Jose, J.; Needles, A.; Hirson, D.

    2015-03-01

    Photoacoustic microscopy with linear array transducers enables fast two-dimensional, cross-sectional photoacoustic imaging. Unfortunately, most ultrasound transducers are sensitive only over a very narrow angular acceptance range and preferentially detect signals along the main axis of the transducer. This often prevents photoacoustic microscopy from detecting blood vessels, which can extend in any direction. Rotational compounded photoacoustic imaging is introduced to overcome the angular dependency of detecting acoustic signals with linear array transducers. An integrated system is designed to control the image acquisition using a linear array transducer, a motorized rotational stage, and a motorized lateral stage. Images acquired at multiple angular positions are combined to form a rotational compounded image. We found that the signal-to-noise ratio improved, while sidelobe and reverberation artifacts were substantially reduced. Furthermore, the rotational compounded images of excised kidneys and hindlimb tumors of mice showed more structural information than any single image collected.

  13. High resolution PET breast imager with improved detection efficiency

    DOEpatents

    Majewski, Stanislaw

    2010-06-08

    A highly efficient PET breast imager for detecting lesions in the entire breast, including those located close to the patient's chest wall. The breast imager includes a ring of imaging modules surrounding the imaged breast. Each imaging module includes a slant imaging light guide inserted between a gamma radiation sensor and a photodetector. The slant light guide permits the gamma radiation sensors to be placed in close proximity to the skin of the chest wall, thereby extending the sensitive region of the imager to the base of the breast. Several types of photodetectors are proposed for use in the detector modules, with silicon photomultipliers the preferred choice due to their compactness. The geometry of the detector heads and the arrangement of the detector ring significantly reduce dead regions, thereby improving detection efficiency for lesions located close to the chest wall.

  14. Instrumentation for Improvement of Gas Imaging Systems

    NASA Astrophysics Data System (ADS)

    Happer, William

    2002-08-01

    Funds from the AFOSR:DURIP grant F49620-01-1-0254 have been used to purchase three major pieces of equipment: (1) a nuclear magnetic resonance spectrometer system for studies of the basic physics of hyperpolarized Xe-129 and He-3 gases; (2) a 9.4 T superconducting magnet with a 3 inch room-temperature bore; and (3) a Verdi diode-pumped Nd:YAG laser to replace the very expensive argon ion laser we have traditionally used for pumping our Ti:sapphire tunable laser. This new equipment has greatly improved the research productivity of our laboratory.

  15. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data-compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. 
The method tracks the average number of bits used to encode recent runlengths, and takes the difference between this average
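    The run-length scheme described above can be sketched as follows. This is a minimal illustration under stated assumptions: the Golomb parameter is fixed to a power of two (i.e., a Rice code) rather than adapted on the fly, and signed nonzero values are handled with an illustrative sign bit; neither choice is specified by the abstract.

```python
def exp_golomb(n):
    """Order-0 exponential-Golomb code for a nonnegative integer n."""
    bits = format(n + 1, 'b')
    return '0' * (len(bits) - 1) + bits

def rice_encode(n, k):
    """Golomb code with parameter m = 2**k (a Rice code): unary quotient, k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, '0{}b'.format(k)) if k else '')

def encode_runs(coeffs, k=2):
    """Parse the sequence into (run of zeros, nonzero value) pairs and encode:
    run lengths with exp-Golomb codes, nonzero magnitudes with a Rice/Golomb code."""
    bits, run = [], 0
    for v in coeffs:
        if v == 0:
            run += 1
        else:
            bits.append(exp_golomb(run))             # length of the zero run
            bits.append(rice_encode(abs(v) - 1, k))  # nonzero magnitude, mapped to >= 0
            bits.append('0' if v > 0 else '1')       # sign bit (illustrative choice)
            run = 0
    return ''.join(bits)
```

For example, the sequence [0, 0, 3] parses into one string (run of two zeros, then the value 3) and is emitted as the run-length code followed by the magnitude code and sign bit.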

  16. Image segmentation using an improved differential algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Hao; Shi, Yujiao; Wu, Dongmei

    2014-10-01

    Among all the existing segmentation techniques, the thresholding technique is one of the most popular due to its simplicity, robustness, and accuracy (e.g. the maximum entropy method, Otsu's method, and K-means clustering). However, the computation time of these algorithms grows exponentially with the number of thresholds due to their exhaustive searching strategy. As a population-based optimization algorithm, the differential algorithm (DE) uses a population of potential solutions and a decision-making process. It has shown considerable success in solving complex optimization problems within a reasonable time limit. Thus, applying this method to segmentation is a good choice owing to its fast computation. In this paper, we first propose a new differential algorithm with a balance strategy, which seeks a balance between the exploration of new regions and the exploitation of the already sampled regions. Then, we apply the new DE to the traditional Otsu method to shorten the computation time. Experimental results of the new algorithm on a variety of images show that, compared with EA-based thresholding methods, the proposed DE algorithm gives more effective and efficient results and shortens the computation time of the traditional Otsu method.
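    The DE-based thresholding idea can be sketched as a minimal rand/1/bin differential evolution loop that maximizes Otsu's between-class variance over candidate threshold vectors. The paper's balance strategy is omitted, and the population size, F, and CR values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu objective: between-class variance of the classes induced by the thresholds."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    bounds = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    mu_total = (p * levels).sum()
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()          # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def de_otsu(hist, n_thresh=2, pop=20, gens=60, F=0.6, CR=0.9, seed=0):
    """Minimal rand/1/bin differential evolution maximizing the Otsu objective.
    (Base vectors may coincide with the target; acceptable for a sketch.)"""
    rng = np.random.default_rng(seed)
    L = len(hist)
    X = rng.uniform(1, L - 1, size=(pop, n_thresh))
    fit = np.array([between_class_variance(hist, x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            trial = np.clip(a + F * (b - c), 1, L - 1)      # mutation
            mask = rng.random(n_thresh) < CR                # binomial crossover
            trial = np.where(mask, trial, X[i])
            f = between_class_variance(hist, trial)
            if f > fit[i]:                                  # greedy selection
                X[i], fit[i] = trial, f
    return sorted(int(t) for t in X[fit.argmax()])
```

On a clearly bimodal histogram with a single threshold, the search settles anywhere on the flat optimum between the two modes.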

  17. Image analysis of Renaissance copperplate prints

    NASA Astrophysics Data System (ADS)

    Hedges, S. Blair

    2008-02-01

    From the fifteenth to the nineteenth centuries, prints were a common form of visual communication, analogous to photographs. Copperplate prints have many finely engraved black lines which were used to create the illusion of continuous tone. Line densities generally are 100-2000 lines per square centimeter and a print can contain more than a million total engraved lines 20-300 micrometers in width. Because hundreds to thousands of prints were made from a single copperplate over decades, variation among prints can have historical value. The largest variation is plate-related, which is the thinning of lines over successive editions as a result of plate polishing to remove time-accumulated corrosion. Thinning can be quantified with image analysis and used to date undated prints and books containing prints. Print-related variation, such as over-inking of the print, is a smaller but significant source. Image-related variation can introduce bias if images were differentially illuminated or not in focus, but improved imaging technology can limit this variation. The Print Index, the percentage of an area composed of lines, is proposed as a primary measure of variation. Statistical methods also are proposed for comparing and identifying prints in the context of a print database.
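    The Print Index described above can be illustrated with a short sketch: binarize the imaged region and report the percentage of pixels classified as engraved line (dark). The fixed intensity cutoff here is an assumption for illustration; the paper's actual binarization procedure is not specified.

```python
import numpy as np

def print_index(gray, threshold=128):
    """Print Index: percentage of the region's pixels that are engraved line (dark).
    `threshold` is an illustrative fixed cutoff, not the paper's calibration."""
    gray = np.asarray(gray)
    return 100.0 * (gray < threshold).mean()
```

A region that is half line and half paper therefore yields a Print Index of 50.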

  18. Improved colorization for night vision system based on image splitting

    NASA Astrophysics Data System (ADS)

    Ali, E.; Kozaitis, S. P.

    2015-03-01

    The success of a color night navigation system often depends on the accuracy of the colors in the resulting image. Often, small regions can incorrectly adopt the color of large regions simply due to the size of the regions. We present a method to improve the color accuracy of a night navigation system by splitting a fused image into two distinct sections, generally road and sky regions, before colorization and processing them separately to obtain improved color accuracy in each region. Using this approach, small regions are colored correctly when compared to not separating regions.

  19. Improvements for Image Compression Using Adaptive Principal Component Extraction (APEX)

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1997-01-01

    The issues of image compression and pattern classification have been a primary focus of researchers in a variety of fields including signal and image processing, pattern recognition, and data classification. These issues depend on finding an efficient representation of the source data. In this paper we collate our earlier results, in which we introduced the application of the Hilbert scan to a principal component analysis (PCA) algorithm with the Adaptive Principal Component Extraction (APEX) neural network model. We apply these techniques to medical imaging, particularly image representation and compression. We apply the Hilbert scan to the APEX algorithm to improve results.

  20. An improved NAS-RIF algorithm for blind image restoration

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Jiang, Yanbin; Lou, Shuntian

    2007-01-01

    Image restoration is widely applied in many areas, but when operating on images with different scales for the representation of pixel intensity levels, or on images with low SNR, the traditional restoration algorithm lacks validity and induces noise amplification, ringing artifacts and poor convergence. In this paper, an improved NAS-RIF algorithm is proposed to overcome the shortcomings of the traditional algorithm. The improved algorithm introduces a new cost function which adds a space-adaptive regularization term and a disunity gain for the adaptive filter. In determining the support region, a pre-segmentation is used to form it close to the object in the image. Compared with the traditional algorithm, simulations show that the improved algorithm exhibits better convergence and noise resistance and provides a better estimate of the original image.

  1. Spreadsheet-like image analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Paul

    1992-08-01

    This report describes the design of a new software system being built by the Army to support and augment automated nondestructive inspection (NDI) on-line equipment implemented by the Army for detection of defective manufactured items. The new system recalls and post-processes (off-line) the NDI data sets archived by the on-line equipment for the purposes of verifying the correctness of the inspection analysis paradigms, of developing better analysis paradigms, and of gathering statistics on the defects of the items inspected. The design of the system is similar to that of a spreadsheet, i.e., an array of cells which may be programmed to contain functions whose arguments are data from other cells and whose resultant is the output of that cell's function. Unlike a spreadsheet, the arguments and the resultants of a cell may be a matrix, such as a two-dimensional matrix of picture elements (pixels). Functions include matrix mathematics, neural networks, and image processing as well as those ordinarily found in spreadsheets. The system employs all of the common environmental supports of the Macintosh computer, which is the hardware platform. The system allows the resultant of a cell to be displayed in any of multiple formats such as a matrix of numbers, text, an image, or a chart. Each cell is a window onto the resultant. Like a spreadsheet, if the input value of any cell is changed, its effect is cascaded into the resultants of all cells whose functions use that value directly or indirectly. The system encourages the user to play what-if games, as ordinary spreadsheets do.
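    The cell-cascade design can be sketched as follows: a minimal model, entirely an illustrative assumption, in which cells hold scalars or pixel matrices and a naive full recomputation pass propagates a changed input through dependent function cells. The real system's scheduling, display formats, and Macintosh environment are not modeled.

```python
import numpy as np

class Cell:
    """A spreadsheet-like cell whose value may be a scalar or a pixel matrix.
    A function cell recomputes from the named cells it depends on."""
    def __init__(self, sheet, name, fn=None, deps=()):
        self.name, self.fn, self.deps = name, fn, list(deps)
        self.value = None
        sheet[name] = self

def recompute(sheet):
    """Cascade: repeatedly re-evaluate function cells (enough passes for an acyclic sheet)."""
    for _ in range(len(sheet)):
        for cell in sheet.values():
            if cell.fn is not None:
                args = [sheet[d].value for d in cell.deps]
                if all(a is not None for a in args):
                    cell.value = cell.fn(*args)

def set_value(sheet, name, value):
    """Change an input cell and cascade the effect to all dependent cells."""
    sheet[name].value = value
    recompute(sheet)
```

A toy sheet: cell A holds a pixel matrix, B doubles it, and C sums B; changing A cascades into B and C, as the report describes.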

  2. Retinex Preprocessing for Improved Multi-Spectral Image Classification

    NASA Technical Reports Server (NTRS)

    Thompson, B.; Rahman, Z.; Park, S.

    2000-01-01

    The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides better within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, the classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. The principal advantage that the retinex offers is that for different lighting conditions classifications derived from the retinex preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original
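    A generic multiscale retinex of the kind discussed above can be sketched as the average, over several scales, of the difference between the log image and the log of its Gaussian surround. The scales and equal weights here are textbook defaults assumed for illustration, not necessarily the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250), eps=1e-6):
    """Multiscale retinex: mean over scales of log(image) - log(Gaussian surround).
    Performs dynamic range compression and reduces dependence on illumination."""
    img = np.asarray(img, dtype=float) + eps
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_filter(img, s) + eps)
    return out / len(sigmas)
```

Because each term subtracts a smoothed log illumination estimate, a uniformly lit flat region maps to (approximately) zero, which is the sense in which the output depends less on lighting.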

  3. FFDM image quality assessment using computerized image texture analysis

    NASA Astrophysics Data System (ADS)

    Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina

    2010-04-01

    Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kVs, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5cm2 retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and off-set correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium ViewTM postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R2=0.92, p<=0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R2=0.95, p<=0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
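    A few of the measures named above can be sketched directly: SNR as the mean over the standard deviation of the raw ROI, pixel-value skewness, and histogram energy. These simple definitions are assumptions for illustration; the paper's GLCM-style contrast/homogeneity features and its exact SNR and texture definitions are not reproduced here.

```python
import numpy as np

def roi_features(roi, bins=64):
    """Illustrative ROI measures: SNR (mean/std), skewness of pixel values,
    and energy of the normalized intensity histogram."""
    roi = np.asarray(roi, dtype=float).ravel()
    mu, sd = roi.mean(), roi.std()
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    return {
        'snr': mu / sd,
        'skewness': ((roi - mu) ** 3).mean() / sd ** 3,
        'energy': (p ** 2).sum(),
    }
```

Features like these, computed per ROI over many exposure settings, are the kind of predictors regressed against SNR in the study.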

  4. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion by intradermal injection of a microdose of an NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software is underdeveloped, requiring extensive time and effort to analyze an NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a single system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.

  5. An improved optical identity authentication system with significant output images

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Liu, Ming-tang; Yao, Shu-xia; Xin, Yan-hui

    2012-06-01

    An improved method for an optical identity authentication system with significant output images is proposed. In this method, a predefined image is digitally encoded into two phase-masks relating to a fixed phase-mask, and this fixed phase-mask acts as a lock to the system. When the two phase-masks, serving as the key, are presented to the system, the predefined image is generated at the output. In addition to simple verification, our method is capable of identifying the type of input phase-mask, and the duties of identity verification and recognition are separated and assigned, respectively, to the amplitude and phase of the output image. Numerical simulation results show that our proposed method is feasible and that an output image with better image quality can be obtained.

  6. Resolution and quantitative accuracy improvements in ultrasound transmission imaging

    NASA Astrophysics Data System (ADS)

    Chenevert, T. L.

    The type of ultrasound transmission imaging referred to as ultrasonic computed tomography (UCT) reconstructs distributions of tissue speed of sound and sound attenuation properties from measurements of acoustic pulse time of flight (TOF) and energy received through tissue. Although clinical studies with experimental UCT scanners have demonstrated that UCT is sensitive to certain tissue pathologies not easily detected with conventional ultrasound imaging, they have also shown UCT to suffer from artifacts due to physical differences between the acoustic beam and its ray model implicit in image reconstruction algorithms. Artifacts are expressed as large quantitative errors in attenuation images, and as poor spatial resolution and size distortion (exaggerated size of high speed of sound regions) in speed of sound images. Methods are introduced and investigated which alleviate these problems in UCT imaging by providing improved measurements of pulse TOF and energy.

  7. Image analysis applications for grain science

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Steele, James L.

    1991-02-01

    Morphometrical features of single grain kernels or particles were used to discriminate between two visibly similar wheat varieties, foreign material in wheat, hard/soft and spring/winter wheat classes, and whole from broken corn kernels. Milled fractions of hard and soft wheat were evaluated using textural image analysis. Color image analysis of sound and mold-damaged corn kernels yielded high recognition rates. The studies collectively demonstrate the potential for automated classification and assessment of grain quality using image analysis.

  8. Automatic processing, analysis, and recognition of images

    NASA Astrophysics Data System (ADS)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

    New approaches and computer codes (A&CC) for the automatic processing, analysis and recognition of images are offered. The A&CC are based on representing an object image as a collection of pixels of various colours and on consecutive, automatic painting of distinct parts of the image. The A&CC address technical objectives in directions such as: 1) image processing, 2) image feature extraction, 3) image analysis, and others, in any sequence and combination. The A&CC make it possible to obtain various geometrical and statistical parameters of an object image and its parts. Additional possibilities arise from the use of artificial neural network technologies. We believe that the A&CC can be used in creating systems for testing and control in various fields of industry and in military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in creating new software for CCDs, in industrial vision, and in building decision-making systems. The capabilities of the A&CC have been tested on image analysis of model fires and plumes of sprayed fluid and of ensembles of particles, on decoding interferometric images, on digitizing paper diagrams of electrical signals, on text recognition, on noise elimination and image filtering, on analysis of astronomical images and aerial photography, and on object detection.

  9. Restoration Of MEX SRC Images For Improved Topography: A New Image Product

    NASA Astrophysics Data System (ADS)

    Duxbury, T. C.

    2012-12-01

    Surface topography is an important constraint when investigating the evolution of solar system bodies. Topography is typically obtained from stereo photogrammetric or photometric (shape from shading) analyses of overlapping / stereo images and from laser / radar altimetry data. The ESA Mars Express Mission [1] carries a Super Resolution Channel (SRC) as part of the High Resolution Stereo Camera (HRSC) [2]. The SRC can build up overlapping / stereo coverage of Mars, Phobos and Deimos by viewing the surfaces from different orbits. The derivation of high precision topography data from the SRC raw images is degraded because the camera is out of focus. The point spread function (PSF) is multi-peaked, covering tens of pixels. After registering and co-adding hundreds of star images, an accurate SRC PSF was reconstructed and is being used to restore the SRC images to near blur free quality. The restored images offer a factor of about 3 in improved geometric accuracy as well as identifying the smallest of features to significantly improve the stereo photogrammetric accuracy in producing digital elevation models. The difference between blurred and restored images provides a new derived image product that can provide improved feature recognition to increase spatial resolution and topographic accuracy of derived elevation models. Acknowledgements: This research was funded by the NASA Mars Express Participating Scientist Program. [1] Chicarro, et al., ESA SP 1291(2009) [2] Neukum, et al., ESA SP 1291 (2009). A raw SRC image (h4235.003) of a Martian crater within Gale crater (the MSL landing site) is shown in the upper left and the restored image is shown in the lower left. A raw image (h0715.004) of Phobos is shown in the upper right and the difference between the raw and restored images, a new derived image data product, is shown in the lower right. The lower images, resulting from an image restoration process, significantly improve feature recognition for improved derived
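    Deconvolution with a measured PSF, as described above, can be illustrated with a textbook Richardson-Lucy iteration. This is an assumption for illustration only; the abstract does not specify which restoration algorithm the SRC pipeline uses, and the reconstructed multi-peaked SRC PSF is replaced here by a toy kernel.

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Richardson-Lucy deblurring given a known (e.g. star-measured) PSF.
    Iterates: estimate *= correlate(observed / (estimate ⊗ psf), psf)."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    est = np.full_like(blurred, blurred.mean(), dtype=float)  # flat initial guess
    for _ in range(n_iter):
        conv = convolve(est, psf, mode='reflect')
        ratio = blurred / (conv + 1e-12)
        est *= convolve(ratio, psf_mirror, mode='reflect')
    return est
```

On a noiseless point source blurred by a box kernel, the iteration concentrates flux back toward the point, which is the sense in which restoration sharpens features for stereo photogrammetry.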

  10. Automated digital image analysis of islet cell mass using Nikon's inverted eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

    Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88), which increased to r(2) = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method versus the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this
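    The IEQ quantity used above can be sketched with the conventional volume normalization, in which each islet is counted as the cube of its diameter relative to a standard 150 µm islet. This continuous form is an assumption for illustration; the classical Ricordi counting method instead bins diameters into 50 µm classes with fixed conversion factors, and the paper's software pipeline is not reproduced here.

```python
def islet_equivalents(diameters_um):
    """Convert measured islet diameters (micrometers) to islet equivalents (IEQ)
    by volume normalization to a standard 150 um islet: IEQ_i = (d_i / 150)**3."""
    return sum((d / 150.0) ** 3 for d in diameters_um)
```

So one 150 µm islet contributes exactly 1 IEQ, while one 300 µm islet contributes 8 IEQ (eight times the volume).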

  11. Improving face image extraction by using deep learning technique

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-iSM multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely-used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, significantly improved the detection precision. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.

  12. Improvements in interpretation of posterior capsular opacification (PCO) images

    NASA Astrophysics Data System (ADS)

    Paplinski, Andrew P.; Boyce, James F.; Barman, Sarah A.

    2000-06-01

    We present further improvements to the methods of interpretation of Posterior Capsular Opacification (PCO) images. These retro-illumination images of the back surface of the implanted lens are used to monitor the state of a patient's vision after a cataract operation. A common post-surgical complication is opacification of the posterior eye capsule caused by the growth of epithelial cells across the back surface of the capsule. Interpretation of the PCO images is based on their segmentation into transparent image areas and opaque areas, which are affected by the growth of epithelial cells and can be characterized by an increase in the local image variance. This assumption is valid in the majority of cases. However, for some of the materials used for the implanted lenses, the epithelial cells sometimes grow in a way characterized by low variance. In such cases segmentation gives a relatively large error. We describe an application of an anisotropic diffusion equation in a non-linear pre-processing of PCO images. The algorithm preserves the high-variance areas of PCO images and performs a low-pass filtering of small low-variance features. The algorithm maintains the mean value of the variance, guarantees the existence of a stable solution, and improves segmentation of the PCO images.
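    A standard Perona-Malik anisotropic diffusion step of the kind described, which damps diffusion across high-gradient (high-variance) features while smoothing low-variance regions, can be sketched as follows. The exponential conductance function and all parameter values are assumptions for illustration, not the paper's.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """Perona-Malik diffusion: smooth low-variance regions while preserving
    strong edges. Borders are handled periodically via np.roll (illustrative)."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # nearest-neighbor differences in the four directions
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conductance g = exp(-(|grad u| / kappa)^2): near 0 across strong edges
        u += dt * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u
```

On a noisy step image, the flat halves are smoothed (their variance drops) while the step itself is essentially untouched, which is the behavior exploited before PCO segmentation.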

  13. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  14. MR brain image analysis in dementia: From quantitative imaging biomarkers to ageing brain models and imaging genetics.

    PubMed

    Niessen, Wiro J

    2016-10-01

    MR brain image analysis has been a highly active research area within medical image analysis over the past two decades. In this article, it is discussed how the field developed from the construction of tools for automatic quantification of brain morphology, function, connectivity and pathology, to creating models of the ageing brain in normal ageing and disease, and tools for integrated analysis of imaging and genetic data. The current and future role of the field in improved understanding of the development of neurodegenerative disease is discussed, as is its potential for aiding in early and differential diagnosis and prognosis of different types of dementia. For the latter, the use of reference imaging data and reference models derived from large clinical and population imaging studies, and the application of machine learning techniques on these reference data, are expected to play a key role. PMID:27344937

  15. Flattening filter removal for improved image quality of megavoltage fluoroscopy

    SciTech Connect

    Christensen, James D.; Kirichenko, Alexander; Gayou, Olivier

    2013-08-15

    Purpose: Removal of the linear accelerator (linac) flattening filter enables a high rate of dose deposition with reduced treatment time. When used for megavoltage imaging, an unflat beam has reduced primary beam scatter, resulting in sharper images. In fluoroscopic imaging mode, the unflat beam has a higher photon count per image frame, yielding a higher contrast-to-noise ratio. The authors' goal was to quantify the effects of an unflat beam on the image quality of megavoltage portal and fluoroscopic images. Methods: 6 MV projection images were acquired in fluoroscopic and portal modes using an electronic flat-panel imager. The effects of the flattening filter on the relative modulation transfer function (MTF) and contrast-to-noise ratio were quantified using the QC3 phantom. The impact of flattening filter removal on the contrast-to-noise ratio of gold fiducial markers was also studied under various scatter conditions. Results: The unflat beam had improved contrast resolution, with up to a 40% increase in MTF contrast at the highest frequency measured (0.75 line pairs/mm). The contrast-to-noise ratio increased as expected from the increased photon flux. The visualization of fiducial markers was markedly better using the unflat beam under all scatter conditions, enabling visualization of thin gold fiducial markers, the thinnest of which was not visible using the flat beam. Conclusions: The removal of the flattening filter from a clinical linac leads to quantifiable improvements in the image quality of megavoltage projection images. These gains enable observers to more easily visualize thin fiducial markers and track their motion on fluoroscopic images.

  16. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M.

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
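The matched-edge scoring at the core of this method can be sketched as follows (a minimal illustration; the function names and the exhaustive search window are my own, not from the patent, and the statistical criterion for flagging registration points not significantly worse than the best is not reproduced here):

```python
import numpy as np

def edge_match_percentage(edges_ref, edges_test, dy, dx):
    # Percentage of edge pixels in edges_test that are also edges in
    # edges_ref after shifting edges_ref by (dy, dx). Boolean arrays.
    shifted = np.roll(np.roll(edges_ref, dy, axis=0), dx, axis=1)
    n_test = edges_test.sum()
    if n_test == 0:
        return 0.0
    return 100.0 * np.logical_and(edges_test, shifted).sum() / n_test

def best_registration(edges_ref, edges_test, search=2):
    # Exhaustive search over a small translation window for the shift
    # maximizing the matched-edge percentage.
    best = (0.0, (0, 0))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            p = edge_match_percentage(edges_ref, edges_test, dy, dx)
            if p > best[0]:
                best = (p, (dy, dx))
    return best
```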

  17. Image-plane processing for improved computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    The proper combination of optical design with image-plane processing, as in the mechanism of human vision, was examined as a way to improve the performance of sensor-array imaging systems for edge detection and location. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission. It also permits control of the spatial imaging system response to trade off edge enhancement against sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal intensity transitions from which the location of edges is determined for raw primal sketches. Shading the lens transmittance to increase depth of field and using a hexagonal instead of a square sensor-array lattice to decrease sensitivity to edge orientation improves edge information by about 10%.

  18. An improved dehazing algorithm of aerial high-definition image

    NASA Astrophysics Data System (ADS)

    Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying

    2016-01-01

    For unmanned aerial vehicle (UAV) images, the sensor cannot acquire high-quality images in fog and haze. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crude estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crude estimated transmission map into different areas and applies different guided filtering to each area to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filtering, while its average computation time is around 40% of the latter's, and the detection ability of UAV images in fog and haze is effectively improved.
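The crude transmission estimate that such dark-channel-prior methods start from can be sketched as follows (a minimal illustration, assuming the standard formulation t = 1 - omega * dark(I/A); the function names are mine, and the edge-guided refinement stage is not shown):

```python
import numpy as np

def dark_channel(img, patch=3):
    # Dark channel prior: per-pixel minimum over the color channels,
    # followed by a local minimum filter over a patch x patch window.
    # `img` is an H x W x 3 float array in [0, 1].
    mins = img.min(axis=2)
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def crude_transmission(img, atmosphere, omega=0.95, patch=3):
    # Crude transmission map: t = 1 - omega * dark_channel(I / A),
    # where A is the estimated atmospheric light.
    return 1.0 - omega * dark_channel(img / atmosphere, patch)
```

A haze-free scene (some channel near zero everywhere) yields t close to 1, while a densely hazy scene (all channels near the atmospheric light) yields t close to 1 - omega.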

  19. Digital Image Analysis for DETCHIP(®) Code Determination.

    PubMed

    Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby

    2012-08-01

    DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940
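The RGB measurement underlying such codes can be illustrated with a short sketch (hypothetical helpers and threshold; the paper itself used GIMP, Photoshop, and an ImageJ macro):

```python
import numpy as np

def mean_rgb(img, y0, y1, x0, x1):
    # Average R, G, B values over a rectangular spot region of an
    # H x W x 3 uint8 image.
    region = img[y0:y1, x0:x1].reshape(-1, 3).astype(float)
    return tuple(region.mean(axis=0))

def color_change_code(before, after, spot, threshold=10.0):
    # Return 1 if any RGB component of the spot changed by more than
    # `threshold` counts between the before/after images, else 0.
    b = np.array(mean_rgb(before, *spot))
    a = np.array(mean_rgb(after, *spot))
    return int(np.any(np.abs(a - b) > threshold))
```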

  20. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  1. Cascaded diffractive optical elements for improved multiplane image reconstruction.

    PubMed

    Gülses, A Alkan; Jenkins, B Keith

    2013-05-20

    Computer-generated phase-only diffractive optical elements in a cascaded setup are designed by one deterministic and one stochastic algorithm for multiplane image formation. It is hypothesized that increasing the number of elements as wavefront modulators in the longitudinal dimension would enlarge the available solution space, thus enabling enhanced image reconstruction. Numerical results show that increasing the number of holograms improves quality at the output. Design principles, computational methods, and specific conditions are discussed. PMID:23736247

  2. Millimeter-wave sensor image analysis

    NASA Technical Reports Server (NTRS)

    Wilson, William J.; Suess, Helmut

    1989-01-01

    Images from an airborne scanning radiometer operating at a frequency of 98 GHz have been analyzed. The mm-wave images were obtained in 1985/1986 using the JPL mm-wave imaging sensor. The goal of this study was to enhance the information content of these images and make their interpretation easier for human analysts. In this paper, a visual interpretative approach was used for information extraction from the images. This included the application of nonlinear transform techniques for noise reduction and for color, contrast and edge enhancement. Results of the techniques on selected mm-wave images are presented.

  3. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  4. Structural anisotropy quantification improves the final superresolution image of localization microscopy

    NASA Astrophysics Data System (ADS)

    Wang, Yina; Huang, Zhen-li

    2016-07-01

    Superresolution localization microscopy initially produces a dataset of fluorophore coordinates instead of a conventional digital image. Therefore, superresolution localization microscopy requires additional data analysis to present a final superresolution image. However, methods of employing the structural information within the localization dataset to improve the data analysis performance remain poorly developed. Here, we quantify the structural information in a localization dataset using structural anisotropy, and propose to use it as a figure of merit for localization event filtering. With simulated as well as experimental data of a biological specimen, we demonstrate that exploring structural anisotropy has allowed us to obtain superresolution images with a much cleaner background.

  5. Quantitative analysis of digital microscope images.

    PubMed

    Wolf, David E; Samarasekera, Champika; Swedlow, Jason R

    2013-01-01

    This chapter discusses quantitative analysis of digital microscope images and presents several exercises that illustrate the concepts. The basic concepts of quantitative imaging analysis rest on a well-established foundation of signal theory and quantitative data analysis. Several examples are presented for understanding the imaging process as a transformation from sample to image, along with the limits and considerations of quantitative analysis. The chapter introduces the concept of digitally correcting images and focuses on some of the more critical types of data transformation and some frequently encountered issues in quantization. Image processing represents a form of data processing, of which there are many other examples, such as fitting data to a theoretical curve. In all these cases, it is critical that care be taken during all steps of transformation, processing, and quantization. PMID:23931513

  6. Continuous-wave terahertz scanning image resolution analysis and restoration

    NASA Astrophysics Data System (ADS)

    Li, Qi; Yin, Qiguo; Yao, Rui; Ding, Shenghui; Wang, Qi

    2010-03-01

    The resolution of continuous-wave (CW) terahertz scanning images is limited by many factors, among which the aperture effect of the finite focus diameter is very important. We have investigated the factors that affect terahertz (THz) image resolution in detail through theoretical analysis and simulation. To enhance THz image resolution, the Richardson-Lucy algorithm is introduced as a promising approach to recover image details. By analyzing the imaging theory, it is proposed that the intensity distribution of the actual THz laser focal spot can be used approximately as the point spread function (PSF) in the restoration algorithm. The focal-spot image is obtained with a pyroelectric camera, and the mean-filtered focal-spot image is used as the PSF. Simulation and experiment show that the implemented algorithm is comparatively effective.
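The Richardson-Lucy restoration step, with a measured focal-spot image standing in for the PSF, can be sketched as follows (a numpy-only illustration; the helper names, the flat initial estimate, and the zero-padded "same" correlation are my own choices, not details from the paper):

```python
import numpy as np

def corr2d_same(a, k):
    # 'Same'-size 2D correlation via zero padding (helper for the sketch).
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    pa = np.pad(a, ((ph, ph), (pw, pw)))
    out = np.zeros_like(a, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * pa[i:i + a.shape[0], j:j + a.shape[1]]
    return out

def richardson_lucy(blurred, psf, n_iter=50):
    # Minimal Richardson-Lucy deconvolution. `psf` is a small
    # nonnegative kernel normalized to sum to 1; in the paper it is
    # the mean-filtered measurement of the THz focal spot.
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]
    est = np.full(blurred.shape, blurred.mean())
    for _ in range(n_iter):
        conv = corr2d_same(est, psf_flip)       # est convolved with psf
        ratio = blurred / np.maximum(conv, 1e-12)
        est = est * corr2d_same(ratio, psf)     # convolve with flipped psf
    return est
```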

  7. A hybrid data-integration scheme for improved subsurface imaging

    NASA Astrophysics Data System (ADS)

    Moreira, L. P.; Friedel, M. J.

    2013-12-01

    A three-step modeling approach is presented for subsurface imaging with sparse and disparate data. First, seismic receiver function and surface wave dispersion data are jointly solved to estimate 1D velocity profiles at multiple locations. Second, a type of unsupervised artificial neural network is used to map the velocity and density profiles to the plane of a magnetotelluric survey. Third, the seismic and magnetotelluric data are jointly solved to improve the lateral and vertical resolution of images. The proposed approach is applied to image structural discontinuities and mineral deposit in Nevada, USA.

  8. An improved image deconvolution approach using local constraint

    NASA Astrophysics Data System (ADS)

    Zhao, Jufeng; Feng, Huajun; Xu, Zhihai; Li, Qi

    2012-03-01

    Conventional deblurring approaches such as the Richardson-Lucy (RL) algorithm introduce strong noise and ringing artifacts even when the point spread function (PSF) is known. Since it is difficult to estimate an accurate PSF in a real imaging system, the results of those algorithms degrade further. A spatial weight matrix (SWM) is adopted as a local constraint and incorporated into an image statistical prior to improve the RL approach. Experiments show that our approach strikes a good balance between preserving image details and suppressing ringing artifacts and noise.

  9. Computational methods for improving thermal imaging for consumer devices

    NASA Astrophysics Data System (ADS)

    Lynch, Colm N.; Devaney, Nicholas; Drimbarean, Alexandru

    2015-05-01

    In consumer imaging, the spatial resolution of thermal microbolometer arrays is limited by the large physical size of the individual detector elements. This also limits the number of pixels per image. If thermal sensors are to find a place in consumer imaging, as the newly released FLIR One would suggest, this resolution issue must be addressed. Our work focuses on improving the output quality of low resolution thermal cameras through computational means. The method we propose utilises sub-pixel shifts and temporal variations in the scene, using information from thermal and visible channels. Results from simulations and lab experiments are presented.

  10. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    NASA Astrophysics Data System (ADS)

    Liang, Jianming; Järvi, Timo; Kiuru, Aaro; Kormano, Martti; Svedström, Erkki

    2003-12-01

    The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for detecting pulmonary embolism even without contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. This fluoroscopic examination takes only about 2 seconds for a perfusion study, delivers only a low radiation dose to the patient, and involves no preparation, no radioactive isotopes, and no contrast media.

  11. Improving analysis of radiochromic films

    NASA Astrophysics Data System (ADS)

    Baptista Neto, A. T.; Meira-Belo, L. C.; Faria, L. O.

    2014-02-01

    An appropriate radiochromic film should respond uniformly throughout its surface after exposure to a uniform radiation field. The evaluation of radiochromic films may be carried out, for example, by measuring color intensities in digitized film images. Fluctuations in color intensity may be caused by many different factors, such as the optical structure of the film's active layer, defects in the film's structure, scratches, and external agents such as dust. The use of higher spatial resolution during film scanning also increases sensitivity to microscopic nonuniformity. Since the average is strongly influenced by extreme values, the use of other statistical tools, for which this problem is less pronounced, optimizes the application of higher spatial resolution and reduces standard deviations. This paper compares calibration curves of the XR-QA2 Gafchromic® radiochromic film based on three different methods: the average of all color-intensity readings, the median of the same readings, and the average of the readings that fall between the first and third quartiles. The results indicate that a higher spatial resolution may be adopted whenever the calibration curve is based on tools less influenced by extreme values, such as those generated by the factors mentioned above.
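The three summary statistics compared can be computed directly (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def robust_intensity(readings):
    # Return the three statistics compared in the paper: the plain mean,
    # the median, and the mean of the readings between the first and
    # third quartiles (an interquartile mean, insensitive to outliers
    # such as dust specks or scratches).
    r = np.asarray(readings, dtype=float)
    q1, q3 = np.percentile(r, [25, 75])
    inner = r[(r >= q1) & (r <= q3)]
    return r.mean(), np.median(r), inner.mean()
```

For a set of readings contaminated by a single dark dust speck, the plain mean is pulled down while the median and interquartile mean are unaffected.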

  12. Analysis on correlation imaging based on fractal interpolation

    NASA Astrophysics Data System (ADS)

    Li, Bailing; Zhang, Wenwen; Chen, Qian; Gu, Guohua

    2015-10-01

    A fractal interpolation algorithm is discussed in detail, and the statistical self-similarity characteristics of the light field are analyzed in a correlation experiment. For correlation imaging experiments under low sampling frequency, an image analysis approach based on fractal interpolation is proposed. This approach aims to increase the resolution of an original image containing few pixels and to highlight fuzzy image contour features. Using this method, a new model of the light field is established. For different moments of the intensity in the receiving plane, a local field division is also established, and the iterated function system based on the experimental data set can be obtained by choosing an appropriate compression ratio under a scientific error estimate. On the basis of the iterated function system, an explicit fractal interpolation function is given in this paper. The simulation results show that the correlation image reconstructed by fractal interpolation approximates the original image well. The number of pixels after interpolation is significantly increased. This method effectively addresses the problem of image pixel deficiency and significantly improves the outlines of objects in the image. The rate of deviation is adopted as a parameter to evaluate the effect of the algorithm objectively. In summary, the fractal interpolation method proposed in this paper not only preserves the overall image but also increases the local information of the original image.

  13. A 3D image analysis tool for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
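The intensity-based thresholding step of such tools can be illustrated minimally (hypothetical helpers; the fuzzy-connectedness segmentation explored in the paper is not shown):

```python
import numpy as np

def threshold_segment(volume, fraction=0.5):
    # Voxels at or above a fixed fraction of the volume maximum form
    # the segmented region (a simple intensity-thresholding sketch).
    return volume >= fraction * volume.max()

def region_volume(mask, voxel_ml=1.0):
    # Region volume = voxel count times the per-voxel volume.
    return mask.sum() * voxel_ml
```

Volume calculations for assessing gastric mass variation then reduce to counting the voxels inside the delineated boundary.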

  14. Improved iterative error analysis for endmember extraction from hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Lixin; Zhang, Ying; Guindon, Bert

    2008-08-01

    Automated image endmember extraction from hyperspectral imagery is a challenging and critical step in spectral mixture analysis (SMA). Over the past years, great efforts have been made and a large number of algorithms have been proposed to address this issue. Iterative error analysis (IEA) is one of the well-known endmember extraction methods. IEA identifies pixel spectra as image endmembers through an iterative process. In each iteration, a fully constrained (abundance nonnegativity and abundance sum-to-one constraints) spectral unmixing based on previously identified endmembers is performed to model all image pixels. The pixel spectrum with the largest residual error is then selected as a new image endmember. This paper proposes an updated version of IEA that improves the method in three aspects. First, fully constrained spectral unmixing is replaced by a weakly constrained (abundance nonnegativity and abundance sum-less-than-or-equal-to-one constraints) alternative. This is necessary because only a subset of the endmembers present in a hyperspectral image has been extracted by any intermediate iteration, so the abundance sum-to-one constraint is not yet valid. Second, the search strategy for achieving an optimal set of image endmembers is changed from sequential forward selection (SFS) to sequential forward floating selection (SFFS) to reduce the so-called "nesting effect" in the resultant set of endmembers. Third, a pixel spectrum is identified as a new image endmember based on both its spectral extremity in the feature hyperspace of the dataset and its capacity to characterize other mixed pixels. This is achieved by evaluating a set of extracted endmembers using a criterion function consisting of the mean and standard deviation of the residual-error image. A preliminary comparison between the image endmembers extracted using the improved and original IEA is conducted based on an airborne visible infrared imaging
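The residual-error selection at the heart of IEA can be sketched with a simple nonnegative unmixing solver (a projected-gradient routine stands in here for the constrained solver; the sum-less-than-or-equal-to-one constraint and the SFFS search are omitted, and all names are illustrative):

```python
import numpy as np

def nnls_pg(E, y, n_iter=500):
    # Nonnegative least squares by projected gradient descent:
    # minimize ||E a - y||^2 subject to a >= 0.
    a = np.zeros(E.shape[1])
    step = 1.0 / np.linalg.norm(E.T @ E, 2)
    for _ in range(n_iter):
        a = np.maximum(a - step * (E.T @ (E @ a - y)), 0.0)
    return a

def next_endmember(pixels, endmembers):
    # IEA-style selection: unmix every pixel against the current
    # endmember set and return the index of the pixel with the
    # largest residual error.
    E = np.column_stack(endmembers)
    residuals = [np.linalg.norm(p - E @ nnls_pg(E, p)) for p in pixels]
    return int(np.argmax(residuals))
```

A pixel lying outside the cone spanned by the current endmembers has the largest residual and becomes the next candidate endmember.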

  15. Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence.

    PubMed

    Beijbom, Oscar; Treibitz, Tali; Kline, David I; Eyal, Gal; Khen, Adi; Neal, Benjamin; Loya, Yossi; Mitchell, B Greg; Kriegman, David

    2016-01-01

    Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data-products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction of classification error-rate when using both images types compared to only using reflectance images. The improvements were large, in particular, for coral reef genera Platygyra, Acropora and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery in order to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck. PMID:27021133
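The data-preparation step of feeding registered reflectance and fluorescence imagery to one network can be sketched as simple channel stacking (an assumption about the input layout; the convolutional network itself is not shown):

```python
import numpy as np

def stack_modalities(reflectance, fluorescence):
    # Concatenate a registered reflectance image (H x W x 3) and a
    # fluorescence image (H x W x 3) along the channel axis, giving
    # the 6-channel array a CNN input layer would consume.
    assert reflectance.shape == fluorescence.shape
    return np.concatenate([reflectance, fluorescence], axis=2)
```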

  18. Image Reconstruction Using Analysis Model Prior.

    PubMed

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those in cases without using analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171
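The 2D finite difference operator discussed as a specific analysis operator, and the cosparsity it induces, can be illustrated briefly (helper names are mine):

```python
import numpy as np

def finite_difference(x):
    # 2D finite-difference analysis operator: horizontal and vertical
    # first differences of an image x.
    return np.diff(x, axis=1), np.diff(x, axis=0)

def cosupport_size(x, tol=1e-12):
    # Cosparsity: number of (near-)zero analysis coefficients. A
    # piecewise-constant image is highly cosparse under this operator.
    dh, dv = finite_difference(x)
    return int((np.abs(dh) < tol).sum() + (np.abs(dv) < tol).sum())
```

A constant image has all analysis coefficients zero; each edge removes a few coefficients from the cosupport, which is the structure the iterative cosupport detection exploits.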

  20. Spatial image modulation to improve performance of computed tomography imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor)

    2010-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having patterns for imposing spatial structure are provided. The pattern may be imposed either directly on the object scene being imaged or at the field stop aperture. The use of the pattern improves the accuracy of the captured spatial and spectral information.

  1. Image analysis by integration of disparate information

    NASA Technical Reports Server (NTRS)

    Lemoigne, Jacqueline

    1993-01-01

    Image analysis often starts with some preliminary segmentation which provides a representation of the scene needed for further interpretation. Segmentation can be performed in several ways, which are categorized as pixel-based, edge-based, and region-based. Each of these approaches is affected differently by various factors, and the final result may be improved by integrating several or all of these methods, thus taking advantage of their complementary nature. In this paper, we propose an approach that integrates pixel-based and edge-based results by utilizing an iterative relaxation technique. This approach has been implemented on a massively parallel computer and tested on remotely sensed imagery from the Landsat Thematic Mapper (TM) sensor.
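
    A minimal sketch of how such an integration can work, assuming a probabilistic label-relaxation scheme in which edge evidence gates the neighbourhood support for pixel-based class probabilities. The update rule and toy data below are illustrative, not the paper's exact formulation:

```python
import numpy as np

def relaxation_step(prob, edge, alpha=0.5):
    """One relaxation iteration: average class probabilities over the
    4-neighbourhood, down-weighting support coming from edge pixels,
    then blend with the current estimate and renormalise."""
    H, W, _ = prob.shape
    pp = np.pad(prob, ((1, 1), (1, 1), (0, 0)), mode='edge')
    pe = np.pad(edge, 1, mode='edge')
    support = np.zeros_like(prob)
    weight = np.zeros((H, W))
    for dy, dx in ((0, 1), (2, 1), (1, 0), (1, 2)):   # N, S, W, E neighbours
        w = 1.0 - pe[dy:dy + H, dx:dx + W]            # edge pixels give weak support
        support += w[..., None] * pp[dy:dy + H, dx:dx + W]
        weight += w
    support /= np.maximum(weight, 1e-9)[..., None]
    new = (1 - alpha) * prob + alpha * support
    return new / new.sum(axis=-1, keepdims=True)

# toy scene: noisy two-class probabilities with a vertical edge at column 4
rng = np.random.default_rng(0)
p1 = np.clip(0.2 + 0.6 * (np.arange(8) >= 4) + 0.05 * rng.standard_normal((8, 8)),
             0.01, 0.99)
prob = np.stack([1.0 - p1, p1], axis=-1)
edge = np.zeros((8, 8)); edge[:, 4] = 1.0
for _ in range(10):
    prob = relaxation_step(prob, edge)
labels = prob.argmax(axis=-1)   # clean left/right segmentation
```

    Because each neighbour's contribution is scaled by the local edge evidence, pixel-based class support does not leak across the detected boundary, which is the complementarity the abstract describes.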

  2. Image accuracy improvements in microwave tomographic thermometry: phantom experience.

    PubMed

    Meaney, P M; Paulsen, K D; Fanning, M W; Li, D; Fang, Q

    2003-01-01

    Evaluation of a laboratory-scale microwave imaging system for non-invasive temperature monitoring has previously been reported with good results in terms of both spatial and temperature resolution. However, a new formulation of the reconstruction algorithm in terms of the log-magnitude and phase of the electric fields has dramatically improved the ability of the system to track the temperature-dependent electrical conductivity distribution. This algorithmic enhancement was originally implemented as a way of improving overall imaging capability in cases of large, high contrast permittivity scatterers, but has also proved to be sensitive to subtle conductivity changes as required in thermal imaging. Additional refinements in the regularization procedure have strengthened the reliability and robustness of image convergence. Imaging experiments were performed for a single heated target consisting of a 5.1 cm diameter PVC tube located within 15 and 25 cm diameter monopole antenna arrays, respectively. The performance of both log-magnitude/phase and complex-valued reconstructions when subjected to four different regularization schemes has been compared based on this experimental data. The results demonstrate a significant accuracy improvement (to 0.2 degrees C as compared with 1.6 degrees C for the previously published approach) in tracking thermal changes in phantoms where electrical properties vary linearly with temperature over a range relevant to hyperthermia cancer therapy. PMID:12944168
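
    The reformulation can be illustrated by how a complex field measurement is re-expressed before reconstruction: instead of fitting real and imaginary parts, one fits the log-magnitude and the unwrapped phase, which behave far more smoothly for large, high-contrast scatterers. A small numpy sketch with a synthetic decaying travelling wave; the wavenumber, decay rate and geometry are invented for illustration:

```python
import numpy as np

def log_mag_phase(field):
    """Re-express complex field samples as (log-magnitude, unwrapped phase)."""
    return np.log(np.abs(field)), np.unwrap(np.angle(field))

# synthetic measurements along a receiver line: decaying travelling wave
k = 2 * np.pi / 0.03                 # wavenumber for a 3 cm wavelength
d = np.linspace(0.05, 0.20, 16)      # receiver distances in metres
field = np.exp(-50 * d) * np.exp(-1j * k * d)

logmag, phase = log_mag_phase(field)
# log-magnitude is linear in distance (-50*d) and unwrapped phase falls as -k*d,
# whereas the raw real/imaginary parts oscillate rapidly with d
```

    The near-linear behaviour of both transformed quantities is what makes the log-magnitude/phase formulation easier for the iterative reconstruction to track, including the subtle conductivity changes needed for thermometry.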

  3. Improved restoration algorithm for weakly blurred and strongly noisy image

    NASA Astrophysics Data System (ADS)

    Liu, Qianshun; Xia, Guo; Zhou, Haiyang; Bai, Jian; Yu, Feihong

    2015-10-01

    In real applications, such as consumer digital imaging, it is very common to record weakly blurred and strongly noisy images. Recently, a state-of-the-art algorithm named geometric locally adaptive sharpening (GLAS) has been proposed. By capturing local image structure, it can effectively combine denoising and sharpening. However, two problems remain in practice. On one hand, two hard thresholds have to be adjusted for each image so as not to produce over-sharpening artifacts. On the other hand, the smoothing parameter must be set precisely by hand; otherwise, it will seriously magnify the noise. These parameters have to be set in advance and entirely empirically, which is difficult to achieve in practical applications. To address this, an improved GLAS (IGLAS) algorithm is proposed in this paper by introducing the local phase coherence sharpening index (LPCSI) metric. With the help of the LPCSI metric, the two hard thresholds can be fixed at constant values for all images; unlike in the original method, the thresholds in the new algorithm no longer need to change with different images. Based on the proposed IGLAS, an automatic version is also developed to eliminate manual intervention. Simulated and real experimental results show that the proposed algorithm not only obtains better performance than the original method, but is also very easy to apply.

  4. X-ray holographic microscopy: Improved images of zymogen granules

    SciTech Connect

    Jacobsen, C.; Howells, M.; Kirz, J.; McQuaid, K.; Rothman, S.

    1988-10-01

    Soft x-ray holography has long been considered as a technique for x-ray microscopy. It has been only recently, however, that sub-micron resolution has been obtained in x-ray holography. This paper will concentrate on recent progress we have made in obtaining reconstructed images of improved quality. 15 refs., 6 figs.

  5. Improved QD-BRET conjugates for detection and imaging

    PubMed Central

    Xing, Yun; So, Min-kyung; Koh, Ai-leen; Sinclair, Robert; Rao, Jianghong

    2008-01-01

    Self-illuminating quantum dots, also known as QD-BRET conjugates, are a new class of quantum dot bioconjugates which do not need external light for excitation. Instead, light emission relies on bioluminescence resonance energy transfer from the attached Renilla luciferase enzyme, which emits light upon the oxidation of its substrate. QD-BRET combines the advantages of QDs (such as superior brightness and photostability, tunable emission, and multiplexing) with the high sensitivity of bioluminescence imaging, thus holding the promise of improved deep tissue in vivo imaging. Although studies have demonstrated the superior sensitivity and deep tissue imaging potential, the stability of QD-BRET conjugates in biological environments needs to be improved for long-term imaging studies such as in vivo cell trafficking. In this study, we seek to improve the stability of QD-BRET probes through polymeric encapsulation with a polyacrylamide gel. Results show that encapsulation caused some activity loss, but significantly improved both the in vitro serum stability and the in vivo stability when subcutaneously injected into the animal. Stable QD-BRET probes should further facilitate their applications for both in vitro testing and in vivo cell tracking studies. PMID:18468518

  6. Improved QD-BRET conjugates for detection and imaging

    SciTech Connect

    Xing Yun; So, Min-kyung; Koh, Ai Leen; Sinclair, Robert; Rao Jianghong

    2008-08-01

    Self-illuminating quantum dots, also known as QD-BRET conjugates, are a new class of quantum dot bioconjugates which do not need external light for excitation. Instead, light emission relies on the bioluminescence resonance energy transfer from the attached Renilla luciferase enzyme, which emits light upon the oxidation of its substrate. QD-BRET combines the advantages of the QDs (such as superior brightness and photostability, tunable emission, multiplexing) as well as the high sensitivity of bioluminescence imaging, thus holding the promise for improved deep tissue in vivo imaging. Although studies have demonstrated the superior sensitivity and deep tissue imaging potential, the stability of the QD-BRET conjugates in biological environments needs to be improved for long-term imaging studies such as in vivo cell tracking. In this study, we seek to improve the stability of QD-BRET probes through polymeric encapsulation with a polyacrylamide gel. Results show that encapsulation caused some activity loss, but significantly improved both the in vitro serum stability and in vivo stability when subcutaneously injected into the animal. Stable QD-BRET probes should further facilitate their applications for both in vitro testing as well as in vivo cell tracking studies.

  7. Improvement of automated image stitching system for DR X-ray images.

    PubMed

    Yang, Fan; He, Yan; Deng, Zhen Sheng; Yan, Ang

    2016-04-01

    The full bone structure cannot be captured in a single X-ray scan with a digital radiography (DR) system. The stitching of X-ray images is therefore very important for diagnosing scoliosis or lower limb malformation and for pre-surgical planning. Based on image registration technology, this paper proposes a new automated image stitching method for full-spine and lower limb X-ray images. The method utilized down-sampling to decrease the image size and reduce the amount of computation; an improved phase correlation algorithm was adopted to find the overlapping region; the correlation coefficient was used to evaluate the similarity of the overlapping region; and weighted blending was used to produce a panoramic image. The performance of the proposed method was evaluated on 40 pairs of images from patients with scoliosis or lower limb malformation. The stitching method was fully automated, requiring no user input. The experimental results were compared with previous methods on the same database. The improved phase correlation is demonstrated to have higher accuracy and shorter average stitching time than previous methods, and it can handle image translation, rotation and small overlap in image stitching. PMID:26914239
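
    The core of the method, phase correlation, recovers a translation from the peak of the inverse FFT of the normalised cross-power spectrum. A compact numpy sketch; the 64x64 random test image and the circular-shift setup are illustrative simplifications of the paper's improved variant:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the translational offset of a relative to b via the
    peak of the normalised cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks past the half-size back to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
base = rng.random((64, 64))
shifted = np.roll(base, (5, -3), axis=(0, 1))
offset = phase_correlation(shifted, base)   # recovers (5, -3)
```

    In a full stitching pipeline the recovered offset fixes the candidate overlapping region, whose similarity is then scored with a correlation coefficient before weighted blending.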

  8. Description, Recognition and Analysis of Biological Images

    SciTech Connect

    Yu Donggang; Jin, Jesse S.; Luo Suhuai; Pham, Tuan D.; Lai Wei

    2010-01-25

    Description, recognition and analysis of biological images play an important role in describing and understanding related biological information. The color images are separated by color reduction. A new and efficient linearization algorithm is introduced based on criteria of the difference chain code. A series of critical points is obtained from the linearized lines. The curvature angle, linearity, maximum linearity, convexity, concavity and bend angle of linearized lines are calculated from the starting line to the end line along all smoothed contours. This method can be used for shape description and recognition. The analysis, decision and classification of the biological images are based on the description of morphological structures, color information and prior knowledge, which are associated with each other. The efficiency of the algorithms is demonstrated with two applications: the description, recognition and analysis of color flower images, and the dynamic description, recognition and analysis of cell-cycle images.
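
    The difference chain code underlying the linearization can be sketched as follows: a contour is encoded as Freeman directions, and the first difference of that code is zero along straight runs and nonzero at bends, which yields the linearization criterion and the critical points. The L-shaped toy contour is illustrative:

```python
# 8-connected Freeman directions: 0=E, 1=NE, 2=N, ..., 7=SE (rows grow downward)
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(contour):
    """Freeman chain code of an ordered contour of (row, col) points."""
    return [DIRS[(r1 - r0, c1 - c0)]
            for (r0, c0), (r1, c1) in zip(contour, contour[1:])]

def difference_code(code):
    """First difference of the chain code, modulo 8; runs of zeros mark
    straight (linearisable) segments, nonzero values mark bend points."""
    return [(b - a) % 8 for a, b in zip(code, code[1:])]

# an L-shaped contour: 3 steps east, then 3 steps north
contour = [(5, 0), (5, 1), (5, 2), (5, 3), (4, 3), (3, 3), (2, 3)]
code = chain_code(contour)      # [0, 0, 0, 2, 2, 2]
diff = difference_code(code)    # [0, 0, 2, 0, 0]  -> a single bend (critical point)
```

    Quantities such as bend angle and convexity/concavity then follow from the sign and magnitude of the nonzero difference entries.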

  9. Characterization and analysis of infrared images

    NASA Astrophysics Data System (ADS)

    Raglin, Adrienne; Wetmore, Alan; Ligon, David

    2006-05-01

    Stokes images in the long-wave infrared (LWIR) and methods for processing polarimetric data continue to be areas of interest. Stokes images, which are sensitive to geometry and material differences, are acquired by measuring the polarization state of the received electromagnetic radiation. The polarimetric data from Stokes images may provide enhancements to conventional IR imagery data. It is generally agreed that polarimetric images can reveal information about objects or features within a scene that is not available through other imaging techniques. This additional information may generate different approaches to segmentation, detection, and recognition of objects or features. Previous research using horizontal and vertical polarization data supports the use of this type of data for image processing tasks. In this work we analyze a sample polarimetric image to show both improved segmentation of objects and derivation of their inherent 3-D geometry.
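
    The Stokes images referred to above are formed from polariser measurements; for linear polarisation the standard combinations can be sketched as follows (the 4x4 toy scene is illustrative):

```python
import numpy as np

def linear_stokes(i_h, i_v, i_45, i_135):
    """First three Stokes images from four linear-polariser measurements."""
    s0 = i_h + i_v        # total intensity
    s1 = i_h - i_v        # horizontal vs vertical preference
    s2 = i_45 - i_135     # +45 vs -45 degree preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear pol.
    return s0, s1, s2, dolp

# toy scene: unpolarised background with a horizontally polarised patch
i_h = np.full((4, 4), 0.5); i_h[1:3, 1:3] = 1.0
i_v = np.full((4, 4), 0.5); i_v[1:3, 1:3] = 0.0
i_45 = np.full((4, 4), 0.5)
i_135 = np.full((4, 4), 0.5)
s0, s1, s2, dolp = linear_stokes(i_h, i_v, i_45, i_135)
```

    Note that S0 (conventional intensity) is flat here while S1 and the DoLP image isolate the patch, which is exactly the kind of geometry- and material-driven contrast the abstract exploits for segmentation.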

  10. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  11. Improving RLRN Image Splicing Detection with the Use of PCA and Kernel PCA

    PubMed Central

    Jalab, Hamid A.; Md Noor, Rafidah

    2014-01-01

    Digital image forgery is becoming easier to perform because of the rapid development of various manipulation tools. Image splicing is one of the most prevalent techniques. Digital images have lost their trustworthiness, and researchers have exerted considerable effort to regain it, focusing mostly on algorithms. However, most of the proposed algorithms are incapable of handling the high dimensionality and redundancy of the extracted features, and existing algorithms are limited by high computational time. This study focuses on improving one of the image splicing detection algorithms, the run length run number (RLRN) algorithm, by applying two dimension reduction methods, namely, principal component analysis (PCA) and kernel PCA. A support vector machine is used to distinguish between authentic and spliced images. Results show that kernel PCA, a nonlinear dimension reduction method, has the best effect on the R, G, B, and Y channels and on gray-scale images. PMID:25295304
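
    PCA's role here, collapsing redundant high-dimensional features before classification, can be sketched with plain numpy. The synthetic 10-dimensional features with two informative directions are illustrative, and the kernel PCA variant favoured by the study is not shown:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors onto the top-k principal components
    (plain PCA via SVD of the mean-centred data matrix)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

# redundant high-dimensional features: 2 informative dims embedded in 10
rng = np.random.default_rng(0)
latent = rng.standard_normal((100, 2))
mixing = rng.standard_normal((2, 10))
X = latent @ mixing + 0.01 * rng.standard_normal((100, 10))

Z, components = pca_reduce(X, 2)
Xc = X - X.mean(axis=0)
var_kept = Z.var(axis=0).sum() / Xc.var(axis=0).sum()  # nearly all variance
```

    The reduced features Z would then be fed to the SVM in place of the raw RLRN features, cutting both redundancy and classifier training time.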

  12. A Meta-Analytic Review of Stand-Alone Interventions to Improve Body Image

    PubMed Central

    Alleva, Jessica M.; Sheeran, Paschal; Webb, Thomas L.; Martijn, Carolien; Miles, Eleanor

    2015-01-01

    Objective Numerous stand-alone interventions to improve body image have been developed. The present review used meta-analysis to estimate the effectiveness of such interventions, and to identify the specific change techniques that lead to improvement in body image. Methods The inclusion criteria were that (a) the intervention was stand-alone (i.e., solely focused on improving body image), (b) a control group was used, (c) participants were randomly assigned to conditions, and (d) at least one pretest and one posttest measure of body image was taken. Effect sizes were meta-analysed and moderator analyses were conducted. A taxonomy of 48 change techniques used in interventions targeted at body image was developed; all interventions were coded using this taxonomy. Results The literature search identified 62 tests of interventions (N = 3,846). Interventions produced a small-to-medium improvement in body image (d+ = 0.38), a small-to-medium reduction in beauty ideal internalisation (d+ = -0.37), and a large reduction in social comparison tendencies (d+ = -0.72). However, the effect size for body image was inflated by bias both within and across studies, and was reliable but of small magnitude once corrections for bias were applied. Effect sizes for the other outcomes were no longer reliable once corrections for bias were applied. Several features of the sample, intervention, and methodology moderated intervention effects. Twelve change techniques were associated with improvements in body image, and three techniques were contra-indicated. Conclusions The findings show that interventions engender only small improvements in body image, and underline the need for large-scale, high-quality trials in this area. The review identifies effective techniques that could be deployed in future interventions. PMID:26418470
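
    The effect sizes pooled in such a meta-analysis are standardised mean differences; a minimal sketch of Cohen's d with a pooled SD (the post-test body-image scores below are hypothetical):

```python
import math

def cohens_d(treat, control):
    """Standardised mean difference (Cohen's d) using a pooled SD,
    the per-study effect-size metric aggregated in a meta-analysis."""
    n1, n2 = len(treat), len(control)
    m1, m2 = sum(treat) / n1, sum(control) / n2
    v1 = sum((x - m1) ** 2 for x in treat) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# hypothetical post-test scores (higher = better body image)
treat = [3.8, 4.1, 3.9, 4.4, 4.0, 3.7]
control = [3.5, 3.8, 3.6, 3.9, 3.4, 3.7]
d = cohens_d(treat, control)   # positive d favours the intervention group
```

    A meta-analytic mean such as the review's d+ = 0.38 is then a weighted average of many such per-study values, with corrections for small-sample and publication bias.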

  13. Improved Image Registration by Sparse Patch-Based Deformation Estimation

    PubMed Central

    Kim, Minjeong; Wu, Guorong; Wang, Qian; Shen, Dinggang

    2014-01-01

    Despite intensive efforts over decades, deformable image registration is still a challenging problem due to potentially large anatomical differences across individual images, which limit registration performance. Fortunately, this issue can be alleviated if a good initial deformation is provided for the two images under registration, often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion using the sparse representation technique. We argue that two image patches should follow the same deformation towards the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) We pick a small number of key points in the distinctive regions of the test subject; (2) For each key point, we extract a local patch and form a coupled appearance-deformation dictionary from training images, where each dictionary atom consists of an image intensity patch together with its local deformation; (3) A small set of training image patches in the coupled dictionary is selected to represent the image patch of each subject key point by sparse representation. Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations of the selected training patches with the same sparse representation coefficients.
(4) We
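
    Steps (2) and (3) can be sketched in miniature: code the subject patch greedily against the appearance half of the dictionary, then carry the same coefficients over to the deformation half. Matching pursuit, unit-norm atoms, and the exact-match toy setup below are illustrative simplifications; the paper's sparse solver is not specified here:

```python
import numpy as np

def sparse_code(patch, dictionary, n_atoms=3):
    """Greedy matching pursuit: approximate the patch with a few atoms."""
    residual = patch.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_atoms):
        scores = dictionary @ residual
        j = int(np.argmax(np.abs(scores)))
        w = scores[j] / (dictionary[j] @ dictionary[j])
        coeffs[j] += w
        residual -= w * dictionary[j]
    return coeffs

def predict_deformation(patch, dict_patches, dict_deforms):
    """Propagate the appearance coefficients to the paired deformations."""
    c = sparse_code(patch, dict_patches)
    return (c[:, None] * dict_deforms).sum(axis=0) / max(c.sum(), 1e-9)

rng = np.random.default_rng(0)
dict_patches = rng.standard_normal((20, 25))            # 20 atoms, 5x5 patches
dict_patches /= np.linalg.norm(dict_patches, axis=1, keepdims=True)
dict_deforms = rng.standard_normal((20, 2))             # paired 2D displacements

subject_patch = dict_patches[7]       # subject patch matching atom 7 exactly
d0 = predict_deformation(subject_patch, dict_patches, dict_deforms)
# the code selects atom 7, so the prediction is that atom's deformation
```

    Interpolating such key-point predictions over the whole image then yields the dense initial deformation field handed to the registration algorithm.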

  14. Optical Analysis of Microscope Images

    NASA Astrophysics Data System (ADS)

    Biles, Jonathan R.

    Microscope images were analyzed with coherent and incoherent light using analog optical techniques. These techniques were found to be useful for analyzing large numbers of nonsymbolic, statistical microscope images. In the first part, phase-coherent transparencies having 20-100 human multiple myeloma nuclei were simultaneously photographed at 100 power magnification using high resolution holographic film developed to high contrast. An optical transform was obtained by focussing the laser onto each nuclear image and allowing the diffracted light to propagate onto a one-dimensional photosensor array. This method reduced the data to the position of the first two intensity minima and the intensity of successive maxima. These values were utilized to estimate the four most important cancer detection clues of nuclear size, shape, darkness, and chromatin texture. In the second part, the geometric and holographic methods of phase-incoherent optical processing were investigated for pattern recognition of real-time, diffuse microscope images. The theory and implementation of these processors were discussed in view of their mutual problems of dimness, image bias, and detector resolution. The dimness problem was solved by either using a holographic correlator or a speckle-free laser microscope. The latter was built using a spinning tilted mirror which caused the speckle to change so quickly that it averaged out during the exposure. To solve the bias problem, low image bias templates were generated by four techniques: microphotography of samples, creation of typical shapes with a computer graphics editor, transmission holography of photoplates of samples, and spatially coherent color image bias removal. The first of these templates was used to perform correlations with bacteria images. The aperture bias was successfully removed from the correlation with a video frame subtractor.
To overcome the limited detector resolution it is necessary to discover some analog nonlinear intensity

  15. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give

  16. Fundamental performance improvement to dispersive spectrograph based imaging technologies

    NASA Astrophysics Data System (ADS)

    Meade, Jeff T.; Behr, Bradford B.; Cenko, Andrew T.; Christensen, Peter; Hajian, Arsen R.; Hendrikse, Jan; Sweeney, Frederic D.

    2011-03-01

    Dispersive spectrometers may be qualified by their spectral resolving power and their throughput efficiency. A device known as a virtual slit is able to improve the resolving power by factors of several with a minimal loss in throughput, thereby fundamentally improving the quality of the spectrometer. A virtual slit was built and incorporated into a low-performing spectrometer (R ~ 300) and was shown to increase the performance without a significant loss in signal. The operation of virtual slits is also described. High-performance, low-light, and high-speed imaging instruments based on a dispersive-type spectrometer see the greatest impact from a virtual slit. The virtual slit is shown to improve the imaging quality of spectral domain optical coherence tomography (SD-OCT) substantially.

  17. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  18. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an oculist making a rational pathological diagnosis for patients with optic diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volume based on JPEG-formatted image files transformed from medical images using the anterior-chamber optical coherence tomographer (AC-OCT) and corresponding image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, a series of anterior chamber images of typical patients is analyzed, and the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, measures should be taken to simplify the manual preprocessing of the images.
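
    The volume computation itself reduces to integrating segmented cross-sectional areas over the slice spacing; a minimal numpy sketch (pixel size, spacing and the synthetic circular chamber sections are illustrative):

```python
import numpy as np

def volume_from_slices(masks, pixel_area_mm2, slice_spacing_mm):
    """Estimate a volume from a stack of binary cross-section masks:
    segmented area per slice times slice spacing, summed (a discrete integral)."""
    areas = np.array([m.sum() * pixel_area_mm2 for m in masks])
    return float(areas.sum() * slice_spacing_mm)

# synthetic chamber: circular cross-sections of shrinking radius (in pixels)
yy, xx = np.mgrid[:100, :100]
masks = [(xx - 50) ** 2 + (yy - 50) ** 2 <= r ** 2 for r in (40, 30, 20, 10)]
v = volume_from_slices(masks, pixel_area_mm2=0.01 ** 2, slice_spacing_mm=0.1)
# close to the analytic value: pi*(40^2+30^2+20^2+10^2)*1e-4*0.1 ~ 0.094 mm^3
```

    In the actual method, each mask would come from segmenting the anterior chamber boundary in an AC-OCT slice, which is why simplifying that manual preprocessing matters.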

  19. Conductive resins improve charging and resolution of acquired images in electron microscopic volume imaging.

    PubMed

    Nguyen, Huy Bang; Thai, Truc Quynh; Saitoh, Sei; Wu, Bao; Saitoh, Yurika; Shimo, Satoshi; Fujitani, Hiroshi; Otobe, Hirohide; Ohno, Nobuhiko

    2016-01-01

    Recent advances in serial block-face imaging using scanning electron microscopy (SEM) have enabled the rapid and efficient acquisition of 3-dimensional (3D) ultrastructural information from a large volume of biological specimens including brain tissues. However, volume imaging under SEM is often hampered by sample charging, and typically requires specific sample preparation to reduce charging and increase image contrast. In the present study, we introduced carbon-based conductive resins for 3D analyses of subcellular ultrastructures, using serial block-face SEM (SBF-SEM) to image samples. Conductive resins were produced by adding the carbon black filler, Ketjen black, to resins commonly used for electron microscopic observations of biological specimens. Carbon black mostly localized around tissues and did not penetrate cells, whereas the conductive resins significantly reduced the charging of samples during SBF-SEM imaging. When serial images were acquired, embedding into the conductive resins improved the resolution of images by facilitating the successful cutting of samples in SBF-SEM. These results suggest that improving the conductivities of resins with a carbon black filler is a simple and useful option for reducing charging and enhancing the resolution of images obtained for volume imaging with SEM. PMID:27020327

  20. The use of image morphing to improve the detection of tumors in emission imaging

    SciTech Connect

    Dykstra, C.; Greer, K.; Jaszczak, R.; Celler, A.

    1999-06-01

    Two of the limitations on the utility of SPECT and planar scintigraphy for the non-invasive detection of carcinoma are the small sizes of many tumors and the possible low contrast between tumor uptake and background. This is particularly true for breast imaging. Use of some form of image processing can improve the visibility of tumors which are at the limit of hardware resolution. Smoothing, by some form of image averaging, either during or post-reconstruction, is widely used to reduce noise and thereby improve the detectability of regions of elevated activity. However, smoothing degrades resolution and, by averaging together closely spaced noise, may make noise look like a valid region of increased uptake. Image morphing by erosion and dilation does not average together image values; it instead selectively removes small features and irregularities from an image without changing the larger features. Application of morphing to emission images has shown that it does not, therefore, degrade resolution and does not always degrade contrast. For these reasons it may be a better method of image processing for noise removal in some images. In this paper the authors present a comparison of the effects of smoothing and morphing using breast and liver studies.
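
    The erosion-dilation behaviour described above is easy to demonstrate: a grayscale opening with a 3x3 structuring element deletes an isolated noise pixel but returns a 5x5 region to exactly its original extent and values, which is why resolution and contrast of larger features survive. A numpy sketch on a toy image, not SPECT data:

```python
import numpy as np

def _shift_stack(img):
    """All nine 3x3-neighbourhood shifts of img, edge-padded."""
    p = np.pad(img, 1, mode='edge')
    return [p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            for dy in range(3) for dx in range(3)]

def erode(img):
    """Grayscale erosion with a 3x3 structuring element."""
    return np.min(_shift_stack(img), axis=0)

def dilate(img):
    """Grayscale dilation with a 3x3 structuring element."""
    return np.max(_shift_stack(img), axis=0)

def opening(img):
    """Erosion then dilation: removes features smaller than the
    structuring element without blurring larger ones."""
    return dilate(erode(img))

img = np.zeros((16, 16))
img[4:9, 4:9] = 1.0    # 5x5 region of elevated uptake
img[12, 12] = 1.0      # single-pixel noise spike
opened = opening(img)
# the spike is removed; the 5x5 region keeps its exact value and extent
```

    Contrast this with averaging: a smoothing kernel would both dim the spike (rather than remove it) and blur the edges of the 5x5 region.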

  1. Conductive resins improve charging and resolution of acquired images in electron microscopic volume imaging

    PubMed Central

    Nguyen, Huy Bang; Thai, Truc Quynh; Saitoh, Sei; Wu, Bao; Saitoh, Yurika; Shimo, Satoshi; Fujitani, Hiroshi; Otobe, Hirohide; Ohno, Nobuhiko

    2016-01-01

    Recent advances in serial block-face imaging using scanning electron microscopy (SEM) have enabled the rapid and efficient acquisition of 3-dimensional (3D) ultrastructural information from a large volume of biological specimens including brain tissues. However, volume imaging under SEM is often hampered by sample charging, and typically requires specific sample preparation to reduce charging and increase image contrast. In the present study, we introduced carbon-based conductive resins for 3D analyses of subcellular ultrastructures, using serial block-face SEM (SBF-SEM) to image samples. Conductive resins were produced by adding the carbon black filler, Ketjen black, to resins commonly used for electron microscopic observations of biological specimens. Carbon black mostly localized around tissues and did not penetrate cells, whereas the conductive resins significantly reduced the charging of samples during SBF-SEM imaging. When serial images were acquired, embedding into the conductive resins improved the resolution of images by facilitating the successful cutting of samples in SBF-SEM. These results suggest that improving the conductivities of resins with a carbon black filler is a simple and useful option for reducing charging and enhancing the resolution of images obtained for volume imaging with SEM. PMID:27020327

  2. Multi-focus image fusion based on improved spectral graph wavelet transform

    NASA Astrophysics Data System (ADS)

    Yan, Xiang; Qin, Hanlin; Chen, Zhimin; Zhou, Huixin; Li, Jia; Zong, Jingguo

    2015-10-01

    Due to the limited depth-of-focus of the optical lenses in imaging cameras, it is impossible to acquire an image with all parts of the scene in focus. To make up for this defect, fusing images taken at different focus settings into one image is a promising approach, and many fusion methods have been developed. However, the existing methods can hardly deal with the problem of image detail blur. In this paper, a novel multiscale geometrical analysis called the directional spectral graph wavelet transform (DSGWT) is proposed, which integrates the nonsubsampled directional filter bank with the traditional spectral graph wavelet transform. By combining the spectral graph wavelet transform's ability to efficiently represent images containing regular or irregular areas with the directional filter bank's ability to capture directional information, the DSGWT can better represent the structure of images. Given these properties, the DSGWT is applied to multi-focus image fusion to overcome the above disadvantage. On the one hand, the high-frequency subbands of the source images obtained by the DSGWT represent the source images efficiently. On the other hand, the sparse feature matrix obtained with the sum-modified-Laplacian focus measure criterion is processed by morphological filtering to generate the fused subbands. Comparison experiments have been performed on different image sets, and the experimental results demonstrate that the proposed method significantly improves fusion performance compared to existing fusion methods.
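
    The sum-modified-Laplacian (SML) focus measure at the heart of the subband selection can be sketched in the pixel domain. The step-edge test pair and the simple per-pixel choose-max fusion rule are illustrative; the paper applies the criterion to DSGWT subbands with morphological filtering:

```python
import numpy as np

def sum_modified_laplacian(img):
    """Per-pixel modified Laplacian |2I - I_w - I_e| + |2I - I_n - I_s|,
    a standard sharpness (focus) measure for multi-focus fusion."""
    p = np.pad(img, 1, mode='edge')
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]) +
            np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))

def fuse(a, b):
    """Per pixel, keep the source with the larger focus measure."""
    return np.where(sum_modified_laplacian(a) >= sum_modified_laplacian(b), a, b)

x = np.linspace(0, 1, 64)
sharp = np.tile((x > 0.5).astype(float), (64, 1))              # in-focus step edge
blurred = np.tile(0.5 + 0.5 * np.tanh((x - 0.5) * 8), (64, 1))  # defocused edge
fused = fuse(sharp, blurred)   # the sharp edge wins where SML is larger
```

    Around the edge the step image has the larger SML response, so the fused result keeps the in-focus detail there; morphological filtering of such decision maps (as in the paper) then removes isolated misclassified pixels.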

  3. Image guidance improves localization of sonographically occult colorectal liver metastases

    NASA Astrophysics Data System (ADS)

    Leung, Universe; Simpson, Amber L.; Adams, Lauryn B.; Jarnagin, William R.; Miga, Michael I.; Kingham, T. Peter

    2015-03-01

    Assessing the therapeutic benefit of surgical navigation systems is a challenging problem in image-guided surgery. The exact clinical indications for patients who may benefit from these systems are not always clear, particularly for abdominal surgery, where image-guidance systems have failed to take hold in the same way as in orthopedic and neurosurgical applications. We report interim analysis of a prospective clinical trial for localizing small colorectal liver metastases using the Explorer system (Path Finder Technologies, Nashville, TN). Colorectal liver metastases are small lesions that can be difficult to identify with conventional intraoperative ultrasound due to echogenicity changes in the liver as a result of chemotherapy and other preoperative treatments. Interim analysis with eighteen patients shows that 9 of 15 (60%) of these occult lesions could be detected with image guidance. Image guidance changed intraoperative management in 3 (17%) cases. These results suggest that image guidance is a promising tool for localization of small occult liver metastases and that the indications for image-guided surgery are expanding.

  4. Materials characterization through quantitative digital image analysis

    SciTech Connect

    J. Philliber; B. Antoun; B. Somerday; N. Yang

    2000-07-01

    A digital image analysis system has been developed to allow advanced quantitative measurement of microstructural features. This capability is maintained as part of the microscopy facility at Sandia, Livermore. The system records images digitally, eliminating the use of film. Images obtained from other sources may also be imported into the system. Subsequent digital image processing enhances image appearance through contrast and brightness adjustments. The system measures a variety of user-defined microstructural features, including area fraction, particle size and spatial distributions, grain sizes, and orientations of elongated particles. These measurements are made in a semi-automatic mode through the use of macro programs and a computer-controlled translation stage. A routine has been developed to create large montages of 50+ separate images. Individual image frames are matched to the nearest pixel to create seamless montages. Results from three different studies are presented to illustrate the capabilities of the system.

  5. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
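    The coefficient-evolution idea can be illustrated with a toy elitist GA that evolves a matched forward/inverse transform pair to minimize reconstruction error under quantization. Population size, mutation scale, the quantization step, and the 2-sample block size are all assumptions for the sketch; the actual system evolves full multi-resolution transforms on satellite imagery.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(size=512)          # stand-in for satellite image scan lines
pairs = signal.reshape(-1, 2).T        # 2 x N blocks of adjacent samples
QSTEP = 0.5                            # quantization step (the source of loss)

def fitness(genome):
    """Reconstruction MSE through a matched forward/inverse 2x2 transform pair."""
    F = genome[:4].reshape(2, 2)       # forward (analysis) transform
    G = genome[4:].reshape(2, 2)       # inverse (synthesis) transform
    y = np.round(F @ pairs / QSTEP) * QSTEP   # transform then quantize
    xhat = G @ y
    return np.mean((xhat - pairs) ** 2)

pop = rng.normal(size=(30, 8))         # population of coefficient genomes
initial_best = min(fitness(g) for g in pop)
for _ in range(40):                    # simple elitist GA loop
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[:10]]          # truncation selection
    children = elite[rng.integers(0, 10, size=20)] \
        + rng.normal(scale=0.05, size=(20, 8))    # Gaussian mutation
    pop = np.vstack([elite, children])            # elitism keeps the best genome
final_best = min(fitness(g) for g in pop)
assert final_best <= initial_best      # elitism guarantees monotone improvement
```

    Evolving the forward and inverse pair jointly, as the paper does, lets the GA compensate for quantization in a way that fixed wavelet pairs cannot.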

  6. The Hinode/XRT Full-Sun Image Corrections and the Improved Synoptic Composite Image Archive

    NASA Astrophysics Data System (ADS)

    Takeda, Aki; Yoshimura, Keiji; Saar, Steven H.

    2016-01-01

    The XRT Synoptic Composite Image Archive (SCIA) is a storage and gallery of X-ray full-Sun images obtained through the synoptic program of the X-Ray Telescope (XRT) onboard the Hinode satellite. The archived images provide a quick history of solar activity through the daily and monthly layout pages and long-term data for morphological and quantitative studies of the X-ray corona. This article serves as an introduction to the SCIA, i.e., to the structure of the archive and specification of the data products included therein. We also describe a number of techniques used to improve the quality of the archived images: preparation of composite images to increase intensity dynamic range, removal of dark spots that are due to contaminants on the CCD, and correction of the visible stray light contamination that has been detected on the Ti-poly and C-poly filter images since May 2012.
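    The compositing step for extending intensity dynamic range can be sketched generically: saturated pixels of a long exposure are replaced by scaled short-exposure data. The saturation level and exposure ratio below are illustrative; the actual SCIA compositing parameters are not reproduced here.

```python
import numpy as np

def composite(long_exp, short_exp, ratio, saturation=1.0):
    """Replace saturated pixels of the long exposure with scaled
    short-exposure data, extending the intensity dynamic range."""
    scaled = short_exp * ratio
    return np.where(long_exp >= saturation, scaled, long_exp)

long_exp = np.array([0.2, 0.9, 1.0, 1.0])   # last two pixels saturated
short_exp = np.array([0.02, 0.09, 0.3, 0.8])
out = composite(long_exp, short_exp, ratio=10.0)
assert np.allclose(out, [0.2, 0.9, 3.0, 8.0])
```

    Bright active regions then come from the short exposure while faint quiet-Sun structure is kept from the long one.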

  7. Research on an Improved Medical Image Enhancement Algorithm Based on P-M Model.

    PubMed

    Dong, Beibei; Yang, Jingjing; Hao, Shangfu; Zhang, Xiao

    2015-01-01

    Image enhancement can improve the detail of an image so as to aid its identification. At present, image enhancement is widely used in medical imaging, where it can support doctors' diagnoses. IEABPM (Image Enhancement Algorithm Based on P-M Model) is one of the most common image enhancement algorithms. However, it may cause the loss of texture details and other features. To solve these problems, this paper proposes an IIEABPM (Improved Image Enhancement Algorithm Based on P-M Model). Simulation demonstrates that IIEABPM can effectively solve the problems of IEABPM and improve image clarity, contrast, and brightness. PMID:26628929
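    The classic Perona-Malik (P-M) diffusion that such algorithms build on can be sketched in NumPy: an edge-stopping function suppresses smoothing across strong gradients while noise in flat regions is diffused away. The parameters kappa, lambda, and the iteration count are illustrative; the paper's IIEABPM modifications are not reproduced.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Classic P-M anisotropic diffusion: smooth noise while the
    edge-stopping function g = exp(-(|grad|/kappa)^2) preserves edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u    # nearest-neighbour differences
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(5)
step = np.zeros((32, 32)); step[:, 16:] = 1.0            # sharp vertical edge
noisy = step + rng.normal(scale=0.05, size=step.shape)
smooth = perona_malik(noisy, n_iter=30)
# noise in the flat region drops while the edge contrast survives
assert smooth[:, :8].std() < noisy[:, :8].std()
assert smooth[:, 24:].mean() - smooth[:, :8].mean() > 0.9
```

    IEABPM-style enhancement would additionally sharpen along the gradient direction; the diffusion above shows only the edge-preserving smoothing core.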

  8. Geopositioning Precision Analysis of Multiple Image Triangulation Using Lro Nac Lunar Images

    NASA Astrophysics Data System (ADS)

    Di, K.; Xu, B.; Liu, B.; Jia, M.; Liu, Z.

    2016-06-01

    This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang'e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations, with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least-squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions of using all nine images are 0.60 m, 0.50 m, and 1.23 m in the along-track, cross-track, and height directions, which are better than most combinations of two or more images. However, triangulation with fewer, well-selected images can produce better precision than using all the images.
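    The RPC fitting step, least-squares fitting to virtual control points after linearizing the rational model, can be sketched with a first-order rational polynomial for one image coordinate. The "rigorous sensor model" below is a synthetic stand-in chosen so the fit is exact.

```python
import numpy as np

rng = np.random.default_rng(2)
# virtual control points: normalized ground coordinates on a 3D grid
X, Y, Z = rng.uniform(-1, 1, size=(3, 500))

def true_row(X, Y, Z):
    """Synthetic stand-in for the rigorous sensor model (image row coordinate)."""
    return (0.8 + 2.0 * X - 0.5 * Y + 0.1 * Z) / (1.0 + 0.05 * X - 0.02 * Z)

r = true_row(X, Y, Z)
# Linearize r = (a0+a1*X+a2*Y+a3*Z) / (1+b1*X+b2*Y+b3*Z):
#   a0 + a1*X + a2*Y + a3*Z - r*(b1*X + b2*Y + b3*Z) = r
A = np.column_stack([np.ones_like(X), X, Y, Z, -r * X, -r * Y, -r * Z])
coef, *_ = np.linalg.lstsq(A, r, rcond=None)
a, b = coef[:4], coef[4:]
r_fit = (a[0] + a[1] * X + a[2] * Y + a[3] * Z) / (1 + b[0] * X + b[1] * Y + b[2] * Z)
assert np.max(np.abs(r_fit - r)) < 1e-6   # RPC reproduces the sensor model
```

    Production RPCs use third-order numerators and denominators for both row and column, fitted the same way to a dense grid of virtual control points.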

  9. Analysis of the relationship between the volumetric soil moisture content and the NDVI from high resolution multi-spectral images for definition of vineyard management zones to improve irrigation

    NASA Astrophysics Data System (ADS)

    Martínez-Casasnovas, J. A.; Ramos, M. C.

    2009-04-01

    As suggested by previous research in the field of precision viticulture, intra-field yield variability depends on the variation of soil properties, in particular the soil moisture content. Since mapping this soil property in detail for precision viticulture applications is highly costly, the objective of the present research is to analyse its relationship with the normalised difference vegetation index (NDVI) from high resolution satellite images, in order to use it in the definition of vineyard zonal management. The final aim is to improve irrigation in commercial vineyard blocks for better management of inputs and to deliver a more homogeneous fruit to the winery. The study was carried out in a vineyard block located in Raimat (NE Spain, Costers del Segre Designation of Origin). This is a semi-arid area with a continental Mediterranean climate and a total annual precipitation between 300-400 mm. The vineyard block (4.5 ha) is planted with Syrah vines in a 3x2 m pattern. The vines are irrigated by means of drips under a partial root drying schedule. Initially, the irrigation sectors had a quadrangular distribution, with a size of about 1 ha each. Yield is highly variable within the block, presenting a coefficient of variation of 24.9%. For the measurement of the soil moisture content, a regular sampling grid of 30 x 40 m was defined. This represents a sample density of 8 samples ha-1. At the nodes of the grid, TDR (Time Domain Reflectometer) probe tubes were permanently installed down to 80 cm or until reaching a contrasting layer. Multi-temporal measures were taken at different depths (every 20 cm) between November 2006 and December 2007. For each date, a map of the variability of the profile soil moisture content was interpolated by means of geostatistical analysis: from the measured values at the grid points, the experimental variograms were computed and modelled, and global block kriging (10 m squared blocks) was undertaken with a grid spacing of 3 m x 3 m.
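    The NDVI used in this kind of analysis is computed per pixel from the red and near-infrared reflectance bands; a minimal sketch with toy reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# toy 2x2 reflectance patches: vigorous canopy (high NIR) vs sparse vines
nir = np.array([[0.6, 0.6], [0.2, 0.2]])
red = np.array([[0.1, 0.1], [0.15, 0.15]])
v = ndvi(nir, red)
assert v[0, 0] > 0.7 and v[1, 0] < 0.2   # dense vegetation scores much higher
```

    Zones of consistently high or low NDVI can then be compared against the kriged soil moisture maps when delineating irrigation management zones.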

  10. Conducting a SWOT Analysis for Program Improvement

    ERIC Educational Resources Information Center

    Orr, Betsy

    2013-01-01

    A SWOT (strengths, weaknesses, opportunities, and threats) analysis of a teacher education program, or any program, can be the driving force for implementing change. A SWOT analysis is used to assist faculty in initiating meaningful change in a program and to use the data for program improvement. This tool is useful in any undergraduate or degree…

  11. Missile placement analysis based on improved SURF feature matching algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kaida; Zhao, Wenjie; Li, Dejun; Gong, Xiran; Sheng, Qian

    2015-03-01

    Precise battle damage assessment by using video images to analyze missile placement is a new study area. This article proposes an improved speeded-up robust features algorithm named restricted speeded-up robust features (RSURF), which combines the combat application of TV-command-guided missiles with the characteristics of video images. Its restrictions are mainly reflected in two aspects: the first restricts the extraction area of feature points; the second restricts the number of feature points. The process of missile placement analysis based on video images was designed, and a video splicing process and random sample consensus purification were achieved. The RSURF algorithm is shown to have good real-time performance while guaranteeing accuracy.
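    The random sample consensus (RANSAC) purification step can be sketched for the simplest motion model, a pure 2D translation between frames. In practice feature matching (e.g. with SURF descriptors) would supply the correspondences; here they are synthetic, with 20% gross mismatches.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, rng=None):
    """RANSAC estimate of a 2D translation between matched keypoint sets,
    discarding outlier matches (the 'purification' step)."""
    rng = rng or np.random.default_rng(0)
    best_t, best_inliers = None, 0
    for _ in range(iters):
        i = rng.integers(len(src))          # minimal sample: one correspondence
        t = dst[i] - src[i]
        inliers = np.sum(np.linalg.norm(dst - (src + t), axis=1) < tol)
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

rng = np.random.default_rng(3)
src = rng.uniform(0, 100, size=(50, 2))
dst = src + np.array([5.0, -3.0])             # true shift between video frames
dst[:10] = rng.uniform(0, 100, size=(10, 2))  # 20% gross mismatches
t, n = ransac_translation(src, dst, rng=rng)
assert np.allclose(t, [5.0, -3.0]) and n >= 40
```

    Video splicing would chain such per-frame-pair estimates (with a richer model such as a homography) to build the mosaic used for placement analysis.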

  12. Improving image accuracy of region-of-interest in cone-beam CT using prior image.

    PubMed

    Lee, Jiseoc; Kim, Jin Sung; Cho, Seungryong

    2014-01-01

    In diagnostic follow-ups of diseases, such as calcium scoring in the kidney or fat content assessment in the liver using repeated CT scans, quantitatively accurate and consistent CT values are desirable at a low radiation dose to the patient. The region-of-interest (ROI) imaging technique is considered a reasonable dose reduction method in CT scans for its shielding geometry outside the ROI. However, image artifacts in the reconstructed images caused by missing data outside the ROI may degrade overall image quality and, more importantly, can decrease image accuracy of the ROI substantially. In this study, we propose a method to increase image accuracy of the ROI and to reduce imaging radiation dose by utilizing the outside-ROI data from prior scans in repeated CT applications. We performed both numerical and experimental studies to validate our proposed method. In a numerical study, we used an XCAT phantom with its liver and stomach changing their sizes from one scan to another. Image accuracy of the liver was improved, with the error decreasing from 44.4 HU to -0.1 HU under the proposed method, compared to an existing method of data extrapolation to compensate for the missing data outside the ROI. Repeated cone-beam CT (CBCT) images of a patient who underwent daily CBCT scans for radiation therapy were also used to demonstrate the performance of the proposed method experimentally. The results showed improved image accuracy inside the ROI: the magnitude of error decreased from -73.2 HU to 18 HU, and image artifacts were effectively reduced throughout the entire image. PMID:24710451
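    The core idea, substituting prior-scan data for the projection data missing outside the shielded ROI, can be sketched on a toy 1D projection profile; the actual CBCT reconstruction chain is omitted, and the profiles below are synthetic.

```python
import numpy as np

def complete_with_prior(current, prior, roi_mask):
    """Fill detector bins outside the shielded ROI with prior-scan data."""
    return np.where(roi_mask, current, prior)

# toy projection profile: the object changed slightly inside the ROI
x = np.linspace(0, 1, 200)
truth = np.exp(-((x - 0.5) / 0.2) ** 2)     # current-scan projection
prior = np.exp(-((x - 0.5) / 0.21) ** 2)    # earlier scan, slightly different
roi = (x > 0.4) & (x < 0.6)
measured = np.where(roi, truth, 0.0)        # shielding removes data outside ROI
completed = complete_with_prior(measured, prior, roi)
# prior completion is far closer to the truth than zero-filled data
err_prior = np.abs(completed - truth).max()
err_zero = np.abs(measured - truth).max()
assert err_prior < err_zero
```

    Reconstructing from the completed projections suppresses the truncation artifacts that would otherwise bias HU values inside the ROI.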

  13. Improving JWST Coronagraphic Performance with Accurate Image Registration

    NASA Astrophysics Data System (ADS)

    Van Gorkom, Kyle; Pueyo, Laurent; Lajoie, Charles-Philippe; JWST Coronagraphs Working Group

    2016-06-01

    The coronagraphs on the James Webb Space Telescope (JWST) will enable high-contrast observations of faint objects at small separations from bright hosts, such as circumstellar disks, exoplanets, and quasar disks. Despite attenuation by the coronagraphic mask, bright speckles in the host’s point spread function (PSF) remain, effectively washing out the signal from the faint companion. Suppression of these bright speckles is typically accomplished by repeating the observation with a star that lacks a faint companion, creating a reference PSF that can be subtracted from the science image to reveal any faint objects. Before this reference PSF can be subtracted, however, the science and reference images must be aligned precisely, typically to 1/20 of a pixel. Here, we present several such algorithms for performing image registration on JWST coronagraphic images. Using both simulated and pre-flight test data (taken in cryovacuum), we assess (1) the accuracy of each algorithm at recovering misaligned scenes and (2) the impact of image registration on achievable contrast. Proper image registration, combined with post-processing techniques such as KLIP or LOCI, will greatly improve the performance of the JWST coronagraphs.
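    One standard family of registration approaches is phase correlation; the sketch below recovers an integer-pixel shift via the normalized cross-power spectrum. Subpixel refinement (needed for the 1/20-pixel requirement) would upsample around the correlation peak. This is a generic method for illustration, not necessarily one of the algorithms assessed in the paper.

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Integer-pixel shift that maps `moving` back onto `ref`, from the
    peak of the normalized cross-power spectrum."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moving)
    cross = F1 * np.conj(F2)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(r), r.shape)
    # wrap shifts larger than half the image size to negative values
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, r.shape))

rng = np.random.default_rng(4)
ref = rng.random((64, 64))                            # stand-in reference PSF
moving = np.roll(ref, shift=(5, -3), axis=(0, 1))     # misaligned science frame
assert phase_correlation_shift(ref, moving) == (-5, 3)
```

    After alignment, the reference PSF can be subtracted (or fed to KLIP/LOCI) with minimal residual speckle noise from misregistration.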

  14. Factor Analysis of the Image Correlation Matrix.

    ERIC Educational Resources Information Center

    Kaiser, Henry F.; Cerny, Barbara A.

    1979-01-01

    Whether to factor the image correlation matrix or to use a new model with an alpha factor analysis of it is mentioned, with particular reference to the determinacy problem. It is pointed out that the distribution of the images is sensibly multivariate normal, making for "better" factor analyses. (Author/CTM)

  15. Viewing angle analysis of integral imaging

    NASA Astrophysics Data System (ADS)

    Wang, Hong-Xia; Wu, Chun-Hong; Yang, Yang; Zhang, Lan

    2007-12-01

    Integral imaging (II) is a technique capable of displaying 3D images with continuous parallax in full natural color. Owing to its outstanding advantages, it is becoming one of the most promising techniques for next-generation three-dimensional TV (3DTV) and visualization. However, most conventional integral images are restricted by a narrow viewing angle. One reason is that the range in which a reconstructed integral image can be displayed with consistent parallax is limited; the other is that the aperture of the system is finite. So far, many methods to enhance the viewing angle of integral images have been proposed. Nevertheless, except for Ren's MVW (Maximum Viewing Width), most of these methods involve complex hardware and modifications of the optical system, which usually bring other disadvantages, make operation more difficult, and raise the cost of the system. In order to simplify the optical system, this paper systematically analyzes the viewing angle of traditional integral images instead of modified ones, and, to keep costs low, the research was based on computer-generated integral images (CGII). The analysis shows clearly how the viewing angle can be enhanced and how image overlap or image flipping can be avoided; the result also informs the development of optical instruments. Based on the theoretical analysis, preliminary calculations were done to demonstrate how other viewing properties closely related to the viewing angle, such as viewing distance, viewing zone, and lens pitch, affect it.
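    The basic geometric bound on the viewing angle of a single elemental lens is commonly written as theta = 2*arctan(p/(2g)), for lens pitch p and gap g between the lens array and the display plane (formula assumed from standard integral-imaging geometry, not taken from this paper). A quick numeric check:

```python
import math

def viewing_angle_deg(pitch_mm, gap_mm):
    """Approximate elemental-lens viewing angle: theta = 2*arctan(p / (2*g))."""
    return math.degrees(2 * math.atan(pitch_mm / (2 * gap_mm)))

# enlarging the pitch (or shrinking the gap) widens the viewing zone
assert viewing_angle_deg(2.0, 3.0) > viewing_angle_deg(1.0, 3.0)
assert round(viewing_angle_deg(1.0, 3.0), 1) == 18.9
```

    Exceeding this angle is what produces the image flipping discussed above, since rays start passing through neighbouring elemental lenses.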

  16. A Robust Actin Filaments Image Analysis Framework.

    PubMed

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-08-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the 'cartoon' image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filaments orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. 
Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in

  17. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filaments orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts

  18. ECG-synchronized DSA exposure control: improved cervicothoracic image quality

    SciTech Connect

    Kelly, W.M.; Gould, R.; Norman, D.; Brant-Zawadzki, M.; Cox, L.

    1984-10-01

    An electrocardiogram (ECG)-synchronized x-ray exposure sequence was used to acquire digital subtraction angiographic (DSA) images during 13 arterial injection studies of the aortic arch or carotid bifurcations. These gated images were compared with matched ungated DSA images acquired using the same technical factors, contrast material volume, and patient positioning. Subjective assessments by five experienced observers of edge definition, vessel conspicuousness, and overall diagnostic quality showed overall preference for one of the two acquisition methods in 69% of cases studied. Of these, the ECG-synchronized exposure series were rated superior in 76%. These results, as well as the relatively simple and inexpensive modifications required, suggest that routine use of ECG exposure control can facilitate improved arterial DSA evaluations of suspected cervicothoracic vascular disease.

  19. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging.

    PubMed

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of the radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on an analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of the mainlobe RHS. In addition, a quantitative analysis of the method's applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method's advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114
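    The Hilbert-transform step at the heart of RHS reconstruction produces the analytic signal, whose magnitude gives the signal envelope. A minimal FFT-based sketch on a synthetic amplitude-modulated signal follows; the paper's segmental application and mainlobe reconstruction are not reproduced.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (the construction scipy.signal.hilbert uses):
    zero the negative frequencies and double the positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    h[1:N // 2] = 2
    h[N // 2] = 1 if N % 2 == 0 else 2   # Nyquist handling for even/odd N
    return np.fft.ifft(X * h)

t = np.linspace(0, 1, 1000, endpoint=False)
x = (1 + 0.5 * np.cos(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * 100 * t)  # AM signal
envelope = np.abs(analytic_signal(x))
assert np.allclose(envelope, 1 + 0.5 * np.cos(2 * np.pi * 3 * t), atol=1e-6)
```

    A segmental variant would apply this transform piecewise over signal sections, which is where the paper's improvement lies.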

  20. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging

    PubMed Central

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method’s applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method’s advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114

  1. Improved Starting Materials for Back-Illuminated Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2009-01-01

    An improved type of starting material for the fabrication of silicon-based imaging integrated circuits that include back-illuminated photodetectors has been conceived, and a process for making these starting materials is undergoing development. These materials are intended to enable reductions in dark currents and increases in quantum efficiencies, relative to those of comparable imagers made from prior silicon-on-insulator (SOI) starting materials. Some background information is prerequisite to a meaningful description of the improved starting materials and process. A prior SOI starting material, depicted in the upper part of the figure, includes: a) A device layer on the front side, typically between 2 and 20 µm thick, made of p-doped silicon (that is, silicon lightly doped with an electron acceptor, which is typically boron); b) A buried oxide (BOX) layer (that is, a buried layer of oxidized silicon) between 0.2 and 0.5 µm thick; and c) A silicon handle layer (also known as a handle wafer) on the back side, between about 600 and 650 µm thick. After fabrication of the imager circuitry in and on the device layer, the handle wafer is etched away, the BOX layer acting as an etch stop. In subsequent operation of the imager, light enters from the back, through the BOX layer. The advantages of back illumination over front illumination have been discussed in prior NASA Tech Briefs articles.

  2. Improved 3D cellular imaging by multispectral focus assessment

    NASA Astrophysics Data System (ADS)

    Zhao, Tong; Xiong, Yizhi; Chung, Alice P.; Wachman, Elliot S.; Farkas, Daniel L.

    2005-03-01

    Biological specimens are three-dimensional structures. However, when capturing their images through a microscope, only one plane in the field of view is in focus, and out-of-focus portions of the specimen affect image quality in the in-focus plane. It is well established that the microscope's point spread function (PSF) can be used for blur quantitation and for the restoration of real images. However, this is an ill-posed problem, with no unique solution and high computational complexity. In this work, instead of estimating and using the PSF, we studied focus quantitation in multi-spectral image sets. A gradient map we designed was used to evaluate the sharpness of each pixel, in order to identify blurred areas to be excluded. Experiments with realistic multi-spectral Pap smear images showed that measurement of their sharp gradients can provide depth information roughly comparable to human perception (through a microscope), while avoiding PSF estimation. Spectrum- and morphometrics-based statistical analysis for abnormal cell detection can then be implemented in an image database where the axial structure has been refined.
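    A gradient-map sharpness score of the kind described can be sketched per pixel with central differences; the threshold fraction used to mask blurred areas is an assumed parameter.

```python
import numpy as np

def gradient_sharpness_map(img):
    """Per-pixel gradient magnitude (central differences) as a sharpness score."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def in_focus_mask(img, frac=0.5):
    """Mark pixels whose sharpness exceeds a fraction of the image maximum."""
    s = gradient_sharpness_map(img)
    return s > frac * s.max()

rng = np.random.default_rng(7)
sharp = rng.random((16, 16))        # textured, in-focus patch
flat = np.full((16, 16), 0.5)       # featureless, defocused patch
assert in_focus_mask(sharp).sum() > in_focus_mask(flat).sum()
```

    Repeating this across the spectral bands lets blurred pixels be excluded before the spectral and morphometric statistics are computed.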

  3. Linear digital imaging system fidelity analysis

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.

    1989-01-01

    The combined effects of image gathering, sampling, and reconstruction are analyzed in terms of image fidelity. The analysis is based upon a standard end-to-end linear system model which is sufficiently general so that the results apply to most line-scan and sensor-array imaging systems. Shift-variant sampling effects are accounted for with an expected value analysis based upon the use of a fixed deterministic input scene which is randomly shifted (mathematically) relative to the sampling grid. This random sample-scene phase approach has been used successfully by the author and associates in several previous related papers.

  4. Infrared image processing and data analysis

    NASA Astrophysics Data System (ADS)

    Ibarra-Castanedo, C.; González, D.; Klein, M.; Pilla, M.; Vallerand, S.; Maldague, X.

    2004-12-01

    Infrared thermography in nondestructive testing provides images (thermograms) in which zones of interest (defects) sometimes appear as subtle signatures. In this context, raw images are often not appropriate, since most defects will be missed. In other cases, what is needed is a quantitative analysis, such as for defect detection and characterization. In this paper, various methods of data analysis required at the preprocessing and/or processing stages are presented. References from the literature are provided for briefly discussed known methods, while novelties are elaborated in more detail within the text, which also includes experimental results.

  5. Malware Analysis Using Visualized Image Matrices

    PubMed Central

    Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively. PMID:25133202

  6. Malware analysis using visualized image matrices.

    PubMed

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively. PMID:25133202
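    The opcode-to-pixel mapping can be sketched by hashing consecutive opcode pairs to coordinates and colors and accumulating them into an RGB matrix. The hash-based layout and cosine similarity below are illustrative stand-ins for the paper's exact construction, and the opcode sequences are synthetic.

```python
import hashlib
import numpy as np

def opcode_image(opcodes, size=16):
    """Hash consecutive opcode pairs to (x, y, RGB): each pair picks a pixel
    coordinate and a color, accumulated into an RGB image matrix."""
    img = np.zeros((size, size, 3), dtype=np.uint32)
    for a, b in zip(opcodes, opcodes[1:]):
        h = hashlib.md5(f"{a},{b}".encode()).digest()
        x, y = h[0] % size, h[1] % size
        img[x, y] += np.frombuffer(h[2:5], dtype=np.uint8)
    return img

def similarity(img1, img2):
    """Cosine similarity between flattened image matrices."""
    v1, v2 = img1.ravel().astype(float), img2.ravel().astype(float)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))

fam_a = ["push", "mov", "call", "add", "ret"] * 10    # one malware 'family'
fam_a_variant = fam_a[:-1] + ["nop"]                  # near-identical sample
fam_b = ["xor", "jmp", "cmp", "jne", "pop"] * 10      # unrelated family
assert similarity(opcode_image(fam_a), opcode_image(fam_a_variant)) > \
       similarity(opcode_image(fam_a), opcode_image(fam_b))
```

    A representative image per family, as the paper proposes, would be an aggregate of such matrices, cutting the number of comparisons needed to classify an unknown sample.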

  7. Analysis of physical processes via imaging vectors

    NASA Astrophysics Data System (ADS)

    Volovodenko, V.; Efremova, N.; Efremov, V.

    2016-06-01

    Practically all modeled processes are in one way or another random. The most fully developed theoretical foundation covers Markov processes, which can be represented in different forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In a Markov process, the proposition (model) of the future therefore does not change when additional information about preceding times becomes available. Basically, modeling physical fields involves processes changing in time, i.e. non-stationary processes. In this case, applying the Laplace transformation introduces unjustified complications into the description. Transition to other representations results in explicit simplification. The method of imaging vectors renders constructive mathematical models and the necessary transitions in the modeling process and the analysis itself. The flexibility of a model built on a polynomial basis allows rapid switching of the mathematical model and further acceleration of the analysis. It should be noted that the mathematical description permits an operator representation. Conversely, operator representation of the structures, algorithms and data-processing procedures significantly improves the flexibility of the modeling process.
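
    The Markov property described above — the next-state distribution depends only on the current state — can be illustrated with a small transition-matrix model (the matrix values here are arbitrary):

```python
import numpy as np

# Transition matrix of a 3-state chain; rows sum to 1. The next-state
# distribution depends only on the current state (Markov property).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

def evolve(dist, P, steps=1):
    """Propagate a state distribution `steps` transitions forward."""
    for _ in range(steps):
        dist = dist @ P
    return dist

start = np.array([1.0, 0.0, 0.0])    # start surely in state 0
after_50 = evolve(start, P, 50)      # approaches the stationary distribution
```

    For an ergodic chain like this one, the distribution converges to a fixed point of the transition matrix regardless of the starting state.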

  8. Multi-clues image retrieval based on improved color invariants

    NASA Astrophysics Data System (ADS)

    Liu, Liu; Li, Jian-Xun

    2012-05-01

    At present, image retrieval has a great progress in indexing efficiency and memory usage, which mainly benefits from the utilization of the text retrieval technology, such as the bag-of-features (BOF) model and the inverted-file structure. Meanwhile, because the robust local feature invariants are selected to establish BOF, the retrieval precision of BOF is enhanced, especially when it is applied to a large-scale database. However, these local feature invariants mainly consider the geometric variance of the objects in the images, and thus the color information of the objects fails to be made use of. Because of the development of the information technology and Internet, the majority of our retrieval objects is color images. Therefore, retrieval performance can be further improved through proper utilization of the color information. We propose an improved method through analyzing the flaw of shadow-shading quasi-invariant. The response and performance of shadow-shading quasi-invariant for the object edge with the variance of lighting are enhanced. The color descriptors of the invariant regions are extracted and integrated into BOF based on the local feature. The robustness of the algorithm and the improvement of the performance are verified in the final experiments.

  9. Image texture analysis of crushed wheat kernels

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.

    1992-03-01

    The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels for 17 wheat varieties was collected after testing and crushing with a single-kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize texture or spatial distribution of gray levels of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images were different depending on class, hardness and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.
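
    Gray-level co-occurrence statistics of the kind used for such texture characterization can be sketched in a few lines; the displacement, quantization level, feature set, and test patches below are illustrative, not the study's parameters:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one displacement."""
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def texture_features(P):
    """Classic co-occurrence features: contrast, energy, homogeneity."""
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

rng = np.random.default_rng(0)
smooth = np.tile(np.arange(8, dtype=int), (8, 1)) // 2  # gentle gradient
rough = rng.integers(0, 8, size=(8, 8))                 # noisy patch
f_smooth = texture_features(glcm(smooth))
f_rough = texture_features(glcm(rough))
```

    A smooth patch concentrates mass near the co-occurrence diagonal and so has lower contrast than a noisy one; such differences are what separate hard and soft crushed-kernel textures.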

  10. Improvements in Intrinsic Feature Pose Measurement for Awake Animal Imaging

    SciTech Connect

    Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon; Weisenberger, A G; McKisson, J; Smith, M F; Stolin, Alexander

    2010-01-01

    Development has continued with intrinsic feature optical motion tracking for awake animal imaging to measure 3D position and orientation (pose) for motion compensated reconstruction. Prior imaging results have been directed towards head motion measurement for SPECT brain studies in awake unrestrained mice. This work improves on those results in extracting and tracking intrinsic features from multiple camera images and computing pose changes from the tracked features over time. Previously, most motion tracking for 3D imaging has been limited to measuring extrinsic features such as retro-reflective markers applied to an animal's head. While this approach has been proven to be accurate, the use of external markers is undesirable for several reasons. The intrinsic feature approach has been further developed from previous work to provide full pose measurements for a live mouse scan. Surface feature extraction, matching, and pose change calculation with point tracking and accuracy results are described. Experimental pose calculation and 3D reconstruction results from live images are presented.
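
    Computing a rigid pose change from tracked 3D feature points is commonly done with a least-squares fit such as the Kabsch algorithm. This sketch, on synthetic data and not the authors' implementation, recovers a known rotation and translation from matched point sets:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid pose change (R, t) aligning point set P to Q in least squares."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)           # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# synthetic head features rotated 10 degrees about z and shifted
rng = np.random.default_rng(3)
P = rng.normal(size=(6, 3))
a = np.deg2rad(10)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R_est, t_est = kabsch(P, Q)
```

    With noiseless correspondences the recovery is exact; with real tracked features the same fit gives the least-squares pose per frame.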

  11. Improvements in intrinsic feature pose measurement for awake animal imaging

    SciTech Connect

    J.S. Goddard, J.S. Baba, S.J. Lee, A.G. Weisenberger, A. Stolin, J. McKisson, M.F. Smith

    2011-06-01

    Development has continued with intrinsic feature optical motion tracking for awake animal imaging to measure 3D position and orientation (pose) for motion compensated reconstruction. Prior imaging results have been directed towards head motion measurement for SPECT brain studies in awake unrestrained mice. This work improves on those results in extracting and tracking intrinsic features from multiple camera images and computing pose changes from the tracked features over time. Previously, most motion tracking for 3D imaging has been limited to measuring extrinsic features such as retro-reflective markers applied to an animal's head. While this approach has been proven to be accurate, the use of external markers is undesirable for several reasons. The intrinsic feature approach has been further developed from previous work to provide full pose measurements for a live mouse scan. Surface feature extraction, matching, and pose change calculation with point tracking and accuracy results are described. Experimental pose calculation and 3D reconstruction results from live images are presented.

  12. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

    We here propose a segmentation algorithm for texture images in a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture features extracted by a fast spatial gray-level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network are constructed from two parts: first, to reduce the effect of image noise on the features, a set of sigmoid functions is chosen depending on the type of feature; second, to enhance the contrast of the features, we adopt fuzzy mapping functions. The number of clusters in the output layer can be increased continually by an auto-growing mechanism whenever a new pattern occurs. Experimental results and original or segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.
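
    The two-part input nonlinearity (a sigmoid for noise suppression followed by a contrast-enhancing mapping) might look like the following sketch; the function forms and constants are assumptions, not the paper's exact definitions:

```python
import numpy as np

def preprocess_feature(f, noise_level=0.1, steep=10.0):
    """Input-layer nonlinearity sketch for one texture feature vector."""
    # sigmoid: suppresses small, noise-dominated feature values
    g = 1.0 / (1.0 + np.exp(-(f - noise_level) * steep))
    # fuzzy-style mapping: stretch the response onto [0, 1] for contrast
    return (g - g.min()) / (g.max() - g.min() + 1e-12)

features = np.array([0.0, 0.05, 0.1, 0.5, 1.0])
enhanced = preprocess_feature(features)
```

    The mapping is monotone, so the ordering of feature values is preserved while small noisy responses are compressed and the usable range is stretched.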

  13. Improving space debris detection in GEO ring using image deconvolution

    NASA Astrophysics Data System (ADS)

    Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta

    2015-07-01

    In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose the iterative Richardson-Lucy (R-L), as the method that achieves better goals with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations at 7, and applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is on average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that an important economic impact can be envisaged.
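
    The Richardson-Lucy iteration with 7 iterations can be sketched in 1D (the 2D image case is analogous); the PSF and test signal here are synthetic, not TFRM data:

```python
import numpy as np

def convolve(signal, psf):
    """'same'-mode linear convolution."""
    full = np.convolve(signal, psf)
    start = (len(psf) - 1) // 2
    return full[start:start + len(signal)]

def richardson_lucy(observed, psf, iterations=7):
    """Richardson-Lucy deconvolution (1D sketch, 7 iterations as in the paper)."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flip = psf[::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = observed / np.maximum(blurred, 1e-12)  # data / model
        estimate *= convolve(ratio, psf_flip)          # multiplicative update
    return estimate

# a faint point source blurred by a Gaussian-like PSF
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
truth = np.zeros(31)
truth[15] = 1.0
observed = convolve(truth, psf)
restored = richardson_lucy(observed, psf, iterations=7)
```

    The multiplicative update keeps the estimate nonnegative and concentrates flux back toward the point source, which is why faint objects become detectable after deconvolution.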

  14. Fabrication and characteristics of experimental radiographic amplifier screens. [image transducers with improved image contrast and resolution

    NASA Technical Reports Server (NTRS)

    Szepesi, Z.

    1978-01-01

    The fabrication process and transfer characteristics for solid state radiographic image transducers (radiographic amplifier screens) are described. These screens are for use in realtime nondestructive evaluation procedures that require large format radiographic images with contrast and resolution capabilities unavailable with conventional fluoroscopic screens. The screens are suitable for in-motion, on-line radiographic inspection by means of closed circuit television. Experimental effort was made to improve image quality and response to low energy (5 kV and up) X-rays.

  15. Image analysis in comparative genomic hybridization

    SciTech Connect

    Lundsteen, C.; Maahr, J.; Christensen, B.

    1995-01-01

    Comparative genomic hybridization (CGH) is a new technique by which genomic imbalances can be detected by combining in situ suppression hybridization of whole genomic DNA and image analysis. We have developed software for rapid, quantitative CGH image analysis by a modification and extension of the standard software used for routine karyotyping of G-banded metaphase spreads in the Magiscan chromosome analysis system. The DAPI-counterstained metaphase spread is karyotyped interactively. Corrections for image shifts between the DAPI, FITC, and TRITC images are done manually by moving the three images relative to each other. The fluorescence background is subtracted. A mean filter is applied to smooth the FITC and TRITC images before the fluorescence ratio between the individual FITC and TRITC-stained chromosomes is computed pixel by pixel inside the area of the chromosomes determined by the DAPI boundaries. Fluorescence intensity ratio profiles are generated, and peaks and valleys indicating possible gains and losses of test DNA are marked if they fall below a ratio of 0.75 or rise above 1.25. By combining the analysis of several metaphase spreads, consistent findings of gains and losses in all or almost all spreads indicate chromosomal imbalance. Chromosomal imbalances are detected either by visual inspection of fluorescence ratio (FR) profiles or by a statistical approach that compares FR measurements of the individual case with measurements of normal chromosomes. The complete analysis of one metaphase can be carried out in approximately 10 minutes. 8 refs., 7 figs., 1 tab.
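
    The ratio-profile computation described — pixelwise FITC/TRITC ratios inside the DAPI-defined chromosome area, flagged against the 0.75/1.25 thresholds — can be sketched on toy data as:

```python
import numpy as np

def ratio_profile(fitc, tritc, mask):
    """Pixelwise FITC/TRITC ratio inside the DAPI-defined chromosome mask,
    averaged column-wise into a profile along the chromosome axis."""
    ratio = np.where(mask, fitc / np.maximum(tritc, 1e-6), np.nan)
    return np.nanmean(ratio, axis=0)

def flag_imbalances(profile, low=0.75, high=1.25):
    """Mark possible losses (< low) and gains (> high), as in the paper."""
    return ["loss" if r < low else "gain" if r > high else "normal"
            for r in profile]

# toy chromosome: 3 rows x 6 columns, balanced except a gain in column 4
mask = np.ones((3, 6), dtype=bool)
tritc = np.full((3, 6), 100.0)
fitc = np.full((3, 6), 100.0)
fitc[:, 4] = 150.0          # 1.5x test DNA -> candidate gain
profile = ratio_profile(fitc, tritc, mask)
flags = flag_imbalances(profile)
```

    In practice the per-spread flags would be combined across several metaphases before calling an imbalance, as the abstract describes.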

  16. Body image change and improved eating self-regulation in a weight management intervention in women

    PubMed Central

    2011-01-01

    Background Successful weight management involves the regulation of eating behavior. However, the specific mechanisms underlying its successful regulation remain unclear. This study examined one potential mechanism by testing a model in which improved body image mediated the effects of obesity treatment on eating self-regulation. Further, this study explored the role of different body image components. Methods Participants were 239 overweight women (age: 37.6 ± 7.1 yr; BMI: 31.5 ± 4.1 kg/m2) engaged in a 12-month behavioral weight management program, which included a body image module. Self-reported measures were used to assess evaluative and investment body image, and eating behavior. Measurements occurred at baseline and at 12 months. Baseline-residualized scores were calculated to report change in the dependent variables. The model was tested using partial least squares analysis. Results The model explained 18-44% of the variance in the dependent variables. Treatment significantly improved both body image components, particularly by decreasing its investment component (f2 = .32 vs. f2 = .22). Eating behavior was positively predicted by investment body image change (p < .001) and to a lesser extent by evaluative body image (p < .05). Treatment had significant effects on 12-month eating behavior change, which were fully mediated by investment and partially mediated by evaluative body image (effect ratios: .68 and .22, respectively). Conclusions Results suggest that improving body image, particularly by reducing its salience in one's personal life, might play a role in enhancing eating self-regulation during weight control. Accordingly, future weight loss interventions could benefit from proactively addressing body image-related issues as part of their protocols. PMID:21767360

  17. Improved volumetric imaging in tomosynthesis using combined multiaxial sweeps.

    PubMed

    Gersh, Jacob A; Wiant, David B; Best, Ryan C M; Bennett, Marcus C; Munley, Michael T; King, June D; McKee, Mahta M; Baydush, Alan H

    2010-01-01

    This study explores the volumetric reconstruction fidelity attainable using tomosynthesis with a kV imaging system which has a unique ability to rotate isocentrically and with multiple degrees of mechanical freedom. More specifically, we seek to investigate volumetric reconstructions by combining multiple limited-angle rotational image acquisition sweeps. By comparing these reconstructed images with those of a CBCT reconstruction, we can gauge the volumetric fidelity of the reconstructions. In surgical situations, the described tomosynthesis-based system could provide high-quality volumetric imaging without requiring patient motion, even with rotational limitations present. Projections were acquired using the Digital Integrated Brachytherapy Unit, or IBU-D. A phantom was used which contained several spherical objects of varying contrast. Using image projections acquired during isocentric sweeps around the phantom, reconstructions were performed by filtered backprojection. For each image acquisition sweep configuration, a contrasting sphere is analyzed using two metrics and compared to a gold standard CBCT reconstruction. Since the intersection of a reconstructed sphere and an imaging plane is ideally a circle with an eccentricity of zero, the first metric presented compares the effective eccentricity of intersections of reconstructed volumes and imaging planes. As another metric of volumetric reconstruction fidelity, the volume of one of the contrasting spheres was determined using manual contouring. By comparing these manually delineated volumes with a CBCT reconstruction, we can gauge the volumetric fidelity of reconstructions. The configuration which yielded the highest overall volumetric reconstruction fidelity, as determined by effective eccentricities and volumetric contouring, consisted of two orthogonally-offset 60° L-arm sweeps and a single C-arm sweep which shared a pivot point with one of the L-arm sweeps. When compared to a similar configuration that
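
    An effective eccentricity of the kind described (zero for a perfectly circular sphere-plane intersection) can be computed from the second-order moments of the binary region in each imaging plane; this sketch uses synthetic regions, not the IBU-D data:

```python
import numpy as np

def effective_eccentricity(region):
    """Eccentricity from second-order central moments of a binary region
    (0 for a circle, approaching 1 as the shape elongates)."""
    ys, xs = np.nonzero(region)
    x, y = xs - xs.mean(), ys - ys.mean()
    mxx, myy, mxy = (x * x).mean(), (y * y).mean(), (x * y).mean()
    common = np.sqrt((mxx - myy) ** 2 + 4 * mxy ** 2)
    l1 = (mxx + myy + common) / 2     # major-axis variance
    l2 = (mxx + myy - common) / 2     # minor-axis variance
    return np.sqrt(1 - l2 / l1)

# a filled disc vs. a 2:1 elliptical distortion of it
yy, xx = np.mgrid[-20:21, -20:21]
disc = (xx ** 2 + yy ** 2) <= 15 ** 2
ellipse = (xx ** 2 + 4 * yy ** 2) <= 15 ** 2
e_disc = effective_eccentricity(disc)
e_ell = effective_eccentricity(ellipse)
```

    A faithful reconstruction of a sphere should give slice eccentricities near zero; elongation from limited-angle artifacts pushes the metric toward 1.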

  18. Improving Image Quality of Bronchial Arteries with Virtual Monochromatic Spectral CT Images

    PubMed Central

    Ma, Guangming; He, Taiping; Yu, Yong; Duan, Haifeng; Yang, Chuangbo

    2016-01-01

    Objective To evaluate the clinical value of using monochromatic images in spectral CT pulmonary angiography to improve the image quality of bronchial arteries. Methods We retrospectively analyzed the chest CT images of 38 patients who underwent contrast-enhanced spectral CT. These images included a set of 140 kVp polychromatic images and the default 70 keV monochromatic images. Using the standard Gemstone Spectral Imaging (GSI) viewer on an advanced workstation (AW4.6, GE Healthcare), an optimal energy level (in keV) for obtaining the best contrast-to-noise ratio (CNR) for the artery could be automatically obtained. The signal-to-noise ratio (SNR), CNR and subjective image quality score (1–5) for these 3 image sets (140 kVp, 70 keV and optimal energy level) were obtained and statistically compared. The image quality score consistency between the two observers was also evaluated using the Kappa test. Results The optimal energy level for obtaining the best CNR was 62.58±2.74 keV. SNR and CNR from the 140 kVp polychromatic, 70 keV and optimal keV monochromatic images were (16.44±5.85, 13.24±5.52), (20.79±7.45, 16.69±6.27) and (24.9±9.91, 20.53±8.46), respectively. The corresponding subjective image quality scores were 1.97±0.82, 3.24±0.75, and 4.47±0.60. SNR, CNR and subjective scores differed significantly among groups (all p<0.001). The optimal keV monochromatic images were superior to the 70 keV monochromatic and 140 kVp polychromatic images, and there was high agreement between the two observers on image quality score (kappa>0.80). Conclusions Virtual monochromatic images at approximately 63 keV in dual-energy spectral CT pulmonary angiography yielded the best CNR and highest diagnostic confidence for imaging bronchial arteries. PMID:26967737
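
    The SNR/CNR comparison between reconstructions can be sketched from ROI statistics; the definitions below are one common convention and the ROI values are synthetic, not the study's data:

```python
import numpy as np

def snr_cnr(artery_roi, background_roi):
    """SNR = mean / noise, CNR = |mean difference| / noise, with noise
    taken as the background ROI standard deviation (one common definition)."""
    noise = background_roi.std()
    snr = artery_roi.mean() / noise
    cnr = abs(artery_roi.mean() - background_roi.mean()) / noise
    return snr, cnr

rng = np.random.default_rng(1)
# monochromatic image near the optimal keV: higher contrast, similar noise
artery_poly = rng.normal(300, 20, 500)   # 140 kVp polychromatic artery ROI
artery_mono = rng.normal(450, 20, 500)   # ~63 keV monochromatic artery ROI
background = rng.normal(50, 20, 500)

snr_p, cnr_p = snr_cnr(artery_poly, background)
snr_m, cnr_m = snr_cnr(artery_mono, background)
```

    Because iodine attenuation rises toward its K-edge at lower effective energies, the monochromatic ROI shows higher enhancement at similar noise, which is what drives the CNR gain reported.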

  19. An improved viscous characteristics analysis program

    NASA Technical Reports Server (NTRS)

    Jenkins, R. V.

    1978-01-01

    An improved two dimensional characteristics analysis program is presented. The program is built upon the foundation of a FORTRAN program entitled Analysis of Supersonic Combustion Flow Fields With Embedded Subsonic Regions. The major improvements are described and a listing of the new program is provided. The subroutines and their functions are given as well as the input required for the program. Several applications of the program to real problems are qualitatively described. Three runs obtained in the investigation of a real problem are presented to provide insight for the input and output of the program.

  20. Improving the image discontinuous problem by using color temperature mapping method

    NASA Astrophysics Data System (ADS)

    Jeng, Wei-De; Mang, Ou-Yang; Lai, Chien-Cheng; Wu, Hsien-Ming

    2011-09-01

    This article mainly focuses on image processing for the radial imaging capsule endoscope (RICE). First, RICE was used to capture images; in the experiment, a pig's intestine was used as the imaging subject. The images captured by RICE were blurred, however, because RICE suffers from aberration problems at the image center, and low illumination uniformity further degrades image quality. Image processing can be used to mitigate these problems. Therefore, images captured at different times are connected using the Pearson correlation coefficient, and the color temperature mapping method is used to improve the discontinuity problem in the connection region.

  1. Improving transient analysis technology for aircraft structures

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Chargin, Mladen

    1989-01-01

    Aircraft dynamic analyses are demanding of computer simulation capabilities. The modeling complexities of semi-monocoque construction, irregular geometry, high-performance materials, and high-accuracy analysis are present. At issue are the safety of the passengers and the integrity of the structure for a wide variety of flight-operating and emergency conditions. The technology which supports engineering of aircraft structures using computer simulation is examined. Available computer support is briefly described and improvement of accuracy and efficiency are recommended. Improved accuracy of simulation will lead to a more economical structure. Improved efficiency will result in lowering development time and expense.

  2. Systems Improved Numerical Fluids Analysis Code

    NASA Technical Reports Server (NTRS)

    Costello, F. A.

    1990-01-01

    Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 version of SINDA. The additional routines provide for mathematical modeling of active heat-transfer loops. SINFAC simulates steady-state and pseudo-transient operations of 16 different components of heat-transfer loops, including radiators, evaporators, condensers, mechanical pumps, reservoirs, and many types of valves and fittings. The program contains a property-analysis routine used to compute thermodynamic properties of 20 different refrigerants. Source code written in FORTRAN 77.

  3. Poster — Thur Eve — 15: Improvements in the stability of the tomotherapy imaging beam

    SciTech Connect

    Belec, J

    2014-08-15

    Use of helical TomoTherapy based MVCT imaging for adaptive planning requires the image values (HU) to remain stable over the course of treatment. In the past, the image value stability was suboptimal, which required frequent changes to the image-value-to-density calibration curve to avoid dose errors on the order of 2–4%. The stability of the image values at our center was recently improved by stabilizing the dose rate of the machine (dose control servo) and performing daily MVCT calibration corrections. In this work, we quantify the stability of the image values over treatment time by comparing patient treatment image density derived using MVCT and KVCT. The analysis includes 1) MVCT - KVCT density difference histogram, 2) MVCT vs KVCT density spectrum, 3) multiple average profile density comparison and 4) density difference in homogeneous locations. Over two months, the imaging beam stability was compromised several times due to a combination of target wobbling, spectral calibration, target change and magnetron issues. The stability of the image values was analyzed over the same period. Results show that the impact on the patient dose calculation is 0.7% ± 0.6%.

  4. Improving the imaging of calcifications in CT by histogram-based selective deblurring

    NASA Astrophysics Data System (ADS)

    Rollano-Hijarrubia, Empar; van der Meer, Frits; van der Lugt, Aad; Weinans, Harrie; Vrooman, Henry; Vossepoel, Albert; Stokking, Rik

    2005-04-01

    Imaging of small high-density structures, such as calcifications, with computed tomography (CT) is limited by the spatial resolution of the system. Blur causes small calcifications to be imaged with lower contrast and overestimated volume, thereby hampering the analysis of vessels. The aim of this work is to reduce the blur of calcifications by applying three-dimensional (3D) deconvolution. Unfortunately, the high-frequency amplification of the deconvolution produces edge-related ring artifacts and enhances noise and original artifacts, which degrades the imaging of low-density structures. A method, referred to as Histogram-based Selective Deblurring (HiSD), was implemented to avoid these negative effects. HiSD uses the histogram information to generate a restored image in which the low-intensity voxel information of the observed image is combined with the high-intensity voxel information of the deconvolved image. To evaluate HiSD we scanned four in-vitro atherosclerotic plaques of carotid arteries with a multislice spiral CT and with a microfocus CT (μCT), used as reference. Restored images were generated from the observed images, and qualitatively and quantitatively compared with their corresponding μCT images. Transverse views and maximum-intensity projections of restored images show the decrease of blur of the calcifications in 3D. Measurements of the areas of 27 calcifications and total volumes of calcification of 4 plaques show that the overestimation of calcification was smaller for restored images (mean-error: 90% for area; 92% for volume) than for observed images (143%; 213%, respectively). The qualitative and quantitative analyses show that the imaging of calcifications in CT can be improved considerably by applying HiSD.
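
    The core of HiSD — taking high-intensity voxels from the deconvolved image and low-intensity voxels from the observed image, so ring artifacts do not degrade soft tissue — can be sketched as below. The threshold would come from the histogram analysis; here it is fixed by hand on toy data:

```python
import numpy as np

def hisd_combine(observed, deconvolved, threshold):
    """Selective deblurring sketch: deconvolved values for high-intensity
    voxels (calcifications), observed values everywhere else."""
    return np.where(observed >= threshold, deconvolved, observed)

observed = np.array([[10., 12., 11.],
                     [13., 200., 12.],
                     [11., 12., 10.]])
# deconvolved: sharper calcification but negative ringing around it
deconvolved = np.array([[5., -3., 6.],
                        [-2., 320., -4.],
                        [6., -3., 5.]])
restored = hisd_combine(observed, deconvolved, threshold=100.0)
```

    The calcification voxel is sharpened while the surrounding soft-tissue voxels keep their observed values, so the ringing in the deconvolved image never reaches the restored one.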

  5. EPA guidance on improving the image of psychiatry.

    PubMed

    Möller-Leimkühler, A M; Möller, H-J; Maier, W; Gaebel, W; Falkai, P

    2016-03-01

    This paper explores causes, explanations and consequences of the negative image of psychiatry and develops recommendations for improvement. It is primarily based on a WPA guidance paper on how to combat the stigmatization of psychiatry and psychiatrists and a Medline search on related publications since 2010. Furthermore, focussing on potential causes and explanations, the authors performed a selective literature search regarding additional image-related issues such as mental health literacy and diagnostic and treatment issues. Underestimation of psychiatry results from both unjustified prejudices of the general public, mass media and healthcare professionals and psychiatry's own unfavourable coping with external and internal concerns. Issues related to unjustified devaluation of psychiatry include overestimation of coercion, associative stigma, lack of public knowledge, need to simplify complex mental issues, problem of the continuum between normality and psychopathology, competition with medical and non-medical disciplines and psychopharmacological treatment. Issues related to psychiatry's own contribution to being underestimated include lack of a clear professional identity, lack of biomarkers supporting clinical diagnoses, limited consensus about best treatment options, lack of collaboration with other medical disciplines and low recruitment rates among medical students. Recommendations are proposed for creating and representing a positive self-concept with different components. The negative image of psychiatry is not only due to unfavourable communication with the media, but is basically a problem of self-conceptualization. Much can be improved. However, psychiatry will remain a profession with an exceptional position among the medical disciplines, which should be seen as its specific strength. PMID:26874959

  6. Improving reliability of live/dead cell counting through automated image mosaicing.

    PubMed

    Piccinini, Filippo; Tesei, Anna; Paganelli, Giulia; Zoli, Wainer; Bevilacqua, Alessandro

    2014-12-01

    Cell counting is one of the basic needs of most biological experiments. Numerous methods and systems have been studied to improve the reliability of counting. However, at present, manual cell counting performed with a hemocytometer still represents the gold standard, despite several problems limiting reproducibility and repeatability of the counts and, at the end, jeopardizing their reliability in general. We present our own approach based on image processing techniques to improve counting reliability. It works in two stages: first building a high-resolution image of the hemocytometer's grid, then counting the live and dead cells by tagging the image with flags of different colours. In particular, we introduce GridMos (http://sourceforge.net/p/gridmos), a fully-automated mosaicing method to obtain a mosaic representing the whole hemocytometer's grid. In addition to offering more significant statistics, the mosaic "freezes" the culture status, thus permitting analysis by more than one operator. Finally, the mosaic achieved can then be tagged using an image editor, markedly improving counting reliability. The experiments performed confirm the improvements brought about by the proposed counting approach in terms of both reproducibility and repeatability, also suggesting the use of a mosaic of an entire hemocytometer's grid, then labelled through an image editor, as the best likely candidate for the new gold standard method in cell counting. PMID:25438936

  7. Quantitative analysis of qualitative images

    NASA Astrophysics Data System (ADS)

    Hockney, David; Falco, Charles M.

    2005-03-01

    We show optical evidence that demonstrates artists as early as Jan van Eyck and Robert Campin (c1425) used optical projections as aids for producing their paintings. We also have found optical evidence within works by later artists, including Bermejo (c1475), Lotto (c1525), Caravaggio (c1600), de la Tour (c1650), Chardin (c1750) and Ingres (c1825), demonstrating a continuum in the use of optical projections by artists, along with an evolution in the sophistication of that use. However, even for paintings where we have been able to extract unambiguous, quantitative evidence of the direct use of optical projections for producing certain of the features, this does not mean that paintings are effectively photographs. Because the hand and mind of the artist are intimately involved in the creation process, understanding these complex images requires more than can be obtained from only applying the equations of geometrical optics.

  8. Particle Pollution Estimation Based on Image Analysis.

    PubMed

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction. PMID:26828757
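
    Predicting a PM2.5 index from extracted image features plus auxiliary data is, at its simplest, a regression problem. A least-squares sketch on synthetic features follows; the six feature names and all coefficients are hypothetical, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
# hypothetical per-image features: haze contrast, sky gradient, saturation,
# sun elevation, hour of day, relative humidity (standardized)
X = rng.normal(size=(n, 6))
true_w = np.array([8.0, -3.0, 2.0, -1.5, 0.5, 4.0])
pm25 = X @ true_w + 60 + rng.normal(0, 2.0, n)   # noisy PM2.5 index

# least-squares fit with an intercept column
A = np.hstack([X, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(A, pm25, rcond=None)
pred = A @ w
r2 = 1 - ((pm25 - pred) ** 2).sum() / ((pm25 - pm25.mean()) ** 2).sum()
```

    The fitted coefficients also indicate each feature's significance in the prediction, mirroring the paper's observation that different features contribute at different levels.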

  9. Particle Pollution Estimation Based on Image Analysis

    PubMed Central

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction. PMID:26828757

  10. An improved border detection in dermoscopy images for density based clustering

    PubMed Central

    2011-01-01

    Background Dermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. In current practice, dermatologists determine lesion area by manually drawing lesion borders. Therefore, automated assessment tools for dermoscopy images have become an important research field, mainly because of inter- and intra-observer variations in human interpretation. One of the most important steps in dermoscopy image analysis is automated detection of lesion borders. To our knowledge, in our 2010 study we achieved one of the highest accuracy rates in automated lesion border detection by using a modified density based clustering algorithm. In that study, we proposed a novel method which removes redundant computations in the well-known spatial density based clustering algorithm, DBSCAN; in turn, this speeds up the clustering process considerably. Findings Our previous study was heavily dependent on a pre-processing step which creates a binary image from the original image. In this study, we embed a new distance measure into the existing algorithm. This provides twofold benefits. First, since the new approach removes the pre-processing step, it works directly on color images instead of binary ones; thus, very important color information is not lost. Second, the accuracy of delineated lesion borders is improved on 75% of the 100-image dermoscopy dataset. Conclusion The previous and improved methods are tested on the same dermoscopy dataset along with the same set of dermatologist-drawn ground truth images. Results revealed that the improved method works directly on color images without any pre-processing and generates more accurate results than the existing method. PMID:22166058
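
    Working directly on color images means clustering pixels described by both position and RGB value, so the distance measure sees color differences as well as spatial ones. A minimal DBSCAN sketch on such 5-dimensional points (not the authors' optimized variant) follows:

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN; points may mix spatial and color coordinates so the
    Euclidean distance acts on both at once."""
    n = len(points)
    labels = np.full(n, -1)                       # -1 = noise
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = list(np.nonzero(d[i] <= eps)[0])
        if len(seeds) < min_pts:
            continue                              # not a core point
        labels[i] = cluster
        while seeds:                              # grow the cluster
            j = seeds.pop()
            if not visited[j]:
                visited[j] = True
                nbrs = np.nonzero(d[j] <= eps)[0]
                if len(nbrs) >= min_pts:
                    seeds.extend(nbrs)
            if labels[j] == -1:
                labels[j] = cluster
        cluster += 1
    return labels

# points = (x, y, r, g, b): a dark lesion blob, light skin, one outlier
lesion = np.array([[x, y, 60, 40, 30] for x in range(3) for y in range(3)], float)
skin = np.array([[x + 20, y, 220, 180, 160] for x in range(3) for y in range(3)], float)
outlier = np.array([[50, 50, 0, 255, 0]], float)
labels = dbscan(np.vstack([lesion, skin, outlier]), eps=5.0, min_pts=4)
```

    The large RGB distance keeps lesion and skin pixels in separate clusters even before any spatial separation is considered, which is the benefit of skipping the binarization step.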

  12. Magnetic resonance imaging: improving patient tolerance and safety

    SciTech Connect

    Weinreb, J.C.; Maravilla, K.R.; Peshock, R.; Payne, J.

    1984-12-01

    Some physicians have expressed the opinion that patients may not tolerate MR scans as well as they do computed tomographic (CT) scans for several reasons: (1) longer examination time, (2) the confined space in which a patient is placed for scanning, and (3) difficulties in communicating with the patient during scanning because of noise from the gradient coils and the necessity of eliminating all extraneous radiofrequency (RF) sources from the examination room. Furthermore, the presence of a powerful magnet as the heart of an MRI unit introduces management problems in the event of patient emergencies. With these potential difficulties in mind, the University of Texas Health Science Center at Dallas Southwestern Medical School NMR Imaging Center has implemented a program to improve patient tolerance and safety. The anticipated imminent proliferation of MRI units in many radiology departments indicates that it is an opportune time to share our experience with 450 patients and a commercially available 0.35-T superconducting imager.

  12. Improving PET spatial resolution and detectability for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Bal, H.; Guerin, L.; Casey, M. E.; Conti, M.; Eriksson, L.; Michel, C.; Fanti, S.; Pettinato, C.; Adler, S.; Choyke, P.

    2014-08-01

    Prostate cancer, one of the most common forms of cancer among men, can benefit from recent improvements in positron emission tomography (PET) technology. In particular, better spatial resolution, lower noise and higher detectability of small lesions could be greatly beneficial for early diagnosis and could provide strong support for guiding biopsy and surgery. In this article, the impact of improved PET instrumentation with superior spatial resolution and high sensitivity is discussed, together with the latest developments in PET technology: resolution recovery and time-of-flight reconstruction. Using simulated cancer lesions inserted in clinical PET images obtained with conventional protocols, we show that visual identification of the lesions and detectability via numerical observers can already be improved using state-of-the-art PET reconstruction methods. This was achieved using both resolution recovery and time-of-flight reconstruction, and a high resolution image with 2 mm pixel size. Channelized Hotelling numerical observers showed an increase in the area under the LROC curve from 0.52 to 0.58. In addition, a relationship between the simulated input activity and the area under the LROC curve showed that the minimum detectable activity was reduced by more than 23%.

  13. Membrane composition analysis by imaging mass spectrometry

    SciTech Connect

    Boxer, S G; Kraft, M L; Longo, M; Hutcheon, I D; Weber, P K

    2006-03-29

    Membranes on solid supports offer an ideal format for imaging. Secondary ion mass spectrometry (SIMS) can be used to obtain composition information on membrane-associated components. Using the NanoSIMS50, images of composition variations in membrane domains can be obtained with a lateral resolution better than 100 nm. By suitable calibration, these variations in composition can be translated into a quantitative analysis of the membrane composition. Progress towards imaging small phase-separated lipid domains, membrane-associated proteins and natural biological membranes will be described.

  14. Data analysis for GOPEX image frames

    NASA Technical Reports Server (NTRS)

    Levine, B. M.; Shaik, K. S.; Yan, T.-Y.

    1993-01-01

    The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. Laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and agreement would have been closer had more laser pulses been received. Future experiments should consider methods to reliably measure low-intensity pulses and, through experimental planning, to locate pulse positions geometrically with greater certainty.

  15. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance with low cost offered by advances in semiconductor technology. Another key issue is a maturing understanding of problems and the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above stated issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  16. VAICo: visual analysis for image comparison.

    PubMed

    Schmidt, Johanna; Gröller, M Eduard; Bruckner, Stefan

    2013-12-01

    Scientists, engineers, and analysts are confronted with ever larger and more complex sets of data, whose analysis poses special challenges. In many situations it is necessary to compare two or more datasets. Hence there is a need for comparative visualization tools to help analyze differences or similarities among datasets. In this paper an approach for comparative visualization for sets of images is presented. Well-established techniques for comparing images frequently place them side-by-side. A major drawback of such approaches is that they do not scale well. Other image comparison methods encode differences in images by abstract parameters like color. In this case information about the underlying image data gets lost. This paper introduces a new method for visualizing differences and similarities in large sets of images which preserves contextual information, but also allows the detailed analysis of subtle variations. Our approach identifies local changes and applies cluster analysis techniques to embed them in a hierarchy. The results of this process are then presented in an interactive web application which allows users to rapidly explore the space of differences and drill-down on particular features. We demonstrate the flexibility of our approach by applying it to multiple distinct domains. PMID:24051775

  17. Improving image classification in a complex wetland ecosystem through image fusion techniques

    NASA Astrophysics Data System (ADS)

    Kumar, Lalit; Sinha, Priyakant; Taylor, Subhashni

    2014-01-01

    The aim of this study was to evaluate the impact of image fusion techniques on vegetation classification accuracies in a complex wetland system. Fusion of panchromatic (PAN) and multispectral (MS) Quickbird satellite imagery was undertaken using four image fusion techniques: Brovey, hue-saturation-value (HSV), principal components (PC), and Gram-Schmidt (GS) spectral sharpening. These four fusion techniques were compared in terms of their mapping accuracy to a normal MS image using maximum-likelihood classification (MLC) and support vector machine (SVM) methods. The Gram-Schmidt fusion technique yielded the highest overall accuracy and kappa value with both MLC (67.5% and 0.63, respectively) and SVM methods (73.3% and 0.68, respectively). This compared favorably with the accuracies achieved using the MS image. Overall, improvements of 4.1%, 3.6%, 5.8%, 5.4%, and 7.2% in overall accuracy were obtained in the case of SVM over MLC for the Brovey, HSV, GS, PC, and MS images, respectively. Visual and statistical analyses of the fused images showed that the Gram-Schmidt spectral sharpening technique preserved spectral quality much better than the principal component, Brovey, and HSV fused images. Other factors, such as the growth stage of species and the presence of extensive background water in many parts of the study area, had an impact on classification accuracies.
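Of the four fusion techniques compared above, the Brovey transform has a particularly compact form: each multispectral band is scaled by the ratio of the panchromatic intensity to the sum of the bands. A minimal numpy sketch, assuming the MS image has already been resampled to the PAN grid:

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey transform: redistribute the panchromatic intensity
    across the multispectral bands in proportion to each band's
    share of the total intensity.

    ms:  (H, W, B) multispectral image, resampled to the PAN grid
         (an assumption of this sketch).
    pan: (H, W) panchromatic image.
    eps guards against division by zero in dark pixels.
    """
    total = ms.sum(axis=2, keepdims=True) + eps
    return ms * (pan[..., None] / total)
```

A useful sanity check on the design: the fused bands at each pixel sum (up to `eps`) to the panchromatic value, which is exactly the intensity-substitution property that gives Brovey its sharpening effect, and also the source of its well-known spectral distortion relative to Gram-Schmidt.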

  18. Actual drawing of histological images improves knowledge retention.

    PubMed

    Balemans, Monique C M; Kooloos, Jan G M; Donders, A Rogier T; Van der Zee, Catharina E E M

    2016-01-01

    Medical students have to process a large amount of information during the first years of their study, which has to be retained over long periods of nonuse. Therefore, it would be beneficial when knowledge is gained in a way that promotes long-term retention. Paper-and-pencil drawing for the uptake of form-function relationships of basic tissues has been a teaching tool for a long time, but now seems redundant with virtual microscopy on computer screens and printers everywhere. Several studies claimed that, apart from learning from pictures, actual drawing of images significantly improved knowledge retention. However, these studies applied only immediate post-tests. We investigated the effects of actual drawing of histological images, using a randomized cross-over design and different retention periods. The first part of the study concerned esophageal and tracheal epithelium, with 384 medical and biomedical sciences students randomly assigned to either the drawing or the nondrawing group. For the second part of the study, concerning heart muscle cells, students from the previous drawing group were assigned to the nondrawing group and vice versa. One, four, and six weeks after the experimental intervention, the students were given a free recall test and a questionnaire or drawing exercise, to determine the amount of knowledge retention. The data from this study showed that knowledge retention was significantly improved in the drawing groups compared with the nondrawing groups, even after four or six weeks. This suggests that actual drawing of histological images can be used as a tool to improve long-term knowledge retention. PMID:26033842

  19. A pairwise image analysis with sparse decomposition

    NASA Astrophysics Data System (ADS)

    Boucher, A.; Cloppet, F.; Vincent, N.

    2013-02-01

    This paper aims to detect the evolution between two images representing the same scene. The evolution detection problem has many practical applications, especially in medical imaging. Indeed, the concept of a patient "file" implies the joint analysis of different acquisitions taken at different times, and the detection of significant modifications. The research presented in this paper is carried out within the application context of the development of computer assisted diagnosis (CAD) applied to mammograms. It is performed on already registered pairs of images. As the registration is never perfect, we must develop a comparison method sufficiently adapted to detect real small differences between comparable tissues. In many applications, the assessment of similarity used during the registration step is also used for the interpretation step that prompts suspicious regions. In our case registration is assumed to match the spatial coordinates of similar anatomical elements. In this paper, in order to process the medical images at tissue level, the image representation is based on elementary patterns, therefore seeking patterns, not pixels. Besides, as the studied images have low entropy, the decomposed signal is expressed in a parsimonious way. Parsimonious representations are known to help extract the significant structures of a signal and to generate a compact version of the data. This change of representation should allow us to compare the studied images quickly, thanks to the low weight of the images thus represented, while maintaining a good representativeness. The good precision of our results shows the efficiency of the approach.

  20. The electronic image stabilization technology research based on improved optical-flow motion vector estimation

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Ji, Ming; Zhang, Ying; Jiang, Wentao; Lu, Xiaoyan; Wang, Jiaoying; Yang, Heng

    2016-01-01

    The electronic image stabilization technology based on improved optical-flow motion vector estimation can effectively correct abnormal image shifts such as jitter and rotation. Firstly, ORB features are extracted from the image and a set of regions is built on these features. Secondly, the optical-flow vector is computed in the feature regions; to reduce the computational complexity, a multi-resolution pyramid strategy is used to calculate the motion vector of the frame. Finally, qualitative and quantitative analyses of the effect of the algorithm are carried out. The results show that the proposed algorithm has better stability than image stabilization based on traditional optical-flow motion vector estimation.
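The per-region optical-flow computation described above reduces, at a single pyramid level, to the classic Lucas-Kanade least-squares system. The sketch below shows that single level only; the function name, window size, and test imagery are illustrative assumptions, and a real stabilizer would wrap this in the coarse-to-fine pyramid the abstract mentions.

```python
import numpy as np

def lucas_kanade_patch(I0, I1, y, x, win=7):
    """Estimate the (dx, dy) translation of a small window centred at
    (row y, col x) between frames I0 and I1 by solving the
    Lucas-Kanade normal equations
        [sum Ix^2   sum IxIy] [vx]   [sum IxIt]
        [sum IxIy   sum Iy^2] [vy] = -[sum IyIt].
    Single pyramid level only."""
    h = win // 2
    Iy, Ix = np.gradient(I0)          # np.gradient: axis 0 (rows) first
    It = I1 - I0                      # temporal derivative
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    return np.linalg.solve(A, b)      # (vx, vy) flow at this feature
```

For stabilization, flow vectors from many feature regions would be aggregated into a global motion model and the frame warped by its inverse.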

  1. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR) operation. In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg,1 which is also available in his open-source Leptonica library.2 The modifications result in significant improvements, achieving better segmentation accuracy than the original algorithm on the UW-III, UNLV, ICDAR 2009 page segmentation competition test images and circuit diagram datasets.

  2. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for the improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  3. Improving imaging to optimize screening strategies for carotid artery stenosis.

    PubMed

    Pandya, Ankur; Gupta, Ajay

    2016-01-01

    Carotid stenosis is a major risk factor for ischemic stroke. Recently, the United States Preventive Services Task Force issued a recommendation against screening for carotid stenosis in the general population. There is the potential for efficient risk-stratifying or staged screening approaches that identify individuals at highest risk for carotid stenosis and stroke, but these tools have yet to be proven effective in external validation studies. In this paper, we review how medical imaging can be used to detect carotid stenosis and highlight several areas that could be improved to identify potentially efficient screening strategies for carotid stenosis. PMID:26275846

  4. Improved proton computed tomography by dual modality image reconstruction

    SciTech Connect

    Hansen, David C. Bassler, Niels; Petersen, Jørgen Breede Baltzer; Sørensen, Thomas Sangild

    2014-03-15

    Purpose: Proton computed tomography (CT) is a promising imaging modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm of water of most commercial proton therapy systems limits the sites that can be scanned from a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan, with a constrained nonlinear conjugate gradient algorithm minimizing total variation and the x-ray CT prior term while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone beam CT prior was retained for the fully sampled case and the 90° interval case, with the frequency at MTF = 0.5 (modulation transfer function) ranging from 5.22 to 5.65 linepairs/cm. In the 45° interval case, it dropped to 3.91 linepairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of a prior image greatly improves the resolution and stopping power estimate, but does not fully achieve the quality of a fully sampled 360° scan.
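The constrained objective described above, data consistency plus total variation plus an x-ray prior term, can be illustrated in one dimension with a toy gradient-descent version. The paper uses nonlinear conjugate gradients on 3-D data along curved proton paths; the function name, the smooth TV surrogate, and all parameter values below are assumptions.

```python
import numpy as np

def tv_prior_reconstruct(A, b, prior, lam_tv=0.01, lam_prior=0.1,
                         step=1e-3, iters=2000):
    """Toy 1-D analogue of the dual modality reconstruction:
    minimize  ||A x - b||^2 + lam_tv * TV(x) + lam_prior * ||x - prior||^2
    by plain gradient descent, starting from the prior image."""
    x = prior.copy()
    for _ in range(iters):
        # gradients of the data-fidelity and prior-attachment terms
        grad = 2 * A.T @ (A @ x - b) + 2 * lam_prior * (x - prior)
        # smooth surrogate for the (non-differentiable) TV gradient
        d = np.diff(x)
        w = d / np.sqrt(d**2 + 1e-8)
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= w
        tv_grad[1:] += w
        x -= step * (grad + lam_tv * tv_grad)
    return x
```

With a piecewise-constant ground truth, the TV term favours sharp edges while the prior term anchors the solution where the projection data (`A`, `b`) constrain it weakly, which is the role the cone-beam prior plays in the limited-angle cases.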

  5. Improved phase imaging from intensity measurements in multiple planes

    SciTech Connect

    Soto, Marcos; Acosta, Eva

    2007-11-20

    Quantitative phase imaging from intensity measurements plays a key role in many fields of physics. Techniques based on the transport of intensity equation require an estimate of the axial derivative of the intensity to invert the problem. Difference formulas over two adjacent planes are commonly used to compute the derivative of the irradiance experimentally. Here we propose a formula that improves the estimate of the derivative by using a higher number of planes and taking the noisy nature of the measurements into account. We also establish upper and lower limits for the estimation error and provide the distance between planes that optimizes the estimate of the derivative.
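The idea of replacing a two-plane finite difference with a fit over several planes can be illustrated by a generic least-squares estimator; this is in the spirit of the abstract, not the authors' exact formula, and the function name is an assumption.

```python
import numpy as np

def axial_derivative(intensity, z):
    """Least-squares estimate of dI/dz at z = 0 from intensity
    measurements in several planes at (signed) positions z relative
    to the plane of interest.  Fitting a quadratic through many
    planes averages out measurement noise, unlike a two-plane
    finite difference."""
    coeffs = np.polyfit(np.asarray(z, float), np.asarray(intensity, float), deg=2)
    # polyfit returns highest degree first; the linear coefficient
    # is the derivative at z = 0.
    return coeffs[-2]
```

In a transport-of-intensity setup this would be applied pixelwise to the defocus stack; the choice of plane spacing trades noise suppression against the higher-order terms the quadratic fit neglects, which is exactly the optimum the abstract derives.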

  6. Image analysis of insulation mineral fibres.

    PubMed

    Talbot, H; Lee, T; Jeulin, D; Hanton, D; Hobbs, L W

    2000-12-01

    We present two methods for measuring the diameter and length of man-made vitreous fibres based on the automated image analysis of scanning electron microscopy images. The fibres we want to measure are used in materials such as glass wool, which in turn are used for thermal and acoustic insulation. The measurement of the diameters and lengths of these fibres is used by the glass wool industry for quality control purposes. To obtain reliable quality estimators, the measurement of several hundred images is necessary. These measurements are usually obtained manually by operators. Manual measurements, although reliable when performed by skilled operators, are slow due to the need for the operators to rest often to retain their ability to spot faint fibres on noisy backgrounds. Moreover, the task of measuring thousands of fibres every day, even with the help of semi-automated image analysis systems, is dull and repetitive. The need for an automated procedure which could replace manual measurements is quite real. For each of the two methods that we propose to accomplish this task, we present the sample preparation, the microscope setting and the image analysis algorithms used for the segmentation of the fibres and for their measurement. We also show how a statistical analysis of the results can alleviate most measurement biases, and how we can estimate the true distribution of fibre lengths by diameter class by measuring only the lengths of the fibres visible in the field of view. PMID:11106965

  7. Automated eXpert Spectral Image Analysis

    Energy Science and Technology Software Center (ESTSC)

    2003-11-25

    AXSIA performs automated factor analysis of hyperspectral images. In such images, a complete spectrum is collected at each point in a 1-, 2- or 3-dimensional spatial array. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful information. Multivariate factor analysis techniques have proven effective for extracting the essential information from high dimensional data sets into a limited number of factors that describe the spectral characteristics and spatial distributions of the pure components comprising the sample. AXSIA provides tools to estimate different types of factor models including Singular Value Decomposition (SVD), Principal Component Analysis (PCA), PCA with factor rotation, and Alternating Least Squares-based Multivariate Curve Resolution (MCR-ALS). As part of the analysis process, AXSIA can automatically estimate the number of pure components that comprise the data and can scale the data to account for Poisson noise. The data analysis methods are fundamentally based on eigenanalysis of the data crossproduct matrix coupled with orthogonal eigenvector rotation and constrained alternating least squares refinement. A novel method for automatically determining the number of significant components, which is based on the eigenvalues of the crossproduct matrix, has also been devised and implemented. The data can be compressed spectrally via PCA and spatially through wavelet transforms, and algorithms have been developed that perform factor analysis in the transform domain while retaining full spatial and spectral resolution in the final result. These latter innovations enable the analysis of larger-than-core-memory spectrum-images. AXSIA was designed to perform automated chemical phase analysis of spectrum-images acquired by a variety of chemical imaging techniques. Successful applications include Energy Dispersive X-ray Spectroscopy, X
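The PCA/SVD factor model described above amounts to unfolding the spectrum image into a pixels-by-channels matrix and truncating its singular value decomposition. A minimal numpy sketch (the function name is assumed, not AXSIA's API, and no Poisson scaling or factor rotation is applied):

```python
import numpy as np

def pca_spectrum_image(cube, n_components):
    """Unfold an (H, W, K) spectrum image into an (H*W, K) matrix and
    factor it with a truncated SVD.  Returns per-pixel scores
    (abundance-like maps) and the component spectra."""
    h, w, k = cube.shape
    X = cube.reshape(h * w, k)
    X = X - X.mean(axis=0)                  # centre each spectral channel
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = (U[:, :n_components] * s[:n_components]).reshape(h, w, n_components)
    spectra = Vt[:n_components]
    return scores, spectra
```

The eigenvalues `s**2` are those of the data crossproduct matrix, which is the quantity AXSIA's automatic component-count estimation inspects.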

  8. Objective facial photograph analysis using imaging software.

    PubMed

    Pham, Annette M; Tollefson, Travis T

    2010-05-01

    Facial analysis is an integral part of the surgical planning process. Clinical photography has long been an invaluable tool in the surgeon's practice not only for accurate facial analysis but also for enhancing communication between the patient and surgeon, for evaluating postoperative results, for medicolegal documentation, and for educational and teaching opportunities. From 35-mm slide film to the digital technology of today, clinical photography has benefited greatly from technological advances. With the development of computer imaging software, objective facial analysis becomes easier to perform and less time consuming. Thus, while the original purpose of facial analysis remains the same, the process becomes much more efficient and allows for some objectivity. Although clinical judgment and artistry of technique is never compromised, the ability to perform objective facial photograph analysis using imaging software may become the standard in facial plastic surgery practices in the future. PMID:20511080

  9. Measurements and analysis in imaging for biomedical applications

    NASA Astrophysics Data System (ADS)

    Hoeller, Timothy L.

    2009-02-01

    A Total Quality Management (TQM) approach can be used to analyze data from biomedical optical and imaging platforms of tissues. A shift from individuals to teams, partnerships, and total participation are necessary from health care groups for improved prognostics using measurement analysis. Proprietary measurement analysis software is available for calibrated, pixel-to-pixel measurements of angles and distances in digital images. Feature size, count, and color are determinable on an absolute and comparative basis. Although changes in images of histomics are based on complex and numerous factors, the variation of changes in imaging analysis to correlations of time, extent, and progression of illness can be derived. Statistical methods are preferred. Applications of the proprietary measurement software are available for any imaging platform. Quantification of results provides improved categorization of illness towards better health. As health care practitioners try to use quantified measurement data for patient diagnosis, the techniques reported can be used to track and isolate causes better. Comparisons, norms, and trends are available from processing of measurement data which is obtained easily and quickly from Scientific Software and methods. Example results for the class actions of Preventative and Corrective Care in Ophthalmology and Dermatology, respectively, are provided. Improved and quantified diagnosis can lead to better health and lower costs associated with health care. Systems support improvements towards Lean and Six Sigma affecting all branches of biology and medicine. As an example for use of statistics, the major types of variation involving a study of Bone Mineral Density (BMD) are examined. Typically, special causes in medicine relate to illness and activities; whereas, common causes are known to be associated with gender, race, size, and genetic make-up. 
Such a strategy of Continuous Process Improvement (CPI) involves comparison of patient results

  10. Motion Analysis From Television Images

    NASA Astrophysics Data System (ADS)

    Silberberg, George G.; Keller, Patrick N.

    1982-02-01

    The Department of Defense ranges have relied on photographic instrumentation for gathering data of firings for all types of ordnance. A large inventory of cameras is available on the market that can be used for these tasks. A new set of optical instrumentation is beginning to appear which, in many cases, can directly replace photographic cameras for a great deal of the work being performed now. These are television cameras modified so they can stop motion, see in the dark, perform under hostile environments, and provide real time information. This paper discusses techniques for modifying television cameras so they can be used for motion analysis.

  11. Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.

    1991-01-01

    Digital image processing techniques are typically used to produce improved digital images through the application of successive enhancement techniques to a given image or to generate quantitative data about the objects within that image. In support of and to assist researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in three main phases of the application and are categorized as: image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.

  12. Image-Guided Radiation Therapy: the potential for imaging science research to improve cancer treatment outcomes

    NASA Astrophysics Data System (ADS)

    Williamson, Jeffrey

    2008-03-01

    The role of medical imaging in the planning and delivery of radiation therapy (RT) is rapidly expanding. This is being driven by two developments: Image-guided radiation therapy (IGRT) and biological image-based planning (BIBP). IGRT is the systematic use of serial treatment-position imaging to improve geometric targeting accuracy and/or to refine target definition. The enabling technology is the integration of high-performance three-dimensional (3D) imaging systems, e.g., onboard kilovoltage x-ray cone-beam CT, into RT delivery systems. IGRT seeks to adapt the patient's treatment to weekly, daily, or even real-time changes in organ position and shape. BIBP uses non-anatomic imaging (PET, MR spectroscopy, functional MR, etc.) to visualize abnormal tissue biology (angiogenesis, proliferation, metabolism, etc.) leading to more accurate clinical target volume (CTV) delineation and more accurate targeting of high doses to tissue with the highest tumor cell burden. In both cases, the goal is to reduce both systematic and random tissue localization errors (2-5 mm for conventional RT) so that the planning target volume (PTV) margins (varying from 8 to 20 mm in conventional RT) used to ensure target volume coverage in the presence of geometric error can be substantially reduced. Reduced PTV expansion allows more conformal treatment of the target volume, increased avoidance of normal tissue and the potential for safe delivery of more aggressive dose regimens. This presentation will focus on the imaging science challenges posed by IGRT and BIBP. These issues include: Development of robust and accurate nonrigid image-registration (NIR) tools: Extracting locally nonlinear mappings that relate, voxel-by-voxel, one 3D anatomic representation of the patient to differently deformed anatomies acquired at different time points is essential if IGRT is to move beyond simple translational treatment plan adaptations. 
NIR is needed to map segmented and labeled anatomy from the

  13. Improving image contrast and material discrimination with nonlinear response in bimodal atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Forchheimer, Daniel; Forchheimer, Robert; Haviland, David B.

    2015-02-01

    Atomic force microscopy has recently been extended to bimodal operation, where increased image contrast is achieved through excitation and measurement of two cantilever eigenmodes. This enhanced material contrast is advantageous in analysis of complex heterogeneous materials with phase separation on the micro- or nanometre scale. Here we show that much greater image contrast results from analysis of nonlinear response to the bimodal drive, at harmonics and mixing frequencies. The amplitude and phase of up to 17 frequencies are simultaneously measured in a single scan. Using a machine-learning algorithm we demonstrate an almost threefold improvement in the ability to separate material components of a polymer blend when including this nonlinear response. Beyond the statistical analysis performed here, analysis of nonlinear response could be used to obtain quantitative material properties at high speeds and with enhanced resolution.
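Why the extra harmonic and mixing-frequency channels help can be illustrated with synthetic data: two materials that overlap heavily in a single linear channel become nearly separable once a second, "nonlinear" channel is added. All numbers below are synthetic assumptions, not measured AFM data, and the nearest-class-mean classifier stands in for the paper's machine-learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-pixel features for two materials: channel 0 mimics a linear
# eigenmode response (heavily overlapping between materials), channel 1
# a mixing-frequency amplitude (well separated).
n = 500
mat_a = np.column_stack([rng.normal(0.0, 1.0, n), rng.normal(0.0, 0.3, n)])
mat_b = np.column_stack([rng.normal(0.3, 1.0, n), rng.normal(1.5, 0.3, n)])

def accuracy(features_a, features_b):
    """Nearest-class-mean classification accuracy on the pooled data."""
    mu_a, mu_b = features_a.mean(0), features_b.mean(0)
    def is_a(x):
        return np.linalg.norm(x - mu_a, axis=1) < np.linalg.norm(x - mu_b, axis=1)
    correct = is_a(features_a).sum() + (~is_a(features_b)).sum()
    return correct / (2 * n)

acc_1ch = accuracy(mat_a[:, :1], mat_b[:, :1])   # linear response only
acc_2ch = accuracy(mat_a, mat_b)                 # plus nonlinear channel
```

With these synthetic distributions the one-channel accuracy hovers near chance while the two-channel accuracy is nearly perfect, mirroring the improved material separation the abstract reports when nonlinear response is included.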

  14. Endoscopic image analysis in semantic space.

    PubMed

    Kwitt, R; Vasconcelos, N; Rasiwasia, N; Uhl, A; Davis, B; Häfner, M; Wrba, F

    2012-10-01

    A novel approach to the design of a semantic, low-dimensional, encoding for endoscopic imagery is proposed. This encoding is based on recent advances in scene recognition, where semantic modeling of image content has gained considerable attention over the last decade. While the semantics of scenes are mainly comprised of environmental concepts such as vegetation, mountains or sky, the semantics of endoscopic imagery are medically relevant visual elements, such as polyps, special surface patterns, or vascular structures. The proposed semantic encoding differs from the representations commonly used in endoscopic image analysis (for medical decision support) in that it establishes a semantic space, where each coordinate axis has a clear human interpretation. It is also shown to establish a connection to Riemannian geometry, which enables principled solutions to a number of problems that arise in both physician training and clinical practice. This connection is exploited by leveraging results from information geometry to solve problems such as (1) recognition of important semantic concepts, (2) semantically-focused image browsing, and (3) estimation of the average-case semantic encoding for a collection of images that share a medically relevant visual detail. The approach can provide physicians with an easily interpretable, semantic encoding of visual content, upon which further decisions, or operations, can be naturally carried out. This is contrary to the prevalent practice in endoscopic image analysis for medical decision support, where image content is primarily captured by discriminative, high-dimensional, appearance features, which possess discriminative power but lack human interpretability. PMID:22717411

  15. Endoscopic Image Analysis in Semantic Space

    PubMed Central

    Kwitt, R.; Vasconcelos, N.; Rasiwasia, N.; Uhl, A.; Davis, B.; Häfner, M.; Wrba, F.

    2013-01-01

    A novel approach to the design of a semantic, low-dimensional, encoding for endoscopic imagery is proposed. This encoding is based on recent advances in scene recognition, where semantic modeling of image content has gained considerable attention over the last decade. While the semantics of scenes are mainly comprised of environmental concepts such as vegetation, mountains or sky, the semantics of endoscopic imagery are medically relevant visual elements, such as polyps, special surface patterns, or vascular structures. The proposed semantic encoding differs from the representations commonly used in endoscopic image analysis (for medical decision support) in that it establishes a semantic space, where each coordinate axis has a clear human interpretation. It is also shown to establish a connection to Riemannian geometry, which enables principled solutions to a number of problems that arise in both physician training and clinical practice. This connection is exploited by leveraging results from information geometry to solve problems such as 1) recognition of important semantic concepts, 2) semantically-focused image browsing, and 3) estimation of the average-case semantic encoding for a collection of images that share a medically relevant visual detail. The approach can provide physicians with an easily interpretable, semantic encoding of visual content, upon which further decisions, or operations, can be naturally carried out. This is contrary to the prevalent practice in endoscopic image analysis for medical decision support, where image content is primarily captured by discriminative, high-dimensional, appearance features, which possess discriminative power but lack human interpretability. PMID:22717411

  16. Improving cross-modal face recognition using polarimetric imaging.

    PubMed

    Short, Nathaniel; Hu, Shuowen; Gurram, Prudhvi; Gurton, Kristan; Chan, Alex

    2015-03-15

    We investigate the performance of polarimetric imaging in the long-wave infrared (LWIR) spectrum for cross-modal face recognition. For this work, polarimetric imagery is generated as stacks of three components: the conventional thermal intensity image (referred to as S0) and the two Stokes images, S1 and S2, which contain combinations of different polarizations. The proposed face recognition algorithm extracts and combines local gradient magnitude and orientation information from S0, S1, and S2 to generate a robust feature set that is well suited for cross-modal face recognition. Initial results show that polarimetric LWIR-to-visible face recognition achieves an 18% increase in Rank-1 identification rate compared to conventional LWIR-to-visible face recognition. We conclude that a substantial improvement in automatic face recognition performance can be achieved by exploiting the polarization state of radiance, as compared to using conventional thermal imagery. PMID:25768137
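
    The gradient magnitude/orientation extraction from the three Stokes components can be sketched as follows. This is a simplified HOG-style descriptor standing in for the authors' actual feature set (the histogram binning and normalization are choices of this sketch, not from the paper):

```python
import numpy as np

def gradient_mag_ori(img):
    # Central-difference gradients; per-pixel magnitude and orientation.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def polarimetric_features(s0, s1, s2, n_bins=8):
    # Concatenate magnitude-weighted orientation histograms from each
    # Stokes component (S0, S1, S2) into one feature vector.
    feats = []
    for comp in (s0, s1, s2):
        mag, ori = gradient_mag_ori(comp)
        hist, _ = np.histogram(ori, bins=n_bins, range=(-np.pi, np.pi), weights=mag)
        feats.append(hist / (hist.sum() + 1e-12))
    return np.concatenate(feats)

rng = np.random.default_rng(1)
s0, s1, s2 = (rng.random((64, 64)) for _ in range(3))
f = polarimetric_features(s0, s1, s2)
```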

  17. Creating the "desired mindset": Philip Morris's efforts to improve its corporate image among women.

    PubMed

    McDaniel, Patricia A; Malone, Ruth E

    2009-01-01

    Through analysis of tobacco company documents, we explored how and why Philip Morris sought to enhance its corporate image among American women. Philip Morris regarded women as an influential political group. To improve its image among women, while keeping tobacco off their organizational agendas, the company sponsored women's groups and programs. It also sought to appeal to women it defined as "active moms" by advertising its commitment to domestic violence victims. It was more successful in securing women's organizations as allies than active moms. Increasing tobacco's visibility as a global women's health issue may require addressing industry influence. PMID:19851947

  18. Note: An improved 3D imaging system for electron-electron coincidence measurements

    SciTech Connect

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-15

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast-frame complementary metal-oxide-semiconductor (CMOS) camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead time (<0.7 ns) and dead space (<1 mm) when detecting two electrons. True zero-dead-time detection is also demonstrated.

  19. Improved image placement performance of HL-7000M

    NASA Astrophysics Data System (ADS)

    Tanaka, Masaomi; Ito, Hiroyuki; Takahashi, Hiroyuki; Oonuki, Kazuyoshi; Kadowaki, Yasuhiro; Sato, Hidetoshi; Kawano, Hajime; Wang, Zhigang; Mizuno, Kazui; Matsuoka, Genya

    2003-12-01

    The HL-7000M electron beam (EB) lithography system has been developed as a leading-edge mask writer for 90 nm node production and 65 nm node development. It is capable of handling large-volume data files such as full Optical Proximity Correction (OPC) patterns and angled patterns for System on Chip (SoC). Aiming at the technological requirements of the International Technology Roadmap for Semiconductors (ITRS) 2002 Update, a newly designed electron optics column generating a vector-scan variable shaped beam and a digital disposition system with storage area network technology have been implemented in the HL-7000M. This new high-resolution column and other mechanical components have restrained the beam drift and fluctuation factors. Improved octapole electrostatic deflectors with new dynamic focus correction and gain alignment methods have been built into the objective lens system of the column. These enhanced features are worth mentioning because of the HL-7000M's Image Placement (IP) performance: its 3σ accuracy for a 14 x 14 global grid matching result over an area of 135 mm x 135 mm, measured by a Leica LMS IPRO, is X: 6.09 nm and Y: 7.85 nm. In addition, shot astigmatism correction is under development and testing and is expected to improve the local image placement accuracy dramatically.

  20. Multispectral Imager With Improved Filter Wheel and Optics

    NASA Technical Reports Server (NTRS)

    Bremer, James C.

    2007-01-01

    Figure 1 schematically depicts an improved multispectral imaging system of the type that utilizes a filter wheel that contains multiple discrete narrow-band-pass filters and that is rotated at a constant high speed to acquire images in rapid succession in the corresponding spectral bands. The improvement, relative to prior systems of this type, consists of the measures taken to prevent the exposure of a focal-plane array (FPA) of photodetectors to light in more than one spectral band at any given time and to prevent exposure of the array to any light during readout. In prior systems, these measures have included, variously, the use of mechanical shutters or the incorporation of wide opaque sectors (equivalent to mechanical shutters) into filter wheels. These measures introduce substantial dead times into each operating cycle: intervals during which image information cannot be collected and thus incoming light is wasted. In contrast, the present improved design does not involve shutters or wide opaque sectors, and it reduces dead times substantially. The improved multispectral imaging system is preceded by an afocal telescope and includes a filter wheel positioned so that its rotation brings each filter, in its turn, into the exit pupil of the telescope. The filter wheel contains an even number of narrow-band-pass filters separated by narrow, spoke-like opaque sectors. The geometric width of each filter exceeds the cross-sectional width of the light beam coming out of the telescope. The light transmitted by the sequence of narrow-band filters is incident on a dichroic beam splitter that reflects a broad shorter-wavelength spectral band containing half of the narrow bands and transmits a broad longer-wavelength spectral band containing the other half of the narrow spectral bands. 
The filters are arranged on the wheel so that if the pass band of a given filter is in the reflection band of the dichroic beam splitter, then the pass band of the adjacent filter

  1. Analysis of an interferometric Stokes imaging polarimeter

    NASA Astrophysics Data System (ADS)

    Murali, Sukumar

    Estimation of Stokes vector components from an interferometric fringe-encoded image is a novel way of measuring the State Of Polarization (SOP) distribution across a scene. Imaging polarimeters employing interferometric techniques encode SOP information across a scene in a single image in the form of intensity fringes. The lack of moving parts and the use of a single image eliminate the problems of conventional polarimetry - vibration, spurious signal generation due to artifacts, beam wander, and the need for registration routines. However, interferometric polarimeters are limited by narrow-bandpass and short-exposure-time operation, which decreases the Signal to Noise Ratio (SNR), defined as the ratio of the mean photon count to the standard deviation in the detected image. A simulation environment for designing an Interferometric Stokes Imaging polarimeter (ISIP) and a detector with noise effects is created and presented. Users of this environment are capable of imaging an object with a defined SOP through an ISIP onto a detector producing a digitized image output. The simulation also includes bandpass imaging capabilities, control of detector noise, and object brightness levels. The Stokes images are estimated from a fringe-encoded image of a scene by means of a reconstructor algorithm. A spatial-domain methodology involving the idea of a unit-cell-and-slide approach is applied to the reconstructor model developed using Mueller calculus. The validation of this methodology and its effectiveness compared to a discrete approach is demonstrated with suitable examples. The pixel size required to sample the fringes and the minimum unit cell size required for reconstruction are investigated using condition numbers. The importance of the PSF of the fore-optics (telescope) used in imaging the object is investigated and analyzed using a point-source imaging example, and a Nyquist criterion is presented. Reconstruction of fringe-modulated images in the presence of noise involves choosing an

  2. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality, as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis, since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. Experiments on AVIRIS data have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.
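
    The linear-mixture view of ICA unmixing can be sketched with a toy example. Note this is standard symmetric FastICA (whitening plus a tanh fixed-point iteration), not the authors' non-orthogonal algorithm; the mixing matrix, source count, and band count are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic linear mixing: 3 non-Gaussian abundance sources mixed into
# 5 "spectral bands" (a toy stand-in for hyperspectral pixels).
n = 2000
S = rng.laplace(size=(3, n))          # independent, non-Gaussian sources
A = rng.random((5, 3))                # mixing matrix (endmember signatures)
X = A @ S

# Whiten the observations, keeping the 3 principal directions.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
K = E[:, -3:] / np.sqrt(d[-3:])
Z = K.T @ X                           # whitened data, 3 x n

# Symmetric FastICA with tanh nonlinearity.
W = rng.normal(size=(3, 3))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = G @ Z.T / n - np.diag((1 - G**2).mean(axis=1)) @ W
    u, s, vt = np.linalg.svd(W_new)   # symmetric decorrelation
    W = u @ vt
S_est = W @ Z

# Recovery check: each estimated component should correlate strongly
# with exactly one true source (up to sign and permutation).
def standardize(M):
    M = M - M.mean(axis=1, keepdims=True)
    return M / M.std(axis=1, keepdims=True)

C = np.abs(standardize(S_est) @ standardize(S).T / n)
```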

  3. Improving depth resolution in digital breast tomosynthesis by iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Roth, Erin G.; Kraemer, David N.; Sidky, Emil Y.; Reiser, Ingrid S.; Pan, Xiaochuan

    2015-03-01

    Digital breast tomosynthesis (DBT) is currently enjoying tremendous growth in its application to screening for breast cancer. This is because it addresses a major weakness of mammographic projection imaging: namely, a cancer can be hidden by overlapping fibroglandular tissue structures, or the same normal structures can mimic a malignant mass. DBT addresses these issues by acquiring a few projections over a limited-angle scanning arc, which provides some depth resolution. As DBT is a relatively new device, there is potential to improve its performance significantly with improved image reconstruction algorithms. Previously, we reported a variation of adaptive steepest descent - projection onto convex sets (ASD-POCS) for DBT, which employed a finite-differencing filter to enhance edges, improving the visibility of tissue structures and allowing for volume-of-interest reconstruction. In the present work we present a singular value decomposition (SVD) analysis to demonstrate the gain in depth resolution for DBT afforded by use of the finite-differencing filter.

  4. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but spatially-low-pass, spectrally-high-pass subbands are also further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
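
    The mean-subtraction idea and the first stage of a 3D decomposition can be sketched as follows. A Haar split stands in here for whatever wavelet the compressor actually uses, and the cube dimensions are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)
cube = rng.random((8, 16, 16)) + np.arange(8)[:, None, None]  # (band, row, col)

# Mean subtraction: remove each spectral band's mean before the transform;
# the means travel in the compressed stream (a few bits per band) and are
# added back at the appropriate decompression step.
means = cube.mean(axis=(1, 2))
residual = cube - means[:, None, None]

def haar1d(a, axis):
    # One-level Haar split along `axis` into low-pass and high-pass halves.
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / 2.0
    hi = (a[0::2] - a[1::2]) / 2.0
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

# First stage of a 3D Mallat decomposition: split spectrally, then spatially.
lo_s, hi_s = haar1d(residual, 0)        # spectral low/high
lll, _ = haar1d(haar1d(lo_s, 1)[0], 2)  # spatially-low-pass subband of lo_s

# Decoder side: add the stored band means back after the inverse transform.
restored = residual + means[:, None, None]
```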

  5. Curvelet Based Offline Analysis of SEM Images

    PubMed Central

    Shirazi, Syed Hamad; Haq, Nuhman ul; Hayat, Khizar; Naz, Saeeda; Haque, Ihsan ul

    2014-01-01

    Manual offline analysis of a scanning electron microscopy (SEM) image is a time-consuming process and requires continuous human intervention and effort. This paper presents an image-processing-based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step, aimed at noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state-of-the-art curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm, for fractal dimension (FD) calculations, with the ultimate goal of measuring parameters such as surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known watershed segmentation algorithm. PMID:25089617
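
    The box-counting step mentioned above is a standard algorithm: count occupied boxes at several scales and fit the slope of log N(s) against log(1/s). A minimal sketch on a binary mask (the box sizes and test shape are illustrative choices, not from the paper):

```python
import numpy as np

def box_count(mask, size):
    # Count boxes of side `size` containing at least one foreground pixel.
    h, w = mask.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if mask[i:i + size, j:j + size].any():
                count += 1
    return count

def fractal_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    counts = [box_count(mask, s) for s in sizes]
    # Slope of log N(s) vs log(1/s) estimates the fractal dimension.
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# Sanity check: a filled square should have dimension close to 2.
mask = np.zeros((64, 64), dtype=bool)
mask[8:56, 8:56] = True
fd = fractal_dimension(mask)
```

The perimeter estimate described in the abstract would use the same counting restricted to boundary boxes of the filled shape.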

  6. Imaging spectroscopic analysis at the Advanced Light Source

    SciTech Connect

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-05-12

    One of the major advances at the high-brightness third-generation synchrotrons is the dramatic improvement of imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV and infrared spectroscopic analysis on a spatial scale from a few microns to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS. The following table lists all the currently available microscopes at the ALS. This article will describe some of the microscopes and some of the early applications.

  7. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. PMID:20713305

  8. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary-media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent works applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constructive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and creating illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.

  9. Photoacoustic Image Analysis for Cancer Detection and Building a Novel Ultrasound Imaging System

    NASA Astrophysics Data System (ADS)

    Sinha, Saugata

    Photoacoustic (PA) imaging is a rapidly emerging non-invasive soft-tissue imaging modality which has the potential to detect tissue abnormality at an early stage. Photoacoustic images map the spatially varying optical absorption property of tissue. In multiwavelength photoacoustic imaging, the soft tissue is imaged with different wavelengths, tuned to the absorption peaks of the specific light-absorbing tissue constituents or chromophores, to obtain images with different contrasts of the same tissue sample. From those images, the spatially varying concentrations of the chromophores can be recovered. As multiwavelength PA images can provide important physiological information related to the function and molecular composition of the tissue, they can be used for diagnosis of cancer lesions and differentiation of malignant tumors from benign tumors. In this research, a number of parameters have been extracted from multiwavelength 3D PA images of freshly excised human prostate and thyroid specimens, imaged at five different wavelengths. Using marked histology slides as ground truths, regions of interest (ROIs) corresponding to cancer, benign and normal regions have been identified in the PA images. The extracted parameters belong to different categories, namely chromophore concentration, frequency parameters and PA image pixels, and they represent different physiological and optical properties of the tissue specimens. Statistical analysis has been performed to test whether the extracted parameters are significantly different between cancer, benign and normal regions. A multidimensional [29-dimensional] feature set, built with the extracted parameters from the 3D PA images, has been divided randomly into training and testing sets. The training set has been used to train support vector machine (SVM) and neural network (NN) classifiers, while the performance of the classifiers in differentiating different tissue pathologies has been determined by the testing dataset. Using the NN

  10. Methods of Improving a Digital Image Having White Zones

    NASA Technical Reports Server (NTRS)

    Wodell, Glenn A. (Inventor); Jobson, Daniel J. (Inventor); Rahman, Zia-Ur (Inventor)

    2005-01-01

    The present invention is a method of processing a digital image that is initially represented by digital data indexed to represent positions on a display. The digital data are indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value for each position in each i-th spectral band in accordance with Σ_{n=1}^{N} W_n (log I_i(x,y) - log[I_i(x,y) * F_n(x,y)]), i = 1, ..., S, where W_n is a weighting factor, "*" is the convolution operator, and S is the total number of unique spectral bands. For each n, the function F_n(x,y) is a unique surround function applied to each position (x,y), and N is the total number of unique surround functions. Each unique surround function is scaled to improve some aspect of the digital image, e.g., dynamic-range compression, color constancy, or lightness rendition. The adjusted intensity value for each position in each i-th spectral band of the image is then filtered with a filter function to generate a filtered intensity value R_i(x,y). To prevent graying of white zones in the image, the maximum of the original intensity value I_i(x,y) and the filtered intensity value R_i(x,y) is selected for display.
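
    The adjustment formula above is essentially a multi-scale retinex with Gaussian surround functions. A minimal single-band sketch follows; the surround sigmas, equal weights, and min-max rescaling before the white-zone max operation are illustrative choices of this sketch, not values taken from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr_with_white_preservation(img, sigmas=(15, 80, 250), weights=None, eps=1e-6):
    # Multi-scale retinex: for each scale n, log(I) - log(I * F_n), where F_n
    # is a Gaussian surround; the weighted sum over scales is the output.
    img = img.astype(float) + eps
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    r = np.zeros_like(img)
    for w, sigma in zip(weights, sigmas):
        surround = gaussian_filter(img, sigma)
        r += w * (np.log(img) - np.log(surround + eps))

    # White-zone preservation: keep the maximum of the original and filtered
    # values (both rescaled to [0, 1]) so bright regions are not grayed out.
    def rescale(x):
        return (x - x.min()) / (x.max() - x.min() + eps)

    return np.maximum(rescale(img), rescale(r))

rng = np.random.default_rng(4)
out = msr_with_white_preservation(rng.random((32, 32)))
```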

  11. Improved SOT (Hinode mission) high resolution solar imaging observations

    NASA Astrophysics Data System (ADS)

    Goodarzi, H.; Koutchmy, S.; Adjabshirizadeh, A.

    2015-08-01

    We consider the best available observations of the Sun free of turbulent Earth-atmospheric effects, taken with the Solar Optical Telescope (SOT) onboard the Hinode spacecraft. Both the instrumental smearing and the observed stray light are analyzed in order to improve the resolution. The Point Spread Function (PSF) corresponding to the blue-continuum Broadband Filter Imager (BFI) near 450 nm is deduced by analyzing (i) the limb of the Sun and (ii) images taken during the transit of the planet Venus in 2012. A combination of Gaussian and Lorentzian functions is selected to construct a PSF in order to remove both the smearing due to the instrumental diffraction effects (PSF core) and the large-angle stray light due to the spiders and central obscuration (wings of the PSF) that are responsible for the parasitic stray light. A maximum-likelihood deconvolution procedure based on an optimum number of iterations is discussed. It is applied to several solar field images, including the granulation near the limb. The normal non-magnetic granulation is compared to the abnormal granulation, which we call magnetic. A new feature appearing for the first time at the extreme limb of the disk (the last 100 km) is discussed in the context of the definition of the solar edge and of the solar diameter. A single sunspot is considered in order to illustrate how effectively the restoration works on the sunspot core. A set of 125 consecutive deconvolved images is assembled into a 45-min-long movie illustrating the complexity of the dynamical behavior inside and around the sunspot.
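
    The Gaussian-core/Lorentzian-wing PSF and an iterative maximum-likelihood restoration can be sketched as below. Richardson-Lucy is used here as a representative maximum-likelihood deconvolution; the PSF widths, core fraction, and iteration count are invented for the illustration, not the paper's fitted values:

```python
import numpy as np

def gauss_lorentz_psf(size=31, sigma=1.5, gamma=4.0, core_frac=0.85):
    # PSF core: Gaussian (instrumental diffraction); wings: Lorentzian
    # (large-angle stray light from the spiders and central obscuration).
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = (x**2 + y**2).astype(float)
    psf = core_frac * np.exp(-r2 / (2 * sigma**2)) + (1 - core_frac) / (1 + r2 / gamma**2)
    return psf / psf.sum()

def fft_convolve(img, psf):
    # Circular convolution with the PSF centred at the origin.
    pad = np.zeros_like(img)
    s = psf.shape[0]
    pad[:s, :s] = psf
    pad = np.roll(pad, (-(s // 2), -(s // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def richardson_lucy(blurred, psf, n_iter=40):
    # Maximum-likelihood (Poisson) deconvolution; this PSF is symmetric,
    # so the mirrored PSF equals the PSF itself.
    est = blurred.copy()
    for _ in range(n_iter):
        ratio = blurred / (fft_convolve(est, psf) + 1e-12)
        est = est * fft_convolve(ratio, psf)
    return est

# Toy "granulation" scene: blur with the PSF, then restore.
truth = np.zeros((64, 64))
truth[20:28, 20:28] = 1.0
truth[40:44, 36:40] = 0.6
psf = gauss_lorentz_psf()
blurred = np.maximum(fft_convolve(truth, psf), 0)
restored = richardson_lucy(blurred, psf)
```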

  12. Measuring toothbrush interproximal penetration using image analysis

    NASA Astrophysics Data System (ADS)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.
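
    The area measurement described above reduces to counting stain pixels inside the interproximal masks before and after brushing. A minimal sketch with a hypothetical intensity threshold and a synthetic mask (both invented for the example):

```python
import numpy as np

def stain_removed_fraction(before, after, mask, thresh=0.5):
    # A pixel counts as stained when its stain intensity exceeds `thresh`;
    # the removed area is stained-before minus stained-after, restricted to
    # the interproximal mask region.
    stained_before = (before > thresh) & mask
    stained_after = (after > thresh) & mask
    removed = stained_before & ~stained_after
    return removed.sum() / max(stained_before.sum(), 1)

# Synthetic check: the brush clears the left half of the masked region.
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
before = np.where(mask, 1.0, 0.0)
after = before.copy()
after[8:24, 8:16] = 0.0
frac = stain_removed_fraction(before, after, mask)
```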

  13. Improving sensitivity in nonlinear Raman microspectroscopy imaging and sensing

    PubMed Central

    Arora, Rajan; Petrov, Georgi I.; Liu, Jian; Yakovlev, Vladislav V.

    2011-01-01

    Nonlinear Raman microspectroscopy based on broadband coherent anti-Stokes Raman scattering is an emerging technique for noninvasive, chemically specific, microscopic analysis of tissues and large populations of cells and particles. The sensitivity of this imaging is a critical aspect of a number of the proposed biomedical applications. It is shown that the incident laser power is the major parameter controlling this sensitivity. By carefully optimizing the laser system, acquisition of high-quality vibrational spectra at multi-kHz rates becomes feasible. PMID:21361677

  14. Morphometry of spermatozoa using semiautomatic image analysis.

    PubMed Central

    Jagoe, J R; Washbrook, N P; Hudson, E A

    1986-01-01

    Human sperm heads were detected and tracked using semiautomatic image analysis. Measurements of size and shape on two specimens from each of 26 men showed that the major component of variability both within and between subjects was the number of small elongated sperm heads. Variability of the computed features between subjects was greater than that between samples from the same subject. PMID:3805320

  15. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  16. Expert system for imaging spectrometer analysis results

    NASA Technical Reports Server (NTRS)

    Borchardt, Gary C.

    1985-01-01

    Information on an expert system for imaging spectrometer analysis results is outlined. Implementation requirements, the Simple Tool for Automated Reasoning (STAR) program that provides a software environment for the development and operation of rule-based expert systems, STAR data structures, and rule-based identification of surface materials are among the topics outlined.

  17. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory

    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  18. Improved land cover mapping using aerial photographs and satellite images

    NASA Astrophysics Data System (ADS)

    Varga, Katalin; Szabó, Szilárd; Szabó, Gergely; Dévai, György; Tóthmérész, Béla

    2014-10-01

    Manual land cover mapping using aerial photographs provides a sufficient level of resolution for detailed vegetation or land cover maps. However, in some cases it is not possible to achieve the desired information over large areas, for example from historical data, where the quality and amount of available images is definitely lower than for modern data. The use of automated and semiautomated methods offers the means to identify the vegetation cover using remotely sensed data. In this paper, automated methods were tested on aerial photographs and satellite images to extract better and more reliable information about vegetation cover. These tests were performed by using automated analysis of LANDSAT7 images (with and without the surface model of the Shuttle Radar Topography Mission (SRTM)) and two temporally similar aerial photographs. The spectral bands were analyzed with supervised (maximum likelihood) methods. In conclusion, the SRTM and the combination of two temporally similar aerial photographs from earlier years were useful in separating the vegetation cover on a floodplain area. In addition, the different dates in the vegetation season also gave reliable information about the land cover. High-quality information about old and present vegetation over a large area is an essential prerequisite for ensuring the conservation of ecosystems.
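
    The supervised maximum-likelihood classification mentioned above fits a Gaussian (mean vector and covariance over the spectral bands) per class and assigns each pixel to the class with the highest log-likelihood. A minimal sketch on synthetic band data (class means, band count, and noise levels are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(5)

def fit_gaussians(X, y):
    # Per-class mean vector and covariance matrix over the spectral bands.
    return {k: (X[y == k].mean(axis=0), np.cov(X[y == k], rowvar=False))
            for k in np.unique(y)}

def ml_classify(X, params):
    # Assign each pixel to the class with the highest Gaussian log-likelihood.
    classes = sorted(params)
    scores = []
    for k in classes:
        mu, cov = params[k]
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        d = X - mu
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet))
    return np.asarray(classes)[np.argmax(scores, axis=0)]

# Two synthetic cover classes observed in three spectral bands.
X0 = rng.normal([0.2, 0.4, 0.1], 0.1, size=(300, 3))
X1 = rng.normal([0.6, 0.3, 0.5], 0.1, size=(300, 3))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 300)
pred = ml_classify(X, fit_gaussians(X, y))
acc = (pred == y).mean()
```

In practice the training statistics would come from labeled polygons on the LANDSAT bands (optionally augmented with the SRTM surface model as an extra feature).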

  19. Good relationships between computational image analysis and radiological physics

    SciTech Connect

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-30

    Good relationships between computational image analysis and radiological physics have been constructed for increasing the accuracy of medical diagnostic imaging and radiation therapy in radiological physics. Computational image analysis has been established based on applied mathematics, physics, and engineering. This review paper will introduce how computational image analysis is useful in radiation therapy with respect to radiological physics.

  20. Good relationships between computational image analysis and radiological physics

    NASA Astrophysics Data System (ADS)

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-01

    Good relationships between computational image analysis and radiological physics have been constructed for increasing the accuracy of medical diagnostic imaging and radiation therapy in radiological physics. Computational image analysis has been established based on applied mathematics, physics, and engineering. This review paper will introduce how computational image analysis is useful in radiation therapy with respect to radiological physics.

  1. Novel driver method to improve ordinary CCD frame rate for high-speed imaging diagnosis

    NASA Astrophysics Data System (ADS)

    Luo, Tong-Ding; Li, Bin-Kang; Yang, Shao-Hua; Guo, Ming-An; Yan, Ming

    2016-06-01

    The use of ordinary charge-coupled device (CCD) imagers for the analysis of fast physical phenomena is restricted by the low-speed performance resulting from their long output times. Although the intensified CCD (ICCD), which couples a CCD with a gated image intensifier, has extended their use to high-speed imaging, the deficiency remains that an ICCD can record only one image in a single shot. This paper presents a novel driver method designed to significantly improve the burst frame rate of an ordinary interline CCD for high-speed photography. The method uses the vertical registers as storage, so that a small number of additional frames, comprised of reduced-spatial-resolution images obtained via a specific sampling operation, can be buffered. Hence, the interval time of the recorded series of images depends only on the exposure and vertical transfer times, and the burst frame rate can thus be increased significantly. A prototype camera based on this method was designed as part of this study, exhibiting a burst rate of up to 250,000 frames per second (fps) and a capacity to record three continuous images. This represents a speed enhancement of approximately 16,000 times over conventional operation, with the spatial resolution reduced to only 1/4.

  2. Frequency domain analysis of knock images

    NASA Astrophysics Data System (ADS)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark triggered flame speed, the time when end gas auto-ignition occurs and the end gas flame speed after auto-ignition. This study presents a frequency domain analysis on the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying fast Fourier transform (FFT) to three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
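A minimal sketch of the frequency-domain step described above, in Python. The array shapes and the synthetic 10 kHz oscillation are assumptions for illustration, and the paper's spatial high-pass filtering of individual frames is simplified here to a temporal analysis of the per-frame mean luminosity of one colour component:

```python
import numpy as np

def luminosity_spectrum(frames, fs, cutoff_hz=1000.0):
    """FFT amplitude spectrum of the mean luminosity of an image stack.

    frames: (T, H, W) array holding one colour component over time.
    fs: camera frame rate in Hz.
    cutoff_hz: high-pass cutoff removing the slow flame-luminosity drift.
    """
    lum = frames.reshape(frames.shape[0], -1).mean(axis=1)  # per-frame mean
    spec = np.fft.rfft(lum - lum.mean())
    freqs = np.fft.rfftfreq(lum.size, d=1.0 / fs)
    spec[freqs < cutoff_hz] = 0.0  # crude high-pass filter
    return freqs, np.abs(spec)

# Synthetic check: a 10 kHz luminosity oscillation filmed at 100 kfps
fs = 100_000.0
t = np.arange(512) / fs
osc = 1.0 + 0.3 * np.sin(2 * np.pi * 10_000.0 * t)
frames = osc[:, None, None] * np.ones((1, 8, 8))
freqs, amp = luminosity_spectrum(frames, fs)
```

The peak of `amp` then sits at the resonant frequency of the oscillation, which is the quantity the paper compares against the pressure spectrum.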

  3. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  4. Improved characterization of site contamination through time-lapse complex resistivity imaging

    NASA Astrophysics Data System (ADS)

    Flores Orozco, Adrian; Kemna, Andreas; Cassiani, Giorgio; Binley, Andrew

    2016-04-01

    In the framework of the EU FP7 project ModelPROBE, time-lapse complex resistivity (CR) measurements were conducted at a test site close to Trecate (NW Italy). The objective was to investigate the capability of the CR imaging method to delineate the geometry and dynamics of a subsurface hydrocarbon contaminant plume which resulted from a crude oil spill in 1994. To achieve this it is necessary to discriminate the electrical signal associated with static features (i.e., lithology) from that associated with dynamic changes in the subsurface, the latter driven by significant seasonal groundwater fluctuations. Previous studies have demonstrated the benefits of the CR method for gaining information that is not accessible with common electrical resistivity tomography. However, field applications are still rare, and neither the analysis of data error for CR time-lapse measurements nor the inversion itself has received enough attention. While the ultimate objective at the site is to characterize the contaminant plume, here we address the discrimination of the lithological and hydrological controls on the IP response by considering data collected in an uncontaminated area of the site. In this study we demonstrate that an adequate error description of CR measurements provides images that are free of artifacts and quantitatively superior to those obtained with previous approaches. Based on this approach, differential images computed for time-lapse data exhibited anomalies well correlated with seasonal fluctuations in the groundwater level. The proposed analysis may be useful in characterizing the fate and transport of the hydrocarbon contaminants relevant for the site, which includes areas contaminated with crude oil.

  5. Multiscale likelihood analysis and image reconstruction

    NASA Astrophysics Data System (ADS)

    Willett, Rebecca M.; Nowak, Robert D.

    2003-11-01

    The nonparametric multiscale polynomial and platelet methods presented here are powerful new tools for signal and image denoising and reconstruction. Unlike traditional wavelet-based multiscale methods, these methods are both well suited to processing Poisson or multinomial data and capable of preserving image edges. At the heart of these new methods lie multiscale signal decompositions based on polynomials in one dimension and multiscale image decompositions based on what the authors call platelets in two dimensions. Platelets are localized functions at various positions, scales and orientations that can produce highly accurate, piecewise linear approximations to images consisting of smooth regions separated by smooth boundaries. Polynomial and platelet-based maximum penalized likelihood methods for signal and image analysis are both tractable and computationally efficient. Polynomial methods offer near minimax convergence rates for broad classes of functions including Besov spaces. Upper bounds on the estimation error are derived using an information-theoretic risk bound based on squared Hellinger loss. Simulations establish the practical effectiveness of these methods in applications such as density estimation, medical imaging, and astronomy.

  6. Three-dimensional visualization of objects in scattering medium using integral imaging and spectral analysis

    NASA Astrophysics Data System (ADS)

    Lee, Yeonkyung; Yoo, Hoon

    2016-02-01

    This paper presents a three-dimensional visualization method for 3D objects in a scattering medium. The proposed method employs integral imaging and spectral analysis to improve the visual quality of 3D images. Images observed from 3D objects in a scattering medium such as turbid water suffer from degradation due to scattering, mainly because the observed image signal is very weak compared with the scattering signal. Common image enhancement techniques, including histogram equalization and contrast enhancement, do not overcome this problem. Thus, integral imaging, which makes it possible to integrate the weak signals from multiple images, was adopted to improve image quality. In this paper, we apply spectral analysis to an integral imaging system such as computational integral imaging reconstruction, and introduce a signal model with a visibility parameter to analyze the scattering signal. The proposed method based on spectral analysis efficiently estimates the original signal and is applied to the elemental images. The visibility-enhanced elemental images are then used to reconstruct 3D images using a computational integral imaging reconstruction algorithm. To evaluate the proposed method, we performed optical experiments on 3D objects in turbid water. The experimental results indicate that the proposed method outperforms existing methods.
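The abstract's "signal model with a visibility parameter" is not given explicitly; a common form for such models is a linear mix of scene signal and ambient scattering, which can be inverted once the parameters are estimated. The sketch below uses that assumed form (the model shape, `visibility` and `ambient` are illustrative stand-ins, not the paper's actual equations):

```python
import numpy as np

def restore_visibility(observed, visibility, ambient):
    """Invert the assumed scattering model I = v*J + (1 - v)*A for scene J.

    observed: degraded elemental image I.
    visibility: scalar v in (0, 1]; small v means heavy scattering.
    ambient: scattering level A.
    """
    return (observed - (1.0 - visibility) * ambient) / visibility

# Degrade a synthetic elemental image, then undo the degradation
scene = np.linspace(0.0, 1.0, 25).reshape(5, 5)
v, A = 0.4, 0.8
observed = v * scene + (1.0 - v) * A
restored = restore_visibility(observed, v, A)
```

With known parameters the inversion is exact; in practice the point of the paper's spectral analysis is to estimate those parameters from the data before restoring each elemental image.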

  7. Recent advances in morphological cell image analysis.

    PubMed

    Chen, Shengyong; Zhao, Mingzhu; Wu, Guang; Yao, Chunyan; Zhang, Jianwei

    2012-01-01

    This paper summarizes recent advances in image processing methods for morphological cell analysis. The topic has received much attention with the increasing demands of both bioinformatics and biomedical applications. Among the many factors that affect the diagnosis of a disease, morphological cell analysis and statistics have made great contributions to a doctor's findings. Morphological cell analysis addresses cell shape, cell regularity, classification, statistics, diagnosis, and so forth. In the last 20 years, about 1000 publications have reported the use of morphological cell analysis in biomedical research. Relevant solutions encompass a rather wide application area, such as cell clump segmentation, morphological characteristic extraction, 3D reconstruction, abnormal cell identification, and statistical analysis. These reports are summarized in this paper to enable easy referral to suitable methods for practical solutions. Representative contributions and future research trends are also addressed. PMID:22272215

  8. Similarity measures of full polarimetric SAR images fusion for improved SAR image matching

    NASA Astrophysics Data System (ADS)

    Ding, H.

    2015-06-01

    China's first airborne SAR mapping system (CASMSAR), developed by the Chinese Academy of Surveying and Mapping, can acquire high-resolution, full polarimetric (HH, HV, VH and VV) synthetic aperture radar (SAR) data; it is able to acquire X-band full polarimetric SAR data at a resolution of 0.5 m. However, the speckle noise inherent in SAR imagery badly affects visual interpretation and image processing, and challenges the assumption that conjugate points appear similar to each other during matching. In addition, research shows that speckle noise is multiplicative, and most similarity measures used in SAR image matching are sensitive to it. Thus, matching outcomes obtained with most similarity measures are unreliable and of poor accuracy. Meanwhile, each polarimetric SAR image carries different backscattering information about the objects, and the four polarimetric channels contain a large amount of mutually redundant information that can improve matching. Therefore, we introduced a logarithmic transformation and a stereo matching similarity measure into airborne full polarimetric SAR image matching. Firstly, in order to transform the multiplicative speckle into additive noise and weaken the speckle's influence on the similarity measure, a logarithmic transformation was applied to all images. Secondly, to prevent performance degradation of the similarity measure caused by speckle, the measure must be free of, or insensitive to, additive speckle; thus, we introduced a stereo matching similarity measure, Normalized Cross-Correlation (NCC), into full polarimetric SAR image matching. Thirdly, to take advantage of the multi-polarimetric data and preserve the best similarity measure value, the four measure values calculated between left and right single-polarimetric SAR images are fused into a final measure value for matching. The method was tested for matching on CASMSAR data. The results showed that the method delivered an effective
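The three-step pipeline in this abstract (log transform, NCC, per-channel fusion) can be sketched as follows. The patch layout and the `max` fusion rule are assumptions for illustration; the paper does not state its exact fusion formula:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

def fused_measure(left_patches, right_patches):
    """Fuse NCC scores across the four polarimetric channels.

    Patches are dicts keyed by polarisation ('HH', 'HV', 'VH', 'VV').
    Log-transforming first turns multiplicative speckle into additive
    noise, to which NCC is far less sensitive. Keeping the best
    (maximum) per-channel score is one plausible fusion rule.
    """
    scores = []
    for pol in ('HH', 'HV', 'VH', 'VV'):
        l = np.log1p(left_patches[pol])
        r = np.log1p(right_patches[pol])
        scores.append(ncc(l, r))
    return max(scores)
```

For a conjugate pair, the fused score approaches 1; mismatched patches score much lower, which is what drives the matching search.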

  9. Improving Intelligence Analysis With Decision Science.

    PubMed

    Dhami, Mandeep K; Mandel, David R; Mellers, Barbara A; Tetlock, Philip E

    2015-11-01

    Intelligence analysis plays a vital role in policy decision making. Key functions of intelligence analysis include accurately forecasting significant events, appropriately characterizing the uncertainties inherent in such forecasts, and effectively communicating those probabilistic forecasts to stakeholders. We review decision research on probabilistic forecasting and uncertainty communication, drawing attention to findings that could be used to reform intelligence processes and contribute to more effective intelligence oversight. We recommend that the intelligence community (IC) regularly and quantitatively monitor its forecasting accuracy to better understand how well it is achieving its functions. We also recommend that the IC use decision science to improve these functions (namely, forecasting and communication of intelligence estimates made under conditions of uncertainty). In the case of forecasting, decision research offers suggestions for improvement that involve interventions on data (e.g., transforming forecasts to debias them) and behavior (e.g., via selection, training, and effective team structuring). In the case of uncertainty communication, the literature suggests that current intelligence procedures, which emphasize the use of verbal probabilities, are ineffective. The IC should, therefore, leverage research that points to ways in which verbal probability use may be improved, as well as explore the use of numerical probabilities wherever feasible. PMID:26581731

  10. Analysis of imaging system performance capabilities

    NASA Astrophysics Data System (ADS)

    Haim, Harel; Marom, Emanuel

    2013-06-01

    Present performance analyses of optical imaging systems based on results obtained with classic one-dimensional (1D) resolution targets (such as the USAF resolution chart) are significantly different from those obtained with a newly proposed 2D target [1]. We prove this claim and show how the novel 2D target should be used for correct characterization of optical imaging systems in terms of resolution and contrast. We then apply the consequences of these observations to the optimal design of some two-dimensional barcode structures.

  11. Collimator application for microchannel plate image intensifier resolution improvement

    DOEpatents

    Thomas, S.W.

    1996-02-27

    A collimator is included in a microchannel plate image intensifier (MCPI). Collimators can be useful in improving resolution of MCPIs by eliminating the scattered electron problem and by limiting the transverse energy of electrons reaching the screen. Due to its optical absorption, a collimator will also increase the extinction ratio of an intensifier by approximately an order of magnitude. Additionally, the smooth surface of the collimator will permit a higher focusing field to be employed in the MCP-to-collimator region than is currently permitted in the MCP-to-screen region by the relatively rough and fragile aluminum layer covering the screen. Coating the MCP and collimator surfaces with aluminum oxide appears to permit additional significant increases in the field strength, resulting in better resolution. 2 figs.

  12. Collimator application for microchannel plate image intensifier resolution improvement

    DOEpatents

    Thomas, Stanley W.

    1996-02-27

    A collimator is included in a microchannel plate image intensifier (MCPI). Collimators can be useful in improving resolution of MCPIs by eliminating the scattered electron problem and by limiting the transverse energy of electrons reaching the screen. Due to its optical absorption, a collimator will also increase the extinction ratio of an intensifier by approximately an order of magnitude. Additionally, the smooth surface of the collimator will permit a higher focusing field to be employed in the MCP-to-collimator region than is currently permitted in the MCP-to-screen region by the relatively rough and fragile aluminum layer covering the screen. Coating the MCP and collimator surfaces with aluminum oxide appears to permit additional significant increases in the field strength, resulting in better resolution.

  13. Bispecific Antibody Pretargeting for Improving Cancer Imaging and Therapy

    SciTech Connect

    Sharkey, Robert M.

    2005-02-04

    The main objective of this project was to evaluate pretargeting systems that use a bispecific antibody (bsMAb) to improve the detection and treatment of cancer. One specificity of a bsMAb binds a tumor antigen, while the other binds a peptide that can be radiolabeled. Pretargeting is the process by which the unlabeled bsMAb is given first, and after sufficient time (1-2 days) has passed for it to localize in the tumor and clear from the blood, a small-molecular-weight radiolabeled peptide is given. According to a dynamic imaging study using a 99mTc-labeled peptide, the radiolabeled peptide localizes in the tumor in less than 1 hour, with >80% of it clearing from the blood and body within this same time. Tumor/nontumor targeting ratios nearly 50 times better than those with a directly radiolabeled Fab fragment have been observed (Sharkey et al., "Signal amplification in molecular imaging by a multivalent bispecific nanobody", submitted). The bsMAbs used in this project were composed of 3 antibodies targeting antigens found in colorectal and pancreatic cancers (CEA, CSAp, and MUC1). For the peptide-binding moiety of the bsMAb, we initially examined an antibody directed to DOTA, but subsequently focused on another antibody directed against a novel compound, HSG (histamine-succinyl-glycine).

  14. An improved method for polarimetric image restoration in interferometry

    NASA Astrophysics Data System (ADS)

    Pratley, Luke; Johnston-Hollitt, Melanie

    2016-06-01

    Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually as if they were independent scalar images. However, here we demonstrate that for the linear polarisation P this approach fails to properly account for the complex vector nature of the emission, resulting in a process which is dependent on the axis under which the deconvolution is performed. We present an improved method, `Generalised Complex CLEAN', which properly accounts for the complex vector nature of polarised emission and is invariant under rotations of the deconvolution axis. We use two Australia Telescope Compact Array datasets to test standard and complex CLEAN versions of the Högbom and SDI CLEAN algorithms. We show that in general the Complex CLEAN version of each algorithm produces more accurate clean components with fewer spurious detections and lower computational cost, due to reduced iterations, than the current methods. In particular we find that the Complex SDI CLEAN produces the best results for diffuse polarised sources compared with standard CLEAN algorithms and other Complex CLEAN algorithms. Given the move to widefield, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalised Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys, and in particular that the complex version of the SDI CLEAN should be used.
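The core idea, cleaning the complex polarisation P = Q + iU as one quantity rather than Q and U separately, can be illustrated with a toy Högbom-style loop. This is a simplified sketch, not the authors' implementation; peaks are found on |P|, which is what makes the process invariant under rotations of the Q/U axes:

```python
import numpy as np

def complex_hogbom(dirty, psf, gain=0.1, niter=200, threshold=1e-3):
    """Toy Högbom CLEAN acting on a complex polarisation image P = Q + iU.

    dirty: complex dirty image; psf: real point spread function
    (odd-sized, peak at its centre). Returns (model, residual).
    """
    res = dirty.astype(complex).copy()
    model = np.zeros_like(res)
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    for _ in range(niter):
        # peak search on |P|, not on Q and U separately
        y, x = np.unravel_index(np.argmax(np.abs(res)), res.shape)
        peak = res[y, x]
        if abs(peak) < threshold:
            break
        model[y, x] += gain * peak
        # subtract the shifted, scaled PSF from the complex residual
        for dy in range(psf.shape[0]):
            for dx in range(psf.shape[1]):
                yy, xx = y + dy - cy, x + dx - cx
                if 0 <= yy < res.shape[0] and 0 <= xx < res.shape[1]:
                    res[yy, xx] -= gain * peak * psf[dy, dx]
    return model, res

# A single polarised point source with a delta-function PSF
psf = np.zeros((3, 3)); psf[1, 1] = 1.0
dirty = np.zeros((16, 16), dtype=complex); dirty[8, 8] = 1 + 2j
model, res = complex_hogbom(dirty, psf)
```

Because the clean component at each step is the complex value `gain * peak`, the polarisation angle of the recovered component is preserved rather than being deconvolved along an arbitrary axis.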

  15. Gun bore flaw image matching based on improved SIFT descriptor

    NASA Astrophysics Data System (ADS)

    Zeng, Luan; Xiong, Wei; Zhai, You

    2013-01-01

    In order to increase the operation speed and matching ability of the SIFT algorithm, the SIFT descriptor and matching strategy are improved. First, a method of constructing the feature descriptor based on sector areas is proposed. By computing the gradient histograms of location bins that are partitioned into 6 sector areas, a descriptor with 48 dimensions is constituted. This reduces the dimension of the feature vector and decreases the complexity of constructing the descriptor. Second, we introduce a strategy that partitions the circular region into 6 identical sector areas starting from the dominant orientation. Consequently, the computational complexity is reduced because no rotation operation is needed for the area. The experimental results indicate that, compared with the OpenCV SIFT implementation, the average matching speed of the new method increases by about 55.86%. Matching accuracy is maintained even under some variation of viewpoint, illumination, rotation, scale and defocus. The new method gave satisfactory results in gun bore flaw image matching. Keywords: Metrology, Flaw image matching, Gun bore, Feature descriptor
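A simplified reading of the 6-sector, 48-dimensional descriptor (6 sectors × 8 orientation bins) can be sketched as below. Alignment of the sectors to the dominant orientation is omitted, and the binning details are assumptions rather than the paper's exact construction:

```python
import numpy as np

def sector_descriptor(patch, n_sectors=6, n_bins=8):
    """48-D descriptor: gradient-orientation histograms over 6 sectors.

    The region around the keypoint is split into 6 equal angular
    sectors; an 8-bin gradient-orientation histogram, weighted by
    gradient magnitude, is accumulated per sector (6 * 8 = 48 dims).
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % (2 * np.pi)

    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # angular sector index of each pixel relative to the patch centre
    sector = (np.arctan2(yy - cy, xx - cx) % (2 * np.pi)) / (2 * np.pi / n_sectors)
    sector = sector.astype(int) % n_sectors

    desc = np.zeros(n_sectors * n_bins)
    bin_idx = (ori / (2 * np.pi / n_bins)).astype(int) % n_bins
    for s in range(n_sectors):
        mask = sector == s
        np.add.at(desc, s * n_bins + bin_idx[mask], mag[mask])
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc
```

Compared with the standard 4×4-cell, 128-D SIFT descriptor, the 48-D vector cuts both the descriptor-construction cost and the per-comparison cost during matching, which is the speed-up the abstract reports.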

  16. An estimation method of MR signal parameters for improved image reconstruction in unilateral scanner

    NASA Astrophysics Data System (ADS)

    Bergman, Elad; Yeredor, Arie; Nevo, Uri

    2013-12-01

    Unilateral NMR devices are used in various applications including non-destructive testing and well logging, but are not used routinely for imaging. This is mainly due to the inhomogeneous magnetic field (B0) in these scanners. This inhomogeneity results in low sensitivity and further forces the use of the slow single-point imaging scan scheme. Improving the measurement sensitivity is therefore an important factor, as it can improve image quality and reduce imaging times. Short imaging times can facilitate the use of this affordable and portable technology for various imaging applications. This work presents a statistical signal-processing method designed to fit the unique characteristics of imaging with a unilateral device. The method improves the imaging capabilities by improving the extraction of image information from the noisy data. This is done by exploiting redundancy in the acquired MR signal and by using the noise characteristics. Both types of data were incorporated into a weighted least squares estimation approach. The method's performance was evaluated with a series of imaging acquisitions applied to phantoms. Images were extracted from each measurement with the proposed method and compared to the conventional image reconstruction. All measurements showed a significant improvement in image quality based on the MSE criterion, with respect to gold-standard reference images. An integration of this method with further improvements may lead to a substantial reduction in imaging times, aiding the use of such scanners in imaging applications.
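The weighted least squares estimator referred to above has the standard closed form x̂ = (AᵀWA)⁻¹AᵀWy with W the inverse noise covariance. A minimal sketch (the measurement matrix and noise model here are toy stand-ins for the paper's unilateral-NMR signal model):

```python
import numpy as np

def weighted_least_squares(A, y, noise_var):
    """WLS estimate x_hat = (A^T W A)^-1 A^T W y, W = diag(1 / noise_var).

    Measurements with smaller noise variance get larger weight, which is
    how redundant, unequally noisy acquisitions are combined.
    """
    W = np.diag(1.0 / np.asarray(noise_var, dtype=float))
    AtW = A.T @ W
    return np.linalg.solve(AtW @ A, AtW @ y)

# Two redundant measurements of the same scalar with different noise levels
A = np.array([[1.0], [1.0]])
y = np.array([2.0, 4.0])
x_hat = weighted_least_squares(A, y, noise_var=[1.0, 9.0])
```

Here the low-noise measurement dominates: x̂ = (2/1 + 4/9)/(1 + 1/9) = 2.2, much closer to the reliable observation than the unweighted mean of 3.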

  17. Co-registration of ultrasound and frequency-domain photoacoustic radar images and image improvement for tumor detection

    NASA Astrophysics Data System (ADS)

    Dovlo, Edem; Lashkari, Bahman; Choi, Sung soo Sean; Mandelis, Andreas

    2015-03-01

    This paper demonstrates the co-registration of ultrasound (US) and frequency-domain photoacoustic radar (FD-PAR) images, with significant image improvement obtained by applying image normalization, filtering and amplification techniques. Achieving PA imaging functionality on a commercial ultrasound instrument could accelerate clinical acceptance and use. Experimental results from live animal testing show enhancements in signal-to-noise ratio (SNR), contrast and spatial resolution. The co-registered image produced from the US and phase PA images provides more information than either image independently.

  18. Autonomous Image Analysis for Future Mars Missions

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Morris, R. L.; Ruzon, M. A.; Bandari, E.; Roush, T. L.

    1999-01-01

    To explore high priority landing sites and to prepare for eventual human exploration, future Mars missions will involve rovers capable of traversing tens of kilometers. However, the current process by which scientists interact with a rover does not scale to such distances. Specifically, numerous command cycles are required to complete even simple tasks, such as, pointing the spectrometer at a variety of nearby rocks. In addition, the time required by scientists to interpret image data before new commands can be given and the limited amount of data that can be downlinked during a given command cycle constrain rover mobility and achievement of science goals. Experience with rover tests on Earth supports these concerns. As a result, traverses to science sites as identified in orbital images would require numerous science command cycles over a period of many weeks, months or even years, perhaps exceeding rover design life and other constraints. Autonomous onboard science analysis can address these problems in two ways. First, it will allow the rover to preferentially transmit "interesting" images, defined as those likely to have higher science content. Second, the rover will be able to anticipate future commands. For example, a rover might autonomously acquire and return spectra of "interesting" rocks along with a high-resolution image of those rocks in addition to returning the context images in which they were detected. Such approaches, coupled with appropriate navigational software, help to address both the data volume and command cycle bottlenecks that limit both rover mobility and science yield. We are developing fast, autonomous algorithms to enable such intelligent on-board decision making by spacecraft. Autonomous algorithms developed to date have the ability to identify rocks and layers in a scene, locate the horizon, and compress multi-spectral image data. 
We are currently investigating the possibility of reconstructing a 3D surface from a sequence of images.

  19. Multi-Scale Fusion for Improved Localization of Malicious Tampering in Digital Images.

    PubMed

    Korus, Paweł; Huang, Jiwu

    2016-03-01

    A sliding window-based analysis is a prevailing mechanism for tampering localization in passive image authentication. It uses existing forensic detectors, originally designed for a full-frame analysis, to obtain the detection scores for individual image regions. One of the main problems with a window-based analysis is its impractically low localization resolution stemming from the need to use relatively large analysis windows. While decreasing the window size can improve the localization resolution, the classification results tend to become unreliable due to insufficient statistics about the relevant forensic features. In this paper, we investigate a multi-scale analysis approach that fuses multiple candidate tampering maps, resulting from the analysis with different windows, to obtain a single, more reliable tampering map with better localization resolution. We propose three different techniques for multi-scale fusion, and verify their feasibility against various reference strategies. We consider a popular tampering scenario with mode-based first digit features to distinguish between singly and doubly compressed regions. Our results clearly indicate that the proposed fusion strategies can successfully combine the benefits of small-scale and large-scale analyses and improve the tampering localization performance. PMID:26800540
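The fusion step described above can be illustrated with a minimal sketch. A weighted average of the per-scale maps is only a simple placeholder; the paper proposes three more refined fusion techniques, which are not reproduced here:

```python
import numpy as np

def fuse_tampering_maps(maps, weights=None):
    """Fuse per-scale tampering maps into a single map.

    maps: list of 2-D arrays in [0, 1], one per analysis-window size,
    all resampled to the same resolution beforehand. Small windows give
    sharp but noisy maps; large windows give reliable but coarse maps.
    """
    stack = np.stack(maps, axis=0)
    if weights is None:
        weights = np.ones(len(maps))
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, stack, axes=1)  # weighted per-pixel average
```

Even this naive rule shows the mechanism: a region flagged by both the small-scale and large-scale analyses keeps a high fused score, while isolated single-scale false alarms are attenuated.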

  20. Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation

    NASA Astrophysics Data System (ADS)

    Sakamoto, M.; Honda, Y.; Kondo, A.

    2016-06-01

    Over the last decade, multi-scale image segmentation has attracted particular interest and is used in practice for object-based image analysis. In this study, we address issues in multi-scale image segmentation, especially improving the validity of merging and the variety of derived region shapes. Firstly, we introduce constraints on the application of the spectral criterion which suppress excessive merging between dissimilar regions. Secondly, we extend the evaluation of the smoothness criterion by modifying the definition of the extent of the object, introduced to control shape diversity. Thirdly, we develop a new shape criterion called the aspect ratio. This criterion improves how well the shapes of derived objects reproduce the actual objects of interest: it constrains the aspect ratio of the object's bounding box while keeping the properties controlled by the conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we also investigate a technique for quantitative and automatic parameterization of multi-scale image segmentation. This is achieved by comparing the segmentation result with a training area specified in advance, either maximizing the average area of the derived objects or optimizing the evaluation index called the F-measure. Thus, it is possible to automate a parameterization suited to the objectives, especially from the viewpoint of shape reproducibility.
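The aspect-ratio shape criterion can be sketched in a few lines. How the paper folds this value into the merging cost alongside the spectral and smoothness criteria is not reproduced here; this only shows the bounding-box computation itself:

```python
import numpy as np

def aspect_ratio_criterion(mask):
    """Aspect ratio of an object's bounding box, as a shape criterion.

    mask: boolean array marking the object's pixels. Returns the ratio
    of the longer to the shorter bounding-box side (always >= 1), so
    elongated objects score high and compact objects score near 1.
    """
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return max(height, width) / min(height, width)

# An elongated 2 x 6 object has aspect ratio 3
mask = np.zeros((10, 10), dtype=bool)
mask[4:6, 2:8] = True
```

Constraining this ratio during merging lets the segmentation favour regions whose elongation matches the expected targets (e.g. roads versus buildings).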

  1. Morphological analysis of infrared images for waterjets

    NASA Astrophysics Data System (ADS)

    Gong, Yuxin; Long, Aifang

    2013-03-01

    High-speed waterjets have been widely used in industry and investigated as a model of free shearing turbulence. This paper presents an investigation involving flow visualization of a high-speed water jet; noise reduction of the raw thermogram using a high-pass morphological filter and a median filter; image enhancement using a white top-hat filter; and image segmentation using a multiple thresholding method. The image processing results produced by the designed morphological filters, including the top-hat filter, proved ideal for further quantitative and in-depth analysis, and the filters can be used as a new morphological filter bank that may have general implications for analogous work.
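The median-filter plus white-top-hat stage of such a pipeline can be sketched with standard morphology routines. The filter sizes below are arbitrary placeholders, and this is a generic sketch of the named operations rather than the paper's tuned filter bank:

```python
import numpy as np
from scipy import ndimage

def enhance_waterjet_frame(thermogram, median_size=3, tophat_size=7):
    """Denoise a thermogram, then enhance small bright structures.

    A median filter suppresses impulsive noise; the white top-hat
    (image minus its grayscale opening) removes the smooth background
    while keeping bright features smaller than the structuring element,
    such as droplets at the jet boundary.
    """
    denoised = ndimage.median_filter(thermogram, size=median_size)
    return ndimage.white_tophat(denoised, size=tophat_size)

# A small bright blob on a flat background survives; the background is removed
img = np.full((21, 21), 1.0)
img[9:12, 9:12] = 10.0
out = enhance_waterjet_frame(img)
```

The result is a near-zero background with the compact bright features isolated, which is exactly the input the multiple-thresholding segmentation step needs.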

  2. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox

    PubMed Central

    Lacerda, Luis Miguel; Ferreira, Hugo Alexandre

    2015-01-01

    Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming, as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox has been able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox, with the goal of reducing the time spent on data processing and allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software packages such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19-73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting-state fMRI data, and 10 subjects with 18F-Altanserin PET data. Results. It was observed both a high inter-hemispheric symmetry

  3. Image analysis for beef quality prediction from serial scan ultrasound images

    NASA Astrophysics Data System (ADS)

    Zhang, Hui L.; Wilson, Doyle E.; Rouse, Gene H.; Izquierdo, Mercedes M.

    1995-01-01

    The prediction of intramuscular fat (or marbling) in live beef animals using serially scanned ultrasound images was studied in this paper. Image analysis, in both the gray-scale intensity domain and the frequency spectrum domain, was used to extract image features of tissue characteristics and obtain useful parameters for prediction models. First-, second-, and third-order multivariable prediction models were developed from randomly selected data sets and tested on the remaining data sets. Comparing predictions from serially scanned images against those from only the final scans showed a clear improvement in prediction accuracy: the correlation between predicted and actual percent fat increased from 0.68 to 0.80 and from 0.72 to 0.76 for the two groups of data, respectively; the R-squared values increased from 0.65 to 0.68 and from 0.68 to 0.72; and the root mean square errors decreased from 1.70 to 1.52 and from 1.22 to 1.12, respectively. This study indicates that serially obtained ultrasound images from live beef animals have good potential for improving the prediction accuracy of percent fat.
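The metrics quoted above (correlation, R-squared, RMSE) can be sketched in a few lines. This is an illustrative calculation, not the study's models; the percent-fat values below are made up.

```python
# Compute correlation r, coefficient of determination R^2, and RMSE
# between actual and predicted percent fat. Data are illustrative.
import math

def regression_metrics(actual, predicted):
    n = len(actual)
    mean_a = sum(actual) / n
    mean_p = sum(predicted) / n
    cov = sum((a - mean_a) * (p - mean_p) for a, p in zip(actual, predicted))
    var_a = sum((a - mean_a) ** 2 for a in actual)
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    r = cov / math.sqrt(var_a * var_p)          # correlation coefficient
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    r2 = 1 - ss_res / var_a                     # coefficient of determination
    rmse = math.sqrt(ss_res / n)                # root mean square error
    return r, r2, rmse

actual    = [3.1, 4.5, 5.2, 6.8, 7.4, 8.0]     # hypothetical % fat
predicted = [3.4, 4.2, 5.6, 6.5, 7.8, 7.7]
r, r2, rmse = regression_metrics(actual, predicted)
```

Note that R-squared computed against the actual values can differ from the squared correlation when the predictions are biased, which is why abstracts often report both.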

  4. Fusion of multispectral and panchromatic images using improved GIHS and PCA mergers based on contourlet

    NASA Astrophysics Data System (ADS)

    Yang, Shuyuan; Zeng, Liang; Jiao, Licheng; Xiao, Jing

    2007-11-01

    Since Chavez proposed the highpass filtering procedure to fuse multispectral and panchromatic images, several fusion methods have been developed based on the same principle: extract spatial detail information from the panchromatic image and then inject it into the multispectral one. In this paper, we present new fusion alternatives based on the same concept, using the multiresolution contourlet decomposition to execute the detail extraction phase and the generalized intensity-hue-saturation (GIHS) and principal component analysis (PCA) procedures to inject the spatial detail of the panchromatic image into the multispectral one. Experimental results show that the new fusion method performs better than GIHS, PCA, wavelet-based fusion, and the method of improved GIHS and PCA mergers based on wavelet decomposition.
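The injection principle behind GIHS fusion can be sketched directly: per pixel, the detail is the difference between the panchromatic value and the mean-band intensity, and it is added to every band. This is a minimal sketch assuming co-registered, equal-resolution bands; the paper extracts the detail with a contourlet decomposition instead of this plain difference.

```python
# Minimal GIHS-style detail injection: fused_band_k = ms_k + (pan - intensity),
# where intensity is the per-pixel mean of the multispectral bands.
# Toy data, not from the paper.
def gihs_fuse(ms_bands, pan):
    """ms_bands: list of bands, each a list of pixel values; pan: list."""
    n = len(ms_bands)
    fused_pixels = []
    for i in range(len(pan)):
        intensity = sum(band[i] for band in ms_bands) / n
        detail = pan[i] - intensity             # spatial detail to inject
        fused_pixels.append([band[i] + detail for band in ms_bands])
    # transpose back to per-band layout
    return [[px[k] for px in fused_pixels] for k in range(n)]

ms = [[0.2, 0.4], [0.3, 0.5], [0.4, 0.6]]       # three toy bands, two pixels
pan = [0.6, 0.9]
fused = gihs_fuse(ms, pan)
```

A useful sanity check on this scheme is that the per-pixel mean of the fused bands equals the panchromatic value, since the same detail was added to each band.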

  5. Guided Wave Annular Array Sensor Design for Improved Tomographic Imaging

    NASA Astrophysics Data System (ADS)

    Koduru, Jaya Prakash; Rose, Joseph L.

    2009-03-01

    Guided wave tomography for structural health monitoring is fast emerging as a reliable tool for the detection and monitoring of hotspots in a structure, such as defects arising from corrosion, crack growth, etc. To date, guided wave tomography has been successfully tested on aircraft wings, pipes, pipe elbows, and weld joints. Structures deployed in practice are subjected to harsh environments, such as exposure to rain and changes in temperature and humidity. A reliable tomography system should take these environmental factors into account to avoid false alarms. The lack of mode control with piezoceramic disk sensors makes them very sensitive to traces of water, leading to false alarms. In this study we explore the design of annular array sensors to provide mode control for improved structural tomography, in particular addressing the false-alarm potential of water loading. Clearly defined actuation lines in the phase velocity dispersion curve space are calculated. A dominant in-plane displacement point is found to provide a solution to the water loading problem. The improvement in the tomographic images with the annular array sensors in the presence of water traces is clearly illustrated with a series of experiments. An annular array design philosophy for other problems in NDE/SHM is also discussed.

  6. Improved Signal Chains for Readout of CMOS Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Hancock, Bruce; Cunningham, Thomas

    2009-01-01

    An improved generic design has been devised for implementing signal chains involved in readout from complementary metal oxide/semiconductor (CMOS) image sensors and for other readout integrated circuits (ICs) that perform equivalent functions. The design applies to any such IC in which output signal charges from the pixels in a given row are transferred simultaneously into sampling capacitors at the bottoms of the columns, then voltages representing individual pixel charges are read out in sequence by sequentially turning on column-selecting field-effect transistors (FETs) in synchronism with source-follower- or operational-amplifier-based amplifier circuits. The improved design affords the best features of prior source-follower- and operational-amplifier-based designs while overcoming the major limitations of those designs. The limitations can be summarized as follows: a) For a source-follower-based signal chain, the ohmic voltage drop associated with DC bias current flowing through the column-selection FET causes unacceptable voltage offset, nonlinearity, and reduced small-signal gain. b) For an operational-amplifier-based signal chain, the required bias current and the output noise increase superlinearly with the size of the pixel array because of a corresponding increase in the effective capacitance of the row bus used to couple the sampled column charges to the operational amplifier. The effect of the bus capacitance is to simultaneously slow down the readout circuit and increase noise through the Miller effect.

  7. Pain related inflammation analysis using infrared images

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact and radiation-free imaging modality for the assessment of painful abnormal inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently occurring form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms was captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms are pre-processed, and areas of interest are extracted for further processing. The investigation of the spread of inflammation is performed along with statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain disease; ii) analysis of the spread of inflammation related to RA and OA using K-means clustering; iii) first- and second-order statistical analysis of the pre-processed thermograms. The conclusion reflects that, in most cases, RA-related inflammation affects both knees bilaterally, whereas inflammation related to OA is present in a unilateral knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
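The K-means step mentioned in objective ii) can be illustrated in one dimension: cluster pixel temperatures into a "cooler tissue" group and a "warmer, possibly inflamed" group. A minimal sketch with made-up temperature values, not the study's data or code:

```python
# 1-D k-means (k=2) on pixel temperatures: alternate assignment to the
# nearest center and recomputation of centers as cluster means.
def kmeans_1d(values, k=2, iters=20):
    centers = [min(values), max(values)]        # simple extreme-value init
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

temps = [30.1, 30.4, 30.2, 34.8, 35.1, 35.0, 30.3, 34.9]   # degrees C, toy
centers, clusters = kmeans_1d(temps)
```

With well-separated temperature modes, as in an inflamed region against baseline skin, the two centers converge to the two group means.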

  8. Image analysis for measuring rod network properties

    NASA Astrophysics Data System (ADS)

    Kim, Dongjae; Choi, Jungkyu; Nam, Jaewook

    2015-12-01

    In recent years, metallic nanowires have been attracting significant attention as next-generation flexible transparent conductive films. The performance of such films depends on the network structure created by the nanowires. Understanding their structure, such as the connectivity, coverage, and alignment of the nanowires, requires knowledge of the individual nanowires in microscopic images taken of the film. Although nanowires are flexible to a certain extent, they are usually depicted as rigid rods in many analytical and computational studies. Herein, we propose a simple and straightforward algorithm, based on filtering in the frequency domain, for detecting rod-shaped objects in binary images. The proposed algorithm uses a specially designed filter in the frequency domain to detect image segments, namely, connected components aligned in a certain direction. Those components are post-processed and combined into single rod objects under a given merging rule. In this study, the microscopic properties of rod networks relevant to the analysis of nanowire networks, namely the alignment distribution, length distribution, and area fraction, were measured for investigating the opto-electric performance of transparent conductive films. To verify the algorithm and find its optimum parameters, numerical experiments were performed on synthetic images with predefined properties. With proper parameters selected, the algorithm was used to investigate silver nanowire transparent conductive films fabricated by the dip coating method.

  9. Scalable histopathological image analysis via active learning.

    PubMed

    Zhu, Yan; Zhang, Shaoting; Liu, Wei; Metaxas, Dimitris N

    2014-01-01

    Training an effective and scalable system for medical image analysis usually requires a large amount of labeled data, which incurs a tremendous annotation burden for pathologists. Recent progress in active learning can alleviate this issue, leading to a great reduction in the labeling cost without sacrificing the prediction accuracy too much. However, most existing active learning methods disregard the "structured information" that may exist in medical images (e.g., data from individual patients), and make the simplifying assumption that unlabeled data are independently and identically distributed. Neither assumption may be suitable for real-world medical images. In this paper, we propose a novel batch-mode active learning method which explores and leverages such structured information in annotations of medical images to enforce diversity among the selected data, thereby maximizing the information gain. We formulate the active learning problem as an adaptive submodular function maximization problem subject to a partition matroid constraint, and further present an efficient greedy algorithm to achieve a good solution with a theoretically proven bound. We demonstrate the efficacy of our algorithm on thousands of histopathological images of breast microscopic tissues. PMID:25320821
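The combination of greedy submodular maximization with a partition matroid constraint can be sketched with a toy coverage objective: greedily pick the sample with the largest marginal gain, while capping the number of picks per partition (here, per patient). All names and data below are illustrative, not the authors' implementation.

```python
# Greedy selection maximizing a coverage-style (submodular) gain under a
# partition matroid: at most `limit` selections per patient.
def greedy_select(samples, budget, limit):
    """samples: list of (patient_id, feature_set); returns chosen indices."""
    covered, per_patient, chosen = set(), {}, []
    while len(chosen) < budget:
        best, best_gain = None, 0
        for i, (pid, feats) in enumerate(samples):
            if i in chosen or per_patient.get(pid, 0) >= limit:
                continue                         # matroid constraint
            gain = len(feats - covered)          # marginal coverage gain
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:                         # no feasible positive gain
            break
        chosen.append(best)
        pid, feats = samples[best]
        covered |= feats
        per_patient[pid] = per_patient.get(pid, 0) + 1
    return chosen

samples = [("p1", {1, 2, 3}), ("p1", {1, 2}), ("p2", {4, 5}), ("p2", {3, 4})]
picked = greedy_select(samples, budget=3, limit=1)   # -> [0, 2]
```

The per-patient cap is what enforces diversity: after the first pick from patient p1, its second, redundant sample is excluded even though it still has positive gain.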

  10. Multiresolution simulated annealing for brain image analysis

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Majcenic, Zoran

    1999-05-01

    Analysis of biomedical images is an important step in the quantification of various diseases such as human spontaneous intracerebral hemorrhage (ICH). In particular, the study of outcome in patients having ICH requires measurements of various ICH parameters, such as hemorrhage volume, and their change over time. A multiresolution probabilistic approach for segmentation of CT head images is presented in this work. This method views the segmentation problem as a pixel labeling problem. In this application the labels are: background, skull, brain tissue, and ICH. The proposed method is based on the Maximum A-Posteriori (MAP) estimation of the unknown pixel labels. The MAP method maximizes the a posteriori probability of the segmented image given the observed (input) image. A Markov random field (MRF) model has been used for the posterior distribution. The MAP estimate of the segmented image has been determined using the simulated annealing (SA) algorithm, which minimizes the energy function associated with the MRF posterior distribution. A multiresolution SA (MSA) has been developed to speed up the annealing process and is presented in detail in this work. A knowledge-based classification based on brightness, size, shape, and relative position with respect to other regions is performed at the end of the procedure. The regions are identified as background, skull, brain, ICH, and calcifications.
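The MAP-by-annealing idea can be shown on a toy problem: a 1-D "image" with two labels, an energy combining a data term (distance to per-label means) and a Potts smoothness term on neighboring labels, minimized by simulated annealing. This is a didactic sketch, not the paper's multiresolution algorithm, and all parameter values are illustrative.

```python
# Simulated annealing for MAP pixel labeling on a 1-D signal:
# energy = sum of squared distances to label means + Potts neighbor penalty.
import math
import random

def energy(labels, pixels, means, beta=1.0):
    data = sum((p - means[l]) ** 2 for p, l in zip(pixels, labels))
    smooth = sum(beta for a, b in zip(labels, labels[1:]) if a != b)
    return data + smooth

def anneal(pixels, means, steps=2000, t0=2.0):
    random.seed(0)                               # deterministic demo
    labels = [random.randrange(len(means)) for _ in pixels]
    e = energy(labels, pixels, means)
    best, best_e = list(labels), e
    for s in range(steps):
        t = t0 * (1 - s / steps) + 1e-3          # linear cooling schedule
        i = random.randrange(len(labels))
        old = labels[i]
        labels[i] = random.randrange(len(means))
        e_new = energy(labels, pixels, means)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                            # accept the proposed flip
        else:
            labels[i] = old                      # reject, restore label
        if e < best_e:
            best, best_e = list(labels), e
    return best, best_e

pixels = [0.1, 0.2, 0.15, 0.9, 0.95, 1.0]        # two intensity classes
means = [0.15, 0.95]                             # per-label means
best, best_e = anneal(pixels, means)
```

At high temperature the Metropolis rule accepts some energy-increasing flips, which is what lets the chain escape local minima before the cooling schedule makes it effectively greedy.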

  11. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, Brian A.

    1987-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.
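The basis-expansion principle can be sketched concretely: approximate a reflectance curve as a weighted sum of a few basis functions and recover the weights by least squares. The two basis curves and target below are made up for illustration; real systems fit measured reflectances with bases derived from principal component analysis of reflectance datasets.

```python
# Fit target ~= w1*b1 + w2*b2 over sampled wavelengths by solving the
# 2x2 normal equations of the least-squares problem. Toy data.
def fit_two_basis(b1, b2, target):
    a11 = sum(x * x for x in b1)
    a12 = sum(x * y for x, y in zip(b1, b2))
    a22 = sum(y * y for y in b2)
    r1 = sum(x * t for x, t in zip(b1, target))
    r2 = sum(y * t for y, t in zip(b2, target))
    det = a11 * a22 - a12 * a12
    w1 = (r1 * a22 - r2 * a12) / det
    w2 = (r2 * a11 - r1 * a12) / det
    return w1, w2

b1 = [1.0] * 5                          # flat basis curve
b2 = [0.0, 0.25, 0.5, 0.75, 1.0]        # ramp basis curve
target = [0.3 + 0.4 * x for x in b2]    # exactly 0.3*b1 + 0.4*b2
w1, w2 = fit_two_basis(b1, b2, target)
```

Because the target lies in the span of the two bases, least squares recovers the weights exactly; with measured spectra the residual quantifies how much the low-dimensional encoding loses.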

  12. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.

  13. Improving image quality in poor visibility conditions using a physical model for contrast degradation.

    PubMed

    Oakley, J P; Satherley, B L

    1998-01-01

    In daylight viewing conditions, image contrast is often significantly degraded by atmospheric aerosols such as haze and fog. This paper introduces a method for reducing this degradation in situations in which the scene geometry is known. Contrast is lost because light is scattered toward the sensor by the aerosol particles and because the light reflected by the terrain is attenuated by the aerosol. This degradation is approximately characterized by a simple, physically based model with three parameters. The method involves two steps: first, an inverse problem is solved in order to recover the three model parameters; then, for each pixel, the relative contributions of scattered and reflected flux are estimated. The estimated scatter contribution is simply subtracted from the pixel value and the remainder is scaled to compensate for aerosol attenuation. This paper describes the image processing algorithm and presents an analysis of the signal-to-noise ratio (SNR) in the resulting enhanced image. This analysis shows that the SNR decreases exponentially with range. A temporal filter structure is proposed to solve this problem. Results are presented for two image sequences taken from an airborne camera in hazy conditions and one sequence in clear conditions. A satisfactory agreement between the model and the experimental data is shown for the haze conditions. A significant improvement in image quality is demonstrated when using the contrast enhancement algorithm in conjunction with a temporal filter. PMID:18267391
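The subtract-then-rescale step can be sketched with the standard exponential-attenuation haze model. This is a minimal per-pixel illustration under assumed parameter names (attenuation coefficient `beta`, path radiance limit `airlight`, per-pixel `rng` from the known scene geometry), not the paper's estimation procedure for recovering those parameters.

```python
# Physical contrast-degradation model:
#   observed = reflected * t + airlight * (1 - t),  t = exp(-beta * range)
# Restoration subtracts the scatter term and rescales for attenuation.
import math

def degrade(reflected, rng, beta, airlight):
    t = math.exp(-beta * rng)                  # aerosol transmittance
    return reflected * t + airlight * (1 - t)

def restore(observed, rng, beta, airlight):
    t = math.exp(-beta * rng)
    return (observed - airlight * (1 - t)) / t  # subtract scatter, rescale

j = 0.55                                        # true terrain radiance (toy)
i = degrade(j, rng=2.0, beta=0.4, airlight=0.9)
j_hat = restore(i, rng=2.0, beta=0.4, airlight=0.9)
```

With exact parameters the restoration inverts the model perfectly; in practice the 1/t rescaling also amplifies sensor noise, which is exactly the exponential SNR loss with range that motivates the paper's temporal filter.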

  14. Improved Liver Lesion Conspicuity With Iterative Reconstruction in Computed Tomography Imaging.

    PubMed

    Jensen, Kristin; Andersen, Hilde Kjernlie; Tingberg, Anders; Reisse, Claudius; Fosse, Erik; Martinsen, Anne Catrine T

    2016-01-01

    Studies of iterative reconstruction techniques on computed tomography (CT) scanners show reduced noise and changed image texture. The purpose of this study was to address the possibility of dose reduction and improved conspicuity of lesions in a liver phantom for different iterative reconstruction algorithms. An anthropomorphic upper-abdomen phantom, specially designed for receiver operating characteristic (ROC) analysis, was scanned with 2 different CT models from the same vendor, the GE CT750 HD and GE Lightspeed VCT. Images were obtained at 3 dose levels, 5, 10, and 15 mGy, and reconstructed with filtered back projection (FBP) and 2 different iterative reconstruction algorithms: adaptive statistical iterative reconstruction (ASIR) and Veo. Overall, 5 interpreters evaluated the images and ROC analysis was performed. The standard deviation and the contrast-to-noise ratio were measured. Veo image reconstruction resulted in larger areas under the curve than ASIR and FBP image reconstruction at given dose levels. For the CT750 HD, iterative reconstruction at the 10 mGy dose level resulted in larger or similar areas under the curve compared with FBP at the 15 mGy dose level (0.88-0.95 vs 0.90). This was not shown for the Lightspeed VCT (0.83-0.85 vs 0.92). The results of this study indicate that the possibility for radiation dose reduction using iterative reconstruction techniques depends on both the reconstruction technique and the CT scanner model used. PMID:26790606

  15. Improving Prediction of Prostate Cancer Recurrence using Chemical Imaging

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Kajdacsy-Balla, André; Macias, Virgilia; Walsh, Michael; Sinha, Saurabh; Bhargava, Rohit

    2015-03-01

    Precise outcome prediction is crucial to providing optimal cancer care across the spectrum of solid cancers. Clinically useful tools to predict the risk of adverse events (metastases, recurrence), however, remain deficient. Here, we report an approach to predict the risk of prostate cancer recurrence, at the time of initial diagnosis, using a combination of emerging chemical imaging, a diagnostic protocol that focuses simultaneously on the tumor and its microenvironment, and data analysis of frequent patterns in molecular expression. Fourier transform infrared (FT-IR) spectroscopic imaging was employed to record the structure and molecular content of tumors from prostatectomy. We analyzed data from a mid-grade-dominant patient cohort, the largest such cohort in the modern era and one in whom prognostic methods are largely ineffective. Our approach outperforms the two widely used tools, the Kattan nomogram and CAPRA-S score, in a head-to-head comparison of predicting risk of recurrence. Importantly, the approach provides a histologic basis for the prediction that identifies chemical and morphologic features in the tumor microenvironment that are independent of conventional clinical information, opening the door to similar advances in other solid tumors.

  16. Compact Microscope Imaging System With Intelligent Controls Improved

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    The Compact Microscope Imaging System (CMIS) with intelligent controls is a diagnostic microscope analysis tool for use in space, industrial, medical, and security applications. This compact miniature microscope, which can perform tasks usually reserved for conventional microscopes, has unique advantages in the fields of microscopy, biomedical research, inline process inspection, and space science. Its unique approach integrates a machine vision technique with an instrumentation and control technique that provides intelligence via adaptive neural networks. The CMIS system was developed at the NASA Glenn Research Center specifically for interface detection in colloid hard-sphere experiments; biological cell detection for patch clamping, cell movement, and tracking; and detection of anode and cathode defects in laboratory samples using microscope technology.

  17. Discriminating enumeration of subseafloor life using automated fluorescent image analysis

    NASA Astrophysics Data System (ADS)

    Morono, Y.; Terada, T.; Masui, N.; Inagaki, F.

    2008-12-01

    Enumeration of microbial cells in marine subsurface sediments has provided fundamental information for understanding the extent of life and the deep biosphere on Earth. The microbial population has been manually evaluated by direct cell counting under the microscope because automated recognition of cell-derived fluorescent signals has been extremely difficult. Here, we improved the conventional method by removing the non-specific fluorescent backgrounds and enumerated the cell population in sediments using a newly developed automated microscopic imaging system. Although SYBR Green I is known to bind specifically to double-stranded DNA (Lunau et al., 2005), we still observed some SYBR-stainable particulate matters (SYBR-SPAMs) in the heat-sterilized control sediments (450°C, 6 h), which are assumed to be silicates or mineralized organic matter. A newly developed acid-wash treatment with hydrofluoric acid (HF) followed by image analysis successfully removed these background objects and yielded artifact-free microscopic images. To obtain statistically meaningful fluorescent images, we constructed a computer-assisted automated cell counting system. Given the comparative data set of cell abundance in acid-washed marine sediments evaluated by SYBR Green I and acridine orange (AO) staining with and without the image analysis, our protocol can provide statistically meaningful absolute counts by discriminating cell-derived fluorescent signals.
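The core of any automated counting system of this kind is thresholding followed by connected-component labeling. A toy sketch on a small intensity grid (values and threshold are illustrative, not the study's pipeline):

```python
# Threshold a fluorescence intensity grid, then count connected bright
# components (4-connectivity) as cell candidates via flood fill.
def count_cells(grid, threshold):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > threshold and not seen[r][c]:
                count += 1                       # new component found
                stack = [(r, c)]                 # flood-fill this component
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols \
                            and grid[y][x] > threshold and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return count

grid = [[0, 9, 9, 0, 0],
        [0, 9, 0, 0, 8],
        [0, 0, 0, 8, 8]]
n = count_cells(grid, threshold=5)               # two bright components
```

The acid-wash and background-removal steps in the abstract matter precisely because this counting stage cannot distinguish cells from mineral particles that also exceed the intensity threshold.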

  18. A novel coded excitation scheme to improve spatial and contrast resolution of quantitative ultrasound imaging.

    PubMed

    Sanchez, Jose R; Pocci, Darren; Oelze, Michael L

    2009-10-01

    Quantitative ultrasound (QUS) imaging techniques based on ultrasonic backscatter have been used successfully to diagnose and monitor disease. A method for improving the contrast and axial resolution of QUS parametric images by using the resolution enhancement compression (REC) technique is proposed. Resolution enhancement compression is a coded excitation and pulse compression technique that enhances the -6-dB bandwidth of an ultrasonic imaging system. The objective of this study was to combine REC with QUS (REC-QUS) and to evaluate and compare improvements in scatterer diameter estimates obtained using the REC technique against conventional pulsing methods. Simulations and experimental measurements were conducted with a single-element transducer (f/4) having a center frequency of 10 MHz and a -6-dB bandwidth of 80%. Using REC, the -6-dB bandwidth was enhanced to 155%. Images for both simulations and experimental measurements had a signal-to-noise ratio of 28 dB. In simulations, to monitor the improvements in contrast, a software phantom with a cylindrical lesion was evaluated. In experimental measurements, tissue-mimicking phantoms that contained glass spheres with different scatterer diameters were evaluated. Estimates of average scatterer diameter in the simulations and experiments were obtained by comparing the normalized backscattered power spectra to theory over the -6-dB bandwidth for both conventional pulsing and REC. Improvements of REC-QUS over conventional QUS were quantified through estimate bias and standard deviation, contrast-to-noise ratio, and histogram analysis of QUS parametric images. Overall, a 51% increase in contrast and a 60% decrease in the standard deviation of average scatterer diameter estimates were obtained in simulations, while a reduction of 34% to 71% was obtained in the standard deviation of average scatterer diameter for the experimental results. PMID:19942499

  19. Enhanced Processing and Analysis of Cassini SAR Images of Titan

    NASA Astrophysics Data System (ADS)

    Lucas, A.; Aharonson, O.; Hayes, A. G.; Deledalle, C. A.; Kirk, R. L.

    2011-12-01

    SAR images suffer from speckle noise, which hinders interpretation and quantitative analysis. We have adapted a non-local algorithm for de-noising images using an appropriate multiplicative noise model [1] for the analysis of Cassini SAR images. We illustrate some examples here that demonstrate the improvement in landform interpretation, focusing on transport processes at Titan's surface. Interpretation of the geomorphic features is facilitated (Figure 1), including revealing details of the channels incised into the terrain, shoreline morphology, and contrast variations in the dark, liquid-covered areas. The latter are suggestive of submarine channels and gradients in the bathymetry. Furthermore, substantial quantitative improvements are possible. We show that a Digital Elevation Model derived by radargrammetry [2] from the de-noised images is obtained with a greater number of matching points (up to 80%) and a better correlation (59% of the pixels give a good correlation in the de-noised data compared with 18% in the original SAR image). An elevation hypsogram of our enhanced DEM shows evidence that fluvial and/or lacustrine processes have substantially affected the topographic distribution. Dune wavelengths and interdune extents are more precisely measured. Finally, radarclinometry techniques applied to our new data are more accurate in dune and mountainous regions. [1] Deledalle C-A., et al., 2009, Weighted maximum likelihood denoising with iterative and probabilistic patch-based weights, Telecom Paris. [2] Kirk, R.L., et al., 2007, First stereoscopic radar images of Titan, Lunar Planet. Sci., XXXVIII, Abstract #1427, Lunar and Planetary Institute, Houston

  20. Imaging Brain Dynamics Using Independent Component Analysis

    PubMed Central

    Jung, Tzyy-Ping; Makeig, Scott; McKeown, Martin J.; Bell, Anthony J.; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    The analysis of electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings is important both for basic brain research and for medical diagnosis and treatment. Independent component analysis (ICA) is an effective method for removing artifacts and separating sources of the brain signals from these recordings. A similar approach is proving useful for analyzing functional magnetic resonance brain imaging (fMRI) data. In this paper, we outline the assumptions underlying ICA and demonstrate its application to a variety of electrical and hemodynamic recordings from the human brain. PMID:20824156

  1. Theoretical analysis of multispectral image segmentation criteria.

    PubMed

    Kerfoot, I B; Bresler, Y

    1999-01-01

    Markov random field (MRF) image segmentation algorithms have been extensively studied, and have gained wide acceptance. However, almost all of the work on them has been experimental. This provides a good understanding of the performance of existing algorithms, but not a unified explanation of the significance of each component. To address this issue, we present a theoretical analysis of several MRF image segmentation criteria. Standard methods of signal detection and estimation are used in the theoretical analysis, which quantitatively predicts the performance at realistic noise levels. The analysis is decoupled into the problems of false alarm rate, parameter selection (Neyman-Pearson and receiver operating characteristics), detection threshold, expected a priori boundary roughness, and supervision. Only the performance inherent to a criterion, with perfect global optimization, is considered. The analysis indicates that boundary and region penalties are very useful, while distinct-mean penalties are of questionable merit. Region penalties are far more important for multispectral segmentation than for greyscale. This observation also holds for Gauss-Markov random fields, and for many separable within-class PDFs. To validate the analysis, we present optimization algorithms for several criteria. Theoretical and experimental results agree fairly well. PMID:18267494

  2. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

    Future CNES earth observation missions will have to deal with an ever-increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by bit-plane encoding (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as small as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on the fly by the satellite attitude control system and on simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques that are too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and the impacts on the design of the instrument.
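The reason the KLT exploits multispectral redundancy can be shown in miniature with two bands: rotating into the eigenbasis of the 2x2 band covariance matrix decorrelates the bands, concentrating energy in one component. This is a didactic sketch with made-up band values, not the CNES compression scheme.

```python
# Two-band Karhunen-Loeve transform: rotate by the principal-axis angle of
# the 2x2 covariance matrix so the output components are decorrelated.
import math

def klt_2band(band_a, band_b):
    n = len(band_a)
    ma, mb = sum(band_a) / n, sum(band_b) / n
    ca = sum((x - ma) ** 2 for x in band_a) / n
    cb = sum((y - mb) ** 2 for y in band_b) / n
    cab = sum((x - ma) * (y - mb) for x, y in zip(band_a, band_b)) / n
    theta = 0.5 * math.atan2(2 * cab, ca - cb)   # diagonalizing rotation
    c, s = math.cos(theta), math.sin(theta)
    pc1 = [c * (x - ma) + s * (y - mb) for x, y in zip(band_a, band_b)]
    pc2 = [-s * (x - ma) + c * (y - mb) for x, y in zip(band_a, band_b)]
    return pc1, pc2

band_a = [10.0, 12.0, 14.0, 16.0, 18.0]          # toy pixel values
band_b = [20.5, 23.9, 28.1, 31.8, 36.2]          # strongly correlated band
pc1, pc2 = klt_2band(band_a, band_b)
cross = sum(u * v for u, v in zip(pc1, pc2))     # ~0 after decorrelation
```

This decorrelation is also why misregistration is so damaging: a half-pixel shift between bands breaks the pixel-wise correlation that the transform relies on.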

  3. Improving lateral resolution of optical coherence tomography for imaging of skins

    NASA Astrophysics Data System (ADS)

    Shen, Kai; Lu, Hui; Wang, Michael R.

    2016-03-01

    We report on improving the lateral resolution of optical coherence tomography (OCT) for imaging of skin using a multiframe superresolution technique. By introducing suitable slight transverse positional shifts among a series of C-scans, superresolution processing of the laterally low-resolution images at each axial depth reconstructs a high-resolution image. Superresolution processing of all depth layers yields a high-resolution 3D image. Using photomasks of known resolution, a threefold lateral resolution improvement has been confirmed for both low and high numerical aperture OCT imaging. The superresolution-processed OCT 3D skin image provides many more feature details for all subsurface depth layers within the OCT axial imaging range.

  4. Improving the performance of lysimeters with thermal imaging

    NASA Astrophysics Data System (ADS)

    Voortman, Bernard; Bartholomeus, Ruud; Witte, Jan-Philip

    2014-05-01

    of lysimeters can be monitored and improved with the aid of thermal imaging. For example, estimated ET based on thermal images of moss lysimeters showed a model efficiency (Nash-Sutcliffe) of 0.94 and 0.93 and an RMSE of 0.02 and 0.03 mm for methods 1 and 2, respectively. However, for bare sand the model efficiency was much lower, 0.54 and 0.59, with an RMSE of 0.53 and 0.60 mm for methods 1 and 2, respectively. This poor performance is caused by the large variation in the albedo of bare sand under moist and dry conditions, which affects the energy available for evaporation.
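The two skill scores quoted above are standard and easy to compute. A minimal sketch with illustrative evapotranspiration series, not the lysimeter data:

```python
# Nash-Sutcliffe model efficiency (NSE) and RMSE between observed and
# simulated series: NSE = 1 - SS_res / SS_tot (1 is perfect, 0 means the
# model is no better than predicting the observed mean).
import math

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

def rmse(observed, simulated):
    n = len(observed)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)

obs = [0.8, 1.2, 1.5, 2.1, 2.4]      # daily ET in mm, toy values
sim = [0.9, 1.1, 1.6, 2.0, 2.5]
nse = nash_sutcliffe(obs, sim)
err = rmse(obs, sim)
```

Since NSE normalizes the residuals by the observed variance, a series with little day-to-day variation (like the bare-sand case above) can score a low NSE even when the absolute errors are modest.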

  5. Ultrasound window-modulated compounding Nakagami imaging: Resolution improvement and computational acceleration for liver characterization.

    PubMed

    Ma, Hsiang-Yang; Lin, Ying-Hsiu; Wang, Chiao-Yin; Chen, Chiung-Nien; Ho, Ming-Chih; Tsui, Po-Hsiang

    2016-08-01

    Ultrasound Nakagami imaging is an attractive method for visualizing changes in envelope statistics. Window-modulated compounding (WMC) Nakagami imaging was reported to improve image smoothness. The sliding window technique is typically used for constructing ultrasound parametric and Nakagami images. Using a large window overlap ratio may improve the WMC Nakagami image resolution but reduces computational efficiency. Therefore, the objectives of this study include: (i) exploring the effects of the window overlap ratio on the resolution and smoothness of WMC Nakagami images; (ii) proposing a fast algorithm that is based on the convolution operator (FACO) to accelerate WMC Nakagami imaging. Computer simulations and preliminary clinical tests on liver fibrosis samples (n=48) were performed to validate the FACO-based WMC Nakagami imaging. The results demonstrated that the width of the autocorrelation function and the spread of the parameter distribution of the WMC Nakagami image decrease as the window overlap ratio increases. One-pixel shifting (i.e., sliding the window on the image data in steps of one pixel for parametric imaging) as the maximum overlap ratio significantly improves the WMC Nakagami image quality. Concurrently, the proposed FACO method combined with a computational platform that optimizes the matrix computation can accelerate WMC Nakagami imaging, allowing the detection of liver fibrosis-induced changes in envelope statistics. FACO-accelerated WMC Nakagami imaging is a new-generation Nakagami imaging technique with an improved image quality and fast computation. PMID:27125557
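The sliding-window computation described above can be sketched as follows. This is a simplified stand-in, not the authors' FACO implementation: `box_mean` and `nakagami_map` are hypothetical helpers, and a cumulative-sum box filter plays the role of the convolution operator that makes one-pixel-shift parametric maps affordable.

```python
import numpy as np

def box_mean(a, k):
    """Local mean over every k x k window (one-pixel sliding step),
    computed with 2-D cumulative sums instead of per-window loops --
    the same convolution idea behind the acceleration."""
    c = np.pad(a, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def nakagami_map(envelope, k=7):
    """Moment-based Nakagami m parameter in every window:
    m = E[R^2]^2 / Var(R^2), estimated from local moments."""
    r2 = envelope.astype(float) ** 2
    m1 = box_mean(r2, k)          # local E[R^2]
    m2 = box_mean(r2 ** 2, k)     # local E[R^4]
    var = np.maximum(m2 - m1 ** 2, 1e-12)
    return m1 ** 2 / var

rng = np.random.default_rng(0)
env = rng.rayleigh(scale=1.0, size=(64, 64))  # Rayleigh envelope: true m = 1
m_map = nakagami_map(env, k=15)
```

For a Rayleigh-distributed envelope the map should hover near m = 1; window-modulated compounding would additionally average maps computed with several window sizes.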

  6. Multispectral laser imaging for advanced food analysis

    NASA Astrophysics Data System (ADS)

    Senni, L.; Burrascano, P.; Ricci, M.

    2016-07-01

    A hardware-software apparatus for food inspection capable of realizing multispectral NIR laser imaging at four different wavelengths is herein discussed. The system was designed to operate in a through-transmission configuration to detect the presence of unwanted foreign bodies inside samples, whether packed or unpacked. A modified Lock-In technique was employed to counterbalance the significant signal intensity attenuation due to transmission across the sample and to extract the multispectral information more efficiently. The NIR laser wavelengths used to acquire the multispectral images can be varied to deal with different materials and to focus on specific aspects. In the present work the wavelengths were selected after a preliminary analysis to enhance the image contrast between foreign bodies and food in the sample, thus identifying the location and nature of the defects. Experimental results obtained from several specimens, with and without packaging, are presented and the multispectral image processing as well as the achievable spatial resolution of the system are discussed.

  7. Accuracy Improvement by the Least Squares Image Matching Evaluated on the CARTOSAT-1

    NASA Astrophysics Data System (ADS)

    Afsharnia, H.; Azizi, A.; Arefi, H.

    2015-12-01

    Generating accurate elevation data from satellite images is a prerequisite step for applications that involve disaster forecasting and management using GIS platforms. In this respect, high resolution satellite optical sensors may be regarded as one of the prime and valuable sources for generating accurate and updated elevation information. However, one of the main drawbacks of conventional approaches for automatic elevation generation from these satellite optical data using image matching techniques is the lack of flexibility in the image matching functional models to dynamically take into account the geometric and radiometric dissimilarities between the homologous stereo image points. The classical least squares image matching (LSM) method, on the other hand, is quite flexible in incorporating the geometric and radiometric variations of image pairs into its functional model. The main objective of this paper is to evaluate and compare the potential of the LSM technique for generating disparity maps from high resolution satellite images to achieve subpixel precision. To evaluate the rate of success of the LSM, the size of the y-disparities between the homologous points is taken as the precision criterion. The evaluation is performed on Cartosat-1 along-track stereo images over highly mountainous terrain. The precision improvement is judged based on the standard deviation and the scatter pattern of the y-disparity data. The analysis of the results indicates that the LSM achieved a matching precision of about 0.18 pixels, clearly superior to manual pointing, which yielded a precision of 0.37 pixels.
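The least squares matching idea can be illustrated in one dimension. The sketch below estimates only a subpixel shift by Gauss-Newton iteration; the full LSM functional model additionally carries affine geometric and radiometric (gain/offset) parameters, which are omitted here, and `lsm_shift` is a hypothetical helper.

```python
import numpy as np

def lsm_shift(g1, g2, iters=20):
    """1-D least squares matching: refine the subpixel shift s that best
    maps g1(x + s) onto g2(x), by Gauss-Newton on the linearized model."""
    x = np.arange(len(g1), dtype=float)
    s = 0.0
    for _ in range(iters):
        g1s = np.interp(x + s, x, g1)   # resampled template at current shift
        d = np.gradient(g1s)            # Jacobian dg1(x+s)/ds
        e = g2 - g1s                    # residuals
        ds = (d @ e) / (d @ d)          # normal-equation update
        s += ds
        if abs(ds) < 1e-6:
            break
    return s

# Synthetic pair: g2 is g1 shifted by 1.4 samples.
xi = np.arange(200, dtype=float)
g1 = np.sin(xi * 6 * np.pi / 200) + 0.3 * np.sin(xi * 16.2 / 200 * np.pi)
g2 = np.interp(xi + 1.4, xi, g1)
est = lsm_shift(g1, g2)
```

The recovered shift is accurate to a small fraction of a pixel, which is the mechanism behind the subpixel y-disparity precision reported above.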

  8. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information that is being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6 month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions for simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  9. An Analysis of Web Image Queries for Search.

    ERIC Educational Resources Information Center

    Pu, Hsiao-Tieh

    2003-01-01

    Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)

  10. Nursing image: an evolutionary concept analysis.

    PubMed

    Rezaei-Adaryani, Morteza; Salsali, Mahvash; Mohammadi, Eesa

    2012-12-01

    A long-term challenge to the nursing profession is the concept of image. In this study, we used the Rodgers' evolutionary concept analysis approach to analyze the concept of nursing image (NI). The aim of this concept analysis was to clarify the attributes, antecedents, consequences, and implications associated with the concept. We performed an integrative internet-based literature review to retrieve English literature published from 1980-2011. Findings showed that NI is a multidimensional, all-inclusive, paradoxical, dynamic, and complex concept. The media, invisibility, clothing style, nurses' behaviors, gender issues, and professional organizations are the most important antecedents of the concept. We found that NI is pivotal in staff recruitment and nursing shortage, resource allocation to nursing, nurses' job performance, workload, burnout and job dissatisfaction, violence against nurses, public trust, and salaries available to nurses. An in-depth understanding of the NI concept would assist nurses to eliminate negative stereotypes and build a more professional image for the nurse and the profession. PMID:23343236

  11. Shannon information and ROC analysis in imaging.

    PubMed

    Clarkson, Eric; Cushing, Johnathan B

    2015-07-01

    Shannon information (SI) and the ideal-observer receiver operating characteristic (ROC) curve are two different methods for analyzing the performance of an imaging system for a binary classification task, such as the detection of a variable signal embedded within a random background. In this work we describe a new ROC curve, the Shannon information receiver operator curve (SIROC), that is derived from the SI expression for a binary classification task. We then show that the ideal-observer ROC curve and the SIROC have many properties in common, and are equivalent descriptions of the optimal performance of an observer on the task. This equivalence is described mathematically by an integral transform that maps the ideal-observer ROC curve onto the SIROC. This then leads to an integral transform relating the minimum probability of error, as a function of the odds against a signal, to the conditional entropy, as a function of the same variable. This last relation then gives us the complete mathematical equivalence between ideal-observer ROC analysis and SI analysis of the classification task for a given imaging system. We also find that there is a close relationship between the area under the ideal-observer ROC curve, which is often used as a figure of merit for imaging systems, and the area under the SIROC. Finally, we show that the relationships between the two curves result in new inequalities relating SI to ROC quantities for the ideal observer. PMID:26367158
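As a concrete instance of an ideal-observer ROC curve, the signal-known-exactly Gaussian task is the standard textbook case (my simplification; the paper treats variable signals in random backgrounds). The likelihood ratio is monotone in the data, so sweeping a threshold over the two class-conditional densities traces the optimal ROC, whose area has a closed form.

```python
import numpy as np
from math import erf, sqrt

# Signal absent: data ~ N(0,1); signal present: data ~ N(d',1).
# Sweeping threshold t gives FPF(t) and TPF(t); AUC = Phi(d'/sqrt(2)).
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF
dprime = 1.5
t = np.linspace(-8.0, 8.0, 4001)
fpf = np.array([1.0 - Phi(v) for v in t])            # false-positive fraction
tpf = np.array([1.0 - Phi(v - dprime) for v in t])   # true-positive fraction
# Trapezoidal area under the (fpf, tpf) curve; fpf decreases with t.
auc = np.sum((fpf[:-1] - fpf[1:]) * (tpf[:-1] + tpf[1:]) / 2.0)
auc_exact = Phi(dprime / sqrt(2.0))                  # closed form for this task
```

The numerical area agrees with the closed form, illustrating the ROC-curve side of the equivalence the abstract establishes.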

  12. Simple Low Level Features for Image Analysis

    NASA Astrophysics Data System (ADS)

    Falcoz, Paolo

    As human beings, we perceive the world around us mainly through our eyes, and give what we see the status of “reality”; as such we historically tried to create ways of recording this reality so we could augment or extend our memory. From early attempts in photography like the image produced in 1826 by the French inventor Nicéphore Niépce (Figure 2.1) to the latest high definition camcorders, the number of recorded pieces of reality increased exponentially, posing the problem of managing all that information. Most of the raw video material produced today has lost its memory augmentation function, as it will hardly ever be viewed by any human; pervasive CCTVs are an example. They generate an enormous amount of data each day, but there is not enough “human processing power” to view them. Therefore the need for effective automatic image analysis tools is great, and a lot effort has been put in it, both from the academia and the industry. In this chapter, a review of some of the most important image analysis tools are presented.

  13. Improved fusing infrared and electro-optic signals for high-resolution night images

    NASA Astrophysics Data System (ADS)

    Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victor

    2012-06-01

    Electro-optic (EO) images exhibit the properties of high resolution and low noise level, while it is challenging to distinguish objects in infrared (IR) imagery, especially objects with similar temperatures. In earlier work, we proposed a novel framework for IR image enhancement based on the information (e.g., edges) from EO images. Our framework superimposed the detected edges of the EO image with the corresponding transformed IR image. This framework resulted in better resolution IR images that help distinguish objects at night. For our IR image system, we used the theoretical point spread function (PSF) proposed by Russell C. Hardie et al., which is composed of the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we designed an inverse filter based on the proposed PSF to transform the IR image. In this paper, blending the detected edges of the EO image with both the corresponding transformed IR image and the original IR image is the principal idea for improving the previous framework. This improved framework requires four main steps: (1) inverse filter-based IR image transformation, (2) image edge detection, (3) image registration, and (4) blending of the corresponding images. Simulation results show that the blended IR images have better quality than the superimposed images generated under the previous framework. Based on the same steps, the simulation result shows a blended IR image of better quality when only the original IR image is available.
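Steps (2) and (4) of the framework can be sketched in a few lines, assuming the images are already registered. This is a minimal illustration with hypothetical helpers (`sobel_edges`, `blend_eo_ir`) and a fixed blending weight, not the paper's inverse-filter pipeline.

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via Sobel kernels (pure numpy, 'valid' region)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2)); gy = np.zeros_like(gx)
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def blend_eo_ir(eo, ir, w_edge=0.4):
    """Superimpose normalized EO edge detail onto the registered IR image."""
    edges = sobel_edges(eo)
    edges /= edges.max() + 1e-12
    core = ir[1:-1, 1:-1]          # crop IR to the 'valid' edge-map region
    return (1 - w_edge) * core + w_edge * edges

eo = np.zeros((32, 32)); eo[:, 16:] = 1.0   # sharp boundary visible in EO
ir = np.full((32, 32), 0.5)                 # flat IR: objects indistinct
out = blend_eo_ir(eo, ir)
```

The flat IR image gains a bright contour exactly where the EO image has an edge, which is the effect the framework exploits to delineate objects of similar temperature.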

  14. Improving GPR image resolution in lossy ground using dispersive migration

    USGS Publications Warehouse

    Oden, C.P.; Powers, M.H.; Wright, D.L.; Olhoeft, G.R.

    2007-01-01

    As a compact wave packet travels through a dispersive medium, it becomes dilated and distorted. As a result, ground-penetrating radar (GPR) surveys over conductive and/or lossy soils often result in poor image resolution. A dispersive migration method is presented that combines an inverse dispersion filter with frequency-domain migration. The method requires a fully characterized GPR system including the antenna response, which is a function of the local soil properties for ground-coupled antennas. The GPR system response spectrum is used to stabilize the inverse dispersion filter. Dispersive migration restores attenuated spectral components when the signal-to-noise ratio is adequate. Applying the algorithm to simulated data shows that the improved spatial resolution is significant when data are acquired with a GPR system having 120 dB or more of dynamic range, and when the medium has a loss tangent of 0.3 or more. Results also show that dispersive migration provides no significant advantage over conventional migration when the loss tangent is less than 0.3, or when using a GPR system with a small dynamic range. © 2007 IEEE.

  15. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
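The feature-extraction side of such a system can be sketched with a separable Haar transform standing in for the wavelet frames used by the authors (an assumption for illustration; `haar_level` and `texture_features` are hypothetical helpers, and the maximum likelihood classification stage is omitted).

```python
import numpy as np

def haar_level(a):
    """One level of the 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]
    b = (a[0::2] + a[1::2]) / 2; d = (a[0::2] - a[1::2]) / 2   # row pass
    LL = (b[:, 0::2] + b[:, 1::2]) / 2; LH = (b[:, 0::2] - b[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2; HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def texture_features(img, levels=2):
    """Mean absolute detail energy per subband and level -- a rough
    stand-in for wavelet-frame texture features."""
    feats = []
    a = img.astype(float)
    for _ in range(levels):
        a, lh, hl, hh = haar_level(a)
        feats += [np.abs(lh).mean(), np.abs(hl).mean(), np.abs(hh).mean()]
    return np.array(feats)

rng = np.random.default_rng(1)
fine = rng.normal(size=(64, 64))                       # fine-grained texture
coarse = np.repeat(np.repeat(rng.normal(size=(16, 16)), 4, 0), 4, 1)
f_fine, f_coarse = texture_features(fine), texture_features(coarse)
```

A fine, sand-like texture concentrates energy in the detail subbands, while a coarse, blocky texture does not; a maximum likelihood classifier over such feature vectors is what assigns the soil-texture category.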

  16. An improved MR sequence for attenuation correction in PET/MR hybrid imaging.

    PubMed

    Sagiyama, Koji; Watanabe, Yuji; Kamei, Ryotaro; Shinyama, Daiki; Baba, Shingo; Honda, Hiroshi

    2016-04-01

    The aim of this study was to investigate the effects of MR parameters on tissue segmentation and determine the optimal MR sequence for attenuation correction in PET/MR hybrid imaging. Eight healthy volunteers were examined using a PET/MR hybrid scanner with six three-dimensional turbo-field-echo sequences for attenuation correction by modifying the echo time, k-space trajectory in the phase-encoding direction, and image contrast. MR images for attenuation correction were obtained from six MR sequences in each session; each volunteer underwent four sessions. Two radiologists assessed the attenuation correction maps generated from the MR images with respect to segmentation errors and ghost artifacts on a five-point scale, and the scores were decided by consensus. Segmentation accuracy and reproducibility were compared. Multiple regression analysis was performed to determine the effects of each MR parameter. The two three-dimensional turbo-field-echo sequences with an in-phase echo time and radial k-space sampling showed the highest total scores for segmentation accuracy, with a high reproducibility. In multiple regression analysis, the score with the shortest echo time (-3.44, P<0.0001) and Cartesian sampling in the anterior/posterior phase-encoding direction (-2.72, P=0.002) was significantly lower than that with in-phase echo time and Cartesian sampling in the right/left phase-encoding direction. Radial k-space sampling provided a significantly higher score (+5.08, P<0.0001) compared with Cartesian sampling. Furthermore, radial sampling improved intrasubject variations in the segmentation score (-8.28%, P=0.002). Image contrast had no significant effect on the total score or reproducibility. These results suggest that three-dimensional turbo-field-echo MR sequences with an in-phase echo time and radial k-space sampling provide improved MR-based attenuation correction maps. PMID:26656909

  17. Digital Transplantation Pathology: Combining Whole Slide Imaging, Multiplex Staining, and Automated Image Analysis

    PubMed Central

    Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J

    2013-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. “-Omics” analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a “systems” model. Nevertheless, except for immunostaining, no transformative advancements have “modernized” routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology - global “–omic” analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785

  18. PAMS photo image retrieval prototype alternatives analysis

    SciTech Connect

    Conner, M.L.

    1996-04-30

    Photography and Audiovisual Services uses a system called the Photography and Audiovisual Management System (PAMS) to perform order entry and billing services. The PAMS system utilizes Revelation Technologies database management software, AREV. Work is currently in progress to link the PAMS AREV system to a Microsoft SQL Server database engine to provide photograph indexing and query capabilities. The link between AREV and SQL Server will use a technique called "bonding." This photograph imaging subsystem will interface to the PAMS system and handle the image capture and retrieval portions of the project. The intent of this alternatives analysis is to examine the software and hardware alternatives available to meet the requirements for this project, and identify a cost-effective solution.

  19. Fusion and quality analysis for remote sensing images using contourlet transform

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

    Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
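The transform-domain fusion scheme can be sketched as follows, with a one-level Haar transform standing in for the contourlet transform (an assumption to keep the example self-contained; `haar2`, `ihaar2` and `fuse` are hypothetical helpers): keep the spectral approximation of the multispectral band and inject the spatial detail subbands of the panchromatic image.

```python
import numpy as np

def haar2(a):
    """One level of the 2-D Haar transform: (LL, LH, HL, HH)."""
    b = (a[0::2] + a[1::2]) / 2; d = (a[0::2] - a[1::2]) / 2
    return ((b[:, 0::2] + b[:, 1::2]) / 2, (b[:, 0::2] - b[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    b = np.empty((LL.shape[0], 2 * LL.shape[1])); d = np.empty_like(b)
    b[:, 0::2], b[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    a = np.empty((2 * b.shape[0], b.shape[1]))
    a[0::2], a[1::2] = b + d, b - d
    return a

def fuse(ms, pan):
    """Keep the MS approximation (spectral content), take the PAN
    detail subbands (spatial content)."""
    ll_ms, _, _, _ = haar2(ms)
    _, lh, hl, hh = haar2(pan)
    return ihaar2(ll_ms, lh, hl, hh)

rng = np.random.default_rng(2)
ms = np.repeat(np.repeat(rng.normal(size=(4, 4)), 2, 0), 2, 1)  # blurry band
pan = rng.normal(size=(8, 8))                                   # sharp band
fused = fuse(ms, pan)
```

Because the approximation subband comes from the MS image, the fused result preserves its radiometry (its mean), while the injected PAN subbands supply the missing spatial detail; contourlets refine this scheme with directional subbands.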

  20. Mammographic quantitative image analysis and biologic image composition for breast lesion characterization and classification

    SciTech Connect

    Drukker, Karen Giger, Maryellen L.; Li, Hui; Duewer, Fred; Malkov, Serghei; Joe, Bonnie; Kerlikowske, Karla; Shepherd, John A.; Flowers, Chris I.; Drukteinis, Jennifer S.

    2014-03-15

    Purpose: To investigate whether biologic image composition of mammographic lesions can improve upon existing mammographic quantitative image analysis (QIA) in estimating the probability of malignancy. Methods: The study population consisted of 45 breast lesions imaged with dual-energy mammography prior to breast biopsy with final diagnosis resulting in 10 invasive ductal carcinomas, 5 ductal carcinomas in situ, 11 fibroadenomas, and 19 other benign diagnoses. Analysis was threefold: (1) The raw low-energy mammographic images were analyzed with an established in-house QIA method, “QIA alone,” (2) the three-compartment breast (3CB) composition measures—derived from the dual-energy mammography—of water, lipid, and protein thickness were assessed, “3CB alone,” and (3) information from QIA and 3CB was combined, “QIA + 3CB.” Analysis was initiated from radiologist-indicated lesion centers and was otherwise fully automated. Steps of the QIA and 3CB methods were lesion segmentation, characterization, and subsequent classification for malignancy in leave-one-case-out cross-validation. Performance assessment included box plots, Bland–Altman plots, and Receiver Operating Characteristic (ROC) analysis. Results: The area under the ROC curve (AUC) for distinguishing between benign and malignant lesions (invasive and DCIS) was 0.81 (standard error 0.07) for the “QIA alone” method, 0.72 (0.07) for the “3CB alone” method, and 0.86 (0.04) for “QIA + 3CB” combined. The difference in AUC was 0.043 between “QIA + 3CB” and “QIA alone” but failed to reach statistical significance (95% confidence interval [-0.17 to +0.26]). Conclusions: In this pilot study analyzing the new 3CB imaging modality, knowledge of the composition of breast lesions and their periphery appeared additive in combination with existing mammographic QIA methods for the distinction between different benign and malignant lesion types.

  1. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
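The constrained estimation described above can be illustrated in one dimension. This sketch uses an identity "transform" and a quadratic smoothness prior as crude stand-ins for the paper's block DCT/wavelet coefficients and non-Gaussian MRF model; `decompress_gp` is a hypothetical helper showing only the gradient-projection mechanics.

```python
import numpy as np

def decompress_gp(q, delta, iters=300, step=0.1):
    """Gradient-projection decoding sketch: start from the dequantized
    signal, descend on a quadratic smoothness prior, and after each
    step project back into each sample's quantization cell
    [q - delta/2, q + delta/2], so the result stays consistent with
    the compressed data."""
    x = q.astype(float).copy()
    lo, hi = q - delta / 2, q + delta / 2
    for _ in range(iters):
        g = np.zeros_like(x)               # gradient of sum (x[i+1]-x[i])^2
        g[1:-1] = 2 * (2 * x[1:-1] - x[:-2] - x[2:])
        g[0] = 2 * (x[0] - x[1])
        g[-1] = 2 * (x[-1] - x[-2])
        x = np.clip(x - step * g, lo, hi)  # projection onto the cells
    return x

delta = 0.5
truth = np.sin(np.linspace(0, np.pi, 40))
q = np.round(truth / delta) * delta        # coarse scalar quantization
rec = decompress_gp(q, delta)
```

The decoded signal is smoother than the staircase of centroid reconstruction points yet still lies in the same quantization cells, which is exactly how the method suppresses coding artifacts without contradicting the bitstream.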

  2. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. Its applications in forensic science are wide, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. Through this paper the authors have tried to explain these tasks, which are described in three categories: Image Compression, Image Enhancement & Restoration and Measurement Extraction, with the help of examples such as signature comparison, counterfeit currency comparison and footwear sole impressions using the software Canvas and Corel Draw.

  3. Bayesian Analysis Of HMI Solar Image Observables And Comparison To TSI Variations And MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.

    2014-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from May, 2010 to June, 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Ulrich, R.K., Parker, D, Bertello, L. and

  4. Bayesian Analysis of Hmi Images and Comparison to Tsi Variations and MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.; Tran, T. V.

    2015-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from June, 2010 to December, 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Ulrich, R.K., Parker, D, Bertello, L. and

  5. Dose and detectability improvements with high energy phase sensitive x-ray imaging in comparison to low energy conventional imaging

    PubMed Central

    Wong, Molly Donovan; Yan, Aimin; Ghani, Muhammad; Li, Yuhua; Fajardo, Laurie; Wu, Xizeng; Liu, Hong

    2014-01-01

    The objective of this study was to demonstrate the potential benefits of using high energy x-rays for phase sensitive breast imaging through a comparison with conventional mammography imaging. We compared images of a contrast-detail (CD) phantom acquired on a prototype phase sensitive x-ray imaging system with images acquired on a commercial flat panel digital mammography unit. The phase contrast images were acquired using a micro-focus x-ray source with a 50 μm focal spot at 120 kVp and 4.5 mAs, with a magnification factor of 2.46 and a 50 μm pixel pitch. A phase attenuation duality (PAD)-based phase retrieval algorithm that requires only a single phase contrast image was applied. Conventional digital mammography images were acquired at 27 kVp, 131 mAs and 28 kVp, 54 mAs. For the same radiation dose, both the observer study and SNR/FOM comparisons indicated a large improvement by the phase retrieved image as compared to the clinical system for the larger disk sizes, but the improvement was not enough to detect the smallest disks. Compared to the double dose image acquired with the clinical system, the observer study also indicated that the phase retrieved image provided improved detection capabilities for all disk sizes except the smallest disks. Thus the SNR improvement provided by phase contrast imaging is not yet enough to offset the noise reduction provided by the clinical system at the doubled dose level. However, the potential demonstrated by this study for high energy phase sensitive x-ray imaging to improve lesion detection and reduce radiation dose in mammography warrants further investigation of this technique. PMID:24732108

  6. Dose and detectability improvements with high energy phase sensitive x-ray imaging in comparison to low energy conventional imaging

    NASA Astrophysics Data System (ADS)

    Donovan Wong, Molly; Yan, Aimin; Ghani, Muhammad; Li, Yuhua; Fajardo, Laurie; Wu, Xizeng; Liu, Hong

    2014-05-01

    The objective of this study was to demonstrate the potential benefits of using high energy x-rays for phase sensitive breast imaging through a comparison with conventional mammography imaging. We compared images of a contrast-detail phantom acquired on a prototype phase sensitive x-ray imaging system with images acquired on a commercial flat panel digital mammography unit. The phase contrast images were acquired using a micro-focus x-ray source with a 50 µm focal spot at 120 kVp and 4.5 mAs, with a magnification factor of 2.46 and a 50 µm pixel pitch. A phase attenuation duality-based phase retrieval algorithm that requires only a single phase contrast image was applied. Conventional digital mammography images were acquired at 27 kVp, 131 mAs and 28 kVp, 54 mAs. For the same radiation dose, both the observer study and signal-to-noise ratio (SNR)/figure of merit comparisons indicated a large improvement by the phase retrieved image as compared to the clinical system for the larger disc sizes, but the improvement was not enough to detect the smallest discs. Compared to the double dose image acquired with the clinical system, the observer study also indicated that the phase retrieved image provided improved detection capabilities for all disc sizes except the smallest discs. Thus the SNR improvement provided by phase contrast imaging is not yet enough to offset the noise reduction provided by the clinical system at the doubled dose level. However, the potential demonstrated by this study for high energy phase sensitive x-ray imaging to improve lesion detection and reduce radiation dose in mammography warrants further investigation of this technique.

  7. Productivity improvement through cycle time analysis

    NASA Astrophysics Data System (ADS)

    Bonal, Javier; Rios, Luis; Ortega, Carlos; Aparicio, Santiago; Fernandez, Manuel; Rosendo, Maria; Sanchez, Alejandro; Malvar, Sergio

    1996-09-01

    A cycle time (CT) reduction methodology has been developed at the Lucent Technologies facility (formerly AT&T) in Madrid, Spain. It is based on comparing the contribution of each process step in each technology with a target generated by a cycle time model. These target cycle times are obtained using capacity data for the machines processing those steps, queuing theory and theory of constraints (TOC) principles (buffers to protect the bottleneck, and low cycle time/inventory everywhere else). Overall equipment efficiency (OEE)-like analysis is done on the machine groups with the largest differences between their target cycle times and actual values. Comparisons between the current values of the parameters that govern their capacity (process times, availability, idles, reworks, etc.) and the engineering standards are made to detect the causes of excess contribution to cycle time. Several user-friendly graphical tools have been developed to track and analyze those capacity parameters. Two tools have proved especially important: ASAP (analysis of scheduling, arrivals and performance) and Performer, which analyzes interaction problems among machines, procedures and direct labor. Performer is designed for detailed, daily analysis of an isolated machine. Extensive use of this tool by the whole labor force has produced impressive results in eliminating many small inefficiencies, with direct positive implications for OEE. As for ASAP, it shows the lots in process/queue for different machines at the same time. ASAP is a powerful tool for analyzing product flow management and assigned capacity for interdependent operations such as cleaning and oxidation/diffusion. Additional tools have been developed to track, analyze and improve process times and availability.
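
    One common way to generate such target cycle times from machine capacity data is the Kingman (VUT) queueing approximation; the abstract does not name the model used, so this is a hedged illustration only:

```python
def target_cycle_time(te, utilization, ca2=1.0, ce2=1.0):
    """Kingman (VUT) approximation for a single machine's cycle time:
    queue time ~= variability term * utilization term * effective process time.
    te: effective process time; utilization in [0, 1);
    ca2/ce2: squared coefficients of variation of arrivals and service."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    wq = ((ca2 + ce2) / 2.0) * (utilization / (1.0 - utilization)) * te
    return wq + te
```

    For example, at 80% utilization with Poisson-like variability (ca2 = ce2 = 1) the target cycle time is 5x the effective process time, which is why steps far above their target flag a capacity-parameter problem.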

  8. Analysis of katabatic flow using infrared imaging

    NASA Astrophysics Data System (ADS)

    Grudzielanek, M.; Cermak, J.

    2013-12-01

    We present a novel high-resolution IR method which is developed, tested and used for the analysis of katabatic flow. Modern thermal imaging systems allow for the recording of infrared picture sequences and thus the monitoring and analysis of dynamic processes. In order to identify, visualize and analyze dynamic air flow using infrared imaging, a highly reactive 'projection' surface is needed along the air flow. Here, a design for this type of analysis is proposed and evaluated. Air flow situations with strong air temperature gradients and fluctuations, such as katabatic flow, are particularly suitable for this new method. The method is applied here to analyze nocturnal cold air flows on gentle slopes. In combination with traditional methods, the vertical and temporal dynamics of cold air flow are analyzed. Several assumptions about cold air flow dynamics are confirmed explicitly for the first time. By characterizing the flow in terms of the frequency, size and period of the cold air fluctuations, drops are identified and organized in a newly derived classification system of cold air flow phases. In addition, new flow characteristics are detected, such as sharp cold air caps and turbulence inside the drops. Vertical temperature gradients inside cold air drops and their temporal evolution are presented in high-resolution Hovmöller-type diagrams and sequenced time-lapse infrared videos.

  9. Self-assessed performance improves statistical fusion of image labels

    PubMed Central

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-01-01

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance
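
    The fusion idea tested here, voting weighted by each rater's self-assessed confidence, can be sketched as follows (a simplified stand-in for the statistical fusion algorithms studied; function names are illustrative):

```python
def weighted_vote(labels, confidences):
    """Fuse one pixel's labels from several raters, weighting each vote by
    the rater's self-assessed confidence in [0, 1]."""
    scores = {}
    for lab, conf in zip(labels, confidences):
        scores[lab] = scores.get(lab, 0.0) + conf
    return max(scores, key=scores.get)

def fuse_slice(rater_slices, confidences):
    """Apply the weighted vote at every pixel of a 2-D slice."""
    fused = []
    for rows in zip(*rater_slices):
        fused.append([weighted_vote(pix, confidences)
                      for pix in zip(*rows)])
    return fused
```

    Simple majority voting is the special case where every confidence equals 1; the study's finding is that the weighted variant performs statistically better.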

  10. Self-assessed performance improves statistical fusion of image labels

    SciTech Connect

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-03-15

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance

  11. Patient-adaptive lesion metabolism analysis by dynamic PET images.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2012-01-01

    Dynamic PET imaging provides important spatial-temporal information for metabolism analysis of organs and tissues, and serves as a valuable reference for clinical diagnosis and pharmacokinetic analysis. Due to the poor statistical properties of the measurement data in low count dynamic PET acquisition and disturbances from surrounding tissues, identifying small lesions inside the human body is still a challenging issue. The uncertainties in estimating the arterial input function also limit the accuracy and reliability of the metabolism analysis of lesions. Furthermore, patient size and motion during PET acquisition yield mismatches against a general purpose reconstruction system matrix, which also affect the quantitative accuracy of lesion metabolism analyses. In this paper, we present a dynamic PET metabolism analysis framework that defines a patient-adaptive system matrix to improve lesion metabolism analysis. Both patient size information and potential small lesions are incorporated by simulating phantoms of different sizes and individual point source responses. The new framework improves the quantitative accuracy of lesion metabolism analysis and makes lesion identification more precise. The requirement for accurate input functions is also reduced. Experiments are conducted on a Monte Carlo simulated data set for quantitative analysis and validation, and on real patient scans to assess clinical potential. PMID:23286175

  12. Error analysis of large aperture static interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zhang, Guo

    2015-12-01

    The Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux and wide spectral range, which overcomes the contradiction between high flux and high stability and thus has important value in scientific studies and applications. However, because its imaging style differs from that of traditional imaging spectrometers, LASIS exhibits different error laws in the imaging process, and its data processing is correspondingly complicated. In order to improve the accuracy of spectrum detection and serve quantitative analysis and monitoring of topographical surface features, the error laws of LASIS imaging must be understood. In this paper, LASIS errors are classified as interferogram error, radiometric correction error and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS under combined temporal and spatial modulation is experimentally analyzed, along with the errors from the radiometric correction and spectral inversion processes.

  13. Sensitivity analysis of imaging geometries for prostate diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaodong; Zhu, Timothy C.

    2008-02-01

    Endoscopic and interstitial diffuse optical tomography have been studied in clinical investigations for imaging prostate tissues, yet there is no comprehensive comparison of how these two imaging geometries affect the quality of the reconstructed images. In this study, the effect of imaging geometry is investigated by comparing the cross-sections of the Jacobian sensitivity matrix and reconstructed images for three-dimensional mathematical phantoms. Next, the effect of source-detector configurations and the number of measurements in both geometries is evaluated using singular value analysis. The amount of information contained in each source-detector configuration and for different numbers of measurements is compared. Further, the effect of different measurement strategies for 3D endoscopic and interstitial tomography is examined, and the pros and cons of using in-plane and off-plane measurements are discussed. Results showed that reconstruction in the interstitial geometry outperforms the endoscopic geometry when deeper anomalies are present. Eight sources with 8 detectors, and 6 sources with 12 detectors, are sufficient for 2D reconstruction with the endoscopic and interstitial geometries, respectively. For a 3D problem, the quantitative accuracy in the interstitial geometry is significantly improved using off-plane measurements, but only slightly in the endoscopic geometry.
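
    Singular value analysis of the sensitivity matrix can be illustrated on a toy 2x2 case: the singular values of a Jacobian J are the square roots of the eigenvalues of J^T J, and counting those above a threshold gives a rough proxy for the independent information carried by a configuration. A sketch (the relative-threshold rule is an assumption, not the paper's criterion):

```python
import math

def singular_values_2x2(J):
    """Singular values of a 2x2 matrix via the eigenvalues of J^T J."""
    (a, b), (c, d) = J
    # Entries of the symmetric matrix J^T J: [[p, q], [q, r]]
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    eigs = [tr / 2 + disc, max(tr / 2 - disc, 0.0)]
    return [math.sqrt(e) for e in eigs]          # descending order

def usable_measurements(J, rel_threshold=1e-2):
    """Count singular values above a relative threshold -- a proxy for how
    much independent information a source-detector configuration carries."""
    svals = singular_values_2x2(J)
    return sum(1 for s in svals if s >= rel_threshold * svals[0])
```

    A rank-deficient Jacobian (redundant measurements) shows up immediately as a collapsed singular spectrum.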

  14. Exploring new quantitative CT image features to improve assessment of lung cancer prognosis

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Qian, Wei; Kang, Yan; Guan, Yubao; Lure, Fleming; Zheng, Bin

    2015-03-01

    Due to the promotion of lung cancer screening, more Stage I non-small-cell lung cancers (NSCLC) are currently detected, which usually have favorable prognosis. However, a high percentage of the patients have cancer recurrence after surgery, which reduces overall survival rate. To achieve optimal efficacy in treating and managing Stage I NSCLC patients, it is important to develop more accurate and reliable biomarkers or tools to predict cancer prognosis. The purpose of this study is to investigate a new quantitative image analysis method to predict the risk of lung cancer recurrence of Stage I NSCLC patients after lung cancer surgery using conventional chest computed tomography (CT) images, and to compare the prediction result with a popular genetic biomarker, namely the protein expression of the excision repair cross-complementing 1 (ERCC1) gene. In this study, we developed and tested a new computer-aided detection (CAD) scheme to segment lung tumors and initially compute 35 tumor-related morphologic and texture features from CT images. By applying a machine learning based feature selection method, we identified a set of 8 effective and non-redundant image features. Using these features, we trained a naïve Bayesian network based classifier to predict the risk of cancer recurrence. When applied to a test dataset with 79 Stage I NSCLC cases, the computed areas under the ROC curves were 0.77±0.06 and 0.63±0.07 when using the quantitative image based classifier and ERCC1, respectively. The study results demonstrated the feasibility of improving the accuracy of predicting cancer prognosis or recurrence risk using a CAD-based quantitative image analysis method.
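
    A naïve Bayesian classifier over a handful of image features, scored by area under the ROC curve, can be sketched as follows (Gaussian class-conditionals with equal priors assumed; the paper's exact feature set and network structure are not reproduced):

```python
import math
import statistics

def fit_gnb(X, y):
    """Gaussian naive Bayes: per-class (mean, stdev) for each feature.
    Class priors are assumed equal for brevity."""
    model = {}
    for cls in set(y):
        rows = [x for x, lab in zip(X, y) if lab == cls]
        cols = list(zip(*rows))
        model[cls] = [(statistics.mean(c), statistics.pstdev(c) or 1e-6)
                      for c in cols]
    return model

def score_recurrence(model, x):
    """Log-odds style score for class 1 (recurrence) vs class 0."""
    def loglik(cls):
        return sum(-((v - m) ** 2) / (2 * s * s) - math.log(s)
                   for v, (m, s) in zip(x, model[cls]))
    return loglik(1) - loglik(0)

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    The reported 0.77 AUC would be the output of `auc` over the 79 test-case scores.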

  15. Improved Flat-Fielding for Crowded Field Imaging Polarimetry: The Star Forming Bok Globule B335

    NASA Astrophysics Data System (ADS)

    Wright, J.; Ilardi, P.; Clemens, D.

    1996-05-01

    In order to achieve a qualitative improvement in descriptions of the magnetic fields associated with small, dense molecular clouds, traced via background starlight polarization, a CCD-based imaging polarimeter was built and operated. This instrument simultaneously measured the linear polarizations of 25 - 50 stars per 5 arcminute CCD frame (Clemens and Leach 1987, Optical Engineering, 29, 923). The instrument consisted of a rotating polaroid in front of a quarter-wave plate, broadband V filter, and a TI 800 x 800 pixel CCD. The observing mode involved polarization chopping and dual direction CCD charge shifting, resulting in images which contained two stellar objects for each star. However, decoupling of the flat-field response of the CCD from the shifted-and-added images is not trivial. One solution was to ignore the flat-field step and proceed to photometric and then polarimetric analysis of the star light, as was performed by Kane in his dissertation (1995, Boston University). Revisiting the flat-fielding problem, we have developed a new approach which borrows from techniques employed in ground-based infrared astronomy. In this method, we analyze the shape of the stellar background (sky) imaged onto the CCD, via comparison with dome flats, to develop a detailed sky model. Subtraction of this sky model from the raw images leaves sky corrected stellar images. Stellar pair matching and classification is used to extract the individual stellar PSFs for direct flat field correction before performing photometry. This detailed analysis results in lower random and systematic errors than our previous method. In this poster, we present the polarimetric map deduced from the light of stars behind the periphery of the star forming globule B335, and compare the polarimetric limits obtained using our two methods.

  16. Machine learning for medical images analysis.

    PubMed

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods. PMID:27374127

  17. Research on automatic human chromosome image analysis

    NASA Astrophysics Data System (ADS)

    Ming, Delie; Tian, Jinwen; Liu, Jian

    2007-11-01

    Human chromosome karyotyping is one of the essential tasks in cytogenetics, especially in genetic syndrome diagnoses. In this thesis, an automatic procedure is introduced for human chromosome image analysis. According to the different states of touching and overlapping chromosomes, several segmentation methods are proposed to achieve the best results. The medial axis is extracted by the middle point algorithm. Chromosome bands are enhanced by an algorithm based on multiscale B-spline wavelets, extracted via the average gray profile, gradient profile and shape profile, and characterized by WDD (weighted density distribution) descriptors. A multilayer classifier is used for classification. Experimental results demonstrate that the algorithms perform well.

  18. Stem Cell Imaging: Tools to Improve Cell Delivery and Viability

    PubMed Central

    Wang, Junxin; Jokerst, Jesse V.

    2016-01-01

    Stem cell therapy (SCT) has shown very promising preclinical results in a variety of regenerative medicine applications. Nevertheless, the complete utility of this technology remains unrealized. Imaging is a potent tool used in multiple stages of SCT and this review describes the role that imaging plays in cell harvest, cell purification, and cell implantation, as well as a discussion of how imaging can be used to assess outcome in SCT. We close with some perspective on potential growth in the field. PMID:26880997

  19. Image reconstruction from Pulsed Fast Neutron Analysis

    NASA Astrophysics Data System (ADS)

    Bendahan, Joseph; Feinstein, Leon; Keeley, Doug; Loveman, Rob

    1999-06-01

    Pulsed Fast Neutron Analysis (PFNA) has been demonstrated to detect drugs and explosives in trucks and large cargo containers. PFNA uses a collimated beam of nanosecond-pulsed fast neutrons that interact with the cargo contents to produce gamma rays characteristic of their elemental composition. By timing the arrival of the emitted radiation at an array of gamma-ray detectors, a three-dimensional elemental density map, or image, of the cargo is created. The process of determining the elemental densities is complex and requires a number of steps. The first step consists of extracting from the characteristic gamma-ray spectra the counts associated with the elements of interest. Other steps are needed to correct for physical quantities such as gamma-ray production cross sections and angular distributions. The image processing also includes phenomenological corrections that take into account the neutron attenuation through the cargo, and the attenuation of the gamma rays from the point where they were generated to the gamma-ray detectors. Additional processing is required to map the elemental densities from the data acquisition coordinate system to a rectilinear system. This paper describes the image processing used to compute the elemental densities from the counts observed in the gamma-ray detectors.
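
    The phenomenological attenuation corrections described, neutron attenuation into a voxel and gamma-ray attenuation back out, amount to dividing the observed counts by the combined exponential transmission. A minimal sketch (coefficient and path-length values are illustrative, not from the paper):

```python
import math

def corrected_counts(raw_counts, mu_n, d_neutron, mu_g, d_gamma):
    """Correct an observed gamma count for neutron attenuation along the
    beam path into the voxel and gamma attenuation on the way out.
    mu_*: linear attenuation coefficients (1/cm); d_*: path lengths (cm)."""
    transmission = math.exp(-mu_n * d_neutron) * math.exp(-mu_g * d_gamma)
    return raw_counts / transmission
```

    Deeper voxels see a smaller transmission factor, so the same raw count maps to a larger corrected elemental density.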

  20. Image reconstruction from Pulsed Fast Neutron Analysis

    SciTech Connect

    Bendahan, Joseph; Feinstein, Leon; Keeley, Doug; Loveman, Rob

    1999-06-10

    Pulsed Fast Neutron Analysis (PFNA) has been demonstrated to detect drugs and explosives in trucks and large cargo containers. PFNA uses a collimated beam of nanosecond-pulsed fast neutrons that interact with the cargo contents to produce gamma rays characteristic of their elemental composition. By timing the arrival of the emitted radiation at an array of gamma-ray detectors, a three-dimensional elemental density map, or image, of the cargo is created. The process of determining the elemental densities is complex and requires a number of steps. The first step consists of extracting from the characteristic gamma-ray spectra the counts associated with the elements of interest. Other steps are needed to correct for physical quantities such as gamma-ray production cross sections and angular distributions. The image processing also includes phenomenological corrections that take into account the neutron attenuation through the cargo, and the attenuation of the gamma rays from the point where they were generated to the gamma-ray detectors. Additional processing is required to map the elemental densities from the data acquisition coordinate system to a rectilinear system. This paper describes the image processing used to compute the elemental densities from the counts observed in the gamma-ray detectors.

  1. Soil Surface Roughness through Image Analysis

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Saa-Requejo, A.; Valencia, J. L.; Moratiel, R.; Paz-Gonzalez, A.; Agro-Environmental Modeling

    2011-12-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depend on several factors, one of them being surface micro-topography, usually quantified through soil surface roughness (SSR). SSR greatly affects surface sealing and runoff generation, yet little information is available about the effect of roughness on the spatial distribution of runoff and on flow concentration. The methods commonly used to measure SSR involve measuring point elevations using a pin roughness meter or laser, both of which are labor intensive and expensive. Lately, a simple and inexpensive technique based on the percentage of shadow in soil surface images has been developed to determine SSR in the field, in order to obtain measurements for widespread application. One of the first steps in this technique is image de-noising and thresholding to estimate the percentage of black pixels in the studied area. In this work, a series of soil surface images has been analyzed applying several de-noising wavelet analyses and thresholding algorithms to study the variation in the percentage of shadow and the shadow size distribution. Funding provided by the Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. AGL2010-21501/AGR and by the Xunta de Galicia through project no. INCITE08PXIB1621 is greatly appreciated.
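
    The core step mentioned above, estimating the percentage of black (shadow) pixels after thresholding, can be sketched directly (the de-noising and threshold-selection steps are omitted; the grid representation is illustrative):

```python
def shadow_fraction(image, threshold):
    """Fraction of pixels at or below `threshold` (shadow pixels) in a
    grayscale image given as a list of rows of intensities."""
    dark = total = 0
    for row in image:
        for px in row:
            total += 1
            if px <= threshold:
                dark += 1
    return dark / total
```

    The SSR estimate then follows from how this shadow fraction is calibrated against pin-meter or laser roughness measurements.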

  2. Noise analysis in laser speckle contrast imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Shuai; Chen, Yu; Dunn, Andrew K.; Boas, David A.

    2010-02-01

    Laser speckle contrast imaging (LSCI) is becoming an established method for full-field imaging of blood flow dynamics in animal models. A reliable quantitative model with comprehensive noise analysis is necessary to fully utilize this technique in biomedical applications and clinical trials. In this study, we investigated several major noise sources in LSCI: periodic physiology noise, shot noise and statistical noise. (1) We observed periodic physiology noise in our experiments and found that its sources consist principally of motions induced by heart beats and/or ventilation. (2) We found that shot noise caused an offset of speckle contrast (SC) values, and this offset is directly related to the incident light intensity. (3) A mathematical model of statistical noise was also developed. The model indicated that statistical noise in speckle contrast imaging is related to the SC values and the total number of pixels used in the SC calculation. Our experimental results are consistent with theoretical predictions, as well as with other published works.
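
    The speckle contrast (SC) values discussed here are conventionally computed as K = sigma/mu of the intensity within a small window; a sketch of that definition (window handling simplified to non-overlapping tiles rather than a sliding window):

```python
import statistics

def speckle_contrast(window):
    """Speckle contrast K = sigma / mean of the intensities in a window."""
    flat = [px for row in window for px in row]
    return statistics.pstdev(flat) / statistics.mean(flat)

def contrast_map(image, w=3):
    """K computed in non-overlapping w x w tiles (5x5 or 7x7 are common)."""
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(0, rows - w + 1, w):
        out.append([speckle_contrast([line[c:c + w] for line in image[r:r + w]])
                    for c in range(0, cols - w + 1, w)])
    return out
```

    The statistical-noise result quoted above follows from K being a ratio of sample statistics: its variance shrinks as the number of pixels per window grows.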

  3. Improved image retrieval based on fuzzy colour feature vector

    NASA Astrophysics Data System (ADS)

    Ben-Ahmeida, Ahlam M.; Ben Sasi, Ahmed Y.

    2013-03-01

    Content-based image retrieval (CBIR) is an image indexing technique that retrieves images from an image database automatically based on their visual content, such as colour, texture and shape. This paper discusses a CBIR method based on colour feature extraction and similarity checking: the query image and all images in the database are divided into pieces, the features of each part are extracted separately, and the corresponding portions are compared in order to increase retrieval accuracy. The proposed approach is based on the use of fuzzy sets to overcome the curse of dimensionality. The colour contribution of each pixel is associated with all the bins in the histogram using fuzzy-set membership functions. As a result, the fuzzy colour histogram (FCH) outperformed the conventional colour histogram (CCH) in image retrieval, returning results faster because images were represented as signatures that required less memory, depending on the number of divisions. The results also showed that the FCH is less sensitive and more robust to brightness changes than the CCH, with better retrieval recall values.
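
    The fuzzy colour histogram idea, spreading each pixel's contribution over neighbouring bins via membership functions, can be sketched with triangular memberships on a single channel (the bin geometry and normalization here are illustrative choices, not the paper's exact formulation):

```python
def fuzzy_histogram(values, n_bins, vmax=255.0):
    """Fuzzy colour histogram: each value contributes to its two nearest
    bin centres with triangular membership weights, instead of falling
    entirely into one hard bin as in a conventional histogram."""
    hist = [0.0] * n_bins
    width = vmax / n_bins
    for v in values:
        pos = v / width - 0.5          # position in bin-centre coordinates
        left = int(pos) if pos >= 0 else -1
        frac = pos - left              # membership of the right neighbour
        if 0 <= left < n_bins:
            hist[left] += 1.0 - frac
        if 0 <= left + 1 < n_bins:
            hist[left + 1] += frac
    total = sum(hist) or 1.0
    return [h / total for h in hist]   # normalized signature
```

    Because a small brightness shift only slides weight gradually between adjacent bins, the FCH signature changes smoothly where a hard-binned CCH would jump.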

  4. Computerized PET/CT image analysis in the evaluation of tumour response to therapy

    PubMed Central

    Wang, J; Zhang, H H

    2015-01-01

    Current cancer therapy strategy is mostly population based; however, there are large differences in tumour response among patients. It is therefore important for treating physicians to know individual tumour response. In recent years, many studies have proposed the use of computerized positron emission tomography/CT image analysis in the evaluation of tumour response. Results showed that computerized analysis overcame some major limitations of current qualitative and semiquantitative analysis and led to improved accuracy. In this review, we summarize these studies in terms of the four steps of the analysis: image registration, tumour segmentation, image feature extraction and response evaluation. Future work is proposed and challenges are described. PMID:25723599

  5. Image interpretation for landforms using expert systems and terrain analysis

    SciTech Connect

    Al-garni, A.M.

    1992-01-01

    Most current research in digital photogrammetry and computer vision concentrates on the early vision process; little research has been done on the late vision process. The late vision process in image interpretation involves descriptive knowledge and heuristic information, which require artificial intelligence (AI) to model. This dissertation introduces expert systems, as AI tools, and terrain analysis to the late vision process. This goal has been achieved by selecting and theorizing landforms as features for image interpretation. These features present a wide spectrum of knowledge that can furnish a good foundation for image interpretation processes. In this dissertation, an EXpert system for LANdform interpretation using Terrain analysis (EXPLANT) was developed. EXPLANT can interpret the major landforms on earth. The system contains sample military and civilian consultations regarding site analysis and development. A learning mechanism was developed to accommodate necessary improvements to the database as technology and knowledge rapidly advance and change. Many interface facilities and menu-driven screens were developed in the system to aid users. Finally, the system has been tested and verified to work properly.

  6. Scanning tunneling microscopy on rough surfaces-quantitative image analysis

    NASA Astrophysics Data System (ADS)

    Reiss, G.; Brückl, H.; Vancea, J.; Lecheler, R.; Hastreiter, E.

    1991-07-01

    In this communication, the application of scanning tunneling microscopy (STM) for a quantitative evaluation of the roughness and mean island size of polycrystalline thin films is discussed. Provided that strong conditions concerning the resolution are satisfied, the results are in good agreement with standard techniques such as transmission electron microscopy. Owing to its high resolution, STM can provide a better characterization of surfaces than established methods, especially concerning roughness. Microscopic interpretations of surface-dependent physical properties can thus be considerably improved by a quantitative analysis of STM images.
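    The roughness figures the abstract refers to can be illustrated with a minimal sketch. The two metrics below (RMS roughness and peak-to-valley height) are standard surface statistics one would compute from an STM height map; the function names are illustrative, not the paper's code:

```python
import numpy as np

def rms_roughness(height_map):
    """RMS roughness: standard deviation of heights about the mean plane."""
    h = np.asarray(height_map, dtype=float)
    return float(np.sqrt(np.mean((h - h.mean()) ** 2)))

def peak_to_valley(height_map):
    """Peak-to-valley height: total vertical extent of the surface."""
    h = np.asarray(height_map, dtype=float)
    return float(h.max() - h.min())
```

    As the abstract stresses, such numbers are only meaningful when the tip resolution is sufficient to follow the true surface profile.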

  7. Wavelet Analysis of Space Solar Telescope Images

    NASA Astrophysics Data System (ADS)

    Zhu, Xi-An; Jin, Sheng-Zhen; Wang, Jing-Yu; Ning, Shu-Nian

    2003-12-01

    The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing), but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel volume. Therefore, the data must be compressed before transmission. Wavelet analysis is a technique developed over the last 10 years with great application potential. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet (EZW) coding, used for compressing the SST images. The results show that this coding is adequate for the job.
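    Embedded zerotree wavelet coding builds on a multilevel wavelet decomposition plus a per-threshold significance test on the coefficients. A minimal sketch of those two ingredients (a one-level 2D Haar transform and an EZW-style significance check; the full EZW coder, with its zerotree symbols and successive-approximation passes, is considerably longer):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: returns LL, LH, HL, HH subbands."""
    a = np.asarray(img, dtype=float)
    # transform rows: averages and differences of adjacent columns
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # transform columns of each half
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def significant_fraction(subband, threshold):
    """EZW-style significance test: fraction of coefficients at or above
    the current threshold; insignificant ones are zerotree candidates."""
    s = np.abs(np.asarray(subband, dtype=float))
    return float(np.mean(s >= threshold))
```

    In EZW proper, the threshold is halved at each pass and insignificant coefficients whose descendants are also insignificant are coded with a single zerotree symbol, which is where the compression gain comes from.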

  8. Mapping Fire Severity Using Imaging Spectroscopy and Kernel Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Cui, M.; Zhang, Y.; Veraverbeke, S.

    2014-12-01

    Improved spatial representation of within-burn heterogeneity after wildfires is paramount to effective land management decisions and more accurate fire emissions estimates. In this work, we demonstrate the feasibility and efficacy of airborne imaging spectroscopy (hyperspectral imagery) for quantifying wildfire burn severity, using kernel based image analysis techniques. Two airborne hyperspectral datasets, acquired over the 2011 Canyon and 2013 Rim fires in California using the Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) sensor, were used in this study. The Rim Fire, which covered parts of Yosemite National Park, started on August 17, 2013, and was the third largest fire in California's history. The Canyon Fire occurred in the Tehachapi Mountains and started on September 4, 2011. In addition to post-fire data for both fires, half of the Rim fire was also covered by pre-fire images. Fire severity was measured in the field using the Geo Composite Burn Index (GeoCBI). The field data were used to train and validate our models, and the trained models, in conjunction with the imaging spectroscopy data, were used for GeoCBI estimation over wide geographical regions. This work presents an approach for using remotely sensed imagery combined with GeoCBI field data to map fire scars based on non-linear (kernel based) epsilon-Support Vector Regression (e-SVR), which was used to learn the relationship between spectra and GeoCBI in a kernel-induced feature space. Classification of healthy vegetation versus fire-affected areas based on morphological multi-attribute profiles was also studied. The availability of pre- and post-fire imaging spectroscopy data over the Rim Fire provided a unique opportunity to evaluate the performance of bi-temporal imaging spectroscopy for assessing post-fire effects.
This type of data is currently constrained because of limited airborne acquisitions before a fire, but will become widespread with future spaceborne sensors such as those on
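    The kernel-regression step can be sketched with kernel ridge regression, a simpler stand-in for the paper's epsilon-SVR that likewise learns a spectrum-to-GeoCBI mapping in an RBF-kernel-induced feature space; `gamma` and the regularization weight here are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of spectra."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelRidge:
    """Kernel ridge regression: a stand-in for e-SVR that also fits a
    non-linear spectra -> severity-index mapping in kernel feature space."""
    def __init__(self, gamma=1.0, alpha=1e-3):
        self.gamma, self.alpha = gamma, alpha

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        K = rbf_kernel(self.X, self.X, self.gamma)
        # ridge-regularized solve for the dual coefficients
        self.coef = np.linalg.solve(K + self.alpha * np.eye(len(K)),
                                    np.asarray(y, dtype=float))
        return self

    def predict(self, X):
        return rbf_kernel(np.asarray(X, dtype=float), self.X, self.gamma) @ self.coef
```

    e-SVR differs in using an epsilon-insensitive loss (yielding sparse support vectors), but the kernel-induced feature space is the same idea.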

  9. Superfast robust digital image correlation analysis with parallel computing

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Tian, Long

    2015-03-01

    Existing digital image correlation (DIC) using the robust reliability-guided displacement tracking (RGDT) strategy for full-field displacement measurement is a path-dependent process that can only be executed sequentially. This path-dependent tracking strategy not only limits the potential of DIC for further improvements in computational efficiency but also wastes the parallel computing power of modern computers with multicore processors. To maintain the robustness of the existing RGDT strategy and to overcome its deficiency, an improved RGDT strategy using a two-section tracking scheme is proposed. In the improved RGDT strategy, the calculated points with correlation coefficients higher than a preset threshold are all taken as reliably computed points and given the same priority to extend the correlation analysis to their neighbors. Thus, DIC calculation is first executed in parallel at multiple points by separate independent threads. Then, for the few calculated points with correlation coefficients smaller than the threshold, DIC analysis using the existing RGDT strategy is adopted. Benefiting from the improved RGDT strategy and multithreaded computing, superfast DIC analysis can be accomplished without sacrificing robustness or accuracy. Experimental results show that the presented parallel DIC method, run on a common eight-core laptop, can achieve an approximately sevenfold speedup.
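    The two-section scheduling idea can be sketched as follows; `analyze` stands in for the per-point DIC computation, and the correlation coefficients are assumed to come from an initial pass. The ZNCC helper shows the kind of correlation coefficient that drives the reliability threshold:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def zncc(f, g):
    """Zero-normalized cross-correlation between two equal-sized subsets."""
    f = np.asarray(f, dtype=float).ravel()
    g = np.asarray(g, dtype=float).ravel()
    f = f - f.mean()
    g = g - g.mean()
    return float((f @ g) / (np.linalg.norm(f) * np.linalg.norm(g)))

def two_stage_rgdt(points, corr, threshold, analyze):
    """Sketch of the improved RGDT scheduling: points whose correlation
    coefficient exceeds `threshold` are treated as equally reliable seeds
    and analyzed in parallel; the few doubtful points are then processed
    sequentially, as in the original path-dependent strategy."""
    reliable = [p for p, c in zip(points, corr) if c >= threshold]
    doubtful = [p for p, c in zip(points, corr) if c < threshold]
    with ThreadPoolExecutor() as pool:
        results = dict(zip(reliable, pool.map(analyze, reliable)))
    for p in doubtful:  # sequential fallback pass
        results[p] = analyze(p)
    return results
```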

  10. A Novel Chaotic Map and an Improved Chaos-Based Image Encryption Scheme

    PubMed Central

    2014-01-01

    In this paper, we present a novel approach to creating a new chaotic map and propose an improved image encryption scheme based on it. Compared with traditional classic one-dimensional chaotic maps like the Logistic Map and Tent Map, the newly created chaotic map demonstrates better chaotic properties for encryption, as indicated by a much larger maximal Lyapunov exponent. Furthermore, an image encryption method based on the new chaotic map and Arnold's Cat Map is designed and shown to be robust. The simulation results and security analysis indicate that the method not only meets the requirements of image encryption but also achieves good effectiveness and security, making it usable for general applications. PMID:25143990
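    The abstract does not specify the new map, but the overall scheme it improves on (pixel permutation with Arnold's Cat Map followed by a chaotic keystream XOR) can be sketched with the classic Logistic Map; all parameters below are illustrative:

```python
import numpy as np

def logistic_keystream(n, x0=0.612, r=3.99):
    """Keystream bytes from the Logistic Map x <- r*x*(1-x), a classic
    1-D chaotic map (the paper's new map is not given in the abstract)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def arnold_cat(img, steps=1):
    """Arnold's Cat Map pixel permutation on an N x N image."""
    n = img.shape[0]
    out = img
    for _ in range(steps):
        y, x = np.indices((n, n))
        out = out[(y + x) % n, (y + 2 * x) % n]
    return out

def encrypt(img, steps=3):
    scrambled = arnold_cat(img, steps)
    ks = logistic_keystream(scrambled.size).reshape(img.shape)
    return scrambled ^ ks

def decrypt(enc, steps=3):
    ks = logistic_keystream(enc.size).reshape(enc.shape)
    scrambled = enc ^ ks
    n = enc.shape[0]
    out = scrambled
    for _ in range(steps):  # inverse Cat Map: [[2,-1],[-1,1]] mod n
        y, x = np.indices((n, n))
        out = out[(2 * y - x) % n, (-y + x) % n]
    return out
```

    The paper's point is that a map with a larger maximal Lyapunov exponent makes such a keystream harder to predict than the Logistic Map used here.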

  11. The Scientific Image in Behavior Analysis.

    PubMed

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press. PMID:27606187

  12. Improving the Performance of Three-Mirror Imaging Systems with Freeform Optics

    NASA Technical Reports Server (NTRS)

    Howard, Joseph M.; Wolbach, Steven

    2013-01-01

    The image quality improvement for three-mirror systems by Freeform Optics is surveyed over various f-number and field specifications. Starting with the Korsch solution, we increase the surface shape degrees of freedom and record the improvements.

  13. Improving performance of real-time multispectral imaging system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A real-time multispectral imaging system can be a science-based tool for fecal and ingesta contaminant detection during poultry processing. For the implementation of this imaging system at commercial poultry processing plant, false positive errors must be removed. For doing this, we tested and imp...

  14. Improved hyperspectral imaging system for fecal detection on poultry carcasses

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Agricultural Research Service (ARS) has developed imaging technology to detect fecal contaminants on poultry carcasses. The hyperspectral imaging system operates from about 400 to 1000 nm, but only a few wavelengths are used in a real-time multispectral system. Recently, the upgraded system, inc...

  15. Improved egg crack detection algorithm for modified pressure imaging system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Shell eggs with microcracks are often undetected during egg grading processes. In the past, a modified pressure imaging system was developed to detect eggs with microcracks without adversely affecting the quality of normal intact eggs. The basic idea of the modified pressure imaging system was to ap...

  16. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image-intensified and long-wave infrared imagery to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well candidate IQ features correlate with actual human performance, as measured by a perception study. The paper demonstrates how monotonic correlation can identify worthy features that would be overlooked by traditional correlation values.
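    The abstract's coefficient is novel, but the idea of scoring monotonic rather than linear association can be illustrated with Spearman rank correlation, which assigns a perfect score to any monotone relation between an IQ feature and human performance:

```python
import numpy as np

def rankdata(a):
    """Ranks of the entries (ties share the mean of their rank positions)."""
    a = np.asarray(a, dtype=float)
    order = np.argsort(a, kind="stable")
    ranks = np.empty(len(a), dtype=float)
    ranks[order] = np.arange(1, len(a) + 1)
    for v in np.unique(a):          # average tied ranks
        mask = a == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def monotonic_correlation(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Any strictly monotone y = f(x) scores +1 (or -1 if decreasing),
    even when the relation is highly non-linear."""
    rx, ry = rankdata(x), rankdata(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```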

  17. Image Compression Algorithm Altered to Improve Stereo Ranging

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    2008-01-01

    A report discusses a modification of the ICER image-data-compression algorithm to increase the accuracy of ranging computations performed on compressed stereoscopic image pairs captured by cameras aboard the Mars Exploration Rovers. (ICER and variants thereof were discussed in several prior NASA Tech Briefs articles.) Like many image compressors, ICER was designed to minimize a mean-square-error measure of distortion in reconstructed images as a function of the compressed data volume. The present modification of ICER was preceded by formulation of an alternative error measure, an image-quality metric that focuses on stereoscopic-ranging quality and takes account of image-processing steps in the stereoscopic-ranging process. This metric was used in empirical evaluation of bit planes of wavelet-transform subbands that are generated in ICER. The present modification, which is a change in a bit-plane prioritization rule in ICER, was adopted on the basis of this evaluation. This modification changes the order in which image data are encoded, such that when ICER is used for lossy compression, better stereoscopic-ranging results are obtained as a function of the compressed data volume.

  18. An improved image sharpness assessment method based on contrast sensitivity

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Tian, Yan; Yin, Yili

    2015-10-01

    An image sharpness assessment method based on the contrast sensitivity function (CSF) is proposed to assess the sharpness of unfocused images. First, the image undergoes a two-dimensional discrete Fourier transform (DFT), and the intermediate-frequency and high-frequency coefficients are each divided into two parts. Second, the four parts are inverse transformed (IDFT) to obtain four sub-images. Third, a range function evaluates the sharpness value of each sub-image. Finally, the overall image sharpness is obtained as a weighted sum of the sub-image sharpness values, with the weighting factors set according to the CSF. The new algorithm and four typical evaluation algorithms (Fourier, Range, Variance and Wavelet) are evaluated on six quantitative indices: the width of the steep part of the focusing curve, the sharpness ratio, the steepness, the variance of the flat part of the focusing curve, the local-extremum factor and the sensitivity. The effects of noise and image content on the algorithms are also analyzed. The experimental results show that the new algorithm has better sensitivity and noise resistance than the four typical algorithms, and its evaluation results are consistent with human visual characteristics.
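    A minimal sketch of the band-split scheme described above: transform, isolate frequency bands, invert each band, score each sub-image with a range-based focus value, and combine with CSF-inspired weights. The band edges and weights are illustrative assumptions, not the paper's values:

```python
import numpy as np

def band_sharpness(img, bands=((0.1, 0.3), (0.3, 0.5)), weights=(0.4, 0.6)):
    """CSF-weighted sharpness sketch: sum of range-function focus values
    of band-limited sub-images. Radius is normalized so 1.0 = Nyquist."""
    F = np.fft.fftshift(np.fft.fft2(np.asarray(img, dtype=float)))
    h, w = F.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    score = 0.0
    for (lo, hi), wt in zip(bands, weights):
        mask = (r >= lo) & (r < hi)
        sub = np.fft.ifft2(np.fft.ifftshift(F * mask)).real  # band sub-image
        score += wt * (sub.max() - sub.min())                # range focus value
    return float(score)
```

    A defocused image attenuates the mid and high bands, so its band-limited sub-images have smaller ranges and the combined score drops.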

  19. Automated Digital Image Analysis of islet cell mass using Nikon's inverted Eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2013-04-29

    Reliable assessment of islet viability, mass and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intra-observer variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ) and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy, or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (P<0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r²≤0.91) and total islet number (r²=0.88), which increased to r²=0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intra-observer reproducibility compared to the standard manual method (P<0.001). However, islet purity was routinely estimated as significantly higher with the manual method than with the ADIA method (P<0.001). The ADIA method also detected small islets between 10 and 50 μm in size. Automated digital image analysis utilizing the Nikon Instruments (Nikon) software is an unbiased, simple and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of
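    The IEQ figure used throughout is conventionally defined by volume-normalizing each islet to a standard 150 μm diameter islet. A minimal sketch of that definition (the paper's software derives diameters from segmented islet areas rather than taking them as input):

```python
def islet_equivalents(diameters_um):
    """Islet equivalent (IEQ) number: each islet contributes the ratio of
    its volume to that of a standard 150 um diameter islet, i.e.
    IEQ = sum((d / 150)**3) over measured islet diameters in micrometres."""
    return sum((d / 150.0) ** 3 for d in diameters_um)
```

    So one 300 μm islet counts as eight islet equivalents, while many small islets contribute only fractions, which is why detecting 10-50 μm islets changes total islet number far more than it changes IEQ.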

  20. Improving the visualization and detection of tissue folds in whole slide images through color enhancement

    PubMed Central

    Bautista, Pinky A.; Yagi, Yukako

    2010-01-01

    Objective: The objective of this paper is to improve the visualization and detection of tissue folds, which are prominent among tissue slides, from the pre-scan image of a whole slide image by introducing a color enhancement method that enables the differentiation between fold and non-fold image pixels. Method: The weighted difference between the color saturation and luminance of the image pixels is used as a shifting factor applied to the original RGB color of the image. Results: Application of the enhancement method to hematoxylin and eosin (H&E) stained images improves the visualization of tissue folds regardless of the colorimetric variations in the images. Detection of tissue folds also improves after application of the enhancement, but the presence of nuclei, which are also stained dark like the folds, was found to sometimes affect the detection accuracy. Conclusion: The presence of tissue artifacts can affect the quality of whole slide images, especially since whole slide scanners select the focus points from the pre-scan image, in which artifacts are indistinguishable from real tissue areas. We have presented in this paper an enhancement scheme that improves the visualization and detection of tissue folds from pre-scan images. Since the method works on simulated pre-scan images, its integration into the actual whole slide imaging process should also be possible. PMID:21221170
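    The shifting-factor idea in the Method section can be sketched as follows; the weight `w` and the particular saturation and luminance formulas are assumptions, since the abstract does not give them:

```python
import numpy as np

def enhance_folds(rgb, w=1.0):
    """Shift each pixel's RGB by a weighted difference of its saturation
    and luminance. Fold pixels (high saturation, low luminance, since the
    tissue is doubled over) get a positive shift; bright, unsaturated
    tissue gets a negative one, separating the two populations."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    mx, mn = rgb.max(-1), rgb.min(-1)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-8), 0.0)
    lum = rgb @ np.array([0.299, 0.587, 0.114])   # Rec.601 luminance
    shift = w * (sat - lum)
    out = np.clip(rgb + shift[..., None], 0.0, 1.0)
    return (out * 255).astype(np.uint8)
```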

  1. Second Harmonic Imaging improves Echocardiograph Quality on board the International Space Station

    NASA Technical Reports Server (NTRS)

    Garcia, Kathleen; Sargsyan, Ashot; Hamilton, Douglas; Martin, David; Ebert, Douglas; Melton, Shannon; Dulchavsky, Scott

    2008-01-01

    Ultrasound (US) capabilities have been part of the Human Research Facility (HRF) on board the International Space Station (ISS) since 2001. The US equipment on board the ISS includes a first-generation Tissue Harmonic Imaging (THI) option. Harmonic imaging (HI) uses the second harmonic response of the tissue to the ultrasound beam and produces robust tissue detail and signal. Since this is a first-generation THI system, there are inherent limitations in tissue penetration. As a breakthrough technology, HI extensively advanced the field of ultrasound. In cardiac applications, it drastically improves endocardial border detection and has become a common imaging modality. US images were captured and stored as JPEG stills from the ISS video downlink. US images with and without the harmonic imaging option were randomized and provided to volunteers without medical education or US skills for identification of the endocardial border. The results were processed and analyzed using applicable statistical calculations. Measurements in US images using HI showed improved consistency and reproducibility among observers when compared to fundamental imaging. HI has been embraced by the imaging community at large as it improves the quality and data validity of US studies, especially in difficult-to-image cases. Even with the limitations of first-generation THI, HI improved the quality and measurability of many of the downlinked images from the ISS and should be an option utilized with cardiac imaging on board the ISS in all future space missions.

  2. Evaluation of improvement of diffuse optical imaging of brain function by high-density probe arrangements and imaging algorithms

    NASA Astrophysics Data System (ADS)

    Sakakibara, Yusuke; Kurihara, Kazuki; Okada, Eiji

    2016-04-01

    Diffuse optical imaging has been applied to measure the localized hemodynamic responses to brain activation. One of the serious problems with diffuse optical imaging is the limitation of the spatial resolution caused by the sparse probe arrangement and broadened spatial sensitivity profile for each probe pair. High-density probe arrangements and an image reconstruction algorithm considering the broadening of the spatial sensitivity can improve the spatial resolution of the image. In this study, the diffuse optical imaging of the absorption change in the brain is simulated to evaluate the effect of the high-density probe arrangements and imaging methods. The localization error, equivalent full-width half maximum and circularity of the absorption change in the image obtained by the mapping and reconstruction methods from the data measured by five probe arrangements are compared to quantitatively evaluate the imaging methods and probe arrangements. The simple mapping method is sufficient for the density of the measurement points up to the double-density probe arrangement. The image reconstruction method considering the broadening of the spatial sensitivity of the probe pairs can effectively improve the spatial resolution of the image obtained from the probe arrangements higher than the quadruple density, in which the distance between the neighboring measurement points is 10.6 mm.

  3. Exponential filtering of singular values improves photoacoustic image reconstruction.

    PubMed

    Bhatt, Manish; Gutta, Sreedevi; Yalavarthy, Phaneendra K

    2016-09-01

    Model-based image reconstruction techniques yield better quantitative accuracy in photoacoustic image reconstruction. In this work, an exponential filtering of singular values was proposed for carrying out the image reconstruction in photoacoustic tomography. The results were compared with widely popular Tikhonov regularization, time reversal, and the state of the art least-squares QR-based reconstruction algorithms for three digital phantom cases with varying signal-to-noise ratios of data. It was shown that exponential filtering provides superior photoacoustic images of better quantitative accuracy. Moreover, the proposed filtering approach was observed to be less biased toward the regularization parameter and did not come with any additional computational burden as it was implemented within the Tikhonov filtering framework. It was also shown that the standard Tikhonov filtering becomes an approximation to the proposed exponential filtering. PMID:27607501
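    Both regularizers act on the SVD of the forward model by damping components associated with small singular values; they differ only in the filter factors applied. A sketch comparing Tikhonov factors with one common exponential form (the paper's exact exponential factors may differ):

```python
import numpy as np

def filtered_solution(A, b, lam, mode="exponential"):
    """Regularized solution x = V diag(f_i / s_i) U^T b with filter factors
      Tikhonov:     f_i = s_i^2 / (s_i^2 + lam^2)
      exponential:  f_i = 1 - exp(-s_i^2 / lam^2)   (one common form)
    Large singular values pass almost unchanged (f ~ 1); small ones are
    suppressed, which is what stabilizes the inversion against noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if mode == "tikhonov":
        f = s ** 2 / (s ** 2 + lam ** 2)
    else:
        f = 1.0 - np.exp(-(s ** 2) / lam ** 2)
    return Vt.T @ ((f / s) * (U.T @ b))
```

    Because the exponential factor approaches 1 much faster as s grows, it filters the small singular values while perturbing the well-determined components less, consistent with the reduced sensitivity to the regularization parameter reported above.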

  4. Object-oriented framework for rapid development of image analysis applications

    NASA Astrophysics Data System (ADS)

    Liang, Weidong; Zhang, Xiangmin; Sonka, Milan

    1997-04-01

    Image analysis applications are usually composed of a set of graphic objects, a set of image processing algorithms, and a graphic user interface (GUI). Typically, developing an image analysis application is time-consuming and the developed programs are hard to maintain. We have developed a framework called IMANAL that aims at reducing development costs by improving system maintainability, design change flexibility, component reusability, and human-computer interaction. IMANAL decomposes an image analysis application into three models: a data model, a process model, and a GUI model. The three models, as well as the collaboration among them, are standardized into a unified system architecture. A new application can be developed rapidly by customizing task-specific building blocks within the unified architecture. IMANAL maintains a class library of more than 100,000 lines of C/C++ code that are highly reusable for creating the three above mentioned models. Software components from other sources such as Khoros can also be easily included in the applications. IMANAL was used for development of image analysis applications utilizing a variety of medical images such as x-ray coronary angiography, intracardiac, intravascular and brachial ultrasound, and pulmonary CT. In all the above listed applications, the development overhead was removed and the developer was able to fully focus on the image analysis algorithms. IMANAL has proven to be a useful tool for image analysis research as well as a prototype development tool for commercial image analysis applications.

  5. Automated retinal image analysis for diabetic retinopathy in telemedicine.

    PubMed

    Sim, Dawn A; Keane, Pearse A; Tufail, Adnan; Egan, Catherine A; Aiello, Lloyd Paul; Silva, Paolo S

    2015-03-01

    There will be an estimated 552 million persons with diabetes globally by the year 2030. Over half of these individuals will develop diabetic retinopathy, representing a nearly insurmountable burden for providing diabetes eye care. Telemedicine programmes have the capability to distribute quality eye care to virtually any location and address the lack of access to ophthalmic services. In most programmes, there is currently a heavy reliance on specially trained retinal image graders, a resource in short supply worldwide. These factors necessitate an image grading automation process to increase the speed of retinal image evaluation while maintaining accuracy and cost effectiveness. Several automatic retinal image analysis systems designed for use in telemedicine have recently become commercially available. Such systems have the potential to substantially improve the manner by which diabetes eye care is delivered by providing automated real-time evaluation to expedite diagnosis and referral if required. Furthermore, integration with electronic medical records may allow a more accurate prognostication for individual patients and may provide predictive modelling of medical risk factors based on broad population data. PMID:25697773

  6. Live imaging and analysis of postnatal mouse retinal development

    PubMed Central

    2013-01-01

    Background The explanted, developing rodent retina provides an efficient and accessible preparation for use in gene transfer and pharmacological experimentation. Many of the features of normal development are retained in the explanted retina, including retinal progenitor cell proliferation, heterochronic cell production, interkinetic nuclear migration, and connectivity. To date, live imaging in the developing retina has been reported in non-mammalian and mammalian whole-mount samples. An integrated approach to rodent retinal culture/transfection, live imaging, cell tracking, and analysis in structurally intact explants greatly improves our ability to assess the kinetics of cell production. Results In this report, we describe the assembly and maintenance of an in vitro, CO2-independent, live mouse retinal preparation that is accessible by both upright and inverted, 2-photon or confocal microscopes. The optics of this preparation permit high-quality and multi-channel imaging of retinal cells expressing fluorescent reporters for up to 48h. Tracking of interkinetic nuclear migration within individual cells, and changes in retinal progenitor cell morphology are described. Follow-up, hierarchical cluster screening revealed that several different dependent variable measures can be used to identify and group movement kinetics in experimental and control samples. Conclusions Collectively, these methods provide a robust approach to assay multiple features of rodent retinal development using live imaging. PMID:23758927

  7. [An improved motion estimation of medical image series via wavelet transform].

    PubMed

    Zhang, Ying; Rao, Nini; Wang, Gang

    2006-10-01

    The compression of medical image series is very important in telemedicine. Motion estimation plays a key role in video sequence compression. In this paper, an improved square-diamond search (SDS) algorithm is proposed for the motion estimation of medical image series. The improved SDS algorithm reduces the number of searched points. This improved SDS algorithm is applied in the wavelet transform domain to estimate the motion of medical image series. A simulation experiment on digital subtraction angiography (DSA) was performed. The experimental results show that the algorithm's accuracy is higher than that of other algorithms for the motion estimation of medical image series. PMID:17121333
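    The abstract does not detail the improved SDS pattern, but the family of block-matching algorithms it refines can be illustrated with a basic diamond search over sum-of-absolute-differences (SAD) costs; the SDS variant adds a square pattern and prunes more candidate points:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences: the usual block-matching cost."""
    return float(np.abs(np.asarray(a, float) - np.asarray(b, float)).sum())

def diamond_search(ref_block, frame, y0, x0, max_iter=20):
    """Basic diamond search: walk a large diamond pattern until the centre
    wins, then refine with a small diamond. Returns the motion vector."""
    h, w = ref_block.shape
    large = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0),
             (1, 1), (1, -1), (-1, 1), (-1, -1)]
    small = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]
    cy, cx = y0, x0

    def cost(y, x):
        if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
            return float("inf")
        return sad(ref_block, frame[y:y + h, x:x + w])

    for _ in range(max_iter):
        best = min(large, key=lambda d: cost(cy + d[0], cx + d[1]))
        if best == (0, 0):
            break
        cy, cx = cy + best[0], cx + best[1]
    best = min(small, key=lambda d: cost(cy + d[0], cx + d[1]))
    cy, cx = cy + best[0], cx + best[1]
    return cy - y0, cx - x0
```

    The speedup of such searches over an exhaustive full search comes from evaluating only a handful of candidate points per step; reducing that count further is exactly the improvement the abstract claims.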

  8. Image stretching on a curved surface to improve satellite gridding

    NASA Technical Reports Server (NTRS)

    Ormsby, J. P.

    1975-01-01

    A method for substantially reducing gridding errors due to satellite roll, pitch and yaw is given. A gimbal-mounted curved screen, scaled to 1:7,500,000, is used to stretch the satellite image whereby visible landmarks coincide with a projected map outline. The resulting rms position errors averaged 10.7 km as compared with 25.6 and 34.9 km for two samples of satellite imagery upon which image stretching was not performed.

  9. Assessing and improving cobalt-60 digital tomosynthesis image quality

    NASA Astrophysics Data System (ADS)

    Marsh, Matthew B.; Schreiner, L. John; Kerr, Andrew T.

    2014-03-01

    Image guidance capability is an important feature of modern radiotherapy linacs, and future cobalt-60 units will be expected to have similar capabilities. Imaging with the treatment beam is an appealing option, for reasons of simplicity and cost, but the dose needed to produce cone beam CT (CBCT) images in a Co-60 treatment beam is too high for this modality to be clinically useful. Digital tomosynthesis (DT) offers a quasi-3D image, of sufficient quality to identify bony anatomy or fiducial markers, while delivering a much lower dose than CBCT. A series of experiments were conducted on a prototype Co-60 cone beam imaging system to quantify the resolution, selectivity, geometric accuracy and contrast sensitivity of Co-60 DT. Although the resolution is severely limited by the penumbra cast by the ~2 cm diameter source, it is possible to identify high contrast objects on the order of 1 mm in width, and bony anatomy in anthropomorphic phantoms is clearly recognizable. Low contrast sensitivity down to electron density differences of 3% is obtained, for uniform features of similar thickness. The conventional shift-and-add reconstruction algorithm was compared to several variants of the Feldkamp-Davis-Kress filtered backprojection algorithm. The Co-60 DT images were obtained with a total dose of 5 to 15 cGy each. We conclude that Co-60 radiotherapy units upgraded for modern conformal therapy could also incorporate imaging using filtered backprojection DT in the treatment beam. DT is a versatile and promising modality that would be well suited to image guidance requirements.
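    The conventional shift-and-add reconstruction mentioned above can be sketched in a few lines: each projection is shifted by the amount that registers features in the plane of interest, then all projections are averaged, so in-plane structures reinforce while out-of-plane structures blur out:

```python
import numpy as np

def shift_and_add(projections, shifts_px):
    """Shift-and-add DT reconstruction of one plane. `shifts_px` gives,
    for each projection, the integer pixel shift (here along axis 1) that
    brings the plane of interest into registration."""
    acc = np.zeros_like(np.asarray(projections[0], dtype=float))
    for proj, s in zip(projections, shifts_px):
        acc += np.roll(np.asarray(proj, dtype=float), s, axis=1)
    return acc / len(projections)
```

    Filtered-backprojection variants such as FDK replace the plain average with a ramp-filtered backprojection, which suppresses the out-of-plane blur that plain shift-and-add leaves behind.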

  10. Image analysis of atmospheric corrosion of field exposure high strength aluminium alloys

    NASA Astrophysics Data System (ADS)

    Tao, Lei; Song, Shizhe; Zhang, Xiaoyun; Zhang, Zheng; Lu, Feng

    2008-08-01

    A corrosion morphology image acquisition system usable in the field was established. At the Beijing atmospheric corrosion exposure station, the image acquisition system was used to capture the early-stage corrosion morphology of five types of high strength aluminium alloy specimens. After denoising, a wavelet-based image analysis method was applied to decompose the improved images, and the energies of the sub-images were extracted as characteristic information. Based on the variation of the image energy values, the corrosion degree of the aluminium alloy specimens was qualitatively and quantitatively analyzed. The conclusion was essentially consistent with the result based on corrosion weight loss. This method is expected to be effective for analyzing and quantifying corrosion damage from images of field-exposed aluminium alloy specimens.
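    The energy features extracted from the wavelet sub-images can be illustrated with a simple stand-in that uses first differences in place of a full wavelet decomposition; the point is the same, in that rougher (more corroded) textures concentrate more energy in the detail channels:

```python
import numpy as np

def detail_energies(img):
    """Detail energies of a grayscale image: sums of squared first
    differences along each axis, a crude stand-in for the energies of
    wavelet detail sub-images used as corrosion features."""
    a = np.asarray(img, dtype=float)
    return {
        "horizontal": float(((a[:, 1:] - a[:, :-1]) ** 2).sum()),
        "vertical":   float(((a[1:, :] - a[:-1, :]) ** 2).sum()),
    }
```

    Tracking how these energies grow over exposure time gives the qualitative-then-quantitative corrosion trend described above.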

  11. Spectroscopic imaging with improved gradient modulated constant adiabaticity pulses on high-field clinical scanners

    NASA Astrophysics Data System (ADS)

    Andronesi, Ovidiu C.; Ramadan, Saadallah; Ratai, Eva-Maria; Jennings, Dominique; Mountford, Carolyn E.; Sorensen, A. Gregory

    2010-04-01

    The purpose of this work was to design and implement constant adiabaticity gradient modulated pulses that have improved slice profiles and reduced artifacts for spectroscopic imaging on 3 T clinical scanners equipped with standard hardware. The newly proposed pulses were designed using the gradient offset independent adiabaticity (GOIA, Tannus and Garwood [13]) method with WURST modulation for the RF and gradient waveforms. The GOIA-WURST pulses were compared with GOIA-HSn (GOIA based on an nth-order hyperbolic secant) and FOCI (frequency offset corrected inversion) pulses of the same bandwidth and duration. Numerical simulations and experimental measurements in phantoms and healthy volunteers are presented. GOIA-WURST pulses provide an improved slice profile with less slice smearing at off-resonance frequencies compared to GOIA-HSn pulses. The peak RF amplitude of GOIA-WURST is much lower (40% less) than that of FOCI but slightly higher (14.9% more) than that of GOIA-HSn. The quality of the spectra, as shown by the analysis of lineshapes, eddy current artifacts, subcutaneous lipid contamination and SNR, is improved for GOIA-WURST. The GOIA-WURST pulse tested in this work shows that reliable spectroscopic imaging can be obtained in a routine clinical setup and might facilitate the use of clinical spectroscopy.

  12. Xmipp 3.0: an improved software suite for image processing in electron microscopy.

    PubMed

    de la Rosa-Trevín, J M; Otón, J; Marabini, R; Zaldívar, A; Vargas, J; Carazo, J M; Sorzano, C O S

    2013-11-01

    Xmipp is a specialized software package for image processing in electron microscopy, mainly focused on 3D reconstruction of macromolecules through single-particle analysis. In this article we present Xmipp 3.0, a major release which introduces several improvements and new developments over the previous version. A central improvement is the concept of a project that stores the entire processing workflow from data import to final results. It is now possible to monitor, reproduce and restart all computing tasks as well as graphically explore the complete set of interrelated tasks associated with a given project. Other graphical tools have also been improved, such as data visualization, particle picking and parameter "wizards" that allow the visual selection of some key parameters. Many standard image formats are transparently supported for input/output from all programs. Additionally, results have been standardized, facilitating interoperation between different Xmipp programs. Finally, as a result of a large code refactoring, the underlying C++ libraries are better suited for future developments and all code has been optimized. Xmipp is an open-source package that is freely available for download from: http://xmipp.cnb.csic.es. PMID:24075951

  13. Analysis of autostereoscopic three-dimensional images using multiview wavelets.

    PubMed

    Saveljev, Vladimir; Palchikova, Irina

    2016-08-10

    We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images. PMID:27534470

  14. MaZda--a software package for image texture analysis.

    PubMed

    Szczypiński, Piotr M; Strzelecki, Michał; Materka, Andrzej; Klepaczko, Artur

    2009-04-01

    MaZda, a software package for 2D and 3D image texture analysis, is presented. It provides a complete path for quantitative analysis of image textures, including computation of texture features, procedures for feature selection and extraction, algorithms for data classification, and various data visualization and image segmentation tools. Initially, MaZda was aimed at the analysis of magnetic resonance image textures. However, it has proven effective in the analysis of other types of textured images, including X-ray and camera images. The software has been utilized by numerous researchers in diverse applications and has proven to be an efficient and reliable tool for quantitative image analysis, supporting more accurate and objective medical diagnosis. MaZda was also successfully used in the food industry to assess food product quality. MaZda can be downloaded for public use from the Institute of Electronics, Technical University of Lodz webpage. PMID:18922598
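
    As an illustration of the kind of texture features such packages compute, a gray-level co-occurrence matrix (GLCM) and its contrast feature can be sketched in a few lines (a generic sketch, not MaZda's code; function names are ours):

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel
    offset (dr, dc); img must contain integer gray levels."""
    img = np.asarray(img)
    dr, dc = offset
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            P[img[r, c], img[r + dr, c + dc]] += 1
    return P / P.sum()

def contrast(P):
    """GLCM contrast: sum of P(i,j) * (i - j)^2."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))
```

    Features such as contrast, homogeneity and entropy computed from the GLCM are typical inputs to the feature-selection and classification stages described above.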

  15. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. 
Projection of QA metrics to a low

  17. Thermal image analysis for detecting facemask leakage

    NASA Astrophysics Data System (ADS)

    Dowdall, Jonathan B.; Pavlidis, Ioannis T.; Levine, James

    2005-03-01

    With the modern advent of near-ubiquitous access to rapid international transportation, the epidemiologic trends of highly communicable diseases can be devastating. With the recent emergence of diseases matching this pattern, such as Severe Acute Respiratory Syndrome (SARS), an area of overt concern has been the transmission of infection through respiratory droplets. Approved facemasks are typically effective physical barriers for preventing the spread of viruses through droplets, but breaches in a mask's integrity can lead to an elevated risk of exposure and subsequent infection. Quality control mechanisms in place during the manufacturing process ensure that masks are defect-free when leaving the factory, but there remains little to detect damage caused by transportation or during usage. A system that could monitor masks in real time while they were in use would facilitate a more secure environment for treatment and screening. To fulfill this necessity, we have devised a touchless method to detect mask breaches in real time by utilizing the emissive properties of the mask in the thermal infrared spectrum. Specifically, we use a specialized thermal imaging system to detect minute air leakage in masks based on the principles of heat transfer and thermodynamics. The advantage of this passive modality is that thermal imaging does not require contact with the subject and can provide instant visualization and analysis. These capabilities can prove invaluable for protecting personnel in scenarios with elevated levels of transmission risk such as hospital clinics, border checkpoints, and airports.

  18. Vision-sensing image analysis for GTAW process control

    SciTech Connect

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  19. Improving signal-to-noise in the direct imaging of exoplanets and circumstellar disks with MLOCI

    NASA Astrophysics Data System (ADS)

    Wahhaj, Zahed; Cieza, Lucas A.; Mawet, Dimitri; Yang, Bin; Canovas, Hector; de Boer, Jozua; Casassus, Simon; Ménard, François; Schreiber, Matthias R.; Liu, Michael C.; Biller, Beth A.; Nielsen, Eric L.; Hayward, Thomas L.

    2015-09-01

    We present a new algorithm designed to improve the signal-to-noise ratio (S/N) of point and extended source detections around bright stars in direct imaging data. One of our innovations is that we insert simulated point sources into the science images, which we then try to recover with maximum S/N. This improves the S/N of real point sources elsewhere in the field. The algorithm, based on the locally optimized combination of images (LOCI) method, is called Matched LOCI or MLOCI. We show with Gemini Planet Imager (GPI) data on HD 135344 B and Near-Infrared Coronagraphic Imager (NICI) data on several stars that the new algorithm can improve the S/N of point source detections by 30-400% over past methods. We also find no increase in false detection rates. No prior knowledge of candidate companion locations is required to use MLOCI. On the other hand, while non-blind applications may yield linear combinations of science images that seem to increase the S/N of true sources by a factor >2, they can also yield false detections at high rates. This is a potential pitfall when trying to confirm marginal detections or to redetect point sources found in previous epochs. These findings are relevant to any method where the coefficients of the linear combination are considered tunable, e.g., LOCI and principal component analysis (PCA). Thus we recommend that false detection rates be analyzed when using these techniques. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (USA), the Science and Technology Facilities Council (UK), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).

  20. Application of Digital Image Correlation Method to Improve the Accuracy of Aerial Photo Stitching

    NASA Astrophysics Data System (ADS)

    Tung, Shih-Heng; Jhou, You-Liang; Shih, Ming-Hsiang; Hsiao, Han-Wei; Sung, Wen-Pei

    2016-04-01

    Satellite images and traditional aerial photos have long been used in remote sensing. However, these images have limitations: the resolution of satellite imagery is often insufficient, traditional aerial photos are relatively expensive to obtain, and manned flights carry human safety risks. These factors limit the application of such images. In recent years, control technology for unmanned aerial vehicles (UAVs) has developed rapidly, making UAVs widely used for obtaining aerial photos. Compared to satellite images and traditional aerial photos, aerial photos obtained using UAVs offer higher resolution at lower cost, and because a UAV carries no crew, photos can still be taken under unstable weather conditions. The images must first be orthorectified and their distortion corrected. Then, with the help of image matching techniques and control points, the images can be stitched or used to establish a DEM of the ground surface. These images or DEM data can be used to monitor landslides or to estimate landslide volume. For image matching, methods such as Harris corner detection, SIFT or SURF can be used to extract and match feature points, but their matching accuracy is on the pixel or sub-pixel level. The accuracy of the digital image correlation (DIC) method during image matching can reach about 0.01 pixel. Therefore, this study applies the digital image correlation method to match the extracted feature points, and the stitched images are then inspected to judge the improvement. This study uses aerial photos of a reservoir area, stitched with and without the help of DIC. The results show that misplacement in the stitched image is significantly reduced when DIC is used to match feature points. This shows that using DIC to match feature points can indeed improve the accuracy of image stitching.
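
    The feature matching step described above can be illustrated with a brute-force zero-normalized cross-correlation search (a minimal sketch, not the authors' implementation; names are ours). DIC implementations then refine the integer match below to sub-pixel accuracy:

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive zero-normalized cross-correlation; returns the
    (row, col) of the best template match in the image."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:          # flat window, correlation undefined
                continue
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```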

  1. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images. It shows the practical implementation of these image analysis methods in Matlab and enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods for areas of the skin of a human foot and face. The full source code of the developed application is provided as an attachment. (Figure: the main window of the program during dynamic analysis of the foot thermal image.) PMID:26556680

  2. Improvement of retinal blood vessel detection using morphological component analysis.

    PubMed

    Imani, Elaheh; Javidi, Malihe; Pourreza, Hamid-Reza

    2015-03-01

    Detection and quantitative measurement of variations in the retinal blood vessels can help diagnose several diseases, including diabetic retinopathy. Intrinsic characteristics of abnormal retinal images make blood vessel detection difficult. The major problem with traditional vessel segmentation algorithms is producing false positive vessels in the presence of diabetic retinopathy lesions. To overcome this problem, a novel scheme for extracting retinal blood vessels based on the morphological component analysis (MCA) algorithm is presented in this paper. MCA was developed based on sparse representation of signals. This algorithm assumes that each signal is a linear combination of several morphologically distinct components. In the proposed method, the MCA algorithm with appropriate transforms is adopted to separate vessels and lesions from each other. Afterwards, the Morlet Wavelet Transform is applied to enhance the retinal vessels. The final vessel map is obtained by adaptive thresholding. The performance of the proposed method is measured on the publicly available DRIVE and STARE datasets and compared with several state-of-the-art methods. Accuracies of 0.9523 and 0.9590 have been achieved on the DRIVE and STARE datasets, respectively, which are not only higher than those of most methods, but are also superior to the second human observer's performance. The results show that the proposed method achieves improved detection in abnormal retinal images and decreases false positive vessels in pathological regions compared to other methods. The robustness of the method in the presence of noise is also shown via experimental results. PMID:25697986
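
    The separation idea behind MCA, namely that each component is sparse in its own dictionary, can be sketched in 1D with a DCT basis for the smooth part and the identity basis for spike-like parts (a toy sketch with our own names and thresholds, not the paper's retinal pipeline):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def mca_separate(y, n_iter=50, lam_max=8.0, lam_min=0.5):
    """Toy 1D MCA: split y into a component sparse in the DCT basis
    (smooth) and one sparse in the identity basis (spikes) by
    alternating hard thresholding with a decreasing threshold."""
    y = np.asarray(y, dtype=float)
    C = dct_matrix(len(y))
    smooth = np.zeros_like(y)
    spikes = np.zeros_like(y)
    for lam in np.linspace(lam_max, lam_min, n_iter):
        r = y - smooth                       # residual seen by the spike part
        spikes = np.where(np.abs(r) > lam, r, 0.0)
        coeffs = C @ (y - spikes)            # residual seen by the smooth part
        smooth = C.T @ np.where(np.abs(coeffs) > lam, coeffs, 0.0)
    return smooth, spikes
```

    In the paper the same alternating-thresholding principle is applied in 2D with transforms chosen so that vessels and lesions are each sparse in their own dictionary.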

  3. Improvement of temporal resolution in blood concentration imaging using NIR speckle patterns

    NASA Astrophysics Data System (ADS)

    Yokoi, Naomichi; Shimatani, Yuichi; Kyoso, Masaki; Funamizu, Hideki; Aizu, Yoshihisa

    2013-06-01

    In imaging blood concentration change using near-infrared bio-speckles, temporal averaging of speckle images is necessary for speckle reduction. To improve the temporal resolution of blood concentration imaging, the use of spatial averaging is investigated on data measured in rat experiments. Results show that combining three-frame temporal averaging with 2×2-pixel spatial averaging is acceptable, achieving a temporal resolution of ten concentration images per second.
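
    The averaging scheme described, temporal averaging over a few frames followed by 2×2 spatial block averaging, can be sketched as follows (an illustrative sketch; names and array layout are our assumptions):

```python
import numpy as np

def spatiotemporal_average(frames, n_frames=3, block=2):
    """Average `n_frames` consecutive speckle frames, then average
    non-overlapping block x block pixel neighbourhoods."""
    frames = np.asarray(frames, dtype=float)
    t_avg = frames[:n_frames].mean(axis=0)
    h, w = t_avg.shape
    h2, w2 = h - h % block, w - w % block      # crop to a multiple of block
    blocks = t_avg[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return blocks.mean(axis=(1, 3))
```

    Trading 2×2 pixels of spatial resolution for fewer temporal frames is what raises the frame rate of the concentration maps.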

  4. Improved discrimination among similar agricultural plots using red-and-green-based pseudo-colour imaging

    NASA Astrophysics Data System (ADS)

    Doi, Ryoichi

    2016-04-01

    The effects of a pseudo-colour imaging method were investigated by discriminating among similar agricultural plots in remote sensing images acquired using the Airborne Visible/Infrared Imaging Spectrometer (Indiana, USA) and the Landsat 7 satellite (Fergana, Uzbekistan), and that provided by GoogleEarth (Toyama, Japan). From each dataset, red (R)-green (G)-R-G-blue yellow (RGrgbyB), and RGrgby-1B pseudo-colour images were prepared. From each, cyan, magenta, yellow, key black, L*, a*, and b* derivative grayscale images were generated. In the Airborne Visible/Infrared Imaging Spectrometer image, pixels were selected for corn no tillage (29 pixels), corn minimum tillage (27), and soybean (34) plots. Likewise, in the Landsat 7 image, pixels representing corn (73 pixels), cotton (110), and wheat (112) plots were selected, and in the GoogleEarth image, those representing soybean (118 pixels) and rice (151) were selected. When the 14 derivative grayscale images were used together with an RGB yellow grayscale image, the overall classification accuracy improved from 74 to 94% (Airborne Visible/Infrared Imaging Spectrometer), 64 to 83% (Landsat), or 77 to 90% (GoogleEarth). As an indicator of discriminatory power, the kappa significance improved 1018-fold (Airborne Visible/Infrared Imaging Spectrometer) or greater. The derivative grayscale images were found to increase the dimensionality and quantity of data. Herein, the details of the increases in dimensionality and quantity are further analysed and discussed.
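
    The cyan, magenta, yellow and key-black derivative grayscales mentioned above can be illustrated with the common naive RGB-to-CMYK conversion (a hedged sketch: the paper's exact conversion may differ, and the L*, a*, b* channels are omitted here):

```python
import numpy as np

def cmyk_channels(rgb):
    """Naive RGB -> CMYK conversion (RGB scaled to [0, 1]); returns the
    four channels as grayscale images. This is the common
    device-independent approximation, not a colour-managed conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    k = 1.0 - np.maximum.reduce([r, g, b])
    denom = np.where(k < 1.0, 1.0 - k, 1.0)   # avoid divide-by-zero on black
    c = (1.0 - r - k) / denom
    m = (1.0 - g - k) / denom
    y = (1.0 - b - k) / denom
    return c, m, y, k
```

    Each returned channel is a derivative grayscale image that can be fed, alongside the original bands, into a classifier to raise the dimensionality of the data, as the entry describes.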

  5. An improved mechanism for reversible watermarking of images

    NASA Astrophysics Data System (ADS)

    Ishtiaq, Muhammad; Haider, Bilal; Ali, Faiz; Khan, Salabat

    2013-07-01

    Reversible watermarking algorithms allow extraction of the hidden information to ensure authenticity and restoration of the original medium. Prediction-error expansion (PEE) methods give high payload with little or no visible distortion. In these methods, the performance of the algorithm relies heavily on the predictor's response. In this study, image pixels are predicted using the 8 pixels within a 3×3 neighborhood of each pixel. Image pixels are divided into two sets, and information bits in each set are embedded using PEE and histogram shifting (HS). Better prediction in conjunction with HS increases the imperceptibility of the watermarked image. Experimental results indicate superior performance of the proposed scheme in comparison to recent methods published in the literature.
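
    The core PEE idea, expanding the prediction error to carry one bit reversibly, can be sketched for a single pixel whose neighbors stay unmodified (a minimal sketch with hypothetical helper names; the paper's scheme adds histogram shifting, two interleaved pixel sets, and overflow handling):

```python
def pee_embed(pixel, neighbors, bit):
    """Prediction-error expansion: predict the pixel from its
    (unmodified) neighbors, then expand the error e -> 2e + bit."""
    pred = round(sum(neighbors) / len(neighbors))
    err = pixel - pred
    return pred + 2 * err + bit           # watermarked pixel value

def pee_extract(wm_pixel, neighbors):
    """Recover the embedded bit and restore the original pixel."""
    pred = round(sum(neighbors) / len(neighbors))
    err_exp = wm_pixel - pred
    bit = err_exp % 2                     # parity carries the bit
    err = (err_exp - bit) // 2            # undo the expansion
    return pred + err, bit
```

    Because the same neighbors produce the same prediction at embedding and extraction, the original pixel is restored exactly, which is what makes the watermark reversible.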

  6. Improved Phased Array Imaging of a Model Jet

    NASA Technical Reports Server (NTRS)

    Dougherty, Robert P.; Podboy, Gary G.

    2010-01-01

    An advanced phased array system, OptiNav Array 48, and a new deconvolution algorithm, TIDY, have been used to make octave band images of supersonic and subsonic jet noise produced by the NASA Glenn Small Hot Jet Acoustic Rig (SHJAR). The results are much more detailed than previous jet noise images. Shock cell structures and the production of screech in an underexpanded supersonic jet are observed directly. Some trends are similar to observations using spherical and elliptic mirrors that partially informed the two-source model of jet noise, but the radial distribution of high frequency noise near the nozzle appears to differ from expectations of this model. The beamforming approach has been validated by agreement between the integrated image results and the conventional microphone data.

  7. Improving the image and quantitative data of magnetic resonance imaging through hardware and physics techniques

    NASA Astrophysics Data System (ADS)

    Kaggie, Joshua D.

    In Chapter 1, an introduction to the basic principles of MRI is given, including the physical principles, basic pulse sequences, and basic hardware. Following the introduction, five papers, published and as yet unpublished, on improving the utility of MRI are presented. Chapter 2 discusses a small rodent imaging system that was developed for a clinical 3 T MRI scanner. The system integrated specialized radiofrequency (RF) coils with an insertable gradient, enabling 100 microm isotropic resolution imaging of the guinea pig cochlea in vivo, doubling the body gradient strength, slew rate, and contrast-to-noise ratio, and resulting in twice the signal-to-noise ratio (SNR) when compared to the smallest conforming birdcage. Chapter 3 discusses a system using BOLD MRI to measure T2* and invasive fiberoptic probes to measure renal oxygenation (pO2). The significance of this experiment is that it demonstrated previously unknown physiological effects on pO2, such as breath-holds that had an immediate (<1 sec) pO2 decrease (˜6 mmHg), and bladder pressure that had pO2 increases (˜6 mmHg). Chapter 4 determined the correlation between indicators of renal health and renal fat content. The R2 correlation between renal fat content and eGFR, serum cystatin C, urine protein, and BMI was less than 0.03, with a sample size of ˜100 subjects, suggesting that renal fat content will not be a useful indicator of renal health. Chapter 5 presents a hardware and pulse sequence technique for acquiring multinuclear 1H and 23Na data within the same pulse sequence. Our system demonstrated a very simple, inexpensive solution to SMI, acquired both nuclei on two 23Na channels using external modifications, and is the first demonstration of radially acquired SMI. Chapter 6 discusses a composite sodium and proton breast array that demonstrated a 2-5x improvement in sodium SNR and similar proton SNR when compared to a large coil with a linear sodium and linear proton channel. This coil is unique in that sodium

  8. SINFAC - SYSTEMS IMPROVED NUMERICAL FLUIDS ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Costello, F. A.

    1994-01-01

    The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 revision of SINDA, a general thermal analyzer program. The purpose of the additional routines is to allow for the modeling of active heat transfer loops. The modeler can simulate the steady-state and pseudo-transient operations of 16 different heat transfer loop components including radiators, evaporators, condensers, mechanical pumps, reservoirs and many types of valves and fittings. In addition, the program contains a property analysis routine that can be used to compute the thermodynamic properties of 20 different refrigerants. SINFAC can simulate the response to transient boundary conditions. SINFAC was first developed as a method for computing the steady-state performance of two-phase systems. It was then modified using CNFRWD, SINDA's explicit time-integration scheme, to accommodate transient thermal models. However, SINFAC cannot simulate pressure drops due to time-dependent fluid acceleration, transient boil-out, or transient fill-up, except in the accumulator. SINFAC also requires the user to be familiar with SINDA. The solution procedure used by SINFAC is similar to that which an engineer would use to solve a system manually. The solution to a system requires the determination of all of the outlet conditions of each component such as the flow rate, pressure, and enthalpy. To obtain these values, the user first estimates the inlet conditions to the first component of the system, then computes the outlet conditions from the data supplied by the manufacturer of the first component. The user then estimates the temperature at the outlet of the third component and computes the corresponding flow resistance of the second component. With the flow resistance of the second component, the user computes the conditions downstream, namely the inlet conditions of the third component. The computations follow similarly for the rest of the system, back to the first component.

  9. Precision Improvement of Photogrammetry by Digital Image Correlation

    NASA Astrophysics Data System (ADS)

    Shih, Ming-Hsiang; Sung, Wen-Pei; Tung, Shih-Heng; Hsiao, Hanwei

    2016-04-01

    The combination of aerial triangulation technology and unmanned aerial vehicles greatly reduces the cost and application threshold of the digital surface model technique. According to reports in the literature, the measurement error lies between 8~15 cm in the x-y coordinates and 10~20 cm in elevation. This measurement accuracy is already sufficient for geological structure surveys, but it is inadequate for deformation monitoring of slopes and structures. The main factors affecting the accuracy of aerial triangulation are image quality, measurement accuracy of the control points, and image matching accuracy. For image matching, the commonly used techniques are Harris corner detection and the Scale Invariant Feature Transform (SIFT). Their matching error is on the scale of pixels, usually between 1 and 2 pixels. This study suggests that matching error is the main factor causing aerial triangulation errors. Therefore, this study proposes applying the Digital Image Correlation (DIC) method instead of the matching methods mentioned above. The DIC method can provide a matching accuracy better than 0.01 pixel and can thus greatly enhance the accuracy of aerial triangulation, to the sub-centimeter level. In this study, the effects of image matching error on the measurement error of the 3-dimensional coordinates of ground points are explored by numerical simulation. It was confirmed that when the image matching error is reduced to 0.01 pixel, the ground three-dimensional coordinate measurement error can be controlled at the mm level. A combination of the DIC technique and traditional aerial triangulation offers potential for deformation monitoring of slopes and structures, enabling early warning of natural disasters.
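
    A standard way to reach sub-pixel matching accuracy of the kind cited above is to fit a parabola through the correlation scores around the integer peak (a generic sketch, not the authors' DIC code):

```python
def subpixel_peak(c_left, c_center, c_right):
    """Fit a parabola through three correlation samples around the
    integer peak; returns the sub-pixel offset of the vertex,
    in [-0.5, 0.5], relative to the center sample."""
    denom = c_left - 2.0 * c_center + c_right
    if denom == 0:                      # flat correlation, no refinement
        return 0.0
    return 0.5 * (c_left - c_right) / denom
```

    Applying this refinement independently along rows and columns turns an integer-pixel correlation peak into a sub-pixel displacement estimate.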

  10. Enhanced imaging of microcalcifications in digital breast tomosynthesis through improved image-reconstruction algorithms

    SciTech Connect

    Sidky, Emil Y.; Pan Xiaochuan; Reiser, Ingrid S.; Nishikawa, Robert M.; Moore, Richard H.; Kopans, Daniel B.

    2009-11-15

    Purpose: The authors develop a practical, iterative algorithm for image-reconstruction in undersampled tomographic systems, such as digital breast tomosynthesis (DBT). Methods: The algorithm controls image regularity by minimizing the image total p variation (TpV), a function that reduces to the total variation when p=1.0 or the image roughness when p=2.0. Constraints on the image, such as image positivity and estimated projection-data tolerance, are enforced by projection onto convex sets. The fact that the tomographic system is undersampled translates to the mathematical property that many widely varied resultant volumes may correspond to a given data tolerance. Thus the application of image regularity serves two purposes: (1) Reduction in the number of resultant volumes out of those allowed by fixing the data tolerance, finding the minimum image TpV for fixed data tolerance, and (2) traditional regularization, sacrificing data fidelity for higher image regularity. The present algorithm allows for this dual role of image regularity in undersampled tomography. Results: The proposed image-reconstruction algorithm is applied to three clinical DBT data sets. The DBT cases include one with microcalcifications and two with masses. Conclusions: Results indicate that there may be a substantial advantage in using the present image-reconstruction algorithm for microcalcification imaging.
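
    The TpV objective can be sketched directly from its description: the gradient-magnitude image raised to the power p and summed, reducing to total variation at p=1 and image roughness at p=2 (an illustrative sketch; the paper's discrete gradient convention may differ):

```python
import numpy as np

def total_p_variation(img, p=1.0):
    """Sum over pixels of the gradient magnitude raised to the power p:
    p = 1 gives the total variation, p = 2 the image roughness."""
    img = np.asarray(img, dtype=float)
    dx = np.diff(img, axis=1)[:-1, :]   # forward differences, cropped
    dy = np.diff(img, axis=0)[:, :-1]   # to a common (m-1, n-1) grid
    grad_mag = np.sqrt(dx ** 2 + dy ** 2)
    return float(np.sum(grad_mag ** p))
```

    In the reconstruction algorithm this functional is minimized subject to the data-tolerance and positivity constraints described above.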

  11. High resolution ultraviolet imaging spectrometer for latent image analysis.

    PubMed

    Lyu, Hang; Liao, Ningfang; Li, Hongsong; Wu, Wenmin

    2016-03-21

    In this work, we present a close-range ultraviolet imaging spectrometer with high spatial resolution and reasonably high spectral resolution. As transmissive optical components cause chromatic aberration in the ultraviolet (UV) spectral range, an all-reflective imaging scheme is introduced to improve image quality. The proposed instrument consists of an oscillating mirror, a Cassegrain objective, a Michelson structure, an Offner relay, and a UV-enhanced CCD. The finished spectrometer has a spatial resolution of 29.30 μm on the target plane; the spectral scope covers both the near and middle UV bands; and approximately 100 wavelength samples can be obtained over the range of 240~370 nm. The control computer coordinates all components of the instrument and enables capturing a series of images, which can be reconstructed into an interferogram datacube. The datacube can be converted into a spectrum datacube, which contains spectral information for each pixel with many wavelength samples. A spectral calibration is carried out using a high-pressure mercury discharge lamp. A test run demonstrated that this interferometric configuration can obtain high-resolution spectrum datacubes. A pattern recognition algorithm is introduced to analyze the datacube and distinguish latent traces from the base materials. This design is particularly good at identifying latent traces in the application field of forensic imaging. PMID:27136837

  12. A framework for joint image-and-shape analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Tannenbaum, Allen; Bouix, Sylvain

    2014-03-01

    Techniques in medical image analysis are often used for comparison or regression on image intensities. In general, the domain of the image is a given Cartesian grid. Shape analysis, on the other hand, studies the similarities and differences among spatial objects of arbitrary geometry and topology. Usually, there is no function defined on the domain of shapes. Recently, there has been a growing need for defining and analyzing functions on the shape space, and for a coupled analysis of both the shapes and the functions defined on them. Following this direction, in this work we present a coupled analysis of both images and shapes. As a result, statistically significant discrepancies in both the image intensities and the underlying shapes are detected. The method is applied to brain images of schizophrenia patients and heart images of atrial fibrillation patients.

  13. Improving Appropriateness and Quality in Cardiovascular Imaging: A Review of the Evidence.

    PubMed

    Bhattacharyya, Sanjeev; Lloyd, Guy

    2015-12-01

    High-quality cardiovascular imaging requires a structured process to ensure appropriate patient selection, accurate and reproducible data acquisition, and timely reporting which answers clinical questions and improves patient outcomes. Several guidelines provide frameworks to assess quality. This article reviews interventions to improve quality in cardiovascular imaging, including methods to reduce inappropriate testing, improve accuracy, reduce interobserver variability, and reduce diagnostic and reporting errors. PMID:26628582

  14. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1982-01-01

    Work done on evaluating the geometric and radiometric quality of early LANDSAT-4 sensor data is described. Band-to-band and channel-to-channel registration evaluations were carried out using a line correlator. Visual blink comparisons were run on an image display to observe band-to-band registration over 512 x 512 pixel blocks. The results indicate a 0.5-pixel line misregistration between the 1.55-1.75 and 2.08-2.35 micrometer bands and the first four bands. A misregistration of the thermal IR band of four 30-m pixels in both line and column was also observed. Radiometric evaluation included mean and variance analysis of individual detectors and principal components analysis. Results indicate that detector bias for all bands is very close to or within tolerance. Bright spots were observed in the thermal IR band on an 18-line by 128-pixel grid; no explanation for this was pursued. The general overall quality of the TM was judged to be very high.

  15. Improved Crack Type Classification Neural Network based on Square Sub-images of Pavement Surface

    NASA Astrophysics Data System (ADS)

    Lee, Byoung Jik; Lee, Hosin “David”

    A previous neural network based on proximity values was developed using rectangular pavement images. However, the proximity value derived from a rectangular image was biased towards transverse cracking. By sectioning the rectangular image into a set of square sub-images, the neural network based on the proximity value becomes more robust and consistent in determining the crack type. This paper presents an improved neural network that determines the crack type of a pavement surface image from square sub-images, and compares it with the network trained on rectangular pavement images. The advantage of using square sub-images is demonstrated on sample images of transverse cracking, longitudinal cracking and alligator cracking.
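    The sectioning step itself is straightforward. A minimal sketch (an assumption for illustration; tile size and image dimensions are invented, and the paper's proximity-value computation is not reproduced here):

```python
import numpy as np

def square_subimages(image, tile):
    """Split a 2-D array into non-overlapping tile x tile squares,
    discarding any partial tiles at the right/bottom edges."""
    h, w = image.shape
    rows, cols = h // tile, w // tile
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tiles.append(image[r * tile:(r + 1) * tile,
                               c * tile:(c + 1) * tile])
    return tiles

# A 256 x 768 "rectangular pavement image" yields twelve 128 x 128 squares,
# each of which would be classified independently.
img = np.zeros((256, 768))
tiles = square_subimages(img, 128)
print(len(tiles))          # 12
```

    Because each square tile has no preferred axis, features computed per tile no longer favour transverse over longitudinal cracking.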

  16. Improving the Self-Image of the Socially Disabled

    ERIC Educational Resources Information Center

    Matthews, Lillian B.

    1975-01-01

    Reviewing the literature on the relationship of physical attractiveness to self-image and social acceptability, the author points out the need for self-care courses as "social therapy," gives a step-by-step procedure for developing such programs for the institutionalized, and tells how to become a teacher in a social therapy program. (AJ)

  17. Indeterminacy and Image Improvement in Snake Infrared ``Vision''

    NASA Astrophysics Data System (ADS)

    van Hemmen, J. Leo

    2006-03-01

    Many snake species have infrared sense organs on their head that can detect warm-blooded prey even in total darkness. The physical mechanism underlying this sense is that of a pinhole camera: the infrared image is projected onto a sensory 'pit membrane' of small size (of order mm^2). To elicit a neuronal response the energy flux per unit time has to exceed a minimum threshold; furthermore, since the source of this energy, the prey, moves at finite speed, the pinhole substituting for a lens has to be rather large (~1 mm). Accordingly, the image is strongly blurred. We have therefore done two things. First, we have determined the precise optical resolution that a snake can achieve for a given input. Second, in view of the known, though still restricted, precision, one may ask whether, and how, a snake can reconstruct the original image; the point is that the information needed to reconstruct the original temperature distribution in space is still available. We present an explicit mathematical model [1] allowing even high-quality reconstruction from the low-quality image on the pit membrane and indicate how a neuronal implementation might be realized. Ref: [1] A.B. Sichert, P. Friedel, and J.L. van Hemmen, TU Munich preprint (2005).
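    The reconstruction idea can be illustrated numerically: a wide pinhole acts as convolution with a large aperture kernel, and if the kernel is known, a regularized Fourier inverse recovers much of the scene. This is a hedged stand-in for the paper's model (the kernel shape, regularization, and all parameters below are illustrative assumptions, not the authors' formulation):

```python
import numpy as np

n = 64
scene = np.zeros((n, n)); scene[30:34, 40:44] = 1.0     # warm "prey"

# Large disk aperture -> heavy blur (circular convolution for simplicity)
yy, xx = np.mgrid[:n, :n]
r = np.hypot(yy - n // 2, xx - n // 2)
kernel = (r < 8).astype(float); kernel /= kernel.sum()

K = np.fft.fft2(np.fft.ifftshift(kernel))
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * K))

# Tikhonov/Wiener-style regularized inverse: K* / (|K|^2 + eps)
eps = 1e-3
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * np.conj(K) /
                                (np.abs(K) ** 2 + eps)))

err_blur = np.abs(blurred - scene).max()
err_rest = np.abs(restored - scene).max()
print(err_rest < err_blur)   # the reconstruction is closer to the scene
```

    The regularization constant plays the role of the known-but-restricted precision: spatial frequencies the aperture suppresses below the noise floor cannot be recovered.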

  18. Improving Public Perception of Behavior Analysis.

    PubMed

    Freedman, David H

    2016-05-01

    The potential impact of behavior analysis is limited by the public's dim awareness of the field. The mass media rarely cover behavior analysis, other than to echo inaccurate negative stereotypes about control and punishment. The media instead play up appealing but less-evidence-based approaches to problems, a key example being the touting of dubious diets over behavioral approaches to losing excess weight. These sorts of claims distort or skirt scientific evidence, undercutting the fidelity of behavior analysis to scientific rigor. Strategies for better connecting behavior analysis with the public might include reframing the field's techniques and principles in friendlier, more resonant form; pushing direct outcome comparisons between behavior analysis and its rivals in simple terms; and playing up the "warm and fuzzy" side of behavior analysis. PMID:27606184

  19. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT)

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-01

    The bone suppression technique based on advanced image processing can suppress the conspicuity of bones on chest radiographs, creating soft-tissue images similar to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy by demonstrating the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules, and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that in conventional fluoroscopic images. The tracking errors were halved in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implanted fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential means of measuring respiratory displacement of the target. This paper was presented at RSNA 2013; the work was carried out at Kanazawa University, Japan.
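    The template-matching step is the standard normalized-cross-correlation search. The study used commercial software, so the pure-NumPy sketch below is an illustrative stand-in, not the authors' implementation:

```python
import numpy as np

def match_template(frame, template):
    """Return (row, col) of the best normalized cross-correlation match."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = tnorm * np.sqrt((wz * wz).sum())
            if denom == 0:
                continue
            score = (t * wz).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic "nodule" placed at (12, 20) in a noisy frame
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 0.05, (48, 64))
frame[12:20, 20:28] += 1.0               # bright 8 x 8 target
template = frame[12:20, 20:28].copy()
print(match_template(frame, template))   # (12, 20)
```

    Bone suppression helps precisely because it removes rib structure from `frame`, so the correlation peak follows the nodule rather than overlapping bone edges.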

  20. Correlation between the signal-to-noise ratio improvement factor (KSNR) and clinical image quality for chest imaging with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Saunderson, J. R.; Beavis, A. W.

    2015-12-01

    This work assessed the appropriateness of the signal-to-noise ratio improvement factor (KSNR) as a metric for the optimisation of computed radiography (CR) of the chest. The results of a previous study, in which four experienced image evaluators graded computer-simulated chest images using a visual grading analysis scoring (VGAS) scheme to quantify the benefit of using an anti-scatter grid, were used for the clinical image quality measurement (number of simulated patients = 80). The KSNR was used to calculate the improvement in physical image quality measured in a physical chest phantom. KSNR correlation with VGAS was assessed as a function of chest region (lung, spine and diaphragm/retrodiaphragm), and as a function of x-ray tube voltage in a given chest region; the latter correlation was quantified by the Pearson correlation coefficient. VGAS and KSNR image quality metrics demonstrated no correlation in the lung region but did show correlation in the spine and diaphragm/retrodiaphragmatic regions. However, there was no positive correlation as a function of tube voltage in any region: Pearson correlation coefficients (R) of -0.93 (p = 0.015) for lung, -0.95 (p = 0.46) for spine, and -0.85 (p = 0.015) for diaphragm were found. All demonstrate strong negative correlations, indicating conflicting results, i.e. KSNR increases with tube voltage but VGAS decreases. Medical physicists should use the KSNR metric with caution when assessing any potential improvement in clinical chest image quality when introducing an anti-scatter grid for CR imaging, especially in the lung region. This metric may also be a limited descriptor of clinical chest image quality as a function of tube voltage when a grid is used routinely.
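    The "conflicting trends" situation the abstract describes (a physical metric rising with tube voltage while the visual grading score falls) shows up directly as a strong negative Pearson coefficient between the two metrics. The numbers below are invented purely to illustrate the computation, not data from the study:

```python
import numpy as np

kv = np.array([70.0, 81.0, 90.0, 102.0, 117.0])   # tube voltages
ksnr = np.array([1.10, 1.22, 1.31, 1.40, 1.52])   # rises with kV (made up)
vgas = np.array([3.9, 3.5, 3.2, 2.8, 2.4])        # falls with kV (made up)

# Pearson correlation between the two image-quality metrics
r = np.corrcoef(ksnr, vgas)[0, 1]
print(round(r, 2))   # strongly negative, close to -1
```

    A large |R| with opposite-signed trends is exactly why a physical SNR metric alone can be a misleading optimisation target for perceived clinical quality.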

  1. Correlation between the signal-to-noise ratio improvement factor (KSNR) and clinical image quality for chest imaging with a computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Saunderson, J R; Beavis, A W

    2015-12-01

    This work assessed the appropriateness of the signal-to-noise ratio improvement factor (KSNR) as a metric for the optimisation of computed radiography (CR) of the chest. The results of a previous study, in which four experienced image evaluators graded computer-simulated chest images using a visual grading analysis scoring (VGAS) scheme to quantify the benefit of using an anti-scatter grid, were used for the clinical image quality measurement (number of simulated patients = 80). The KSNR was used to calculate the improvement in physical image quality measured in a physical chest phantom. KSNR correlation with VGAS was assessed as a function of chest region (lung, spine and diaphragm/retrodiaphragm), and as a function of x-ray tube voltage in a given chest region; the latter correlation was quantified by the Pearson correlation coefficient. VGAS and KSNR image quality metrics demonstrated no correlation in the lung region but did show correlation in the spine and diaphragm/retrodiaphragmatic regions. However, there was no positive correlation as a function of tube voltage in any region: Pearson correlation coefficients (R) of -0.93 (p = 0.015) for lung, -0.95 (p = 0.46) for spine, and -0.85 (p = 0.015) for diaphragm were found. All demonstrate strong negative correlations, indicating conflicting results, i.e. KSNR increases with tube voltage but VGAS decreases. Medical physicists should use the KSNR metric with caution when assessing any potential improvement in clinical chest image quality when introducing an anti-scatter grid for CR imaging, especially in the lung region. This metric may also be a limited descriptor of clinical chest image quality as a function of tube voltage when a grid is used routinely. PMID:26540441

  2. Dynamic chest image analysis: model-based pulmonary perfusion analysis with pyramid images

    NASA Astrophysics Data System (ADS)

    Liang, Jianming; Haapanen, Arto; Jaervi, Timo; Kiuru, Aaro J.; Kormano, Martti; Svedstrom, Erkki; Virkki, Raimo

    1998-07-01

    The aim of the study 'Dynamic Chest Image Analysis' is to develop computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected at different phases of the respiratory/cardiac cycles in a short period of time. We have proposed a framework for ventilation study with an explicit ventilation model based on pyramid images. In this paper, we extend the framework to pulmonary perfusion study. A perfusion model and the truncated pyramid are introduced. The perfusion model aims at extracting accurate, geographic perfusion parameters, and the truncated pyramid helps in understanding perfusion at multiple resolutions and speeding up the convergence process in optimization. Three cases are included to illustrate the experimental results.
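    The pyramid representation the framework relies on can be sketched minimally. This is an assumption for illustration: simple 2 x 2 block averaging stands in for whatever smoothing-and-subsampling the paper's pyramid (and truncated pyramid) actually uses:

```python
import numpy as np

def build_pyramid(image, levels):
    """Return [full-res, half-res, quarter-res, ...] via 2x2 averaging."""
    pyramid = [image]
    for _ in range(levels - 1):
        a = pyramid[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2   # crop to even size
        a = a[:h, :w]
        coarse = 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] +
                         a[0::2, 1::2] + a[1::2, 1::2])
        pyramid.append(coarse)
    return pyramid

levels = build_pyramid(np.ones((64, 96)), 4)
print([p.shape for p in levels])   # [(64, 96), (32, 48), (16, 24), (8, 12)]
```

    Fitting the perfusion model coarse-to-fine on such a stack is what speeds up convergence: parameters estimated at a coarse level initialize the optimization at the next finer level. A truncated pyramid simply omits some of the finest or coarsest levels.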

  3. 3D GRASE PROPELLER: Improved Image Acquisition Technique for Arterial Spin Labeling Perfusion Imaging

    PubMed Central

    Tan, Huan; Hoge, W. Scott; Hamilton, Craig A.; Günther, Matthias; Kraft, Robert A.

    2014-01-01

    Arterial spin labeling (ASL) is a non-invasive technique that can quantitatively measure cerebral blood flow (CBF). While ASL traditionally employs 2D EPI or spiral acquisition trajectories, single-shot 3D GRASE is gaining popularity in ASL due to its inherent SNR advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE with a PROPELLER trajectory (3DGP) is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full-brain perfusion images were acquired at a 3 × 3 × 5 mm³ nominal voxel size with Q2TIPS-FAIR as the ASL preparation sequence. Data from five healthy subjects were acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement with 3D GRASE in CBF quantification, 3DGP demonstrated reduced through-plane blurring, improved anatomical detail, high repeatability and robustness against motion, making it suitable for routine clinical use. PMID:21254211

  4. Medical Image Analysis by Cognitive Information Systems - a Review.

    PubMed

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

    This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems, and the semantic processes involved are described as they apply to different types of medical images. Cognitive information systems are defined on the basis of methods for the semantic analysis and interpretation of information - here, medical images - applied to the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis is proposed as a way to analyze the meaning of the data: meaning is contained in information, for example in medical images. Medical image analysis is presented and discussed as applied to various types of medical images showing selected human organs with different pathologies; these images were analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis are also defined for decision-support tasks. This is very important, for example, in diagnostic and therapeutic processes and in the selection of semantic aspects/features from the analyzed data sets; those features make a new mode of analysis possible. PMID:27526188

  5. Image based SAR product simulation for analysis

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method is described of product simulation that employs also a real SAR input image for image simulation. This can be denoted as 'image-based simulation'. Different methods to perform this SAR prediction are presented and advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit; the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.
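    For context, the classical (non image-based) simulation the abstract contrasts against maps a DEM through a backscatter curve sigma0(incidence angle) to predict gray values. The sketch below is a deliberately simplified, hedged illustration of that pipeline; the function names, the cosine-shaped curve, and all numbers are assumptions, not the paper's model:

```python
import numpy as np

def simulate_sar_row(dem_row, spacing, look_angle_deg, backscatter):
    """Gray values for one range line: local terrain slope -> local
    incidence angle -> backscatter-curve lookup."""
    slope = np.arctan(np.gradient(dem_row, spacing))   # slope, radians
    incidence = np.deg2rad(look_angle_deg) - slope     # local incidence
    return backscatter(np.clip(incidence, 0.0, np.pi / 2))

# Toy backscatter curve: surfaces facing the radar return more energy
sigma0 = lambda theta: np.cos(theta) ** 2

dem = np.array([0.0, 5.0, 10.0, 10.0, 5.0, 0.0])       # a small ridge, m
gray = simulate_sar_row(dem, spacing=10.0, look_angle_deg=35.0,
                        backscatter=sigma0)
print(gray.shape)    # (6,) -- foreslope pixels brighter than backslope
```

    Image-based simulation replaces the generic sigma0 curve with radiometry taken from a real SAR image of the same terrain, which is what allows an ascending-orbit image to be converted into a predicted descending-orbit image.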

  6. Imaging biomarkers in multiple Sclerosis: From image analysis to population imaging.

    PubMed

    Barillot, Christian; Edan, Gilles; Commowick, Olivier

    2016-10-01

    The production of imaging data in medicine increases more rapidly than the capacity of computing models to extract information from it. The grand challenges of better understanding the brain, offering better care for neurological disorders, and stimulating new drug design will not be achieved without significant advances in computational neuroscience. The road to success is to develop a new, generic, computational methodology and to confront and validate this methodology on relevant diseases with adapted computational infrastructures. This new concept sustains the need to build new research paradigms to better understand the natural history of the pathology at the early phase; to better aggregate data that will provide the most complete representation of the pathology in order to better correlate imaging with other relevant features such as clinical, biological or genetic data. In this context, one of the major challenges of neuroimaging in clinical neurosciences is to detect quantitative signs of pathological evolution as early as possible to prevent disease progression, evaluate therapeutic protocols or even better understand and model the natural history of a given neurological pathology. Many diseases encompass brain alterations often not visible on conventional MRI sequences, especially in normal appearing brain tissues (NABT). MRI has often a low specificity for differentiating between possible pathological changes which could help in discriminating between the different pathological stages or grades. The objective of medical image analysis procedures is to define new quantitative neuroimaging biomarkers to track the evolution of the pathology at different levels. This paper illustrates this issue in one acute neuro-inflammatory pathology: Multiple Sclerosis (MS). It exhibits the current medical image analysis approaches and explains how this field of research will evolve in the next decade to integrate larger scale of information at the temporal, cellular

  7. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  8. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http

  9. Atomic force microscope, molecular imaging, and analysis.

    PubMed

    Chen, Shu-wen W; Teulon, Jean-Marie; Godon, Christian; Pellequer, Jean-Luc

    2016-01-01

    Image visibility is a central issue in analyzing all kinds of microscopic images. Increasing intensity contrast raises image visibility and thereby reveals fine image features, so a proper evaluation of results obtained with the current imaging parameters can provide feedback for future imaging experiments. In this work, we have applied the Laplacian function of image intensity as either an additive component (Laplacian mask) or a multiplying factor (Laplacian weight) to enhance the image contrast of high-resolution AFM images of two molecular systems: an unknown protein imaged in air, provided by AFM COST Action TD1002 (http://www.afm4nanomedbio.eu/), and tobacco mosaic virus (TMV) particles imaged in liquid. Based on both visual inspection and a quantitative representation of contrast measurements, we found that the Laplacian weight is more effective than the Laplacian mask for the unknown protein, whereas for the TMV system the strengthened Laplacian mask is superior to the Laplacian weight. These results indicate that a mathematical function, as exemplified by the Laplacian function, may yield varied processing effects under different operations. To interpret the diversity of molecular structure and topology in images, an explicit description of the processing procedures should be included in scientific reports alongside the instrumental setups. PMID:26224520
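    The two operations can be sketched concretely. The 4-neighbour discrete Laplacian and the gain constants below are illustrative assumptions; the paper's exact formulation may differ:

```python
import numpy as np

def laplacian(img):
    """4-neighbour discrete Laplacian (zero on the border)."""
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    return lap

def laplacian_mask(img, k=1.0):
    """Additive sharpening: I - k * Laplacian(I)."""
    return img - k * laplacian(img)

def laplacian_weight(img, k=0.5):
    """Multiplicative enhancement: I * (1 + k * |Laplacian(I)|)."""
    return img * (1.0 + k * np.abs(laplacian(img)))

# A step edge: the additive mask widens the local intensity range
img = np.zeros((8, 8)); img[:, 4:] = 1.0
out = laplacian_mask(img)
print(out[4, 3], out[4, 4])   # -1.0 2.0 -- contrast across the edge grows
```

    Note the qualitative difference: the mask changes dark and bright sides symmetrically, while the weight scales with local intensity, which is one plausible reason the two operations suit different specimens.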

  10. Scanning ion images; analysis of pharmaceutical drugs at organelle levels

    NASA Astrophysics Data System (ADS)

    Larras-Regard, E.; Mony, M.-C.

    1995-05-01

    With the ion analyser IMS 4F used in microprobe mode, it is possible to obtain images of fields of 10 × 10 μm², corresponding to an effective magnification of 7000 with a lateral resolution of 250 nm, technical characteristics that are appropriate for the size of cell organelles. It is possible to characterize organelles by their relative CN-, P- and S- intensities when the tissues are prepared by freeze fixation and freeze substitution. The recognition of organelles enables correlation of the tissue distribution of ebselen, a pharmaceutical drug containing selenium. The various metabolites characterized in plasma, bile and urine during biotransformation of ebselen all contain selenium, so the presence of the drug and its metabolites can be followed by images of Se. We were also able to detect the endogenous content of Se in tissue, owing to the increased sensitivity of ion analysis in microprobe mode. Our results show a natural occurrence of Se in the border corresponding to the basal lamina of cells of proximal but not distal tubules of the kidney. After treatment of rats with ebselen, an additional site of Se is found in the lysosomes. We suggest that in addition to direct elimination of ebselen and its metabolites by glomerular filtration and urinary elimination, a second process of elimination may occur: Se compounds reaching the epithelial cells via the basal lamina accumulate in lysosomes prior to excretion into the tubular fluid. The technical developments of using the IMS 4F instrument in microprobe mode and the improvement in the preparation of samples by freeze fixation and substitution further extend the limits of ion analysis in biology. Direct imaging of trace elements and of molecules marked with a tracer makes it possible to determine their targets by comparison with images of subcellular structures. This is a promising advance in the study of pathways of compounds within tissues, cells and the whole organism.

  11. Image analysis of chest radiographs. Final report

    SciTech Connect

    Hankinson, J.L.

    1982-06-01

    The report demonstrates the feasibility of using a computer for automated interpretation of chest radiographs for pneumoconiosis. The primary goal of this project was to continue testing and evaluating the prototype system with a larger set of films. After review of the final contract report and a review of the current literature, it was clear that several modifications to the prototype system were needed before the project could continue. These modifications can be divided into two general areas. The first area was in improving the stability of the system and compensating for the diversity of film quality which exists in films obtained in a surveillance program. Since the system was to be tested with a large number of films, it was impractical to be extremely selective of film quality. The second area is in terms of processing time. With a large set of films, total processing time becomes much more significant. An image display was added to the system so that the computer determined lung boundaries could be verified for each film. A film handling system was also added, enabling the system to scan films continuously without attendance.

  12. Image analysis of optic nerve disease.

    PubMed

    Burgoyne, C F

    2004-11-01

    Existing methodologies for imaging the optic nerve head surface topography and measuring the retinal nerve fibre layer thickness include confocal scanning laser ophthalmoscopy (Heidelberg retinal tomograph), optical coherence tomography, and scanning laser polarimetry. For cross-sectional screening of patient populations, all three approaches have achieved sensitivities and specificities within the 60-80th percentile in various studies, with occasional specificities greater than 90% in select populations. Nevertheless, these methods are not likely to provide useful assistance for the experienced examiner at their present level of performance. For longitudinal change detection in individual patients, strategies for clinically specific change detection have been rigorously evaluated for confocal scanning laser tomography only. While these initial studies are encouraging, applying these algorithms in larger numbers of patients is now necessary. Future directions for these technologies are likely to include ultra-high resolution optical coherence tomography, the use of neural network/machine learning classifiers to improve clinical decision-making, and the ability to evaluate the susceptibility of individual optic nerve heads to potential damage from a given level of intraocular pressure or systemic blood pressure. PMID:15534606

  13. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high-spatial-resolution remotely sensed imagery. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.
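    One plausible encoding of a segmentation hierarchy as a labelled graph (a sketch under assumptions; the paper's actual Subdue input format and edge labels are not reproduced here) uses region vertices plus "child_of" edges for the hierarchy and "adjacent_to" edges for spatial neighbourhood:

```python
def hierarchy_to_graph(parents, adjacency):
    """parents: {region: parent_region}; adjacency: set of frozensets
    of spatially neighbouring regions at the finest level."""
    vertices = sorted(set(parents) | set(parents.values()))
    edges = [(c, "child_of", p) for c, p in sorted(parents.items())]
    edges += [(a, "adjacent_to", b)
              for a, b in sorted(map(sorted, adjacency))]
    return vertices, edges

# Fine-level regions A, B, C merge into coarse-level regions AB and C0
parents = {"A": "AB", "B": "AB", "C": "C0"}
adjacency = {frozenset({"A", "B"}), frozenset({"B", "C"})}
verts, edges = hierarchy_to_graph(parents, adjacency)
print(len(verts), len(edges))   # 5 5
```

    A graph-based discovery system can then mine this structure for repeated substructures spanning both the spatial and hierarchical relations, which pixel-based analysis cannot express.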

  14. A Method to Improve the Accuracy of Particle Diameter Measurements from Shadowgraph Images

    NASA Astrophysics Data System (ADS)

    Erinin, Martin A.; Wang, Dan; Liu, Xinan; Duncan, James H.

    2015-11-01

    A method to improve the accuracy of the measurement of the diameter of particles using shadowgraph images is discussed. To obtain data for analysis, a transparent glass calibration reticle, marked with black circular dots of known diameters, is imaged with a high-resolution digital camera using backlighting separately from both a collimated laser beam and diffuse white light. The diameter and intensity of each dot is measured by fitting an inverse hyperbolic tangent function to the particle image intensity map. Using these calibration measurements, a relationship between the apparent diameter and intensity of the dot and its actual diameter and position relative to the focal plane of the lens is determined. It is found that the intensity decreases and apparent diameter increases/decreases (for collimated/diffuse light) with increasing distance from the focal plane. Using the relationships between the measured properties of each dot and its actual size and position, an experimental calibration method has been developed to increase the particle-diameter-dependent range of distances from the focal plane for which accurate particle diameter measurements can be made. The support of the National Science Foundation under grant OCE0751853 from the Division of Ocean Sciences is gratefully acknowledged.
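    The edge-fitting idea can be illustrated with a synthetic radial profile. This hedged sketch uses a forward tanh edge model, equivalent to the inverse-tanh fit the abstract describes, and a brute-force least-squares search; the model parameters and grids are invented for illustration:

```python
import numpy as np

def profile(r, R, w):
    """Tanh edge model: dark inside radius R, bright outside, width w."""
    return 0.5 * (1.0 + np.tanh((r - R) / w))

# Synthetic measured profile of a dot with true radius 12.5 px
r = np.linspace(0.0, 25.0, 101)
measured = profile(r, 12.5, 1.8)          # noise-free for clarity

# Brute-force least squares over candidate radii and edge widths
radii = np.linspace(5.0, 20.0, 301)       # 0.05 px steps
widths = np.linspace(0.5, 4.0, 36)        # 0.1 steps
best = min(((np.sum((profile(r, R, w) - measured) ** 2), R)
            for R in radii for w in widths))
print(round(best[1], 3))   # 12.5
```

    In the actual calibration, the fitted apparent radius and peak intensity of each reticle dot are then regressed against its known diameter and distance from the focal plane, yielding the correction relationship described above.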

  15. Image quality improvement in megavoltage cone beam CT using an imaging beam line and a sintered pixelated array system

    SciTech Connect

    Breitbach, Elizabeth K.; Maltz, Jonathan S.; Gangadharan, Bijumon; Bani-Hashemi, Ali; Anderson, Carryn M.; Bhatia, Sudershan K.; Stiles, Jared; Edwards, Drake S.; Flynn, Ryan T.

    2011-11-15

    Purpose: To quantify the improvement in megavoltage cone beam computed tomography (MVCBCT) image quality enabled by the combination of a 4.2 MV imaging beam line (IBL) with a carbon electron target and a detector system equipped with a novel sintered pixelated array (SPA) of translucent Gd₂O₂S ceramic scintillator. Clinical MVCBCT images are traditionally acquired with the same 6 MV treatment beam line (TBL) that is used for cancer treatment, a standard amorphous Si (a-Si) flat panel imager, and the Kodak Lanex Fast-B (LFB) scintillator. The IBL produces a greater fluence of keV-range photons than the TBL, to which the detector response is more optimal, and the SPA is a more efficient scintillator than the LFB. Methods: A prototype IBL + SPA system was installed on a Siemens Oncor linear accelerator equipped with the MVision™ image guided radiation therapy (IGRT) system. A SPA strip consisting of four neighboring tiles and measuring 40 cm by 10.96 cm in the crossplane and inplane directions, respectively, was installed in the flat panel imager. Head- and pelvis-sized phantom images were acquired at doses ranging from 3 to 60 cGy with three MVCBCT configurations: TBL + LFB, IBL + LFB, and IBL + SPA. Phantom image quality at each dose was quantified using the contrast-to-noise ratio (CNR) and modulation transfer function (MTF) metrics. Head and neck, thoracic, and pelvic (prostate) cancer patients were imaged with the three imaging system configurations at multiple doses ranging from 3 to 15 cGy. The systems were assessed qualitatively from the patient image data. Results: For head and neck and pelvis-sized phantom images, imaging doses of 3 cGy or greater, and relative electron densities of 1.09 and 1.48, the CNR average improvement factors for the imaging system changes of TBL + LFB to IBL + LFB, IBL + LFB to IBL + SPA, and TBL + LFB to IBL + SPA were 1.63 (p < 10⁻⁸), 1.64 (p < 10⁻¹³), and 2.66 (p < 10⁻⁹), respectively.
For all imaging

  16. Wndchrm – an open source utility for biological image analysis

    PubMed Central

    Shamir, Lior; Orlov, Nikita; Eckley, D Mark; Macura, Tomasz; Johnston, Josiah; Goldberg, Ilya G

    2008-01-01

    Background Biological imaging is an emerging field covering a wide range of applications in biological and clinical research. However, while machinery for automated experimentation and data acquisition has developed rapidly in recent years, automated image analysis often remains a bottleneck in high-content screening. Methods Wndchrm is an open source utility for biological image analysis. The software works by first extracting image content descriptors from the raw image, image transforms, and compound image transforms. Then, the most informative features are selected, and the feature vector of each image is used for classification and similarity measurement. Results Wndchrm has been tested on several publicly available biological datasets, and provided results that compare favorably with the performance of task-specific algorithms developed for these datasets. The simple user interface allows researchers who are not versed in computer vision methods and have no background in computer programming to apply image analysis to their data. Conclusion We suggest that wndchrm can be used effectively for a wide range of biological image analysis tasks. Using wndchrm allows scientists to perform automated biological image analysis while avoiding the costly challenge of implementing computer vision and pattern recognition algorithms. PMID:18611266
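    The extract-select-classify pipeline described in Methods can be sketched in miniature. Everything below is a toy stand-in (synthetic images, a tiny feature set, and a Fisher-like score), not wndchrm's actual feature bank or weighting scheme:

```python
import numpy as np

def features(img):
    """A tiny feature vector: mean, std, mean gradient magnitudes."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.mean(), img.std(),
                     np.abs(gx).mean(), np.abs(gy).mean()])

def fisher_scores(X, y):
    """Between-class variance of feature means over mean within-class
    variance; higher = more informative feature."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    wvar = np.array([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    return means.var(axis=0) / (wvar + 1e-12)

# Two synthetic classes: smooth vs noisy textures, same mean intensity
rng = np.random.default_rng(1)
smooth = [rng.normal(0.5, 0.01, (16, 16)) for _ in range(10)]
noisy = [rng.normal(0.5, 0.30, (16, 16)) for _ in range(10)]
X = np.array([features(i) for i in smooth + noisy])
y = np.array([0] * 10 + [1] * 10)

scores = fisher_scores(X, y)
best = int(np.argmax(scores))   # mean intensity (feature 0) is useless here
print(best in (1, 2, 3))        # a texture feature wins
```

    Classification then proceeds on the selected features only, e.g. by weighted nearest neighbour, which is why the approach transfers across datasets without task-specific engineering.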

  17. An Improved Recovery Algorithm for Decayed AES Key Schedule Images

    NASA Astrophysics Data System (ADS)