Science.gov

Sample records for improving image analysis

  1. [Decomposition of Interference Hyperspectral Images Using Improved Morphological Component Analysis].

    PubMed

    Wen, Jia; Zhao, Jun-suo; Wang, Cai-ling; Xia, Yu-li

    2016-01-01

    Because of the special imaging principle of interference hyperspectral image data, there are many vertical interference stripes in every frame. The stripes' positions are fixed and their pixel values are very high, and horizontal displacements also exist in the background between frames. These special characteristics destroy the regular structure of the original interference hyperspectral image data, so directly applying compressive sensing theory and traditional compression algorithms cannot achieve the desired effect. Because the interference stripe signals and the background signals have different characteristics, the orthogonal bases that sparsely represent them also differ. Following this idea, this paper adopts morphological component analysis (MCA) to separate the interference stripe signals from the background signals. Because the huge volume of interference hyperspectral image data leads to slow iterative convergence and low computational efficiency in the traditional MCA algorithm, an improved MCA algorithm is also proposed that exploits the characteristics of these data. The iterative convergence condition is improved: the iteration terminates when the error between the separated image signals and the original image signals is almost unchanged. In addition, based on the idea that an orthogonal basis can sparsely represent its corresponding signal but not the other signals, an adaptive threshold-update scheme is proposed to accelerate the traditional MCA algorithm: the projection coefficients of the image signals onto the different orthogonal bases are calculated and compared to obtain the minimum and maximum threshold values, and their average is chosen as the optimal threshold for the adaptive update. The experimental results prove that …
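
    As a rough illustration of the separation scheme described above, the sketch below runs MCA with two toy orthogonal bases (a Dirac basis for spike-like stripes, a DCT basis for the smooth background), using the abstract's adaptive threshold (the average of the minimum and maximum projection-coefficient magnitudes) and the stop-when-the-error-stalls rule. The bases, toy signal, and parameters are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np
from scipy.fft import dct, idct

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mca_separate(y, bases, n_iter=50, tol=1e-6):
    """Separate y into components, each sparse in one orthogonal basis.
    bases: list of (forward, inverse) transform pairs."""
    parts = [np.zeros_like(y) for _ in bases]
    prev_err = np.inf
    for _ in range(n_iter):
        for k, (fwd, inv) in enumerate(bases):
            resid = y - sum(p for j, p in enumerate(parts) if j != k)
            coeffs = fwd(resid)
            mags = np.abs(coeffs)
            # adaptive threshold: average of min and max projection magnitudes
            thr = 0.5 * (mags.min() + mags.max())
            parts[k] = inv(soft_threshold(coeffs, thr))
        err = np.linalg.norm(y - sum(parts))
        if abs(prev_err - err) < tol:   # stop when the error is almost unchanged
            break
        prev_err = err
    return parts

y = np.cos(np.arange(64) * 0.2)          # smooth "background", sparse in DCT
y[10] += 5.0                             # "stripe": a spike, sparse in the Dirac basis
bases = [(lambda x: x, lambda c: c),                                   # Dirac basis
         (lambda x: dct(x, norm='ortho'), lambda c: idct(c, norm='ortho'))]
spike_part, smooth_part = mca_separate(y, bases)
```

    The soft-thresholding step keeps only coefficients that clearly belong to each basis, so the spike ends up in the Dirac component and the cosine in the DCT component.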

  2. Multi-spectral image analysis for improved space object characterization

    NASA Astrophysics Data System (ADS)

    Glass, William; Duggin, Michael J.; Motes, Raymond A.; Bush, Keith A.; Klein, Meiling

    2009-08-01

    The Air Force Research Laboratory (AFRL) is studying the application and utility of various ground-based and space-based optical sensors for improving surveillance of space objects in both Low Earth Orbit (LEO) and Geosynchronous Earth Orbit (GEO). This information can be used to improve our catalog of space objects and will be helpful in the resolution of satellite anomalies. At present, ground-based optical and radar sensors provide the bulk of remotely sensed information on satellites and space debris, and will continue to do so into the foreseeable future. However, in recent years, the Space-Based Visible (SBV) sensor was used to demonstrate that a synthesis of space-based visible data with ground-based sensor data could provide enhancements to information obtained from any one source in isolation. The incentives for space-based sensing include improved spatial resolution due to the absence of atmospheric effects and cloud cover and increased flexibility for observations. Though ground-based optical sensors can use adaptive optics to somewhat compensate for atmospheric turbulence, cloud cover and absorption are unavoidable. With recent advances in technology, we are in a far better position to consider what might constitute an ideal system to monitor our surroundings in space. This work has begun at the AFRL using detailed optical sensor simulations and analysis techniques to explore the trade space involved in acquiring and processing data from a variety of hypothetical space-based and ground-based sensor systems. In this paper, we briefly review the phenomenology and trade space aspects of what might be required in order to use multiple band-passes, sensor characteristics, and observation and illumination geometries to increase our awareness of objects in space.

  3. Multi-Spectral Image Analysis for Improved Space Object Characterization

    NASA Astrophysics Data System (ADS)

    Duggin, M.; Riker, J.; Glass, W.; Bush, K.; Briscoe, D.; Klein, M.; Pugh, M.; Engberg, B.

    The Air Force Research Laboratory (AFRL) is studying the application and utility of various ground-based and space-based optical sensors for improving surveillance of space objects in both Low Earth Orbit (LEO) and Geosynchronous Earth Orbit (GEO). At present, ground-based optical and radar sensors provide the bulk of remotely sensed information on satellites and space debris, and will continue to do so into the foreseeable future. However, in recent years, the Space-Based Visible (SBV) sensor was used to demonstrate that a synthesis of space-based visible data with ground-based sensor data could provide enhancements to information obtained from any one source in isolation. The incentives for space-based sensing include improved spatial resolution due to the absence of atmospheric effects and cloud cover and increased flexibility for observations. Though ground-based optical sensors can use adaptive optics to somewhat compensate for atmospheric turbulence, cloud cover and absorption are unavoidable. With recent advances in technology, we are in a far better position to consider what might constitute an ideal system to monitor our surroundings in space. This work has begun at the AFRL using detailed optical sensor simulations and analysis techniques to explore the trade space involved in acquiring and processing data from a variety of hypothetical space-based and ground-based sensor systems. In this paper, we briefly review the phenomenology and trade space aspects of what might be required in order to use multiple band-passes, sensor characteristics, and observation and illumination geometries to increase our awareness of objects in space.

  4. Improving Resolution and Depth of Astronomical Observations via Modern Mathematical Methods for Image Analysis

    NASA Astrophysics Data System (ADS)

    Castellano, M.; Ottaviani, D.; Fontana, A.; Merlin, E.; Pilo, S.; Falcone, M.

    2015-09-01

    In past years, modern mathematical methods for image analysis have led to a revolution in many fields, from computer vision to scientific imaging. However, some recently developed image processing techniques successfully exploited in other sectors have rarely, if ever, been tried on astronomical observations. We present here tests of two classes of variational image enhancement techniques, "structure-texture decomposition" and "super-resolution", showing that they are effective in improving the quality of observations. Structure-texture decomposition makes it possible to recover faint sources previously hidden by the background noise, effectively increasing the depth of available observations. Super-resolution yields a higher-resolution, better-sampled image out of a set of low-resolution frames, thus mitigating problems in data analysis arising from differences in resolution/sampling between instruments, as in the case of the EUCLID VIS and NIR imagers.
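
    The decomposition tested in the paper is variational (TV-based); as a stand-in, the sketch below uses a simple Gaussian low-pass split to illustrate the cartoon-plus-texture idea, where the texture residual carries the noise together with any faint sources. The filter choice and σ are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_texture_split(img, sigma=3.0):
    """Crude structure-texture split: large-scale 'cartoon' component plus
    an oscillatory residual (noise + faint detail). A variational model
    would replace the Gaussian low-pass here."""
    structure = gaussian_filter(img, sigma)   # smooth, large-scale component
    texture = img - structure                 # oscillatory component
    return structure, texture

rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, (64, 64))
img[30:34, 30:34] += 0.5                      # faint source buried in noise
s, t = structure_texture_split(img)
```

    By construction the two components sum back to the input exactly, which is the defining property of an additive decomposition.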

  5. The influence of bonding agents in improving interactions in composite propellants determined using image analysis.

    PubMed

    Dostanić, J; Husović, T V; Usćumlić, G; Heinemann, R J; Mijin, D

    2008-12-01

    Binder-oxidizer interactions in rocket composite propellants can be improved using adequate bonding agents. In the present work, the effectiveness of different 1,3,5-trisubstituted isocyanurates was determined by stereo and metallographic microscopy and using the software package Image-Pro Plus. The chemical analysis of samples was performed by a scanning electron microscope equipped for energy dispersive spectrometry.

  6. Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method.

    PubMed

    Lu, Zhaolin; Hu, Xiaojuan; Lu, Yao

    2017-01-01

    Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using a mechanical sieving approach. Considering the great variation of particle size from micrometers to millimeters, the powders greater than 250 μm were photographed by a flatbed scanner without zoom function, and the others were photographed using a scanning electron microscope (SEM) with high image resolution. Actual imaging tests confirmed the excellent effect of the backscattered electron (BSE) imaging mode of the SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method. In sample preparation, singulated arrangement and ultrasonic dispersion methods were used to separate powders into particles that were larger and smaller than the nominal size of 250 μm. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method was suitable to analyse the particle size and shape distributions of ground biomass materials and to resolve the size inconsistencies in sieving analysis.
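
    Once particles have been segmented, the measurement step can be sketched with connected-component labeling; equivalent circular diameter is one common size descriptor. The function name, the pixel scale, and the toy binary image below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy import ndimage

def particle_sizes(binary, px_um=1.0):
    """Label connected particles in a binary image and return their
    equivalent circular diameters (in μm, given the pixel size)."""
    labels, n = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    return 2.0 * np.sqrt(areas / np.pi) * px_um   # diameter of a circle of equal area

img = np.zeros((40, 40), dtype=bool)
img[5:10, 5:10] = True       # 25-pixel particle
img[20:22, 20:22] = True     # 4-pixel particle
d = particle_sizes(img)
```

    From these per-particle diameters a size distribution can be built and compared against sieve classes, which is the comparison the abstract describes.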

  7. Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method

    PubMed Central

    Lu, Zhaolin

    2017-01-01

    Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using a mechanical sieving approach. Considering the great variation of particle size from micrometers to millimeters, the powders greater than 250 μm were photographed by a flatbed scanner without zoom function, and the others were photographed using a scanning electron microscope (SEM) with high image resolution. Actual imaging tests confirmed the excellent effect of the backscattered electron (BSE) imaging mode of the SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method. In sample preparation, singulated arrangement and ultrasonic dispersion methods were used to separate powders into particles that were larger and smaller than the nominal size of 250 μm. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method was suitable to analyse the particle size and shape distributions of ground biomass materials and to resolve the size inconsistencies in sieving analysis. PMID:28298925

  8. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. Surface illumination information is utilized by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, which are used to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely to physical surfaces in the scene. The technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented for a complex urban scene containing various man-made and natural features. This scene presents a variety of problems, including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. Improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated. The generality of this approach to stereo analysis and its utility in the development of general three-dimensional scene interpretation systems are also discussed.
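
    A minimal sketch of the patch-guided statistical cleanup described above: within each homogeneous-intensity patch, disparities far from the patch median are treated as stereo mismatches and replaced. The use of the median and the deviation threshold are assumptions, not the authors' exact statistics.

```python
import numpy as np

def refine_disparity(disp, segments, max_dev=2.0):
    """For each segment (assumed to cover one physical surface), replace
    disparities deviating strongly from the segment median (sketch only)."""
    out = disp.copy()
    for s in np.unique(segments):
        mask = segments == s
        med = np.median(disp[mask])
        bad = mask & (np.abs(disp - med) > max_dev)
        out[bad] = med                 # mismatch -> patch consensus value
    return out

disp = np.full((4, 6), 10.0)
disp[1, 2] = 40.0                      # isolated stereo mismatch
segments = np.zeros((4, 6), dtype=int) # one homogeneous patch covering the image
clean = refine_disparity(disp, segments)
```

    The same routine works regardless of whether the input map came from area-based or feature-based matching, which mirrors the independence claim in the abstract.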

  9. Using image analysis and ArcGIS® to improve automatic grain boundary detection and quantify geological images

    NASA Astrophysics Data System (ADS)

    DeVasto, Michael A.; Czeck, Dyanna M.; Bhattacharyya, Prajukti

    2012-12-01

    Geological images, such as photos and photomicrographs of rocks, are commonly used as supportive evidence to indicate geological processes. A limiting factor to quantifying images is the digitization process; therefore, image analysis has remained largely qualitative. ArcGIS®, the most widely used Geographic Information System (GIS) available, is capable of an array of functions including building models capable of digitizing images. We expanded upon a previously designed model built using Arc ModelBuilder® to quantify photomicrographs and scanned images of thin sections. In order to enhance grain boundary detection, but limit computer processing and hard drive space, we utilized a preprocessing image analysis technique such that only a single image is used in the digitizing model. Preprocessing allows the model to accurately digitize grain boundaries with fewer images and requires less user intervention by using batch processing in image analysis software and ArcCatalog®. We present case studies for five basic textural analyses using a semi-automated digitized image and quantified in ArcMap®. Grain Size Distributions, Shape Preferred Orientations, Weak phase connections (networking), and Nearest Neighbor statistics are presented in a simplified fashion for further analyses directly obtainable from the automated digitizing method. Finally, we discuss the ramifications for incorporating this method into geological image analyses.

  10. Improved MALDI imaging MS analysis of phospholipids using graphene oxide as new matrix

    PubMed Central

    Wang, Zhongjie; Cai, Yan; Wang, Yi; Zhou, Xinwen; Zhang, Ying; Lu, Haojie

    2017-01-01

    Matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS) is an increasingly important technique for the detection and spatial localization of phospholipids in tissue. Because phosphatidylcholine (PC) is highly abundant and easy to ionize, selecting a matrix that yields signals of other lipids has become the most crucial factor for a successful MALDI-IMS analysis of phospholipids. Herein, graphene oxide (GO) was proposed as a new matrix to selectively enhance the detection of other types of phospholipids that are frequently suppressed by the presence of PC in positive mode. Compared to the commonly used matrix DHB, the GO matrix significantly improved the signal-to-noise ratios of phospholipids as a result of its high desorption/ionization efficiency for nonpolar compounds. GO also afforded homogeneous crystallization with analytes due to its monolayer structure and good dispersion, resulting in better shot-to-shot (CV < 13%) and spot-to-spot (CV < 14%) reproducibility. Finally, the GO matrix was successfully applied to the simultaneous imaging of PC, PE, PS and glycosphingolipid in the mouse brain, with a total of 65 phospholipids identified. PMID:28294158

  11. Image Analysis and Modeling

    DTIC Science & Technology

    1976-03-01

    This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency. The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is …

  12. The Comet Assay: Automated Imaging Methods for Improved Analysis and Reproducibility

    PubMed Central

    Braafladt, Signe; Reipa, Vytas; Atha, Donald H.

    2016-01-01

    Sources of variability in the comet assay include variations in the protocol used to process the cells, the microscope imaging system and the software used in the computerized analysis of the images. Here we focus on the effect of variations in the microscope imaging system and software analysis using fixed preparations of cells and a single cell processing protocol. To determine the effect of the microscope imaging and analysis on the measured percentage of damaged DNA (% DNA in tail), we used preparations of mammalian cells treated with etoposide or electrochemically induced DNA damage conditions and varied the settings of the automated microscope, camera, and commercial image analysis software. Manual image analysis revealed measurement variations in percent DNA in tail as high as 40% due to microscope focus, camera exposure time and the software image intensity threshold level. Automated image analysis reduced these variations as much as three-fold, but only within a narrow range of focus and exposure settings. The magnitude of variation, observed using both analysis methods, was highly dependent on the overall extent of DNA damage in the particular sample. Mitigating these sources of variability with optimal instrument settings facilitates an accurate evaluation of cell biological variability. PMID:27581626
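
    The threshold sensitivity the study measures can be reproduced with a toy "% DNA in tail" computation: above-threshold intensity outside the comet head, as a fraction of all above-threshold intensity. The background-subtraction rule and the toy image below are illustrative assumptions, not the commercial software's algorithm.

```python
import numpy as np

def percent_dna_in_tail(intensity, head_mask, threshold):
    """Percent DNA in tail: above-threshold signal outside the head region,
    relative to total above-threshold signal (sketch of the measurement)."""
    sig = np.where(intensity > threshold, intensity - threshold, 0.0)
    total = sig.sum()
    tail = sig[~head_mask].sum()
    return 100.0 * tail / total if total > 0 else 0.0

img = np.zeros((10, 10))
img[4:6, 1:4] = 100.0   # comet head (bright)
img[4:6, 4:9] = 20.0    # comet tail (dim)
head = np.zeros((10, 10), dtype=bool)
head[4:6, 1:4] = True
p_low = percent_dna_in_tail(img, head, threshold=5.0)
p_high = percent_dna_in_tail(img, head, threshold=15.0)
```

    Raising the intensity threshold suppresses the dim tail more than the bright head, so the reported % DNA in tail drops, which is exactly the kind of software-setting variability the abstract quantifies.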

  13. Improved Linear Contrast-Enhanced Ultrasound Imaging via Analysis of First-Order Speckle Statistics.

    PubMed

    Lowerison, Matthew R; Hague, M Nicole; Chambers, Ann F; Lacefield, James C

    2016-09-01

    The linear subtraction methods commonly used for preclinical contrast-enhanced imaging are susceptible to registration errors and motion artifacts that lead to reduced contrast-to-tissue ratios. To address this limitation, a new approach to linear contrast-enhanced ultrasound (CEUS) is proposed based on the analysis of the temporal dynamics of the speckle statistics during wash-in of a bolus injection of microbubbles. In the proposed method, the speckle signal is approximated as a mixture of temporally varying random processes, representing the microbubble signal, superimposed onto spatially heterogeneous tissue backscatter in multiple subvolumes within the region of interest. A wash-in curve is constructed by plotting the effective degrees of freedom (EDoFs) of the histogram of the speckle signal as a function of time. The proposed method is, therefore, named the EDoF method. The EDoF parameter is proportional to the shape parameter of the Nakagami distribution. Images acquired at 18 MHz from a murine mammary fat pad breast cancer xenograft model were processed using gold-standard nonlinear amplitude modulation, conventional linear subtraction, and the proposed statistical method. The EDoF method shows promise for improving the robustness of linear CEUS based on reduced frame-to-frame variability compared with the conventional linear subtraction time-intensity curves. Wash-in curve parameters estimated using the EDoF method also demonstrate higher correlation to nonlinear CEUS than the conventional linear method. The conceptual basis of the statistical method implies that EDoF wash-in curves may carry information about vascular complexity that could provide valuable new imaging biomarkers for cancer research.
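
    The abstract states that the EDoF parameter is proportional to the Nakagami shape parameter. The standard inverse-normalized-variance moment estimator of that shape parameter can be sketched as below; the paper's exact EDoF definition and histogram-based computation may differ.

```python
import numpy as np

def nakagami_shape(envelope):
    """Moment estimator of the Nakagami shape parameter m from envelope
    samples: m = E[x^2]^2 / Var(x^2). Fully developed (Rayleigh) speckle
    gives m = 1; microbubble wash-in perturbs this statistic over time."""
    p = np.asarray(envelope, dtype=float) ** 2   # intensity samples
    return p.mean() ** 2 / p.var()

rng = np.random.default_rng(2)
rayleigh = rng.rayleigh(1.0, 200_000)            # fully developed speckle
m = nakagami_shape(rayleigh)
```

    Plotting this statistic per subvolume against time would give a wash-in curve of the kind the EDoF method constructs.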

  14. Image quality analysis and improvement of Ladar reflective tomography for space object recognition

    NASA Astrophysics Data System (ADS)

    Wang, Jin-cheng; Zhou, Shi-wei; Shi, Liang; Hu, Yi-Hua; Wang, Yong

    2016-01-01

    Some problems in the application of Ladar reflective tomography for space object recognition are studied in this work. An analytic target model is adopted to investigate the image reconstruction properties under a limited relative angle range, which is useful for verifying the target shape from the incomplete image, analyzing the shadowing effect of the target, and designing satellite payloads against recognition via the reflective tomography approach. We propose an iterative maximum-likelihood method based on Bayesian theory, which can effectively compress the pulse width and greatly improve the image resolution of an incoherent LRT system without loss of signal-to-noise ratio.
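
    The iterative maximum-likelihood step can be illustrated with the standard Richardson-Lucy (Poisson ML-EM) update for deconvolving a known pulse shape from a range profile; this is a generic stand-in under stated assumptions, not the authors' exact Bayesian formulation.

```python
import numpy as np

def mlem_deconvolve(measured, pulse, n_iter=100):
    """Richardson-Lucy / ML-EM deconvolution: iteratively sharpens a profile
    blurred by the laser pulse (multiplicative update preserves positivity)."""
    pulse = pulse / pulse.sum()
    x = np.full_like(measured, measured.mean())   # flat positive initial guess
    for _ in range(n_iter):
        est = np.convolve(x, pulse, mode='same')
        ratio = measured / np.maximum(est, 1e-12)
        x *= np.convolve(ratio, pulse[::-1], mode='same')
    return x

truth = np.zeros(64)
truth[30] = 10.0                                  # point-like reflector
pulse = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
measured = np.convolve(truth, pulse / pulse.sum(), mode='same')
sharp = mlem_deconvolve(measured, pulse)
```

    On this noiseless example the update progressively concentrates the blurred echo back toward the point reflector, which is the pulse-compression effect the abstract claims.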

  15. Basics of image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral imaging technology has emerged as a powerful tool for quality and safety inspection of food and agricultural products and in precision agriculture over the past decade. Image analysis is a critical step in implementing hyperspectral imaging technology; it is aimed to improve the qualit...

  16. Non-rigid registration and non-local principle component analysis to improve electron microscopy spectrum images

    NASA Astrophysics Data System (ADS)

    Yankovich, Andrew B.; Zhang, Chenyu; Oh, Albert; Slater, Thomas J. A.; Azough, Feridoon; Freer, Robert; Haigh, Sarah J.; Willett, Rebecca; Voyles, Paul M.

    2016-09-01

    Image registration and non-local Poisson principal component analysis (PCA) denoising improve the quality of characteristic x-ray (EDS) spectrum imaging of Ca-stabilized Nd2/3TiO3 acquired at atomic resolution in a scanning transmission electron microscope. Image registration based on the simultaneously acquired high angle annular dark field image significantly outperforms acquisition with a long pixel dwell time or drift correction using a reference image. Non-local Poisson PCA denoising reduces noise more strongly than conventional weighted PCA while preserving atomic structure more faithfully. The reliability of, and optimal internal parameters for, non-local Poisson PCA denoising of EDS spectrum images are assessed using tests on phantom data.
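
    A global (not non-local) sketch of Poisson-aware PCA denoising: stabilize the count variance with the Anscombe transform, keep the leading SVD components, and invert. The paper's method is patch-based and non-local; this simplification only illustrates the variance-stabilize-then-truncate idea, and the rank and toy data are assumptions.

```python
import numpy as np

def pca_denoise_poisson(counts, rank=2):
    """Anscombe transform -> low-rank SVD truncation -> approximate inverse.
    A global simplification of Poisson PCA denoising."""
    a = 2.0 * np.sqrt(counts + 3.0 / 8.0)          # Anscombe: variance ~ 1
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    s[rank:] = 0.0                                  # keep leading components only
    a_hat = (u * s) @ vt
    return np.maximum((a_hat / 2.0) ** 2 - 3.0 / 8.0, 0.0)  # crude inverse

rng = np.random.default_rng(3)
clean = np.outer(np.linspace(1, 5, 32), np.linspace(1, 5, 40))  # rank-1 "spectrum image"
noisy = rng.poisson(clean).astype(float)
denoised = pca_denoise_poisson(noisy, rank=1)
```

    On this rank-1 phantom the truncation discards most of the shot noise while retaining the smooth signal, so the mean squared error drops well below that of the raw counts.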

  17. Improving cervical region of interest by eliminating vaginal walls and cotton-swabs for automated image analysis

    NASA Astrophysics Data System (ADS)

    Venkataraman, Sankar; Li, Wenjing

    2008-03-01

    Image analysis for automated diagnosis of cervical cancer has attained high prominence in the last decade. Automated image analysis at all levels requires a basic segmentation of the region of interest (ROI) within a given image. The precision of the diagnosis is often reflected by the precision in detecting the initial region of interest, especially when features outside the ROI mimic the ones within it. The work described here discusses algorithms used to improve the cervical region of interest as part of automated cervical image diagnosis. A vital visual aid in diagnosing cervical cancer is the aceto-whitening of the cervix after the application of acetic acid. Color and texture are used to segment acetowhite regions within the cervical ROI. Vaginal walls along with cotton swabs sometimes mimic these essential features, leading to several false positives. The work presented here focuses on detecting in-focus vaginal wall boundaries and then extrapolating them to exclude vaginal walls from the cervical ROI. In addition, a marker-controlled watershed segmentation is used to detect cotton swabs in the cervical ROI. A dataset comprising 50 high-resolution images of the cervix, acquired after 60 seconds of acetic acid application, was used to test the algorithm. Of the 50 images, 27 benefited from a new cervical ROI. Significant improvement in overall diagnosis was observed in these images, as false positives caused by features outside the actual ROI mimicking acetowhite regions were eliminated.

  18. Method for Removing Spectral Contaminants to Improve Analysis of Raman Imaging Data

    PubMed Central

    Zhang, Xun; Chen, Sheng; Ling, Zhe; Zhou, Xia; Ding, Da-Yong; Kim, Yoon Soo; Xu, Feng

    2017-01-01

    Spectral contaminants are inevitable during micro-Raman measurements. A key challenge is how to remove them from the original imaging data, since they can distort further data analysis. Here, we propose a method named "automatic pre-processing method for Raman imaging data sets (APRI)", which combines the adaptive iteratively reweighted penalized least-squares (airPLS) algorithm and principal component analysis (PCA). It eliminates baseline drifts and cosmic spikes by using the spectral features themselves. The utility of APRI is illustrated by removing the spectral contaminants from a Raman imaging data set of a wood sample. In addition, APRI is computationally efficient, conceptually simple, and potentially extensible to other spectroscopy methods, such as infrared (IR), nuclear magnetic resonance (NMR), and X-ray diffraction (XRD). With the help of our approach, a typical spectral analysis can be performed by a non-specialist user to obtain useful information from a spectroscopic imaging data set. PMID:28054587
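
    The airPLS step can be sketched with a Whittaker (penalized least squares) smoother plus adaptive reweighting that progressively ignores peak points; the smoothness parameter λ, the stopping rule, and the exact weight formula below are simplified assumptions relative to the published airPLS algorithm.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def airpls_baseline(y, lam=1e4, n_iter=15):
    """Simplified airPLS: penalized least-squares baseline with iterative
    reweighting so that peaks stop pulling the baseline upward."""
    n = len(y)
    d = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    penalty = lam * (d.T @ d)                 # second-difference smoothness penalty
    w = np.ones(n)
    z = y.copy()
    for i in range(1, n_iter + 1):
        z = spsolve(sparse.diags(w) + penalty, w * y)
        r = y - z
        neg = r[r < 0]
        if len(neg) == 0 or abs(neg.sum()) < 1e-3 * np.abs(y).sum():
            break
        # zero weight for points above the baseline (peaks); emphasize points below
        w = np.where(r >= 0, 0.0, np.exp(i * np.abs(r) / abs(neg.sum())))
    return z

x = np.linspace(0, 10, 400)
spectrum = 0.5 * x + 5.0 * np.exp(-((x - 5.0) ** 2) / 0.1)   # drifting baseline + one peak
base = airpls_baseline(spectrum)
```

    Subtracting the fitted baseline leaves the Raman peak on a flat background, which is what the pre-processing stage feeds into the subsequent PCA spike removal.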

  19. Improved triangular prism methods for fractal analysis of remotely sensed images

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Fung, Tung; Leung, Yee

    2016-05-01

    Feature extraction has been a major area of research in remote sensing, and the fractal feature is a natural characterization of complex objects across scales. Extending the modified triangular prism (MTP) method, we systematically discuss three factors closely related to the estimation of fractal dimensions of remotely sensed images, namely the (F1) number of steps, (F2) step size, and (F3) estimation accuracy of the facets' areas of the triangular prisms. Differing from existing improved algorithms that consider these factors separately, we take all factors into account simultaneously to construct three new algorithms, namely a modification of the eight-pixel algorithm, the four-corner MTP, and the moving-average MTP. Numerical experiments based on 4000 generated images show their superior performance over existing algorithms: our algorithms not only overcome the limitation on image size suffered by existing algorithms but also obtain a similar average fractal dimension with a smaller standard deviation, only 50% of that of existing algorithms for images with high fractal dimensions. In real-life application, our algorithms are more likely to obtain fractal dimensions within the theoretical range. Thus, the fractal nature uncovered by our algorithms is more reasonable in quantifying the complexity of remotely sensed images. Despite the similar performance of these three new algorithms, the moving-average MTP can mitigate the sensitivity of the MTP to noise and extreme values. Based on the numerical and real-life case studies, we examine the effect of the three factors, (F1)-(F3), and demonstrate that they can be considered simultaneously to improve the performance of the MTP method.
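
    The baseline triangular prism idea can be sketched as follows: at each step size, tile the image with cells, erect a prism on the four corner heights and their average at the center, sum the areas of the four 3-D triangular facets, and regress log(area) against log(step); the fractal dimension is then D = 2 − slope. This is the standard (unimproved) MTP under toy step choices, not the paper's new variants.

```python
import numpy as np

def tri_area(p1, p2, p3):
    # area of the 3-D triangle (p1, p2, p3)
    return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))

def prism_surface(img, step):
    """Total facet area of triangular prisms tiling the image at one step size."""
    h, w = img.shape
    total = 0.0
    for i in range(0, h - step, step):
        for j in range(0, w - step, step):
            a, b = img[i, j], img[i, j + step]
            c, d = img[i + step, j], img[i + step, j + step]
            e = (a + b + c + d) / 4.0                     # center height
            A = np.array([0.0, 0.0, a]); B = np.array([step, 0.0, b])
            C = np.array([0.0, step, c]); D = np.array([step, step, d])
            E = np.array([step / 2.0, step / 2.0, e])
            total += sum(tri_area(p, q, E) for p, q in [(A, B), (B, D), (D, C), (C, A)])
    return total

def fractal_dimension(img, steps=(1, 2, 4, 8)):
    areas = [prism_surface(img, s) for s in steps]
    slope = np.polyfit(np.log(steps), np.log(areas), 1)[0]
    return 2.0 - slope

d_flat = fractal_dimension(np.ones((33, 33)))             # smooth surface -> D = 2
rng = np.random.default_rng(6)
d_rough = fractal_dimension(rng.normal(0.0, 5.0, (33, 33)))  # rough surface -> D > 2
```

    A flat surface gives the same total area at every step (slope 0, D = 2), while rough noise loses area as the step grows, pushing D toward 3.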

  20. Improved factor analysis of dynamic PET images to estimate arterial input function and tissue curves

    NASA Astrophysics Data System (ADS)

    Boutchko, Rostyslav; Mitra, Debasis; Pan, Hui; Jagust, William; Gullberg, Grant T.

    2015-03-01

    Factor analysis of dynamic structures (FADS) is a methodology for extracting time-activity curves (TACs) corresponding to different tissue types from noisy dynamic images. The challenges of FADS include long computation time and sensitivity to the initial guess, resulting in convergence to local minima far from the true solution. We propose a method of accelerating and stabilizing the application of FADS to sequences of dynamic PET images by adding a preliminary cluster analysis of the time-activity curves of individual voxels. We treat the temporal variation of individual voxel concentrations as a set of time series and use partial clustering analysis to identify the types of voxel TACs that are most functionally distinct from each other. These TACs provide a good initial guess of the temporal factors for subsequent FADS processing. Applying this approach to single slices of dynamic 11C-PIB images of the brain allows identification of the arterial input function and two different tissue TACs that are likely to correspond to specific and non-specific tracer binding. These results enable us to classify tissues directly, based on their pharmacokinetic properties in dynamic PET, without relying on a compartment-based kinetic model, identifying a reference region, or using any external method of estimating the arterial input function, as needed in some techniques.
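
    The initialization idea can be sketched with plain k-means over voxel TACs: the cluster means are functionally distinct curves that would seed the FADS temporal factors. The curve shapes, noise level, and choice of k below are toy assumptions, and the paper's "partial clustering" may differ from vanilla k-means.

```python
import numpy as np

def cluster_tacs(tacs, k=2, n_iter=50, seed=0):
    """Plain k-means over time-activity curves; returns cluster-mean TACs
    (candidate initial temporal factors) and per-voxel labels."""
    rng = np.random.default_rng(seed)
    centers = tacs[rng.choice(len(tacs), k, replace=False)]
    labels = np.zeros(len(tacs), dtype=int)
    for _ in range(n_iter):
        dist = ((tacs[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dist.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = tacs[labels == j].mean(0)
    return centers, labels

t = np.linspace(0, 1, 20)
blood = np.exp(-5 * t)                       # fast-clearing, arterial-like curve
tissue = 1 - np.exp(-3 * t)                  # accumulating, tissue-like curve
rng = np.random.default_rng(4)
tacs = np.vstack([blood + 0.05 * rng.normal(size=(50, 20)),
                  tissue + 0.05 * rng.normal(size=(50, 20))])
centers, labels = cluster_tacs(tacs, k=2)
```

    With well-separated kinetics the two cluster means recover the arterial-like and tissue-like shapes, giving FADS a starting point far from bad local minima.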

  1. Analysis of explosion in enclosure based on improved method of images

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Guo, J.; Yao, X.; Chen, G.; Zhu, X.

    2017-03-01

    The aim of this paper is to present an improved method for calculating the pressure loading on walls during a confined explosion. When an explosion occurs inside an enclosure, reflected shock waves produce multiple pressure peaks at a given wall location, especially at the corners, and confined blast loading may cause more serious structural damage because of these multiple reflections. The method of images (MOI), first applied by Chan to trace shock-wave paths using mirror-reflection theory, is used to simplify internal explosion loading calculations. An improved method of images is then proposed that takes into account wall openings and oblique reflections, which cannot be handled by the standard MOI. The approach, validated against experimental data, provides a simplified and quick way to calculate confined explosion loading. The results show that the peak overpressure tends to decline as the measurement point moves away from the center and increases sharply as it approaches the enclosure corners, while the specific impulse increases from the center to the corners. The improved method predicts pressure-time history and impulse with an accuracy comparable to that of three-dimensional AUTODYN code predictions.
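
    The standard MOI construction (without the paper's improvements for openings and oblique reflections) can be sketched in one dimension: a charge between two rigid walls is replaced by an array of mirrored image sources, and each image contributes one arrival at a gauge point. The 1/r peak-decay law, the sound speed, and the geometry below are illustrative assumptions.

```python
import numpy as np

def image_sources_1d(x0, width, n_orders=2):
    """Image-source positions for a charge at x0 between rigid walls at
    x = 0 and x = width (1-D slice of the MOI construction)."""
    sources = []
    for n in range(-n_orders, n_orders + 1):
        sources.append(2 * n * width + x0)    # even number of reflections
        sources.append(2 * n * width - x0)    # odd number of reflections
    return np.array(sorted(set(sources)))

def arrival_peaks(gauge, sources, c=340.0, p_ref=1.0):
    """Arrival time and 1/r-decayed peak pressure for each image source."""
    r = np.sort(np.abs(sources - gauge))
    r = r[r > 0]
    return r / c, p_ref / r

src = image_sources_1d(x0=2.0, width=10.0)
t_arr, peaks = arrival_peaks(gauge=6.0, sources=src)
```

    Summing such pulse trains at each wall location reproduces the multiple-peak pressure histories the abstract describes; the improved MOI adjusts which images exist (openings) and their strengths (oblique reflections).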

  2. An improved method for liver diseases detection by ultrasound image analysis.

    PubMed

    Owjimehr, Mehri; Danyali, Habibollah; Helfroush, Mohammad Sadegh

    2015-01-01

    Ultrasound imaging is a popular and noninvasive tool frequently used in the diagnosis of liver diseases. A system to characterize normal, fatty and heterogeneous liver, using textural analysis of liver ultrasound images, is proposed in this paper. The proposed approach selects the optimum regions of interest of the liver images. These optimum regions of interest are analyzed by a two-level wavelet packet transform to extract statistical features, namely the median, standard deviation, and interquartile range. Discrimination between heterogeneous, fatty and normal livers is performed hierarchically in the classification stage: first, focal and diffuse livers are classified, and then fatty livers are distinguished from normal ones. Support vector machine and k-nearest neighbor classifiers have been used to classify the images into the three groups, and their performance is compared. The support vector machine classifier outperformed the k-nearest neighbor classifier, attaining an overall accuracy of 97.9%, with a sensitivity of 100%, 100% and 95.1% for the heterogeneous, fatty and normal classes, respectively. The accuracy obtained by the proposed computer-aided diagnostic system is quite promising and suggests that the system can be used in a clinical environment to support radiologists and experts in interpreting liver diseases.
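
    The feature set above (median, standard deviation, interquartile range per subband) can be sketched with a hand-rolled one-level Haar decomposition; the paper uses a two-level wavelet packet transform, so this is a simplified stand-in, and the toy ROIs are assumptions.

```python
import numpy as np

def haar_level(img):
    """One Haar analysis level: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def texture_features(roi):
    """Median, standard deviation and interquartile range of each subband."""
    feats = []
    for band in haar_level(roi):
        q1, med, q3 = np.percentile(band, [25, 50, 75])
        feats.extend([med, band.std(), q3 - q1])
    return np.array(feats)

rng = np.random.default_rng(5)
smooth_roi = 50.0 + 0.1 * rng.normal(size=(32, 32))   # homogeneous-liver-like texture
rough_roi = 50.0 + 5.0 * rng.normal(size=(32, 32))    # heterogeneous-liver-like texture
f_smooth, f_rough = texture_features(smooth_roi), texture_features(rough_roi)
```

    The high-frequency (HH) spread clearly separates the two textures, which is the property the classifier stage exploits.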

  3. Improved quality control of silicon wafers using novel off-line air pocket image analysis

    NASA Astrophysics Data System (ADS)

    Valley, John F.; Sanna, M. Cristina

    2014-08-01

    Air pockets (APK) occur randomly in Czochralski (Cz) grown silicon (Si) crystals and may become included in wafers after slicing and polishing. Previously the only APK of interest were those that intersected the front surface of the wafer and therefore directly impacted device yield. However mobile and other electronics have placed new demands on wafers to be internally APK-free for reasons of thermal management and packaging yield. We present a novel, recently patented, APK image processing technique and demonstrate the use of that technique, off-line, to improve quality control during wafer manufacturing.

  4. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. An important contribution of this work is the estimation of the DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  5. Image analysis techniques: Used to quantify and improve the precision of coatings testing results

    SciTech Connect

    Duncan, D.J.; Whetten, A.R.

    1993-12-31

    Coating evaluations often specify tests to measure performance characteristics rather than coating physical properties, and the results are often very subjective. A new tool, Digital Video Image Analysis (DVIA), is successfully being used for two automotive evaluations: cyclic (scab) corrosion and the gravelometer (chip) test. An experimental design was carried out to evaluate variability and interactions among the instrumental factors. This analysis method has proved to be an order of magnitude more sensitive and reproducible than the current evaluations. Coating characteristics can now be described and measured that previously could not be expressed. For example, DVIA chip evaluations can differentiate how much damage was done to the topcoat, to the primer, and even to the metal. DVIA, with or without magnification, has the capability to become the quantitative measuring tool for several other coating evaluations, such as T-bends, wedge bends, acid etch analysis, coating defects, and observing cure and defect formation or elimination over time.

  6. Laser speckle contrast analysis (LASCA) for blood flow visualization: improved image processing

    NASA Astrophysics Data System (ADS)

    Briers, J. D.; He, Xiao-Wei

    1998-06-01

    LASCA is a single-exposure, full-field technique for mapping flow velocities. The motion of the particles in a fluid flow causes fluctuations in the speckle pattern produced when laser light is scattered by the particles. The frequency of these intensity fluctuations increases with increasing velocity. These intensity fluctuations blur the speckle pattern and hence reduce its contrast. With a suitable integration time for the exposure, velocity can be mapped as speckle contrast. The equipment required is very simple. A CCD camera and a framegrabber capture an image of the area of interest. The local speckle contrast is computed and used to produce a false-color map of velocities. LASCA can be used to map capillary blood flow. The results are similar to those obtained by the scanning laser Doppler technique, but are obtained without the need to scan. This reduces the time needed for capturing the image from several minutes to a fraction of a second, already a clinical advantage. Until recently, however, processing the captured image still took several minutes. Improvements in the software have now reduced the processing time to one second, thus providing a truly real-time method for obtaining a map of capillary blood flow.
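
    The speckle-contrast map at the heart of LASCA is just the local ratio of standard deviation to mean intensity; a NumPy sketch (the 7x7 window is an assumption, any small odd window works):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, win=7):
    """Local speckle contrast K = sigma / mean over a win x win window."""
    w = sliding_window_view(img, (win, win))
    mean = w.mean(axis=(-1, -2))
    std = w.std(axis=(-1, -2))
    return std / np.maximum(mean, 1e-12)
```

    Lower contrast corresponds to faster flow, since motion blurs the speckle pattern during the exposure.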

  7. Jeffries Matusita based mixed-measure for improved spectral matching in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Padma, S.; Sanjeevi, S.

    2014-10-01

    This paper proposes a novel hyperspectral matching technique that integrates the Jeffries-Matusita measure (JM) and the Spectral Angle Mapper (SAM) algorithm. The deterministic Spectral Angle Mapper and the stochastic Jeffries-Matusita measure are orthogonally projected using the sine and tangent functions to increase their spectral discrimination ability. The developed JM-SAM algorithm is implemented to discriminate effectively the landcover classes and cover types in hyperspectral images acquired by the PROBA/CHRIS and EO-1 Hyperion sensors. The reference spectra for the different land-cover classes were derived from each of these images. The performance of the proposed measure is compared with that of the individual SAM and JM approaches. From the values of the relative spectral discriminatory probability (RSDPB) and the relative spectral discriminatory entropy (RSDE), it is inferred that the hybrid JM-SAM approach yields higher spectral discriminability than the SAM and JM measures. Besides, the use of the improved JM-SAM algorithm for supervised classification of the images results in 92.9% and 91.47% accuracy, compared to 73.13%, 79.41%, and 85.69% for the minimum-distance, SAM and JM measures, respectively. It is also inferred that the increased spectral discriminability of the JM-SAM measure is contributed by the JM distance. Further, the proposed JM-SAM measure is compatible with the varying spectral resolutions of PROBA/CHRIS (62 bands) and Hyperion (242 bands).
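
    A hedged sketch of how the two measures might be combined. `sam` is the standard spectral angle; `jm` treats the normalized spectra as discrete probability distributions (a simplification of the stochastic measure); and the tangent combination is one plausible reading of the paper's description, not its exact formulation:

```python
import numpy as np

def sam(a, b):
    """Spectral Angle Mapper: angle (radians) between two spectra."""
    cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def jm(a, b):
    """Jeffries-Matusita measure, treating the normalized spectra as
    discrete probability distributions (a simplification)."""
    p, q = a / a.sum(), b / b.sum()
    bhatt = -np.log(np.sum(np.sqrt(p * q)))  # Bhattacharyya distance
    return 2.0 * (1.0 - np.exp(-bhatt))

def jm_sam(a, b):
    """One plausible tangent-combined form: JM scaled by tan(SAM)."""
    return jm(a, b) * np.tan(sam(a, b))
```

    Identical spectra score zero under all three functions, and the tangent factor stretches the dynamic range for dissimilar pairs.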

  8. Improved texture analysis for automatic detection of tuberculosis (TB) on chest radiographs with bone suppression images

    NASA Astrophysics Data System (ADS)

    Maduskar, Pragnya; Hogeweg, Laurens; Philipsen, Rick; Schalekamp, Steven; van Ginneken, Bram

    2013-03-01

    Computer aided detection (CAD) of tuberculosis (TB) on chest radiographs (CXR) is challenging due to overlapping structures. Suppression of normal structures can reduce overprojection effects and can enhance the appearance of diffuse parenchymal abnormalities. In this work, we compare two CAD systems to detect textural abnormalities in chest radiographs of TB suspects. One CAD system was trained and tested on the original CXR and the other was trained and tested on bone suppression images (BSI). BSI were created using commercially available software (ClearRead 2.4, Riverain Medical). Each CAD system is trained with 431 normal and 434 abnormal images with manually outlined abnormal regions. A subtlety rating (1-3) is assigned to each abnormal region, where 3 refers to obvious and 1 refers to subtle abnormalities. Performance is evaluated on normal and abnormal regions from an independent dataset of 900 images. These contain in total 454 normal and 1127 abnormal regions, which are divided into 3 subtlety categories containing 280, 527 and 320 abnormal regions, respectively. For normal regions, original/BSI CAD has an average abnormality score of 0.094+/-0.027/0.085+/-0.032 (p = 5.6×10^-19). For abnormal regions, the subtlety 1, 2, 3 categories have average abnormality scores for original/BSI of 0.155+/-0.073/0.156+/-0.089 (p = 0.73), 0.194+/-0.086/0.207+/-0.101 (p = 5.7×10^-7), and 0.225+/-0.119/0.247+/-0.117 (p = 4.4×10^-7), respectively. Thus for normal regions, CAD scores slightly decrease when using BSI instead of the original images, and for abnormal regions, the scores increase slightly. We therefore conclude that the use of bone suppression results in slightly but significantly improved automated detection of textural abnormalities in chest radiographs.

  9. Temporal change analysis for improved tumor detection in dedicated CT breast imaging using affine and free-form deformation

    NASA Astrophysics Data System (ADS)

    Dey, Joyoni; O'Connor, J. Michael; Chen, Yu; Glick, Stephen J.

    2008-03-01

    Preliminary evidence has suggested that computerized tomographic (CT) imaging of the breast using a cone-beam, flat-panel detector system dedicated solely to breast imaging has potential for improving detection and diagnosis of early-stage breast cancer. Hypothetically, a powerful mechanism for assisting in early-stage breast cancer detection from annual screening breast CT studies would be to examine temporal changes in the breast from year to year. We hypothesize that 3D image registration could be used to automatically register breast CT volumes scanned at different times (e.g., yearly screening exams). This would allow radiologists to quickly visualize small changes in the breast that have developed during the period since the last screening CT scan, and use this information to improve the diagnostic accuracy of early-stage breast cancer detection. To test our hypothesis, fresh mastectomy specimens were imaged with a flat-panel CT system at different time points, after moving the specimen to emulate the re-positioning of the breast between yearly screening exams. Synthetic tumors were then digitally inserted into the second CT scan at a clinically realistic location (to emulate tumor growth from year to year). Affine and spline-based free-form 3D image registration algorithms were implemented and applied to the CT reconstructions of the specimens acquired at different times. Subtraction of registered image volumes was then performed to better analyze temporal change. Results from this study suggest that temporal change analysis in 3D breast CT can potentially be a powerful tool for improving the visualization of small lesion growth.

  10. Improved spatial regression analysis of diffusion tensor imaging for lesion detection during longitudinal progression of multiple sclerosis in individual subjects.

    PubMed

    Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui

    2016-03-21

    Subject-specific longitudinal DTI study is vital for investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at the individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD (dubbed iSPREAD) method by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method have been improved substantially by adopting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI.
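
    The edge-preserving smoothing step can be illustrated with the classical Perona-Malik scheme on which such filters are based; a 2-D sketch (iSPREAD uses a 3-D variant, and the conductance function, kappa, and time step below are illustrative; boundaries wrap for brevity):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=0.1, dt=0.2):
    """Perona-Malik edge-preserving smoothing, 2-D sketch.
    Conductance g = exp(-(|grad|/kappa)^2) suppresses diffusion
    across strong edges; np.roll gives wrapped boundaries."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # differences to the four nearest neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

    Smooth regions are averaged (variance drops) while sharp boundaries, where the conductance is small, are largely preserved.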

  11. Improved spatial regression analysis of diffusion tensor imaging for lesion detection during longitudinal progression of multiple sclerosis in individual subjects

    NASA Astrophysics Data System (ADS)

    Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui

    2016-03-01

    Subject-specific longitudinal DTI study is vital for investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at the individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD (dubbed iSPREAD) method by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method have been improved substantially by adopting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI.

  12. Rapid Prototyping of Hyperspectral Image Analysis Algorithms for Improved Invasive Species Decision Support Tools

    NASA Astrophysics Data System (ADS)

    Bruce, L. M.; Ball, J. E.; Evangilista, P.; Stohlgren, T. J.

    2006-12-01

    Nonnative invasive species adversely impact ecosystems, causing loss of native plant diversity, species extinction, and impairment of wildlife habitats. As a result, over the past decade federal and state agencies and nongovernmental organizations have begun to work more closely together to address the management of invasive species. In 2005, approximately 500M dollars was budgeted by U.S. Federal Agencies for the management of invasive species. Despite extensive expenditures, most of the methods used to detect and quantify the distribution of these invaders are ad hoc, at best. Likewise, decisions on the type of management techniques to be used or evaluation of the success of these methods are typically non-systematic. More efficient methods to detect or predict the occurrence of these species, as well as the incorporation of this knowledge into decision support systems, are greatly needed. In this project, rapid prototyping capabilities (RPC) are utilized for an invasive species application. More precisely, our recently developed analysis techniques for hyperspectral imagery are being prototyped for inclusion in the national Invasive Species Forecasting System (ISFS). The current ecological forecasting tools in ISFS will be compared to our hyperspectral-based invasive species prediction algorithms to determine if and how the newer algorithms enhance the performance of ISFS. The PIs have researched the use of remotely sensed multispectral and hyperspectral reflectance data for the detection of invasive vegetative species. As a result, they have designed, implemented, and benchmarked various target detection systems that utilize remotely sensed data. 
These systems have been designed to make decisions based on a variety of remotely sensed data, including high spectral/spatial resolution hyperspectral signatures (1000's of spectral bands, such as those measured using ASD handheld devices), moderate spectral/spatial resolution hyperspectral images (100's of spectral bands, such

  13. An improved parameter estimation scheme for image modification detection based on DCT coefficient analysis.

    PubMed

    Yu, Liyang; Han, Qi; Niu, Xiamu; Yiu, S M; Fang, Junbin; Zhang, Ye

    2016-02-01

    Most of the existing image modification detection methods which are based on DCT coefficient analysis model the distribution of DCT coefficients as a mixture of a modified and an unchanged component. To separate the two components, two parameters, the primary quantization step, Q1, and the portion of the modified region, α, have to be estimated, and more accurate estimations of α and Q1 lead to better detection and localization results. Existing methods estimate α and Q1 in a completely blind manner, without considering the characteristics of the mixture model and the constraints to which α should conform. In this paper, we propose a more effective scheme for estimating α and Q1, based on the observations that the curves on the surface of the likelihood function corresponding to the mixture model are largely smooth, and that α can take values only in a discrete set. We conduct extensive experiments to evaluate the proposed method, and the experimental results confirm the efficacy of our method.
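
    The two observations translate naturally into a small grid search: α ranges over its discrete set, Q1 over a modest range of candidate quantization steps, and the pair maximizing the mixture log-likelihood is kept. The log-likelihood is passed in as a callable, since the paper's mixture model is not reproduced here:

```python
import numpy as np

def estimate_alpha_q1(coeffs, loglik, alphas, q1s):
    """Grid search for (alpha, Q1) maximizing a user-supplied mixture
    log-likelihood loglik(coeffs, alpha, q1). Exploits the paper's
    observations: alpha lies in a discrete set, and the likelihood
    surface is smooth enough that a coarse grid is informative."""
    best, best_ll = None, -np.inf
    for a in alphas:
        for q in q1s:
            ll = loglik(coeffs, a, q)
            if ll > best_ll:
                best, best_ll = (a, q), ll
    return best
```

    With the smoothness observation, the Q1 grid can be kept coarse without missing the maximum, which is where the claimed efficiency gain would come from.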

  14. Novel analysis for improved validity in semi-quantitative 2-deoxyglucose autoradiographic imaging.

    PubMed

    Dawson, Neil; Ferrington, Linda; Olverman, Henry J; Kelly, Paul A T

    2008-10-30

    The original [(14)C]-2-deoxyglucose autoradiographic imaging technique allows for the quantitative determination of local cerebral glucose utilisation (LCMRglu) [Sokoloff L, Reivich M, Kennedy C, Desrosiers M, Patlak C, Pettigrew K, et al. The 2-deoxyglucose-C-14 method for measurement of local cerebral glucose utilisation: theory, procedure and normal values in conscious and anesthetized albino rats. J Neurochem 1977;28:897-916]. The range of applications to which the quantitative method can be readily applied is limited, however, by the requirement for the intermittent measurement of arterial radiotracer and glucose concentrations throughout the experiment, via intravascular cannulation. Some studies have applied a modified, semi-quantitative approach to estimate LCMRglu while circumventing the requirement for intravascular cannulation [Kelly S, Bieneman A, Uney J, McCulloch J. Cerebral glucose utilization in transgenic mice over-expressing heat shock protein 70 is altered by dizocilpine. Eur J Neurosci 2002;15(6):945-52; Jordan GR, McCulloch J, Shahid M, Hill DR, Henry B, Horsburgh K. Regionally selective and dose-dependent effects of the ampakines Org 26576 and Org 24448 on local cerebral glucose utilisation in the mouse as assessed by C-14-2-deoxyglucose autoradiography. Neuropharmacology 2005;49(2):254-64]. In this method only a terminal blood sample is collected for the determination of plasma [(14)C] and [glucose], and the rate of LCMRglu in each brain region of interest (RoI) is estimated by comparing the [(14)C] concentration in each region relative to a selected control region, which is proposed to demonstrate metabolic stability between the experimental groups. 
Here we show that the semi-quantitative method has reduced validity in the measurement of LCMRglu as compared to the quantitative method and that the validity of this technique is further compromised by the inability of the methods applied within the analysis to appropriately determine metabolic

  15. Computerized multiple image analysis on mammograms: performance improvement of nipple identification for registration of multiple views using texture convergence analyses

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Paramagul, Chintana

    2004-05-01

    Automated registration of multiple mammograms for CAD depends on accurate nipple identification. We developed two new image analysis techniques based on geometric and texture convergence analyses to improve the performance of our previously developed nipple identification method. A gradient-based algorithm is used to automatically track the breast boundary. The nipple search region along the boundary is then defined by geometric convergence analysis of the breast shape. Three nipple candidates are identified by detecting the changes along the gray level profiles inside and outside the boundary and the changes in the boundary direction. A texture orientation-field analysis method is developed to estimate the fourth nipple candidate based on the convergence of the tissue texture pattern towards the nipple. The final nipple location is determined from the four nipple candidates by a confidence analysis. Our training and test data sets consisted of 419 and 368 randomly selected mammograms, respectively. The nipple location identified on each image by an experienced radiologist was used as the ground truth. For 118 of the training and 70 of the test images, the radiologist could not positively identify the nipple, but provided an estimate of its location. These were referred to as invisible nipple images. In the training data set, 89.37% (269/301) of the visible nipples and 81.36% (96/118) of the invisible nipples could be detected within 1 cm of the truth. In the test data set, 92.28% (275/298) of the visible nipples and 67.14% (47/70) of the invisible nipples were identified within 1 cm of the truth. In comparison, our previous nipple identification method without using the two convergence analysis techniques detected 82.39% (248/301), 77.12% (91/118), 89.93% (268/298) and 54.29% (38/70) of the nipples within 1 cm of the truth for the visible and invisible nipples in the training and test sets, respectively. The results indicate that the nipple on mammograms can be

  16. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

  17. Improvement of CAT scanned images

    NASA Technical Reports Server (NTRS)

    Roberts, E., Jr.

    1980-01-01

    Digital enhancement procedure improves definition of images. Tomogram is generated from large number of X-ray beams. Beams are collimated and small in diameter. Scanning device passes beams sequentially through human subject at many different angles. Battery of transducers opposite subject senses attenuated signals. Signals are transmitted to computer where they are used in construction of image on transverse plane through body.

  18. Improved Digital Image Correlation method

    NASA Astrophysics Data System (ADS)

    Mudassar, Asloob Ahmad; Butt, Saira

    2016-12-01

    Digital Image Correlation (DIC) is a powerful technique which is used to correlate two image segments to determine the similarity between them. A correlation image is formed which gives a peak known as correlation peak. If the two image segments are identical the peak is known as auto-correlation peak otherwise it is known as cross correlation peak. The location of the peak in a correlation image gives the relative displacement between the two image segments. Use of DIC for in-plane displacement and deformation measurements in Electronic Speckle Photography (ESP) is well known. In ESP two speckle images are correlated using DIC and relative displacement is measured. We are presenting background review of ESP and disclosing a technique based on DIC for improved relative measurements which we regard as the improved DIC method. Simulation and experimental results reveal that the proposed improved-DIC method is superior to the conventional DIC method in two aspects, in resolution and in the availability of reference position in displacement measurements.
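
    The basic DIC operation, finding the displacement between two image segments from the location of the correlation peak, can be sketched with an FFT-based circular cross-correlation (integer-pixel shifts only; the resolution and reference-position improvements described above are not reproduced):

```python
import numpy as np

def dic_displacement(ref, cur):
    """Integer-pixel displacement of `cur` relative to `ref`, taken
    from the location of the circular cross-correlation peak."""
    R = np.fft.fft2(ref)
    C = np.fft.fft2(cur)
    corr = np.fft.ifft2(np.conj(R) * C).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak indices to signed shifts
    ny, nx = corr.shape
    dy = iy if iy <= ny // 2 else iy - ny
    dx = ix if ix <= nx // 2 else ix - nx
    return dy, dx
```

    When the two segments are identical the peak sits at zero lag (the auto-correlation case); a relative shift moves the peak by exactly that displacement.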

  19. Improved defect analysis of Gallium Arsenide solar cells using image enhancement

    NASA Technical Reports Server (NTRS)

    Kilmer, Louis C.; Honsberg, Christiana; Barnett, Allen M.; Phillips, James E.

    1989-01-01

    A new technique has been developed to capture, digitize, and enhance the image of light emission from a forward biased direct bandgap solar cell. Since the forward biased light emission from a direct bandgap solar cell has been shown to display both qualitative and quantitative information about the solar cell's performance and its defects, signal processing techniques can be applied to the light emission images to identify and analyze shunt diodes. Shunt diodes are of particular importance because they have been found to be the type of defect which is likely to cause failure in a GaAs solar cell. The presence of a shunt diode can be detected from the light emission by using a photodetector to measure the quantity of light emitted at various current densities. However, to analyze how the shunt diodes affect the quality of the solar cell the pattern of the light emission must be studied. With the use of image enhancement routines, the light emission can be studied at low light emission levels where shunt diode effects are dominant.

  20. Applying Chemical Imaging Analysis to Improve Our Understanding of Cold Cloud Formation

    NASA Astrophysics Data System (ADS)

    Laskin, A.; Knopf, D. A.; Wang, B.; Alpert, P. A.; Roedel, T.; Gilles, M. K.; Moffet, R.; Tivanski, A.

    2012-12-01

    The impact that atmospheric ice nucleation has on the global radiation budget is one of the least understood problems in atmospheric sciences. This is in part due to the incomplete understanding of various ice nucleation pathways that lead to ice crystal formation from pre-existing aerosol particles. Studies investigating the ice nucleation propensity of laboratory generated particles indicate that individual particle types are highly selective in their ice nucleating efficiency. This description of heterogeneous ice nucleation presents a challenge when applied to the atmosphere, which contains a complex mixture of particles. Here, we employ a combination of micro-spectroscopic and optical single particle analytical methods to relate particle physical and chemical properties with observed water uptake and ice nucleation. Field-collected particles from urban environments impacted by anthropogenic and marine emissions and aging processes are investigated. Single particle characterization is provided by computer controlled scanning electron microscopy with energy dispersive analysis of X-rays (CCSEM/EDX) and scanning transmission X-ray microscopy with near edge X-ray absorption fine structure spectroscopy (STXM/NEXAFS). A particle-on-substrate approach coupled to a vapor controlled cooling-stage and a microscope system is applied to determine the onsets of water uptake and ice nucleation, including immersion freezing and deposition ice nucleation, as a function of temperature (T) as low as 200 K and relative humidity (RH) up to water saturation. We observe for urban aerosol particles that for T > 230 K the oxidation level affects initial water uptake and that subsequent immersion freezing depends on particle mixing state, e.g. by the presence of insoluble particles. For T < 230 K the particles initiate deposition ice nucleation well below the homogeneous freezing limit. 
Particles collected throughout one day for similar meteorological conditions show very similar

  1. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm.

    PubMed

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-08-05

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to search the rotation angle exhaustively, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the properties of severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to be suitable for such situations. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, the research on the method of searching the rotation angle in a single gyrator transform is useful for further study on the security of such image encryption algorithms.
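
    A plain PSO over a one-dimensional search interval looks as follows; the paper's improvements for handling severe local convergence and global fluctuation are not reproduced, and all hyperparameters are illustrative. The objective `f` would score a candidate rotation angle, e.g. by decryption quality:

```python
import numpy as np

def pso_minimize(f, lo, hi, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain 1-D particle swarm optimization: inertia w, cognitive
    weight c1, social weight c2. Returns the best position found."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n)          # particle positions
    v = np.zeros(n)                     # particle velocities
    pbest = x.copy()                    # per-particle best positions
    pval = np.array([f(xi) for xi in x])
    g = pbest[pval.argmin()]            # swarm-best position
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(xi) for xi in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()]
    return g
```

    For an attack on the rotation-angle key, `f` would be replaced by a cost measuring how well a candidate angle inverts the observed gyrator-domain ciphertext.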

  2. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques

    PubMed Central

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-01-01

    Background: Liver ultrasound images are widely used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. Methods: In this study, several fuzzy-logic-based image contrast enhancement algorithms were applied, using Matlab 2013b, to liver ultrasound images in which the kidney is observable: contrast improvement using a fuzzy intensification operator, contrast improvement using fuzzy image histogram hyperbolization, and contrast improvement using fuzzy IF-THEN rules. Results: Measured by Mean Squared Error and Peak Signal to Noise Ratio on different images, the fuzzy methods provided better results; compared with the histogram equalization method, their implementation improved both the contrast and visual quality of the images and the results of liver segmentation algorithms applied to them. Conclusion: Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was the strongest algorithm with respect to the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and other image processing and analysis applications. PMID:28077898
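
    The fuzzy intensification (INT) operator named in the Methods is the classical one: map gray levels to [0, 1] membership values, push them away from the crossover point 0.5, and map back. A NumPy sketch (the number of passes is an assumption):

```python
import numpy as np

def fuzzy_intensify(img, n=2):
    """Classical fuzzy intensification (INT) operator, applied n times:
    u -> 2*u**2 for u <= 0.5, else 1 - 2*(1 - u)**2."""
    # normalize gray levels to [0, 1] membership values
    u = (img - img.min()) / max(img.max() - img.min(), 1e-12)
    for _ in range(n):
        u = np.where(u <= 0.5, 2 * u ** 2, 1 - 2 * (1 - u) ** 2)
    return u
```

    Each pass darkens memberships below 0.5 and brightens those above it, which is what raises the contrast.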

  3. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  4. A Bayesian approach to image expansion for improved definition.

    PubMed

    Schultz, R R; Stevenson, R L

    1994-01-01

    Accurate image expansion is important in many areas of image analysis. Common methods of expansion, such as linear and spline techniques, tend to smooth the image data at edge regions. This paper introduces a method for nonlinear image expansion which preserves the discontinuities of the original image, producing an expanded image with improved definition. The maximum a posteriori (MAP) estimation techniques that are proposed for noise-free and noisy images result in the optimization of convex functionals. The expanded images produced from these methods will be shown to be aesthetically and quantitatively superior to images expanded by the standard methods of replication, linear interpolation, and cubic B-spline expansion.

  5. Endoscopy imaging intelligent contrast improvement.

    PubMed

    Sheraizin, S; Sheraizin, V

    2005-01-01

    In this paper, we present a medical endoscopy video contrast improvement method that provides intelligent, automatic, adaptive contrast control. The method is founded on video data clustering and video data histogram modification. The clustering enables effective use of low-noise two-channel contrast enhancement processing. Histogram analysis makes it possible to determine the video exposure type for both simple and complicated contrast distributions. We determined the gamma value needed for automatic local-area contrast improvement for the following exposure types: dark, normal, light, dark-light, dark-normal, etc. Experimental results on medical endoscopy video processing allowed the automatic gamma control range to be set from 0.5 to 2.0.
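
    The adaptive gamma control described above can be illustrated with a basic power-law mapping. The concrete mapping from exposure type to a gamma value in the stated 0.5 to 2.0 range is an assumption for illustration, not the paper's calibration.

```python
def gamma_correct(pixels, gamma):
    """Apply a power-law (gamma) mapping to 8-bit gray levels.
    gamma < 1 brightens dark images; gamma > 1 darkens light ones."""
    return [round(255 * (g / 255.0) ** gamma) for g in pixels]

# Illustrative exposure-type-to-gamma table within the 0.5..2.0 range
# mentioned in the abstract (the concrete values are assumptions).
GAMMA_BY_EXPOSURE = {"dark": 0.5, "dark normal": 0.8,
                     "normal": 1.0, "light": 2.0}

dark_row = [20, 40, 60, 80]           # toy under-exposed scan line
brightened = gamma_correct(dark_row, GAMMA_BY_EXPOSURE["dark"])
```

    Note that the mapping fixes the endpoints 0 and 255 and only redistributes the mid-tones, which is what makes it safe to apply per local area.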

  6. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    MATHPAC image-analysis library is collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  7. Principal component analysis with pre-normalization improves the signal-to-noise ratio and image quality in positron emission tomography studies of amyloid deposits in Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Razifar, Pasha; Engler, Henry; Blomquist, Gunnar; Ringheim, Anna; Estrada, Sergio; Långström, Bengt; Bergström, Mats

    2009-06-01

    This study introduces a new approach for the application of principal component analysis (PCA) with pre-normalization on dynamic positron emission tomography (PET) images. These images are generated using the amyloid imaging agent N-methyl [11C]2-(4'-methylaminophenyl)-6-hydroxy-benzothiazole ([11C]PIB) in patients with Alzheimer's disease (AD) and healthy volunteers (HVs). The aim was to introduce a method which, by using the whole dataset and without assuming a specific kinetic model, could generate images with improved signal-to-noise and detect, extract and illustrate changes in kinetic behavior between different regions in the brain. Eight AD patients and eight HVs from a previously published study with [11C]PIB were used. The approach includes enhancement of brain regions where the kinetics of the radiotracer are different from what is seen in the reference region, pre-normalization for differences in noise levels and removal of negative values. This is followed by slice-wise application of PCA (SW-PCA) on the dynamic PET images. Results obtained using the new approach were compared with results obtained using reference Patlak and summed images. The new approach generated images with good quality in which cortical brain regions in AD patients showed high uptake, compared to cerebellum and white matter. Cortical structures in HVs showed low uptake as expected and in good agreement with data generated using kinetic modeling. The introduced approach generated images with enhanced contrast and improved signal-to-noise ratio (SNR) and discrimination power (DP) compared to summed images and parametric images. This method is expected to be an important clinical tool in the diagnosis and differential diagnosis of dementia.
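
    Once the slices are unfolded into a data matrix, the core of SW-PCA is ordinary principal component analysis. A minimal sketch of extracting the leading component by power iteration is given below, with toy two-dimensional data standing in for pixel time-activity curves; the paper's pre-normalization step (scaling for noise level) is noted in a comment but omitted here.

```python
def first_pc(data, iters=100):
    """Leading principal component via power iteration on the
    covariance of mean-centred data (computed as X^T(Xv)/n so the
    covariance matrix is never formed explicitly)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    # In SW-PCA each curve would first be divided by its noise level
    # (pre-normalization); skipped here for brevity.
    v = [1.0] * d
    for _ in range(iters):
        xv = [sum(r[j] * v[j] for j in range(d)) for r in x]
        v = [sum(x[i][j] * xv[i] for i in range(n)) / n for j in range(d)]
        norm = sum(c * c for c in v) ** 0.5
        v = [c / norm for c in v]
    return v

# Toy pixel time-activity curves whose variance is dominated by one
# direction (assumed data, not PET measurements).
curves = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [1.1, 0.9], [2.1, 1.9]]
pc = first_pc(curves)
```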

  8. Histopathological Image Analysis: A Review

    PubMed Central

    Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent

    2010-01-01

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe. PMID:20671804

  9. Principles and clinical applications of image analysis.

    PubMed

    Kisner, H J

    1988-12-01

    Image processing has traveled to the lunar surface and back, finding its way into the clinical laboratory. Advances in digital computers have improved the technology of image analysis, resulting in a wide variety of medical applications. Offering improvements in turnaround time, standardized systems, increased precision, and walkaway automation, digital image analysis has likely found a permanent home as a diagnostic aid in the interpretation of microscopic as well as macroscopic laboratory images.

  10. Improved radiographic image amplifier panel

    NASA Technical Reports Server (NTRS)

    Brown, R. L., Sr.

    1968-01-01

    Layered image amplifier for radiographic (X-ray and gamma-ray) applications combines very high radiation sensitivity with fast image buildup and erasure capabilities by adding a layer of material that is both photoconductive and light-emitting to a basic image amplifier and cascading this assembly with a modified Thorne panel.

  11. Suggestions for automatic quantitation of endoscopic image analysis to improve detection of small intestinal pathology in celiac disease patients.

    PubMed

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-10-01

    Although many groups have attempted to develop an automated computerized method to detect pathology of the small intestinal mucosa caused by celiac disease, the efforts have thus far failed. This is due in part to the occult presence of the disease. When pathological evidence of celiac disease exists in the small bowel, it is often visually patchy and subtle. Because of the presence of extraneous substances such as air bubbles and opaque fluids, the use of computerized automation methods has been only partially successful in detecting the hallmarks of the disease in the small intestine: villous atrophy, fissuring, and a mottled appearance. By using a variety of computerized techniques and assigning a weight or vote to each technique, it is possible to improve the detection of abnormal regions which are indicative of celiac disease, and of treatment progress in diagnosed patients. Herein a paradigm is suggested for improving the efficacy of automated methods for measuring celiac disease manifestation in the small intestinal mucosa. The suggestions are applicable to both standard and videocapsule endoscopic imaging, since both methods could potentially benefit from computerized quantitation to improve celiac disease diagnosis.

  12. Multisensor Image Analysis System

    DTIC Science & Technology

    1993-04-15

    AD-A263 679. Multisensor Image Analysis System: Final Report. Authors: Dr. G. M. Flachs, Dr. Michael Giles, Dr. Jay Jordan, Dr. Eric... Report type: Final.

  13. Towards an Improved Algorithm for Estimating Freeze-Thaw Dates of a High Latitude Lake Using Texture Analysis of SAR Images

    NASA Astrophysics Data System (ADS)

    Uppuluri, A. V.; Jost, R. J.; Luecke, C.; White, M. A.

    2008-12-01

    Analyzing the freeze-thaw dates of high latitude lakes is an important part of climate change studies. Because of the various advantages SAR provides for remote monitoring of small lakes, SAR image analysis is an obvious choice for estimating lake freeze-thaw dates. An important property of SAR images is their texture. The problem of estimating freeze-thaw dates can be restated as a problem of classifying an annual time series of SAR images based on the presence or absence of ice. We analyzed several texture-based algorithms to improve the estimation of freeze-thaw dates for small lakes using SAR images. We computed the Gray Level Co-occurrence Matrix (GLCM) for each image and extracted ten different texture features from the GLCM. We used these texture features (namely Energy, Contrast, Correlation, Homogeneity, Entropy, Autocorrelation, Dissimilarity, Cluster Shade, Cluster Prominence and Maximum Probability, as previously used in studies of the texture analysis of SAR sea ice imagery) as input to a group of classification algorithms to find the most accurate classifier and set of texture features for deciding the presence or absence of ice on the lake. The accuracy of the estimated freeze-thaw dates depends on the accuracy of the classifier. It is considered highly difficult to differentiate between open water (without wind) and the first day of ice formed on the lake (due to the similar mean backscatter values), causing inaccuracy in calculating the freeze date. A similar inaccuracy in calculating the thaw date arises from the close backscatter values of water (with wind) and later stages of ice on the lake. Our method is promising but requires further research to improve the accuracy of the classifiers and the selection of input features.
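
    The GLCM features named above can be sketched for a tiny image. The Energy and Contrast features below follow the standard definitions; the 'ice-like' and 'open-water-like' patches are assumed toy data, not SAR measurements.

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray Level Co-occurrence Matrix: how often gray level j occurs
    at offset (dx, dy) from gray level i, normalised to sum to 1."""
    h, w = len(image), len(image[0])
    m = [[0.0] * levels for _ in range(levels)]
    n = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y][x]][image[y2][x2]] += 1
                n += 1
    return [[v / n for v in row] for row in m]

def energy(m):
    """Sum of squared GLCM entries: high for uniform texture."""
    return sum(v * v for row in m for v in row)

def contrast(m):
    """Intensity-difference-weighted sum: high for rough texture."""
    return sum(((i - j) ** 2) * m[i][j]
               for i in range(len(m)) for j in range(len(m)))

# Smooth ice-like patch vs. rough open-water-like patch (toy data).
smooth = [[1, 1, 1, 1]] * 4
rough = [[0, 3, 0, 3], [3, 0, 3, 0], [0, 3, 0, 3], [3, 0, 3, 0]]
```

    A classifier would receive such feature values (here Energy and Contrast cleanly separate the two patches) rather than the raw backscatter image.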

  14. Multiresponse imaging system design for improved resolution

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.; Rahman, Zia-Ur; Reichenbach, Stephen E.

    1991-01-01

    Multiresponse imaging is a process that acquires A images, each with a different optical response, and reassembles them into a single image with an improved resolution that can approach 1/sqrt(A) times the photodetector-array sampling lattice. Our goals are to optimize the performance of this process in terms of the resolution and fidelity of the restored image and to assess the amount of information required to do so. The theoretical approach is based on the extension of both image restoration and rate-distortion theories from their traditional realm of signal processing to image processing which includes image gathering and display.
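
    A toy one-dimensional analogue of reassembling A differently-sampled acquisitions is plain interleaving: in two dimensions, A acquisitions can refine each axis by roughly sqrt(A), which is where the stated limit comes from. The paper's actual reassembly is restoration-theoretic; this sketch only shows the sampling-lattice intuition, with assumed data.

```python
def multiresponse_combine(subimages):
    """Interleave A low-resolution acquisitions, each offset by 1/A of
    a sample, into one signal on an A-times finer sampling lattice
    (a 1-D stand-in for reassembling A differently-responding images)."""
    a = len(subimages)
    n = len(subimages[0])
    out = [0.0] * (a * n)
    for k, sub in enumerate(subimages):
        for i, v in enumerate(sub):
            out[a * i + k] = v     # k-th acquisition fills k-th phase
    return out

# Two half-rate samplings of a ramp, offset by half a sample.
even = [0, 2, 4, 6]
odd = [1, 3, 5, 7]
fine = multiresponse_combine([even, odd])
```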

  15. Polarization image fusion algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Zhang, Siyuan; Yuan, Yan; Su, Lijuan; Hu, Liang; Liu, Hui

    2013-12-01

    The polarization detection technique provides polarization information about objects that conventional detection techniques are unable to obtain. To make full use of the obtained polarization information, various polarization image fusion algorithms have been developed. In this research, we proposed a polarization image fusion algorithm based on an improved pulse coupled neural network (PCNN). The improved PCNN algorithm uses polarization parameter images to generate the fused polarization image with object details for polarization information analysis and uses the matching degree M as the fusion rule. The improved PCNN fused image is compared with fused images based on the Laplacian pyramid (LP) algorithm, the Wavelet algorithm and the PCNN algorithm. Several performance indicators are introduced to evaluate the fused images. The comparison showed that the presented algorithm yields images of much higher quality and preserves more detail information of the objects.

  16. ImageX: new and improved image explorer for astronomical images and beyond

    NASA Astrophysics Data System (ADS)

    Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.

    2016-08-01

    The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5 capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images thereby taking up tens of TB of spinning disk space even though a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA. We found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full resolution JPEG image for each raw and reduced ODI FITS image before producing a JPG tileset, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example: on tabular image listings, views allowing quick perusal of a set of thumbnails or other image sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client side Model/View code (instead of depending on backend PHP Model/View/Controller code previously used), OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images thereby significantly decreasing the Time To First Byte latency by a few orders of magnitude. 
We plan to extend ImageX for non-FITS images including electron microscopy and radiology scan

  17. Quantitative multi-image analysis for biomedical Raman spectroscopic imaging.

    PubMed

    Hedegaard, Martin A B; Bergholt, Mads S; Stevens, Molly M

    2016-05-01

    Imaging by Raman spectroscopy enables unparalleled label-free insights into cell and tissue composition at the molecular level. With established approaches limited to single image analysis, there are currently no general guidelines or consensus on how to quantify biochemical components across multiple Raman images. Here, we describe a broadly applicable methodology for the combination of multiple Raman images into a single image for analysis. This is achieved by removing image specific background interference, unfolding the series of Raman images into a single dataset, and normalisation of each Raman spectrum to render comparable Raman images. Multivariate image analysis is finally applied to derive the contributing 'pure' biochemical spectra for relative quantification. We present our methodology using four independently measured Raman images of control cells and four images of cells treated with strontium ions from substituted bioactive glass. We show that the relative biochemical distribution per area of the cells can be quantified. In addition, using k-means clustering, we are able to discriminate between the two cell types over multiple Raman images. This study shows a streamlined quantitative multi-image analysis tool for improving cell/tissue characterisation and opens new avenues in biomedical Raman spectroscopic imaging.
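
    The unfold-and-normalise step described above can be sketched minimally as follows. The per-image baseline subtraction here is a deliberately crude stand-in for the authors' background-interference removal, and the spectra are assumed toy data.

```python
import math

def normalise_spectrum(spectrum):
    """Scale a spectrum to unit Euclidean norm so that spectra from
    different Raman images become directly comparable."""
    norm = math.sqrt(sum(v * v for v in spectrum))
    return [v / norm for v in spectrum] if norm else spectrum

def unfold(images):
    """Unfold a list of Raman images (each a list of pixel spectra)
    into one dataset: remove each image's own background level, then
    normalise every spectrum."""
    dataset = []
    for img in images:
        baseline = min(min(s) for s in img)  # crude per-image background
        for s in img:
            dataset.append(normalise_spectrum([v - baseline for v in s]))
    return dataset

img_a = [[10.0, 12.0, 30.0], [10.0, 14.0, 28.0]]  # offset background ~10
img_b = [[0.0, 2.0, 20.0], [0.0, 4.0, 18.0]]      # same shapes, no offset
data = unfold([img_a, img_b])
```

    After this step the two images contribute identical spectra despite their different backgrounds, which is exactly what makes multivariate analysis across images meaningful.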

  18. Tissue Doppler Imaging Combined with Advanced 12-Lead ECG Analysis Might Improve Early Diagnosis of Hypertrophic Cardiomyopathy in Childhood

    NASA Technical Reports Server (NTRS)

    Femlund, E.; Schlegel, T.; Liuba, P.

    2011-01-01

    Optimization of early diagnosis of childhood hypertrophic cardiomyopathy (HCM) is essential in lowering the risk of HCM complications. Standard echocardiography (ECHO) has been shown to be less sensitive in this regard. In this study, we sought to assess whether spatial QRS-T angle deviation, which has been shown to predict HCM in adults with high sensitivity, and myocardial Tissue Doppler Imaging (TDI) could be additional tools in early diagnosis of HCM in childhood. Methods: Children and adolescents with familial HCM (n=10, median age 16, range 5-27 years), and without obvious hypertrophy but with heredity for HCM (n=12, median age 16, range 4-25 years, HCM or sudden death with autopsy-verified HCM in greater than or equal to 1 first-degree relative, HCM-risk) were additionally investigated with TDI and advanced 12-lead ECG analysis using Cardiax(Registered trademark) (IMED Co Ltd, Budapest, Hungary and Houston). Spatial QRS-T angle (SA) was derived from Kors regression-related transformation. Healthy age-matched controls (n=21) were also studied. All participants underwent thorough clinical examination. Results: Spatial QRS-T angle (Figure/ Panel A) and septal E/Ea ratio (Figure/Panel B) were most increased in the HCM group as compared to the HCM-risk and control groups (p less than 0.05). Of note, these 2 variables showed a trend toward higher levels in the HCM-risk group than in the control group (p=0.05 for E/Ea and 0.06 for QRS/T by ANOVA). In a logistic regression model, increased SA and septal E/Ea ratio appeared to significantly predict both the disease (Chi-square in HCM group: 9 and 5, respectively, p less than 0.05 for both) and the risk for HCM (Chi-square in HCM-risk group: 5 and 4 respectively, p less than 0.05 for both), with a further increased predictability level when these 2 variables were combined (Chi-square 10 in HCM group, and 7 in HCM-risk group, p less than 0.01 for both). 
Conclusions: In this small material, Tissue Doppler Imaging and spatial mean QRS-T angle

  19. Improved real-time imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Lambert, James L. (Inventor); Chao, Tien-Hsin (Inventor); Yu, Jeffrey W. (Inventor); Cheng, Li-Jen (Inventor)

    1993-01-01

    An improved AOTF-based imaging spectrometer that offers several advantages over prior art AOTF imaging spectrometers is presented. The ability to electronically set the bandpass wavelength provides observational flexibility. Various improvements in optical architecture provide simplified magnification variability, improved image resolution and light throughput efficiency and reduced sensitivity to ambient light. Two embodiments of the invention are: (1) operation in the visible/near-infrared domain of wavelength range 0.48 to 0.76 microns; and (2) infrared configuration which operates in the wavelength range of 1.2 to 2.5 microns.

  20. Improved Interactive Medical-Imaging System

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Twombly, Ian A.; Senger, Steven

    2003-01-01

    An improved computational-simulation system for interactive medical imaging has been invented. The system displays high-resolution, three-dimensional-appearing images of anatomical objects based on data acquired by such techniques as computed tomography (CT) and magnetic-resonance imaging (MRI). The system enables users to manipulate the data to obtain a variety of views; for example, to display cross sections in specified planes or to rotate images about specified axes. Relative to prior such systems, this system offers enhanced capabilities for synthesizing images of surgical cuts and for collaboration by users at multiple, remote computing sites.

  1. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  2. Advanced endoscopic imaging to improve adenoma detection

    PubMed Central

    Neumann, Helmut; Nägel, Andreas; Buda, Andrea

    2015-01-01

    Advanced endoscopic imaging is revolutionizing our way on how to diagnose and treat colorectal lesions. Within recent years a variety of modern endoscopic imaging techniques was introduced to improve adenoma detection rates. Those include high-definition imaging, dye-less chromoendoscopy techniques and novel, highly flexible endoscopes, some of them equipped with balloons or multiple lenses in order to improve adenoma detection rates. In this review we will focus on the newest developments in the field of colonoscopic imaging to improve adenoma detection rates. Described techniques include high-definition imaging, optical chromoendoscopy techniques, virtual chromoendoscopy techniques, the Third Eye Retroscope and other retroviewing devices, the G-EYE endoscope and the Full Spectrum Endoscopy-system. PMID:25789092

  3. Digital image processing in cephalometric analysis.

    PubMed

    Jäger, A; Döler, W; Schormann, T

    1989-01-01

    Digital image processing methods were applied to improve the practicability of cephalometric analysis. The individual X-ray film was digitized with the aid of a high-resolution microscope-photometer. Digital processing was done using a VAX 8600 computer system. An improvement in image quality was achieved by means of various digital enhancement and filtering techniques.
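
    One of the standard enhancement techniques alluded to, histogram equalization, can be sketched as follows for a low-contrast strip of gray levels; the data are illustrative, not cephalometric measurements.

```python
def equalize(pixels, levels=256):
    """Classic histogram equalization: map each gray level through the
    normalised cumulative histogram to spread intensities out."""
    hist = [0] * levels
    for g in pixels:
        hist[g] += 1
    cdf, total = [], 0
    for c in hist:
        total += c
        cdf.append(total)
    cdf_min = next(c for c in cdf if c)   # first occupied level
    n = len(pixels)
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0 for c in cdf]
    return [lut[g] for g in pixels]

# Low-contrast X-ray-like strip squeezed into gray levels 100..103.
strip = [100, 100, 101, 101, 102, 102, 103, 103]
stretched = equalize(strip)
```

    The four occupied levels are remapped to span the full 0..255 range, which is the whole effect of the method.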

  4. Errors from Image Analysis

    SciTech Connect

    Wood, William Monford

    2015-02-23

    Presenting a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and making suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  5. Improvement of image quality in holographic microscopy.

    PubMed

    Budhiraja, C J; Som, S C

    1981-05-15

    A novel technique of noise reduction in holographic microscopy has been studied experimentally. It has been shown that significant improvement in the holomicroscopic images of actual low-contrast, continuous-tone biological objects can be achieved without trading off image resolution. The technique makes use of holographically produced multidirectional phase gratings used as diffusers and the continuous addition of subchannel holograms. It has been shown that the self-imaging property of this type of diffuser makes these diffusers ideal for microscopic objects. Experimental results have also been presented to demonstrate the real-time image processing capability of this technique.

  6. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  7. An improved image reconstruction method for optical intensity correlation Imaging

    NASA Astrophysics Data System (ADS)

    Gao, Xin; Feng, Lingjie; Li, Xiyu

    2016-12-01

    The intensity correlation imaging method is a novel kind of interference imaging with favorable prospects in deep-space recognition. However, restricted by the low detecting signal-to-noise ratio (SNR), it is usually very difficult to obtain high-quality images of deep-space objects such as high-Earth-orbit (HEO) satellites with existing phase retrieval methods. In this paper, based on the prior intensity statistical distribution model of the object and the characteristics of the measurement noise distribution, an improved method of Prior Information Optimization (PIO) is proposed to reduce the ambiguous images and accelerate the phase retrieval procedure, thus realizing fine image reconstruction. As the simulations and experiments show, compared to previous methods, our method can acquire higher-resolution images with less error under low-SNR conditions.

  8. Improvement of ultrasound speckle image velocimetry using image enhancement techniques.

    PubMed

    Yeom, Eunseop; Nam, Kweon-Ho; Paeng, Dong-Guk; Lee, Sang Joon

    2014-01-01

    Ultrasound-based techniques have been developed and widely used in noninvasive measurement of blood velocity. Speckle image velocimetry (SIV), which applies a cross-correlation algorithm to consecutive B-mode images of blood flow, has often been employed owing to its better spatial resolution compared with conventional Doppler-based measurement techniques. The SIV technique utilizes speckles backscattered from red blood cell (RBC) aggregates as flow tracers. Hence, the intensity and size of such speckles are highly dependent on hemodynamic conditions. The grayscale intensity of speckle images varies along the radial direction of blood vessels because of the shear-rate dependence of RBC aggregation. This inhomogeneous distribution of echo speckles decreases the signal-to-noise ratio (SNR) of a cross-correlation analysis and produces spurious results. In the present study, image-enhancement techniques such as contrast-limited adaptive histogram equalization (CLAHE), the min/max technique, and the subtraction of background image (SB) method were applied to speckle images to achieve a more accurate SIV measurement. A mechanical sector ultrasound scanner was used to obtain ultrasound speckle images from rat blood under steady and pulsatile flows. The effects of the image-enhancement techniques on SIV analysis were evaluated by comparing image intensities, velocities, and cross-correlation maps. The velocity profiles and wall shear rate (WSR) obtained from RBC suspension images were compared with the analytical solution for validation. In addition, the image-enhancement techniques were applied to in vivo measurement of blood flow in a human vein. The experimental results of both in vitro and in vivo SIV measurements show that the intensity gradient in heterogeneous speckles has substantial influence on the cross-correlation analysis. The image-enhancement techniques used in this study can minimize errors encountered in ultrasound SIV measurement in which RBCs are used as flow
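
    The cross-correlation step at the heart of SIV can be sketched in one dimension: the displacement estimate is the shift that maximises the normalised cross-correlation between consecutive speckle profiles. The profiles below are toy data, not ultrasound measurements.

```python
def best_shift(frame_a, frame_b, max_shift=5):
    """Estimate the displacement between two 1-D speckle profiles as
    the shift maximising the zero-mean normalised cross-correlation."""
    def corr(shift):
        pairs = [(frame_a[i], frame_b[i + shift])
                 for i in range(len(frame_a))
                 if 0 <= i + shift < len(frame_b)]
        n = len(pairs)
        ma = sum(a for a, _ in pairs) / n
        mb = sum(b for _, b in pairs) / n
        num = sum((a - ma) * (b - mb) for a, b in pairs)
        da = sum((a - ma) ** 2 for a, _ in pairs) ** 0.5
        db = sum((b - mb) ** 2 for _, b in pairs) ** 0.5
        return num / (da * db) if da and db else 0.0
    return max(range(-max_shift, max_shift + 1), key=corr)

# Speckle pattern advected 3 samples to the right between frames.
a = [0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0, 0, 0]
shift = best_shift(a, b)
```

    An intensity gradient superimposed on `frame_b` would bias exactly the normalisation terms above, which is why the enhancement techniques in the study improve the correlation estimate.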

  9. Digital Image Analysis of Cereals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Image analysis is the extraction of meaningful information from images, mainly digital images by means of digital processing techniques. The field was established in the 1950s and coincides with the advent of computer technology, as image analysis is profoundly reliant on computer processing. As t...

  10. Early time instability in nanofilms exposed to a large transverse thermal gradient: Improved image and thermal analysis

    NASA Astrophysics Data System (ADS)

    Fiedler, Kevin R.; Troian, Sandra M.

    2016-11-01

    Liquid nanofilms exposed to a large transverse thermal gradient undergo an instability featuring an array of nanopillars whose typical pitch is tens of microns. In earlier works, a comparison of this pitch with the fastest growing wavelength predicted by three different models based on linear instability showed closest agreement with a long wavelength thermocapillary mechanism in which gravity plays no role. Here, we present improved feature extraction techniques, which allow identification of the fastest growing wavelength at much earlier times than previously reported, and more realistic simulations for assessing thermal gradients, which better approximate the actual experimental system. While these improvements lead to better agreement with the thermocapillary mechanism, there persists a quantitative discrepancy with theory which we attribute to a number of experimental challenges.

  11. An improved imaging method for extended targets

    NASA Astrophysics Data System (ADS)

    Hou, Songming; Zhang, Sui

    2017-03-01

    When we use direct imaging methods for solving inverse scattering problems, we observe artificial lines which make it hard to determine the shape of targets. We propose a signal space test to study the cause of the artificial lines and we use multiple frequency data to reduce the effect from them. Finally, we use the active contour without edges (ACWE) method to further improve the imaging results. The final reconstruction is accurate and robust. The computational cost is lower than iterative imaging methods which need to solve the forward problem in each iteration.

  12. Improving photoacoustic imaging contrast of brachytherapy seeds

    NASA Astrophysics Data System (ADS)

    Pan, Leo; Baghani, Ali; Rohling, Robert; Abolmaesumi, Purang; Salcudean, Septimiu; Tang, Shuo

    2013-03-01

    Prostate brachytherapy is a form of radiotherapy for treating prostate cancer in which the radiation sources are seeds inserted into the prostate. Accurate localization of seeds during prostate brachytherapy is essential to the success of intraoperative treatment planning. The current standard modality used in intraoperative seed localization is transrectal ultrasound. Transrectal ultrasound, however, suffers in image quality due to several factors such as speckle, shadowing, and off-axis seed orientation. Photoacoustic imaging, based on the photoacoustic phenomenon, is an emerging imaging modality. The contrast-generating mechanism in photoacoustic imaging is optical absorption, which is fundamentally different from conventional B-mode ultrasound, which depicts changes in acoustic impedance. A photoacoustic imaging system is developed using a commercial ultrasound system. To improve imaging contrast and depth penetration, an absorption-enhancing coating is applied to the seeds. In comparison to bare seeds, an increase of approximately 18.5 dB in signal-to-noise ratio as well as a doubling of imaging depth are achieved. Our results demonstrate that coating the seeds can further improve their discernibility.
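    To unpack the reported figure: in the amplitude convention, an 18.5 dB gain corresponds to roughly an 8.4-fold signal increase. The amplitudes below are hypothetical, chosen only to make the unit conversion concrete; they are not the paper's data.

```python
import numpy as np

def snr_db(peak_signal, noise_std):
    """Signal-to-noise ratio in decibels (amplitude convention)."""
    return 20 * np.log10(peak_signal / noise_std)

# Hypothetical amplitudes: 10**(18.5/20) ~ 8.4x amplitude increase
bare_snr = snr_db(10.0, 1.0)
coated_snr = snr_db(84.0, 1.0)
gain = coated_snr - bare_snr     # about 18.5 dB
```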

  13. Improved Guided Image Fusion for Magnetic Resonance and Computed Tomography Imaging

    PubMed Central

    Jameel, Amina

    2014-01-01

    Improved guided image fusion for magnetic resonance and computed tomography imaging is proposed. The existing guided filtering scheme uses a Gaussian filter and two-level weight maps, which limit its performance for noisy images. Modifications to the filter (based on a linear minimum mean square error estimator) and to the weight maps (with different levels) are proposed to overcome these limitations. Simulation results based on visual and quantitative analysis show the significance of the proposed scheme. PMID:24695586
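    The baseline the paper modifies can be sketched as follows. This is the standard guided filter of He et al. with a plain box mean, not the proposed LMMSE-modified variant; the step-edge test image is illustrative.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, edges handled by padding."""
    pad = np.pad(img, r, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def guided_filter(guide, src, r=2, eps=1e-4):
    """Guided filter (He et al.): the output is a locally linear
    transform of the guide that approximates src, preserving the
    guide's edges while smoothing elsewhere."""
    mg, ms = box_mean(guide, r), box_mean(src, r)
    cov = box_mean(guide * src, r) - mg * ms
    var = box_mean(guide * guide, r) - mg * mg
    a = cov / (var + eps)
    b = ms - a * mg
    return box_mean(a, r) * guide + box_mean(b, r)

# Filtering a step edge by itself leaves flat regions (nearly) unchanged
I = np.zeros((12, 12)); I[:, 6:] = 1.0
out = guided_filter(I, I)
```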

  14. Improving Performance During Image-Guided Procedures

    PubMed Central

    Duncan, James R.; Tabriz, David

    2015-01-01

    Objective Image-guided procedures have become a mainstay of modern health care. This article reviews how human operators process imaging data and use it to plan procedures and make intraprocedural decisions. Methods A series of models from human factors research, communication theory, and organizational learning were applied to the human-machine interface that occupies the center stage during image-guided procedures. Results Together, these models suggest several opportunities for improving performance as follows: 1. Performance will depend not only on the operator’s skill but also on the knowledge embedded in the imaging technology, available tools, and existing protocols. 2. Voluntary movements consist of planning and execution phases. Performance subscores should be developed that assess quality and efficiency during each phase. For procedures involving ionizing radiation (fluoroscopy and computed tomography), radiation metrics can be used to assess performance. 3. At a basic level, these procedures consist of advancing a tool to a specific location within a patient and using the tool. Paradigms from mapping and navigation should be applied to image-guided procedures. 4. Recording the content of the imaging system allows one to reconstruct the stimulus/response cycles that occur during image-guided procedures. Conclusions When compared with traditional “open” procedures, the technology used during image-guided procedures places an imaging system and long thin tools between the operator and the patient. Taking a step back and reexamining how information flows through an imaging system and how actions are conveyed through human-machine interfaces suggest that much can be learned from studying system failures. In the same way that flight data recorders revolutionized accident investigations in aviation, much could be learned from recording video data during image-guided procedures. PMID:24921628

  15. Differentiated analysis of orthodontic tooth movement in rats with an improved rat model and three-dimensional imaging.

    PubMed

    Kirschneck, Christian; Proff, Peter; Fanghaenel, Jochen; Behr, Michael; Wahlmann, Ulrich; Roemer, Piero

    2013-12-01

    Rat models currently available for analysis of orthodontic tooth movement often lack differentiated, reliable and precise measurement systems allowing researchers to separately investigate the individual contribution of tooth tipping, body translation and root torque to overall displacement. Many previously proposed models have serious limitations such as the rather inaccurate analysis of the effects of orthodontic forces on rat incisors. We therefore developed a differentiated measurement system that was used within a rat model with the aim of overcoming the limitations of previous studies. The first left upper molar and the upper incisors of 24 male Wistar rats were subjected to a constant orthodontic force of 0.25 N by means of a NiTi closed coil spring for up to four weeks. The extent of the various types of tooth movement was measured optometrically with a CCD microscope camera and cephalometrically by means of cone beam computed tomography (CBCT). Both types of measurement proved to be reliable for consecutive measurements and the significant tooth movement induced had no harmful effects on the animals. Movement kinetics corresponded to known physiological processes and tipping and body movement equally contributed to the tooth displacement. The upper incisors of the rats were significantly deformed and their natural eruption was effectively halted. The results showed that our proposed measurement systems used within a rat model resolved most of the inadequacies of previous studies. They are reliable, precise and physiological tools for the differentiated analysis of orthodontic tooth movement while simultaneously preserving animal welfare.

  16. Interactive Image Analysis System Design,

    DTIC Science & Technology

    1982-12-01

    This report describes a design for an interactive image analysis system (IIAS), which implements terrain data extraction techniques. The design employs commercially available, state-of-the-art minicomputers and image display devices with proven software to achieve a cost-effective, reliable image analysis system. Additionally, the system is fully capable of supporting many generic types of image analysis and data processing, and is modularly ...

  17. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    Report documentation page residue; only bibliographic details are recoverable: Technical Report TR-1180, "Parallel Algorithms for Image Analysis," Azriel Rosenfeld, grant AFOSR-77-3271. Keywords: image processing; image analysis; parallel processing; cellular computers.

  18. Helping Improve a Child's Self-Image.

    ERIC Educational Resources Information Center

    Divney, Esther P.

    This document reviews a doctoral dissertation on the self-concepts of Negro children for suggestions teachers can use in the classroom to improve the self-images of their own students. Many psychologists and educators view the attitudes and conceptions that the child has about himself as the central factor in his personality, and studies have…

  19. Brain Imaging Analysis

    PubMed Central

    BOWMAN, F. DUBOIS

    2014-01-01

    The increasing availability of brain imaging technologies has led to intense neuroscientific inquiry into the human brain. Studies often investigate brain function related to emotion, cognition, language, memory, and numerous other externally induced stimuli as well as resting-state brain function. Studies also use brain imaging in an attempt to determine the functional or structural basis for psychiatric or neurological disorders and, with respect to brain function, to further examine the responses of these disorders to treatment. Neuroimaging is a highly interdisciplinary field, and statistics plays a critical role in establishing rigorous methods to extract information and to quantify evidence for formal inferences. Neuroimaging data present numerous challenges for statistical analysis, including the vast amounts of data collected from each individual and the complex temporal and spatial dependence present. We briefly provide background on various types of neuroimaging data and analysis objectives that are commonly targeted in the field. We present a survey of existing methods targeting these objectives and identify particular areas offering opportunities for future statistical contribution. PMID:25309940

  20. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
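    The basic pupil-tracking computation described here (thresholding followed by a measurement step) can be sketched as a centroid estimate over the dark pupil region; averaging over many mask pixels is what yields sub-pixel resolution. The synthetic frame and threshold below are illustrative, not the authors' algorithm.

```python
import numpy as np

def pupil_centroid(frame, threshold):
    """Estimate the pupil center by thresholding the dark pupil region
    and taking the centroid of the resulting mask; averaging over many
    pixels gives a sub-pixel position estimate."""
    ys, xs = np.nonzero(frame < threshold)
    return xs.mean(), ys.mean()

# Synthetic frame: bright background, dark pupil disk centered at (30, 20)
yy, xx = np.mgrid[0:64, 0:64]
frame = np.where((xx - 30) ** 2 + (yy - 20) ** 2 < 8 ** 2, 10, 200)
cx, cy = pupil_centroid(frame, 50)
```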

  1. Improved determination of the myelin water fraction in human brain using magnetic resonance imaging through Bayesian analysis of mcDESPOT.

    PubMed

    Bouhrara, Mustapha; Spencer, Richard G

    2016-02-15

    Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in the human brain. However, even for the simplest two-pool signal model consisting of myelin-associated and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNRs), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination using conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high-dimensional nature of the mcDESPOT signal model, and, therefore the high-dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of the MWF, the Bayesian analyses introduced here use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude, and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude. 
Through extensive Monte Carlo numerical simulations and analysis of in vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrated markedly improved accuracy and precision in the estimation of MWF
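    The key Bayesian ingredient, marginalizing a nuisance parameter such as the unknown noise level, can be illustrated on a toy two-pool relaxation model. The signal model, decay constants, grids, and noise level below are invented for illustration and are far simpler than the actual mcDESPOT model; this is a sketch of the second approach (noise marginalized on a grid, flat priors), not the authors' implementation.

```python
import numpy as np

def posterior_f(data, signal, f_grid, sigma_grid):
    """Posterior over the fraction f with the unknown noise level sigma
    treated as a nuisance parameter and marginalized on a grid."""
    n = data.size
    post = np.zeros(f_grid.size)
    for i, f in enumerate(f_grid):
        resid2 = np.sum((data - signal(f)) ** 2)
        # Gaussian likelihood for each sigma, then sum (marginalize) over sigma
        post[i] = np.sum(np.exp(-resid2 / (2 * sigma_grid ** 2)) / sigma_grid ** n)
    return post / post.sum()

# Toy two-pool decay with "myelin fraction" f = 0.2
t = np.linspace(0.01, 0.3, 40)
signal = lambda f: f * np.exp(-t / 0.02) + (1 - f) * np.exp(-t / 0.08)
rng = np.random.default_rng(1)
data = signal(0.2) + rng.normal(0, 0.01, t.size)
f_grid = np.linspace(0, 1, 101)
post = posterior_f(data, signal, f_grid, np.linspace(0.005, 0.05, 50))
f_map = f_grid[post.argmax()]   # lands near the true value 0.2
```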

  2. Improved Determination of the Myelin Water Fraction in Human Brain using Magnetic Resonance Imaging through Bayesian Analysis of mcDESPOT

    PubMed Central

    Bouhrara, Mustapha; Spencer, Richard G.

    2015-01-01

    Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in human brain. However, even for the simplest two-pool signal model consisting of MWF and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNR), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination using conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high dimensional nature of mcDESPOT signal model, and, thereby, the high dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of MWF parameter, the introduced Bayesian analyses use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude, and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude. 
Through extensive Monte Carlo numerical simulations and analysis of in-vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrated the markedly improved accuracy and precision in the estimation of MWF using these Bayesian

  3. DIDA - Dynamic Image Disparity Analysis.

    DTIC Science & Technology

    1982-12-31

    Report documentation page residue. Keywords: image understanding; dynamic image analysis; disparity analysis; optical flow; real-time processing. Abstract (fragmentary): three aspects of dynamic image analysis must be studied: effectiveness, generality, and efficiency. In addition, efforts must be made to understand the ... environment. A better understanding of the need for these limiting constraints is required. Efficiency is obviously important if dynamic image analysis is ...

  4. Method of improving a digital image

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur (Inventor); Jobson, Daniel J. (Inventor); Woodell, Glenn A. (Inventor)

    1999-01-01

    A method of improving a digital image is provided. The image is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value I.sub.i (x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value for each position in each i-th spectral band in accordance with ##EQU1## where S is the number of unique spectral bands included in said digital data, W.sub.n is a weighting factor and * denotes the convolution operator. Each surround function F.sub.n (x,y) is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band is filtered with a common function and then presented to a display device. For color images, a novel color restoration step is added to give the image true-to-life color that closely matches human observation.
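    The formula itself is elided in this record (##EQU1##). The sketch below implements the widely published multiscale-retinex form associated with these inventors, R_i(x,y) = Σ_n W_n [log I_i(x,y) − log (F_n * I_i)(x,y)], with Gaussian surround functions F_n; this should be read as an assumption about the elided equation, not a transcription of the patent.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian convolution: the surround function F_n."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0, rows)

def multiscale_retinex(img, sigmas=(15, 80, 250), weights=None):
    """R(x,y) = sum_n W_n [log I(x,y) - log (F_n * I)(x,y)] for one band."""
    w = weights if weights is not None else [1.0 / len(sigmas)] * len(sigmas)
    img = img.astype(float) + 1.0          # avoid log(0)
    out = np.zeros(img.shape)
    for wn, s in zip(w, sigmas):
        out += wn * (np.log(img) - np.log(gaussian_blur(img, s)))
    return out

# A flat image has no detail to enhance: the output is numerically zero
flat_out = multiscale_retinex(np.full((20, 20), 100.0), sigmas=(2, 4))
```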

  5. Imaging Arrays With Improved Transmit Power Capability

    PubMed Central

    Zipparo, Michael J.; Bing, Kristin F.; Nightingale, Kathy R.

    2010-01-01

    Bonded multilayer ceramics and composites incorporating low-loss piezoceramics have been applied to arrays for ultrasound imaging to improve acoustic transmit power levels and to reduce internal heating. Commercially available hard PZT from multiple vendors has been characterized for microstructure, ability to be processed, and electroacoustic properties. Multilayers using the best materials demonstrate the tradeoffs compared with the softer PZT5-H typically used for imaging arrays. Three-layer PZT4 composites exhibit an effective dielectric constant that is three times that of single layer PZT5H, a 50% higher mechanical Q, a 30% lower acoustic impedance, and only a 10% lower coupling coefficient. Application of low-loss multilayers to linear phased and large curved arrays results in equivalent or better element performance. A 3-layer PZT4 composite array achieved the same transmit intensity at 40% lower transmit voltage and with a 35% lower face temperature increase than the PZT-5 control. Although B-mode images show similar quality, acoustic radiation force impulse (ARFI) images show increased displacement for a given drive voltage. An increased failure rate for the multilayers following extended operation indicates that further development of the bond process will be necessary. In conclusion, bonded multilayer ceramics and composites allow additional design freedom to optimize arrays and improve the overall performance for increased acoustic output while maintaining image quality. PMID:20875996

  6. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    NASA Astrophysics Data System (ADS)

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure.
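    Pan-sharpening in this setting means injecting the high-frequency detail of the SEM image into an upsampled SIMS map. Below is a minimal intensity-ratio sketch of that idea, not the specific algorithm used in the paper; the grid images are synthetic.

```python
import numpy as np

def pansharpen(chem_lo, pan_hi, factor):
    """Inject high-frequency detail from a high-resolution 'pan' image
    (here: SEM) into an upsampled low-resolution chemical map using a
    simple intensity-ratio rule."""
    up = np.kron(chem_lo, np.ones((factor, factor)))    # nearest-neighbour upsample
    h, w = pan_hi.shape
    pan_lo = pan_hi.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    pan_lo = np.kron(pan_lo, np.ones((factor, factor))) # pan at the chemical scale
    return up * pan_hi / (pan_lo + 1e-12)

# If the chemical map is an exact block-average of the pan image,
# fusion recovers the pan image's full detail
yy, xx = np.mgrid[0:8, 0:8]
pan = (xx + yy + 1).astype(float)
chem = pan.reshape(4, 2, 4, 2).mean(axis=(1, 3))
fused = pansharpen(chem, pan, 2)
```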

  7. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    PubMed Central

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-01-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images, but also by detection sensitivity. As the probe size is reduced to below 1 µm, for example, a low signal in each pixel limits lateral resolution due to counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure. PMID:24912432

  8. Reflections on ultrasound image analysis.

    PubMed

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably in twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research due to the real-time acquisition capability of ultrasound and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps more fundamentally changed. From roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and thus were better suited to the earlier eras of medical image analysis which were dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis which may both have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time.

  9. Spreadsheet-Like Image Analysis

    DTIC Science & Technology

    1992-08-01

    Report documentation page residue; recoverable details: Technical Report ARPAD-TR-92002, "Spreadsheet-Like Image Analysis," Paul Willson, August 1992, U.S. Army. Keywords: image analysis; nondestructive inspection; spreadsheet; Macintosh software; neural network; signal processing.

  10. Knowledge-Based Image Analysis.

    DTIC Science & Technology

    1981-04-01

    Report documentation page residue; recoverable details: report ETL-0258, "Knowledge-Based Image Analysis," George C. Stockman, Barbara A. Lambird, David Lavine, Laveen N. Kanal. Keywords: extraction; verification; region classification; pattern recognition; image analysis.

  11. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
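    The motivation for the mixture model, namely that a single Poisson underestimates the tail, is easy to reproduce numerically. The rates and mixture weight below are invented for illustration; they are not fitted to any real sensor.

```python
import numpy as np

def poisson_mixture(rng, lam1, lam2, w, size):
    """Two-component Poisson mixture: each pixel draws its rate lam1
    with probability w and lam2 otherwise (the tail-correcting model)."""
    pick = rng.random(size) < w
    return np.where(pick, rng.poisson(lam1, size), rng.poisson(lam2, size))

rng = np.random.default_rng(0)
n = 200_000
mix = poisson_mixture(rng, 5.0, 45.0, 0.9, n)   # mean = 0.9*5 + 0.1*45 = 9
single = rng.poisson(9.0, n)                    # single Poisson, same mean
thresh = 9 + 3 * np.sqrt(9)                     # 3 "sigmas" of the single model
# The mixture puts far more mass in the upper tail than the single Poisson
tail_mix, tail_single = np.mean(mix > thresh), np.mean(single > thresh)
```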

  12. Evidential Reasoning in Expert Systems for Image Analysis.

    DTIC Science & Technology

    1985-02-01

    Report documentation page residue. Abstract (fragmentary): ... techniques to image analysis (IA). There is growing evidence that these techniques offer significant improvements in image analysis, particularly in the ... (2) to provide a common framework for analysis, (3) to structure the ER process for major expert-system tasks in image analysis, and (4) to identify ... approaches to three important tasks for expert systems in the domain of image analysis. This segment concluded with an assessment of the strengths ...

  13. An improved differential box-counting method of image segmentation

    NASA Astrophysics Data System (ADS)

    Li, Cancan; Cheng, Longfei; He, Tao; Chen, Lang; Yu, Fei; Yang, Liangen

    2016-01-01

    Fractal dimension is an important quantitative characteristic of an image and is widely used in image analysis. The differential box-counting method, one of many ways to calculate a fractal dimension, is frequently used because of its computational simplicity. In the standard differential box-counting method, the window size M is limited to integer powers of 2, which leads to inaccurate estimates of the fractal dimension. To address this issue, this paper discusses an improved algorithm in which the window size M can also take values that are not integer powers of 2, making the error in the calculated fractal dimension smaller. To verify the superiority of the improved algorithm, the fractal dimension values are used as parameters for image segmentation in combination with the Otsu algorithm. Both the traditional and the improved differential box-counting methods are used to estimate fractal dimensions and perform threshold segmentation of a thread image. The experimental results show that segmentation details obtained with the improved method are more distinct than those obtained with the traditional method, with fewer impurities, a clearer target outline, and a better overall segmentation effect.
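    Differential box-counting with arbitrary (not only power-of-2) window sizes can be sketched compactly; the gray-level box height rule and the test image below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def dbc_fractal_dimension(img, sizes):
    """Differential box-counting fractal dimension of a grayscale image;
    the window sizes need not be integer powers of 2."""
    G, M = 256, min(img.shape)
    log_n, log_inv_r = [], []
    for s in sizes:
        h = max(1, (s * G) // M)                # box height in gray levels
        n_r = 0
        for i in range(0, img.shape[0] - s + 1, s):
            for j in range(0, img.shape[1] - s + 1, s):
                block = img[i:i + s, j:j + s]
                # boxes needed to span this block's intensity range
                n_r += int(block.max()) // h - int(block.min()) // h + 1
        log_n.append(np.log(n_r))
        log_inv_r.append(np.log(M / s))
    slope, _ = np.polyfit(log_inv_r, log_n, 1)  # D = slope of log N vs log(1/r)
    return slope

# A smooth intensity ramp behaves like a plane: dimension close to 2
yy, xx = np.mgrid[0:128, 0:128]
plane = ((xx + yy) / 2).astype(np.uint8)
dim = dbc_fractal_dimension(plane, [2, 3, 4, 6, 8, 12, 16])
```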

  14. Improvement in dynamic magnetic resonance imaging thermometry

    NASA Astrophysics Data System (ADS)

    Guo, Jun-Yu

    This dissertation is focused on improving MRI thermometry (MRIT) techniques. The application of the spin-lattice relaxation constant is investigated, in which T1 is used as an indicator to measure the temperature of flowing fluid such as blood. Problems associated with this technique are evaluated, and a new method to improve the consistency and repeatability of T1 measurements is presented. The new method combines curve fitting with a measure of the curve null point to acquire more accurate and consistent T1 values. A novel method called K-space Inherited Parallel Acquisition (KIPA) is developed to achieve faster dynamic temperature measurements. Localized reconstruction coefficients are used to achieve higher reduction factors and lower noise and artifact levels than GeneRalized Autocalibrating Partially Parallel Acquisition (GRAPPA) reconstruction. Artifacts in KIPA images are significantly reduced, and SNR is largely improved in comparison with that in GRAPPA images. The root-mean-square (RMS) error of temperature for GRAPPA is 2 to 5 times larger than that for KIPA. Finally, the accuracy of three parallel imaging methods, SENSE (SENSitivity Encoding), VSENSE (variable-density SENSE), and KIPA, is estimated, and the effects of motion on them are compared. According to this investigation, KIPA is the most accurate and robust of the three methods for studies with or without motion. The ratio of the normalized RMS (NRMS) error for SENSE to that for KIPA is within the range of 1 to 3.7. The ratio of the NRMS error for VSENSE to that for KIPA is about 1 to 2. These factors change with the reduction factor, motion, and subject. In summary, a new strategy and method for the fast noninvasive measurement of T1 of flowing blood are proposed to improve stability and precision. The novel parallel reconstruction algorithm, KIPA, is developed to improve the temporal and spatial resolution of the PRF method. The motion effects on the KIPA method are also

  15. Digital Image Analysis for DETCHIP® Code Determination

    PubMed Central

    Lyon, Marcus; Wilson, Mark V.; Rouhier, Kerry A.; Symonsbergen, David J.; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E.

    2013-01-01

    DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained by measuring red-green-blue (RGB) values using software such as GIMP, Photoshop, and ImageJ. Several different techniques were used to evaluate these color changes. The flatbed scanner was found to produce the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940
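    The core measurement, the mean RGB value inside each sensor spot, can be sketched in a few lines; the spot geometry and colors below are synthetic, and this is not the ImageJ macro the authors used.

```python
import numpy as np

def spot_rgb(image, centers, radius):
    """Mean red-green-blue value inside each circular sensor well;
    these are the values compared before/after analyte exposure."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    means = []
    for cy, cx in centers:
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        means.append(image[mask].mean(axis=0))
    return np.array(means)

# Synthetic 3-channel image with one uniform red spot on a blue background
img = np.zeros((40, 40, 3))
img[..., 2] = 30.0
yy, xx = np.mgrid[0:40, 0:40]
img[(yy - 20) ** 2 + (xx - 20) ** 2 <= 25] = [200.0, 10.0, 10.0]
rgb = spot_rgb(img, [(20, 20)], 5)[0]   # mean RGB of the spot
```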

  16. Improved Scanners for Microscopic Hyperspectral Imaging

    NASA Technical Reports Server (NTRS)

    Mao, Chengye

    2009-01-01

    Improved scanners to be incorporated into hyperspectral microscope-based imaging systems have been invented. Heretofore, in microscopic imaging, including spectral imaging, it has been customary to either move the specimen relative to the optical assembly that includes the microscope or else move the entire assembly relative to the specimen. It becomes extremely difficult to control such scanning when submicron translation increments are required, because the high magnification of the microscope enlarges all movements in the specimen image on the focal plane. To overcome this difficulty, in a system based on this invention, no attempt would be made to move either the specimen or the optical assembly. Instead, an objective lens would be moved within the assembly so as to cause translation of the image at the focal plane: the effect would be equivalent to scanning in the focal plane. The upper part of the figure depicts a generic proposed microscope-based hyperspectral imaging system incorporating the invention. The optical assembly of this system would include an objective lens (normally, a microscope objective lens) and a charge-coupled-device (CCD) camera. The objective lens would be mounted on a servomotor-driven translation stage, which would be capable of moving the lens in precisely controlled increments, relative to the camera, parallel to the focal-plane scan axis. The output of the CCD camera would be digitized and fed to a frame grabber in a computer. The computer would store the frame-grabber output for subsequent viewing and/or processing of images. The computer would contain a position-control interface board, through which it would control the servomotor. There are several versions of the invention. An essential feature common to all versions is that the stationary optical subassembly containing the camera would also contain a spatial window, at the focal plane of the objective lens, that would pass only a selected portion of the image. 
In one version

  17. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  18. Oncological image analysis: medical and molecular image analysis

    NASA Astrophysics Data System (ADS)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  19. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points becomes more accurate than without the optimization, and false edge information is effectively reduced, which facilitates follow-up processing. In calculating the crack geometry characteristics, we use a skeleton-extraction method to measure single-crack length. To calculate crack area, we construct an area template by performing a logical bitwise AND operation on the crack image. Experimental results show that the errors between the crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and it can provide more valid data for proper planning and appropriate performance of bridge maintenance and rehabilitation processes.
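The direction-kernel idea underlying the optimized operator can be illustrated with a minimal NumPy sketch (illustrative only, not the paper's optimized implementation; the kernel-rotation scheme and function names here are assumptions): the basic horizontal Sobel kernel is rotated in 45-degree steps, and the edge map takes the per-pixel maximum absolute response over the eight kernels.

```python
import numpy as np

# Basic horizontal Sobel kernel; the other seven directions are obtained
# by rotating its eight outer coefficients around the center.
K0 = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)

def rotate45(k):
    """Rotate the 8 outer coefficients of a 3x3 kernel one step clockwise."""
    r = k.copy()
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [k[p] for p in ring]
    for p, v in zip(ring, vals[-1:] + vals[:-1]):
        r[p] = v
    return r

def eight_direction_sobel(img):
    """Per-pixel maximum absolute response over the eight direction kernels."""
    kernels = [K0]
    for _ in range(7):
        kernels.append(rotate45(kernels[-1]))
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for k in kernels:
        resp = np.zeros_like(out)
        for i in range(3):          # 3x3 correlation, valid region only
            for j in range(3):
                resp += k[i, j] * img[i:i + h - 2, j:j + w - 2]
        out = np.maximum(out, np.abs(resp))
    return out
```

The skeleton extraction and area-template steps of the paper operate on a thresholded version of such an edge map; they are omitted here.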

  20. Machine learning applications in cell image analysis.

    PubMed

    Kan, Andrey

    2017-04-04

Machine learning (ML) refers to a set of automatic pattern recognition methods that have been successfully applied across various problem domains, including biomedical image analysis. This review focuses on ML applications for image analysis in light microscopy experiments, with typical tasks of segmenting and tracking individual cells and modelling reconstructed lineage trees. After describing a typical image analysis pipeline and highlighting the challenges of automatic analysis (for example, variability in cell morphology, tracking in the presence of clutter), this review gives a brief historical outlook on ML, followed by basic concepts and definitions required for understanding the examples. The article then presents several example applications at various image processing stages, including the use of supervised learning methods for improving cell segmentation and the application of active learning for tracking. The review concludes with remarks on parameter setting and future directions. Immunology and Cell Biology advance online publication, 4 April 2017; doi:10.1038/icb.2017.16.

  1. Hyperspectral image analysis. A tutorial.

    PubMed

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-10-08

This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics such as hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing are presented, and some guidelines are given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper focuses on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using near-infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case.

  2. Improved Imaging Resolution in Desorption Electrospray Ionization Mass Spectrometry

    SciTech Connect

    Kertesz, Vilmos; Van Berkel, Gary J

    2008-01-01

Imaging resolution of desorption electrospray ionization mass spectrometry (DESI-MS) was investigated using printed patterns on paper and thin-layer chromatography (TLC) plate surfaces. Resolution approaching 40 μm was achieved with a typical DESI-MS setup, which is approximately 5 times better than the best resolution reported previously. This improvement was accomplished with careful control of operational parameters (particularly spray tip-to-surface distance, solvent flow rate, and spacing of lane scans). Also, an appropriately strong analyte/surface interaction and a uniform surface texture on a size scale no larger than the desired imaging resolution were required to achieve this resolution. Overall, conditions providing the smallest possible effective desorption/ionization area in the DESI impact plume region and minimizing analyte redistribution on the surface during analysis led to the improved DESI-MS imaging resolution.

  3. Improved wheal detection from skin prick test images

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan

    2014-03-01

The skin prick test is a commonly used method for the diagnosis of allergic diseases (e.g., pollen allergy, food allergy, etc.) in allergy clinics. The results of this test are the erythema and wheal provoked on the skin where the test is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. Finally, we perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments, performed on images acquired from 36 different patients, show the efficiency of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
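The chroma-based contrast enhancement step can be sketched with plain NumPy (a hedged illustration, not the authors' code; the BT.601-style conversion coefficients and function names are assumptions): the RGB image is mapped to its Cb/Cr channels, luminance is discarded, and each pixel's chroma pair is projected onto the first principal component of the chroma distribution.

```python
import numpy as np

def ycbcr_chroma(rgb):
    """Cb/Cr chroma channels of an RGB image (H x W x 3, floats in [0, 1]),
    discarding the luminance (Y) channel; BT.601-style coefficients assumed."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = -0.169 * r - 0.331 * g + 0.500 * b
    cr = 0.500 * r - 0.419 * g - 0.081 * b
    return cb, cr

def chroma_pca_contrast(rgb):
    """Project each pixel's (Cb, Cr) pair onto the first principal component
    of the chroma distribution, yielding one contrast-enhanced plane."""
    cb, cr = ycbcr_chroma(rgb)
    x = np.stack([cb.ravel(), cr.ravel()])           # 2 x N chroma samples
    x = x - x.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]
    vals, vecs = np.linalg.eigh(cov)                 # ascending eigenvalues
    pc1 = vecs[:, -1]                                # dominant chroma direction
    return (pc1 @ x).reshape(cb.shape)
```

Morphological opening/closing of a threshold of this plane would then isolate the wheal region.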

  4. Image quality improvement of polygon computer generated holography.

    PubMed

    Pang, Xiao-Ning; Chen, Ding-Chen; Ding, Yi-Cong; Chen, Yi-Gui; Jiang, Shao-Ji; Dong, Jian-Wen

    2015-07-27

The quality of the holographic reconstruction image is seriously affected by undesirable messy fringes in polygon-based computer generated holography. Here, several methods are proposed to improve the image quality, including a modified encoding method based on spatial-domain Fraunhofer diffraction and a specific LED light source. A fast Fourier transform is applied to the basic element of the polygon, and fringe-invisible reconstruction is achieved after introducing an initial random phase. Furthermore, we find that an image with satisfactory fidelity and sharp edges can be reconstructed by either an LED with a moderate coherence level or a modulator with a small pixel pitch. Satisfactory image quality without obvious speckle noise is observed under the illumination of a bandpass-filter-aided LED. The experimental results are consistent with the correlation analysis of the acceptable viewing angle and the coherence length of the light source.

  5. Improved Reconstruction for MR Spectroscopic Imaging

    PubMed Central

    Maudsley, Andrew A.

    2009-01-01

    Sensitivity limitations of in vivo magnetic resonance spectroscopic imaging (MRSI) require that the extent of spatial-frequency (k-space) sampling be limited, thereby reducing spatial resolution and increasing the effects of Gibbs ringing that is associated with the use of Fourier transform reconstruction. Additional problems occur in the spectral dimension, where quantitation of individual spectral components is made more difficult by the typically low signal-to-noise ratios, variable lineshapes, and baseline distortions, particularly in areas of significant magnetic field inhomogeneity. Given the potential of in vivo MRSI measurements for a number of clinical and biomedical research applications, there is considerable interest in improving the quality of the metabolite image reconstructions. In this report, a reconstruction method is described that makes use of parametric modeling and MRI-derived tissue distribution functions to enhance the MRSI spatial reconstruction. Additional preprocessing steps are also proposed to avoid difficulties associated with image regions containing spectra of inadequate quality, which are commonly present in the in vivo MRSI data. PMID:17518063

  6. Radiologist and automated image analysis

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.

    1999-07-01

Significant advances are being made in the area of automated medical image analysis. Part of the progress is due to general advances in the types of algorithms used to process images and perform various detection and recognition tasks. A more important driver of this growth in medical image analysis, however, may be quite different. The use of computer workstations, digital image acquisition technologies and CRT monitors for the display of medical images for primary diagnostic reading is becoming more prevalent in radiology departments around the world. With the advance in computer-based displays, however, has come the realization that displaying images on a CRT monitor is not the same as displaying film on a viewbox. There are perceptual, cognitive and ergonomic issues that must be considered if radiologists are to accept this change in technology and display. The bottom line is that radiologists' performance must be evaluated with these new technologies and image analysis techniques in order to verify that diagnostic performance is at least as good as with film-based displays. The goal of this paper is to address some of the perceptual, cognitive and ergonomic issues associated with reading radiographic images from digital displays.

  7. Improved PCA + LDA Applies to Gastric Cancer Image Classification Process

    NASA Astrophysics Data System (ADS)

    Gan, Lan; Lv, Wenya; Zhang, Xu; Meng, Xiuming

Principal component analysis (PCA) and linear discriminant analysis (LDA) are two of the most widely used pattern recognition methods in the field of feature extraction, and PCA + LDA is often used in image recognition. Here, we apply PCA + LDA to gastric cancer image feature classification. Although the traditional PCA + LDA dimension reduction method performs well on dimensionality reduction and clustering of the training samples, its performance on dimension reduction and clustering of test samples is very poor; that is, traditional PCA + LDA has a generalization problem on test samples. To solve this problem, this paper proposes an improved PCA + LDA method, which mainly modifies the LDA transform, improving the traditional PCA + LDA, increasing the generalization performance of LDA on test samples, and increasing the classification accuracy on test samples. Experiments show that the method achieves good clustering.
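For reference, the baseline PCA + LDA pipeline that the paper sets out to improve can be sketched in a few lines of NumPy (a two-class Fisher variant is shown for simplicity; this is not the paper's modified LDA transform, and the function names are illustrative):

```python
import numpy as np

def pca(X, n_components):
    """Project the rows of X onto the top principal components (via SVD)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components].T                  # projection matrix
    return Xc @ W, W, mu

def fisher_lda(X, y):
    """Two-class Fisher discriminant direction w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))  # within-class scatter
    for c, m in ((0, m0), (1, m1)):
        d = X[y == c] - m
        Sw += d.T @ d
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)
```

Training projects the samples with `pca`, then separates the classes along the `fisher_lda` direction; the generalization problem the paper targets appears when test samples are projected with the same matrices.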

  8. Using Image Analysis to Build Reading Comprehension

    ERIC Educational Resources Information Center

    Brown, Sarah Drake; Swope, John

    2010-01-01

    Content area reading remains a primary concern of history educators. In order to better prepare students for encounters with text, the authors propose the use of two image analysis strategies tied with a historical theme to heighten student interest in historical content and provide a basis for improved reading comprehension.

  9. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational restraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
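The integral-image structure at the core of FIIAT's runtime algorithms can be sketched in Python (FIIAT itself is written in C; this sketch only illustrates the caching idea): a cumulative table with a zero border lets the sum over any rectangle be computed from four lookups, independent of the rectangle's size.

```python
import numpy as np

def integral_image(img):
    """Cumulative 2-D sum with a zero top row and left column, so that
    rectangle sums need no boundary special-casing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

Texture descriptors over many subwindows reuse the same table, which is where the order-of-magnitude speedup over naive per-window summation comes from.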

  10. Image analysis for DNA sequencing

    NASA Astrophysics Data System (ADS)

    Palaniappan, Kannappan; Huang, Thomas S.

    1991-07-01

There is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA, as in the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automation of the manual autoradiograph reading step through image analysis, in order to reduce the time required to obtain sequence data and to reduce transcription errors. Various adaptive image enhancement, segmentation and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image, such as noise, background signal, or the presence of edges. Once the two-dimensional data are converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band, which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.

  11. Figure content analysis for improved biomedical article retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Apostolova, Emilia; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2009-01-01

Biomedical images are invaluable in medical education and in establishing clinical diagnoses. Clinical decision support (CDS) can be improved by combining biomedical text with automatically annotated images extracted from relevant biomedical publications. In a previous study on the feasibility of automatically classifying images for usefulness in finding clinical evidence, we reported 76.6% accuracy using supervised machine learning on combined figure captions and image content. Image content extraction is traditionally applied to entire images or to pre-determined image regions. Figure images in articles vary greatly, limiting the benefit of whole-image extraction beyond gross categorization for CDS. However, text annotations and pointers on the images indicate regions of interest (ROI) that are then referenced in the caption or in the discussion in the article text. We previously reported 72.02% accuracy in localizing text and symbols, but we failed to take advantage of the referenced image locality. In this work we combine article text analysis and figure image analysis to localize pointers (arrows, symbols) and extract the ROIs they indicate, which can then be used to measure meaningful image content and associate it with the identified biomedical concepts for improved (text and image) content-based retrieval of biomedical articles. Biomedical concepts are identified using the National Library of Medicine's Unified Medical Language System (UMLS) Metathesaurus. Our methods report an average precision and recall of 92.3% and 75.3%, respectively, on identifying pointing symbols in images from a randomly selected image subset made available through the ImageCLEF 2008 campaign.

  12. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  13. Analyzing and Improving Image Quality in Reflective Ghost Imaging

    DTIC Science & Technology

    2011-02-01

imaging." Phys. Rev. A 79, 023833 (2009). [7] R. E. Meyers, K. S. Deacon, and Y. Shih, "Ghost-imaging experiment by measuring reflected photons," Phys. Rev. A 77, 041801 (2008). [8] R. E. Meyers and K. S. Deacon, "Quantum ghost imaging experiments at ARL," Proc. SPIE 7815, 781501 (2010). [9] J. H

  14. Multi-Source Image Analysis.

    DTIC Science & Technology

    1979-12-01

Multi-source image analysis is an evaluation of remote sensor imagery for military geographic information. The imagery is confined to radar, thermal...three sensor systems, but at some test sites only one or two types were utilized. Sensor characteristics were evaluated in relationship to the targets...heating affect a TIR scanner's recorded temperature; careful image evaluation can be used to extract valuable military geographic information

  15. Image Analysis and Modeling

    DTIC Science & Technology

    1975-08-01

bands which maximizes the mutual information between the remaining bands. In order to compute the mutual information between spectral bands, the...University Sponsored by Defense Advanced Research Projects Agency, ARPA Order No. 2893. Approved for public release; distribution unlimited...Improved versions of BLOB, and in the meantime looking into techniques which would give results that are independent of the order of processing. One

  16. Multispectral Imaging Broadens Cellular Analysis

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  17. Multivariate image analysis in biomedicine.

    PubMed

    Nattkemper, Tim W

    2004-10-01

In recent years, multivariate imaging techniques have been developed and applied in biomedical research to an increasing degree. In research projects, as well as in clinical studies, m-dimensional multivariate images (MVI) are recorded and stored in databases for subsequent analysis. The complexity of the m-dimensional data and the growing number of high-throughput applications call for new strategies for applying image processing and data mining to support direct interactive analysis by human experts. This article provides an overview of proposed approaches for MVI analysis in biomedicine. After summarizing the biomedical MVI techniques, the two-level framework for MVI analysis is illustrated. Following this framework, state-of-the-art solutions from the fields of image processing and data mining are reviewed and discussed. Motivations for MVI data mining in biology and medicine are characterized, followed by an overview of graphical and auditory approaches for interactive data exploration. The paper concludes by summarizing open problems in MVI analysis and remarking upon the future development of biomedical MVI analysis.

  18. Dynamic and still microcirculatory image analysis for quantitative microcirculation research

    NASA Astrophysics Data System (ADS)

    Ying, Xiaoyou; Xiu, Rui-juan

    1994-05-01

Based on analyses of various types of digital microcirculatory images (DMCI), we summarize the image features of DMCI, the digitizing requirements for digital microcirculatory imaging, and the basic characteristics of DMCI processing. A dynamic and still imaging separation processing (DSISP) mode was designed for developing a DMCI workstation and for DMCI processing. The original images in this study were clinical microcirculatory images from human finger nail-bed and conjunctiva microvasculature, and intravital microvascular network images from animal tissues or organs. A series of dynamic and still microcirculatory image analysis functions were developed in this study. The experimental results indicate that most of the established analog video image analysis methods for microcirculatory measurement can be realized in a more flexible way based on DMCI. More information can be rapidly extracted from the quality-improved DMCI by employing intelligent digital image analysis methods. The DSISP mode is very suitable for building a DMCI workstation.

  19. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: The analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western-blot. The possibilities and limitations of this kind of analysis are discussed. Program summaryTitle of program: HAWGC Catalogue identifier: ADXG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0 Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computers: Mobile Intel Pentium III, AMD Duron Installations: No installation necessary—Executable file together with necessary files for LabVIEW Run-time engine Operating systems or monitors under which the program has been tested: WindowsME/2000/XP Programming language used: LabVIEW 7.0 Memory required to execute with typical data:˜16MB for starting and ˜160MB used for
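The kind of histogram statistics the program computes can be sketched with NumPy (HAWGC itself is a LabVIEW program; the greyscale coefficients and the threshold-window handling shown here are illustrative assumptions, not the program's exact behavior):

```python
import numpy as np

def to_grey(rgb, coeffs=(0.299, 0.587, 0.114)):
    """Intensity-linear greyscale conversion with selectable coefficients."""
    return np.tensordot(rgb, np.asarray(coeffs), axes=([-1], [0]))

def histogram_stats(grey, bins=256, lo=0.0, hi=1.0):
    """Descriptive statistics of the brightness distribution, restricted to
    the [lo, hi] threshold window (e.g. to drop a constant background)."""
    v = grey[(grey >= lo) & (grey <= hi)].ravel()
    hist, edges = np.histogram(v, bins=bins, range=(lo, hi))
    mu, sd = v.mean(), v.std()
    centred = v - mu
    return {
        "mean": mu, "std": sd, "var": v.var(),
        "min": v.min(), "max": v.max(),
        "mode": edges[np.argmax(hist)],          # left edge of tallest bin
        "skewness": (centred ** 3).mean() / sd ** 3,
        "kurtosis": (centred ** 4).mean() / sd ** 4 - 3.0,
        "median": np.median(v),
    }
```

Restricting the computation to an ROI amounts to slicing `grey` before calling `histogram_stats`.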

  20. Deep Learning in Medical Image Analysis.

    PubMed

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-03-09

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement. Expected final online publication date for the Annual Review of Biomedical Engineering Volume 19 is June 4, 2017. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.

  1. Improving resolution of optical coherence tomography for imaging of microstructures

    NASA Astrophysics Data System (ADS)

    Shen, Kai; Lu, Hui; Wang, James H.; Wang, Michael R.

    2015-03-01

A multi-frame superresolution technique has been used to improve the lateral resolution of spectral domain optical coherence tomography (SD-OCT) for imaging of 3D microstructures. By adjusting the voltages applied to the x and y galvanometer scanners in the measurement arm, small lateral positional shifts were introduced among different C-scans. Utilizing the x-y plane en face image frames extracted from these specially offset C-scan image sets at the same axial position, we reconstructed the high-lateral-resolution image with an efficient multi-frame superresolution technique. To further improve image quality, we applied K-SVD and bilateral total variation denoising algorithms to the raw SD-OCT lateral images before and during the superresolution processing, respectively. The improved lateral resolution of the SD-OCT system is demonstrated by 3D imaging of a microstructure fabricated by photolithography and of a double-layer microfluidic device.
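One common form of multi-frame superresolution, shift-and-add interlacing of frames with known sub-pixel offsets, can be sketched with NumPy (an illustration of the general technique, not the authors' SD-OCT pipeline; integer offsets on the high-resolution grid are assumed):

```python
import numpy as np

def shift_and_add(frames, offsets, s):
    """Place each low-res frame onto an s-times finer grid at its known
    offset (in high-res pixels) and average any overlapping samples."""
    h, w = frames[0].shape
    acc = np.zeros((h * s, w * s))
    cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, offsets):
        acc[dy::s, dx::s] += f       # interlace frame at its offset
        cnt[dy::s, dx::s] += 1
    cnt[cnt == 0] = 1                # leave never-sampled pixels at zero
    return acc / cnt
```

With a full set of s*s distinct offsets the fine grid is completely sampled; denoising (as with K-SVD in the paper) would be applied before or alongside this step.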

  2. Method for improving visualization of infrared images

    NASA Astrophysics Data System (ADS)

    Cimbalista, Mario

    2014-05-01

Thermography has an extremely important difference from other electronic image-converting systems, such as X-ray or ultrasound: the infrared camera operator usually spends hour after hour looking only at infrared images, sometimes several intermittent hours a day, if not six or more continuous hours. This operational characteristic has a very important impact on yield, precision, errors and misinterpretation of infrared image content. Despite great hardware development over the last fifty years, quality infrared thermography still lacks a solution for these problems. The human eye has not evolved to see infrared radiation, nor does the brain have the capability to understand and decode infrared information. Chemical processes inside the human eye and functional cell distributions, as well as the cognitive-perceptual impact of images, play a crucial role in the perception, detection, and other steps of dealing with infrared images. The system presented here, called ThermoScala and patented in the USA, solves this problem using a coding process applicable to an original infrared image, generated from any value matrix by any kind of infrared camera, to make it much more suitable for human use, causing a substantial difference in the way the retina and the brain process the resulting images. The result is a much less exhausting way to see, identify and interpret infrared images generated by any infrared camera that uses this conversion process.

  3. Improvement of passive THz camera images

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

Terahertz technology is one of the emerging technologies that have the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered or, more importantly, unexploited region of the electromagnetic spectrum, owing to the difficulty of generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials, but automated processing of THz images can be challenging. The THz frequency band is especially suited for penetrating clothing, and because this radiation has no harmful ionizing effects it is safe for human beings. Strong technology development in this band has produced a few interesting devices. Although the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a very challenging and urgent topic, and digital THz image processing is a promising and cost-effective way to meet demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion for images captured by a commercially available passive THz camera, by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs - hidden under some popular types of clothing.

  4. Research on super-resolution image reconstruction based on an improved POCS algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

    Super-resolution image reconstruction (SRIR) can improve the resolution of a blurred image, addressing its insufficient spatial resolution, excessive noise, and low quality. First, we introduce the image degradation model to show that super-resolution reconstruction is, in essence, a mathematically ill-posed inverse problem. Second, we analyze the causes of blurring in the optical imaging process - light diffraction and small-angle scattering being the main ones - and propose an image point-spread-function estimation method together with an improved projection onto convex sets (POCS) algorithm; by analyzing the algorithm's behavior in the time and frequency domains during reconstruction, we show that the improved POCS algorithm, based on prior knowledge, restores and approaches the high-frequency content of the original scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images and use them to build three-dimensional slice images. Comparing the original method with the super-resolution algorithm shows that the improved POCS algorithm suppresses noise and enhances image resolution, demonstrating its effectiveness. It has important significance and broad application prospects, for example in CT medical image processing and in SRCT analysis of microstructure evolution during ceramic sintering.
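
    The projection idea behind POCS can be illustrated with a minimal Python sketch that alternates two convex-set projections on a toy problem: consistency with a block-averaged low-resolution observation, and a known amplitude range. This assumes a simple downsampling degradation model and is not the paper's PSF-based algorithm.

```python
import numpy as np

def pocs_sr(low_res, scale=2, iters=10):
    """Toy POCS super-resolution: alternately project the estimate onto
    (1) the set of images whose block means match the low-res observation
    and (2) the set of images with values in [0, 1]."""
    h, w = low_res.shape
    est = np.kron(low_res, np.ones((scale, scale)))   # initial upsampling
    for _ in range(iters):
        # Projection 1: enforce consistency with the observation
        # (block means of the estimate must equal the observed pixels).
        means = est.reshape(h, scale, w, scale).mean(axis=(1, 3))
        est = est + np.kron(low_res - means, np.ones((scale, scale)))
        # Projection 2: amplitude constraint (prior knowledge of range).
        est = np.clip(est, 0.0, 1.0)
    return est
```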

  5. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    SciTech Connect

    STOYANOVA,R.S.; OCHS,M.F.; BROWN,T.R.; ROONEY,W.D.; LI,X.; LEE,J.H.; SPRINGER,C.S.

    1999-05-22

    Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques, in which each pixel is independently fit to an exponential recovery and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They interpret the three images as spatial representations of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) content.
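
    The whole-dataset decomposition described above can be sketched with generic PCA via the SVD: each recovery time point is treated as one observation over all pixels, so spatial correlations are exploited rather than fitting each pixel independently. This is a textbook sketch, not the authors' processing code.

```python
import numpy as np

def pca_image_series(stack, n_components=3):
    """Decompose a (T, H, W) series of images into principal component
    images plus per-time-point scores."""
    t, h, w = stack.shape
    X = stack.reshape(t, h * w)
    X = X - X.mean(axis=0)                 # center each pixel's time course
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:n_components].reshape(n_components, h, w)
    scores = U[:, :n_components] * S[:n_components]
    return components, scores
```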

  6. A Unified Mathematical Approach to Image Analysis.

    DTIC Science & Technology

    1987-08-31

    describes four instances of the paradigm in detail. Directions for ongoing and future research are also indicated. Keywords: Image processing; Algorithms; Segmentation; Boundary detection; Tomography; Global image analysis.

  7. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    PubMed Central

    Zhu, Ganzheng; Li, Siqi; Gong, Shang; Yang, Benqiang; Zhang, Libo

    2016-01-01

    Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and the guided image filter (GIF). First, preprocessing consisting of stain normalization and wavelet denoising is performed on the Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the illumination of the high-frequency part of the image and to correct its intensity inhomogeneity and detail discontinuity. Next, the HDR pathological image is generated by a least-squares method from the low dynamic range (LDR) image and the H and E channel images. Finally, the fine enhanced image is obtained after a detail enhancement step. Experiments with 140 pathological images demonstrate the performance advantages of the proposed method compared with related work. PMID:28116303

  8. Institutional Image: How to Define, Improve, Market It.

    ERIC Educational Resources Information Center

    Topor, Robert S.

    Advice for colleges on how to identify, develop, and communicate a positive image for the institution is offered in this handbook. The use of market research techniques to measure image is discussed along with advice on how to improve an image so that it contributes to a unified marketing plan. The first objective is to create and communicate some…

  9. Improving image segmentation by learning region affinities

    SciTech Connect

    Prasad, Lakshman; Yang, Xingwei; Latecki, Longin J

    2010-11-03

    We utilize the context information of other regions in hierarchical image segmentation to learn new region affinities. It is well known that a single choice of quantization of an image space is highly unlikely to be a common optimal quantization level for all categories; each level of quantization has its own benefits. Therefore, we utilize the hierarchical information among different quantizations as well as the spatial proximity of their regions. The proposed affinity learning takes into account higher-order relations among image regions, both local and long-range, making it robust to instabilities and errors in the original pairwise region affinities. Once the learnt affinities are obtained, we use a standard image segmentation algorithm to obtain the final segmentation. Moreover, the learnt affinities can be naturally utilized in interactive segmentation. Experimental results on the Berkeley Segmentation Dataset and the MSRC Object Recognition Dataset are comparable with, and in some aspects better than, state-of-the-art methods.

  10. An improved adaptive IHS method for image fusion

    NASA Astrophysics Data System (ADS)

    Wang, Ting

    2015-12-01

    An improved adaptive intensity-hue-saturation (IHS) method for image fusion is proposed in this paper, building on the adaptive IHS (AIHS) method and its improved variant (IAIHS). In the improved method, the weighting matrix, which decides how much spatial detail from the panchromatic (Pan) image should be injected into the multispectral (MS) image, is defined on the basis of the linear relationship between the edges of the Pan and MS images. At the same time, a modulation parameter t is used to balance the spatial and spectral resolution of the fused image. Experiments show that the improved method improves spectral quality while maintaining spatial resolution compared with the AIHS and IAIHS methods.
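
    The detail-injection scheme with a modulation parameter t can be sketched as follows. The edge-derived weighting matrix that distinguishes the AIHS/IAIHS methods is replaced here by a uniform weight for brevity, so this is only an illustrative skeleton of generalized IHS fusion.

```python
import numpy as np

def ihs_fuse(ms, pan, t=1.0):
    """Generalized IHS-style fusion sketch: inject the detail (Pan minus
    intensity) into each band of a (bands, H, W) multispectral image,
    scaled by t, which trades spatial detail against spectral fidelity."""
    intensity = ms.mean(axis=0)            # simple intensity component
    detail = pan - intensity               # spatial detail to inject
    return ms + t * detail[None, :, :]
```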

  11. Improving night sky star image processing algorithm for star sensors.

    PubMed

    Arbabmir, Mohammad Vali; Mohammadi, Seyyed Mohammad; Salahshour, Sadegh; Somayehee, Farshad

    2014-04-01

    In this paper, the night sky star image processing algorithm, consisting of image preprocessing, star pattern recognition, and centroiding steps, is improved. It is shown that the proposed noise reduction approach can preserve more of the necessary information than other frequently used approaches. It is also shown that the proposed thresholding method, unlike commonly used techniques, can properly perform image binarization, especially in images with uneven illumination. Moreover, the higher performance rate and the lower average centroiding estimation error of about 0.045 for 400 simulated images, compared with other algorithms, show the high capability of the proposed night sky star image processing algorithm.
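
    The centroiding step of such a pipeline is conventionally an intensity-weighted center of mass over thresholded pixels, as in this sketch; the paper's improved thresholding and noise-reduction steps are not reproduced here.

```python
import numpy as np

def centroid(image, threshold):
    """Background-subtracted center-of-mass centroid over pixels above
    a threshold, the standard centroiding step of a star tracker."""
    mask = image > threshold
    weights = np.where(mask, image - threshold, 0.0)
    total = weights.sum()
    ys, xs = np.indices(image.shape)
    return (ys * weights).sum() / total, (xs * weights).sum() / total
```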

  12. Reconstruction algorithm for improved ultrasound image quality.

    PubMed

    Madore, Bruno; Meral, F Can

    2012-02-01

    A new algorithm is proposed for reconstructing raw RF data into ultrasound images. Previous delay-and-sum beamforming reconstruction algorithms are essentially one-dimensional, because a sum is performed across all receiving elements. In contrast, the present approach is two-dimensional, potentially allowing any time point from any receiving element to contribute to any pixel location. Computer-intensive matrix inversions are performed once, in advance, to create a reconstruction matrix that can be reused indefinitely for a given probe and imaging geometry. Individual images are generated through a single matrix multiplication with the raw RF data, without any need for separate envelope detection or gridding steps. Raw RF data sets were acquired using a commercially available digital ultrasound engine for three imaging geometries: a 64-element array with a rectangular field-of-view (FOV), the same probe with a sector-shaped FOV, and a 128-element array with rectangular FOV. The acquired data were reconstructed using our proposed method and a delay-and-sum beamforming algorithm for comparison purposes. Point spread function (PSF) measurements from metal wires in a water bath showed that the proposed method was able to reduce the size of the PSF and its spatial integral by about 20 to 38%. Images from a commercially available quality-assurance phantom had greater spatial resolution and contrast when reconstructed with the proposed approach.
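
    The reconstruct-by-matrix-multiplication idea can be sketched with a toy forward model: a regularized pseudo-inverse is computed once, and each subsequent frame is reconstructed with a single multiply. The random matrix below is a stand-in for a real probe geometry, which would encode element positions and propagation delays.

```python
import numpy as np

def build_recon_matrix(forward, lam=1e-6):
    """Precompute a Tikhonov-regularized pseudo-inverse of the forward
    model once; reuse it for every frame: (A^T A + lam I)^-1 A^T."""
    ata = forward.T @ forward
    return np.linalg.solve(ata + lam * np.eye(ata.shape[0]), forward.T)

# Toy forward model: 6 RF samples generated from 4 pixel reflectivities.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))
R = build_recon_matrix(A)          # done once per probe/geometry
pixels = np.array([1.0, 0.0, 2.0, -1.0])
rf = A @ pixels                    # simulated raw RF data
recon = R @ rf                     # per-frame reconstruction
```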

  13. Image analysis in medical imaging: recent advances in selected examples.

    PubMed

    Dougherty, G

    2010-01-01

    Medical imaging has developed into one of the most important fields within scientific imaging due to the rapid and continuing progress in computerised medical image visualisation and advances in analysis methods and computer-aided diagnosis. Several research applications are selected to illustrate the advances in image analysis algorithms and visualisation. Recent results, including previously unpublished data, are presented to illustrate the challenges and ongoing developments.

  14. Improved microgrid arrangement for integrated imaging polarimeters.

    PubMed

    LeMaster, Daniel A; Hirakawa, Keigo

    2014-04-01

    For almost 20 years, microgrid polarimetric imaging systems have been built using a 2×2 repeating pattern of polarization analyzers. In this Letter, we show that superior spatial resolution is achieved over this 2×2 case when the analyzers are arranged in a 2×4 repeating pattern. This unconventional result, in which a more distributed sampling pattern results in finer spatial resolution, is also achieved without affecting the conditioning of the polarimetric data-reduction matrix. Proof is provided theoretically and through Stokes image reconstruction of synthesized data.
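
    The polarimetric data-reduction matrix mentioned above maps per-analyzer intensities to Stokes components; a minimal sketch, assuming ideal linear analyzers measuring I = (S0 + S1 cos 2θ + S2 sin 2θ)/2 and ignoring the spatial-sampling question that is the Letter's actual contribution:

```python
import numpy as np

def stokes_reduction_matrix(angles_deg):
    """Pseudo-inverse of the measurement matrix for ideal linear
    analyzers at the given angles; applying it to the vector of
    measured intensities recovers (S0, S1, S2)."""
    th = np.deg2rad(angles_deg)
    A = 0.5 * np.stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)],
                       axis=1)
    return np.linalg.pinv(A)
```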

  15. A performance analysis system for MEMS using automated imaging methods

    SciTech Connect

    LaVigne, G.F.; Miller, S.L.

    1998-08-01

    The ability to make in-situ performance measurements of MEMS operating at high speeds has been demonstrated using a new image analysis system. Significant improvements in performance and reliability have directly resulted from the use of this system.

  16. Imaging analysis of LDEF craters

    NASA Technical Reports Server (NTRS)

    Radicatidibrozolo, F.; Harris, D. W.; Chakel, J. A.; Fleming, R. H.; Bunch, T. E.

    1991-01-01

    Two small craters in Al from the Long Duration Exposure Facility (LDEF) experiment tray A11E00F (no. 74, 119 micron diameter and no. 31, 158 micron diameter) were analyzed using Auger electron spectroscopy (AES), time-of-flight secondary ion mass spectroscopy (TOF-SIMS), low voltage scanning electron microscopy (LVSEM), and SEM energy dispersive spectroscopy (EDS). High resolution images and sensitive elemental and molecular analysis were obtained with this combined approach. The results of these analyses are presented.

  17. Improved deep two-photon calcium imaging in vivo.

    PubMed

    Birkner, Antje; Tischbirek, Carsten H; Konnerth, Arthur

    2016-12-21

    Two-photon laser scanning calcium imaging has emerged as a useful method for the exploration of neural function and structure at the cellular and subcellular level in vivo. The applications range from imaging of subcellular compartments such as dendrites, spines and axonal boutons up to the functional analysis of large neuronal or glial populations. However, the depth penetration is often limited to a few hundred micrometers, corresponding, for example, to the upper cortical layers of the mouse brain. Light scattering and aberrations originating from refractive index inhomogeneities of the tissue are the reasons for these limitations. The depth penetration of two-photon imaging can be enhanced through various approaches, such as the implementation of adaptive optics, the use of three-photon excitation and/or labeling cells with red-shifted genetically encoded fluorescent sensors. However, most of the approaches used so far require the implementation of new instrumentation and/or time-consuming staining protocols. Here we present a simple approach that can be readily implemented in combination with standard two-photon microscopes. The method involves an optimized protocol for depth-restricted labeling with the red-shifted fluorescent calcium indicator Cal-590 and benefits from the use of ultra-short laser pulses. The approach allows in vivo functional imaging of neuronal populations with single cell resolution in all six layers of the mouse cortex. We demonstrate that stable recordings in deep cortical layers are not restricted to anesthetized animals but are well feasible in awake, behaving mice. We anticipate that the improved depth penetration will be beneficial for two-photon functional imaging in larger species, such as non-human primates.

  18. Planning applications in image analysis

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  19. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework.

    PubMed

    Guo, Qing; Dong, Fangmin; Sun, Shuifa; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    A contourlet domain image denoising framework based on a novel Improved Rotating Kernel Transformation is proposed, in which the difference between subbands in the contourlet domain is taken into account. In detail: (1) a novel Improved Rotating Kernel Transformation (IRKT) is proposed to calculate the direction statistics of the image; the validity of the IRKT is verified by comparing the extracted edge information with a state-of-the-art edge detection algorithm. (2) The direction statistics represent the difference between subbands and are introduced into threshold-function-based contourlet domain denoising approaches in the form of weights, yielding the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. Denoising results on conventional test images and on Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images.

  20. Improved image quality for x-ray CT imaging of gel dosimeters

    SciTech Connect

    Kakakhel, M. B.; Kairn, T.; Kenny, J.; Trapp, J. V.

    2011-09-15

    Purpose: This study provides a simple method for improving the precision of x-ray computed tomography (CT) scans of irradiated polymer gel dosimeters. The noise affecting CT scans of irradiated gels has been an impediment to the use of clinical CT scanners for gel dosimetry studies. Methods: In this study, it is shown that multiple scans of a single PAGAT gel dosimeter can be used to extrapolate a "zero-scan" image which displays a similar level of precision to an image obtained by averaging multiple CT images, without the compromised dose measurement resulting from the exposure of the gel to radiation from the CT scanner. Results: When extrapolating the zero-scan image, it is shown that exponential and simple linear fits to the relationship between Hounsfield unit and scan number, for each pixel in the image, provide an accurate indication of gel density. Conclusions: It is expected that this work will be utilized in the analysis of three-dimensional gel volumes irradiated using complex radiotherapy treatments.
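
    The per-pixel zero-scan extrapolation can be sketched as a vectorized linear fit of Hounsfield value against scan number, evaluated at scan zero. Only the linear case mentioned in the abstract is shown, on synthetic data; the exponential fit is analogous.

```python
import numpy as np

def zero_scan(scans):
    """Fit value = intercept + slope * scan_number for every pixel of a
    (n_scans, H, W) stack at once and return the intercept image, i.e.
    the extrapolated image at scan zero."""
    n = scans.shape[0]
    x = np.arange(1, n + 1, dtype=float)
    xm, ym = x.mean(), scans.mean(axis=0)
    slope = (((x[:, None, None] - xm) * (scans - ym)).sum(axis=0)
             / ((x - xm) ** 2).sum())
    return ym - slope * xm             # intercept = value at scan 0
```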

  1. IMPROVEMENTS IN CODED APERTURE THERMAL NEUTRON IMAGING.

    SciTech Connect

    VANIER,P.E.

    2003-08-03

    A new thermal neutron imaging system has been constructed, based on a 20-cm x 17-cm He-3 position-sensitive detector with spatial resolution better than 1 mm. New compact custom-designed position-decoding electronics are employed, as well as high-precision cadmium masks with Modified Uniformly Redundant Array patterns. Fast Fourier Transform algorithms are incorporated into the deconvolution software to provide rapid conversion of shadowgrams into real images. The system demonstrates the principles for locating sources of thermal neutrons by a stand-off technique, as well as visualizing the shapes of nearby sources. The data acquisition time could potentially be reduced two orders of magnitude by building larger detectors.
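
    The FFT-accelerated decoding step can be sketched as a circular cross-correlation computed in the Fourier domain. In this toy example the mask itself serves as the decoding array on a small test pattern; a real MURA system uses the matched decoding array whose correlation with the mask approximates a delta function.

```python
import numpy as np

def decode_shadowgram(shadow, decoder):
    """Circular cross-correlation of the shadowgram with a decoding
    array via 2-D FFTs: corr(l) = sum_y shadow(y + l) * decoder(y)."""
    S = np.fft.fft2(shadow)
    D = np.fft.fft2(decoder)
    return np.real(np.fft.ifft2(S * np.conj(D)))
```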

  2. Digital subtraction angiography: principles and pitfalls of image improvement techniques.

    PubMed

    Levin, D C; Schapiro, R M; Boxt, L M; Dunham, L; Harrington, D P; Ergun, D L

    1984-09-01

    The technology of imaging methods in digital subtraction angiography (DSA) is discussed in detail. Areas covered include function of the video camera in both interlaced and sequential scan modes, digitization by the analog-to-digital converter, logarithmic signal processing, dose rates, and acquisition of images using frame integration and pulsed-sequential techniques. Also discussed are various methods of improving image content and quality by both hardware and software modifications. These include the development of larger image intensifiers, larger matrices, video camera improvements, reregistration, hybrid subtraction, matched filtering, recursive filtering, DSA tomography, and edge enhancement.

  3. Improved Calibration Shows Images True Colors

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.

  4. Improved Image-Guided Laparoscopic Prostatectomy

    DTIC Science & Technology

    2013-07-01

    A.2 Materials and Methods Figure A.1: Ultrasound elastography data collection process using the sextant approach; RF data was acquired in axial...were performed in a systematic sextant approach, similar to that used for image-guided biopsies. RF data was acquired in axial planes (from gland's...base, through mid gland, to apex) on the left and right side of the gland (Figure A.1). The sextant approach was necessary to ensure that the scans

  5. Imaging system design for improved information capacity

    NASA Technical Reports Server (NTRS)

    Fales, C. L.; Huck, F. O.; Samms, R. W.

    1984-01-01

    Shannon's theory of information for communication channels is used to assess the performance of line-scan and sensor-array imaging systems and to optimize the design trade-offs involving sensitivity, spatial response, and sampling intervals. Formulations and computational evaluations account for spatial responses typical of line-scan and sensor-array mechanisms, lens diffraction and transmittance shading, defocus blur, and square and hexagonal sampling lattices.

  6. Improved Image-Guided Laparoscopic Prostatectomy

    DTIC Science & Technology

    2011-08-01

    pancreas, lymph nodes, and thyroid.[1,2,3,4,5] Ultrasound elastography is thus a very promising image guidance method for robot-assisted procedures...Li, Z., Zhang, X., Chen, M., and Luo, Z., "Real-time ultrasound elastography in the differential diagnosis of benign and malignant thyroid nodules...

  7. Quantitative image analysis of celiac disease.

    PubMed

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-03-07

    We outline the use of quantitative techniques that are currently used for analysis of celiac disease. Image processing techniques can be useful to statistically analyze the pixel data of endoscopic images that are acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions for focus in the development of methodology for diagnosis and treatment of this disease are suggested. It is evident that there are yet broad areas where there is potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients.

  8. A linear mixture analysis-based compression for hyperspectral image analysis

    SciTech Connect

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. On some occasions, it even improves analysis performance. Airborne visible infrared imaging spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.

  9. RAMAN spectroscopy imaging improves the diagnosis of papillary thyroid carcinoma

    NASA Astrophysics Data System (ADS)

    Rau, Julietta V.; Graziani, Valerio; Fosca, Marco; Taffon, Chiara; Rocchia, Massimiliano; Crucitti, Pierfilippo; Pozzilli, Paolo; Onetti Muda, Andrea; Caricato, Marco; Crescenzi, Anna

    2016-10-01

    Recent investigations strongly suggest that Raman spectroscopy (RS) can be used as a clinical tool in cancer diagnosis to improve diagnostic accuracy. In this study, we evaluated the efficiency of Raman imaging microscopy to discriminate between healthy and neoplastic thyroid tissue, by analyzing main variants of Papillary Thyroid Carcinoma (PTC), the most common type of thyroid cancer. We performed Raman imaging of large tissue areas (from 100 × 100 μm2 up to 1 × 1 mm2), collecting 38 maps containing about 9000 Raman spectra. Multivariate statistical methods, including Linear Discriminant Analysis (LDA), were applied to translate Raman spectra differences between healthy and PTC tissues into diagnostically useful information for a reliable tissue classification. Our study is the first demonstration of specific biochemical features of the PTC profile, characterized by significant presence of carotenoids with respect to the healthy tissue. Moreover, this is the first evidence of Raman spectra differentiation between classical and follicular variant of PTC, discriminated by LDA with high efficiency. The combined histological and Raman microscopy analyses allow clear-cut integration of morphological and biochemical observations, with dramatic improvement of efficiency and reliability in the differential diagnosis of neoplastic thyroid nodules, paving the way to integrative findings for tumorigenesis and novel therapeutic strategies.
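
    The LDA step used to separate healthy from PTC spectra can be sketched in its textbook two-class form (Fisher's discriminant). This is a generic sketch on synthetic "spectra", not the study's multivariate pipeline.

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher LDA: the projection direction maximizing
    between-class over within-class scatter, w ~ Sw^-1 (m1 - m0).
    A small ridge keeps the within-class scatter matrix invertible."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    return w / np.linalg.norm(w)

def classify(x, w, threshold):
    """Assign class 1 if the projection exceeds the threshold."""
    return int(x @ w > threshold)
```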

  10. RAMAN spectroscopy imaging improves the diagnosis of papillary thyroid carcinoma

    PubMed Central

    Rau, Julietta V.; Graziani, Valerio; Fosca, Marco; Taffon, Chiara; Rocchia, Massimiliano; Crucitti, Pierfilippo; Pozzilli, Paolo; Onetti Muda, Andrea; Caricato, Marco; Crescenzi, Anna

    2016-01-01

    Recent investigations strongly suggest that Raman spectroscopy (RS) can be used as a clinical tool in cancer diagnosis to improve diagnostic accuracy. In this study, we evaluated the efficiency of Raman imaging microscopy to discriminate between healthy and neoplastic thyroid tissue, by analyzing main variants of Papillary Thyroid Carcinoma (PTC), the most common type of thyroid cancer. We performed Raman imaging of large tissue areas (from 100 × 100 μm2 up to 1 × 1 mm2), collecting 38 maps containing about 9000 Raman spectra. Multivariate statistical methods, including Linear Discriminant Analysis (LDA), were applied to translate Raman spectra differences between healthy and PTC tissues into diagnostically useful information for a reliable tissue classification. Our study is the first demonstration of specific biochemical features of the PTC profile, characterized by significant presence of carotenoids with respect to the healthy tissue. Moreover, this is the first evidence of Raman spectra differentiation between classical and follicular variant of PTC, discriminated by LDA with high efficiency. The combined histological and Raman microscopy analyses allow clear-cut integration of morphological and biochemical observations, with dramatic improvement of efficiency and reliability in the differential diagnosis of neoplastic thyroid nodules, paving the way to integrative findings for tumorigenesis and novel therapeutic strategies. PMID:27725756

  11. Automated image analysis of uterine cervical images

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated on data from 40 human subjects and demonstrates high correlation with experts' annotations.

  12. Contrast and harmonic imaging improves accuracy and efficiency of novice readers for dobutamine stress echocardiography

    NASA Technical Reports Server (NTRS)

    Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; Klein, Allan L.; Thomas, James D.

    2002-01-01

    BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation, and therefore, improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance accuracy and efficiency of the inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHz) and harmonic (1.7 and 3.5 MHz) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously in bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader to the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader.

  13. An improvement to the diffraction-enhanced imaging method that permits imaging of dynamic systems

    NASA Astrophysics Data System (ADS)

    Siu, K. K. W.; Kitchen, M. J.; Pavlov, K. M.; Gillam, J. E.; Lewis, R. A.; Uesugi, K.; Yagi, N.

    2005-08-01

    We present an improvement to the diffraction-enhanced imaging (DEI) method that permits imaging of moving samples or other dynamic systems in real time. The method relies on the use of a thin Bragg analyzer crystal and simultaneous acquisition of the multiple images necessary for the DEI reconstruction of the apparent absorption and refraction images. These images are conventionally acquired at multiple points on the reflectivity curve of an analyzer crystal which presents technical challenges and precludes imaging of moving subjects. We have demonstrated the potential of the technique by taking DEI "movies" of an artificially moving mouse leg joint, acquired at the Biomedical Imaging Centre at SPring-8, Japan.

  14. Improvement of fringe quality at LDA measuring volume using compact two hololens imaging system

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhijit; Nirala, A. K.

    2015-03-01

    Design, analysis and construction of an LDA optical setup using a conventional as well as a compact two hololens imaging system have been performed. Fringes formed at the measurement volume by both imaging systems have been recorded. Experimental analysis of these fringes shows that the fringes obtained with the compact two hololens imaging system are improved both qualitatively and quantitatively compared with those obtained using the conventional imaging system. It is therefore concluded that the compact two hololens imaging system is the better choice for an LDA optical setup.

  15. Vibration signature analysis of AFM images

    SciTech Connect

    Joshi, G.A.; Fu, J.; Pandit, S.M.

    1995-12-31

    Vibration signature analysis has been commonly used for machine condition monitoring and error control. However, it has rarely been employed for the analysis of precision instruments such as an atomic force microscope (AFM). In this work, an AFM was used to collect vibration data from a sample positioning stage under different suspension and support conditions. Certain structural characteristics of the sample positioning stage show up in the vibration signature analysis of the surface height images measured using an AFM. It is important to understand these vibration characteristics in order to reduce vibrational uncertainty, improve the damping and structural design, and eliminate imaging imperfections. The choice of method applied for vibration analysis may affect the results. Two methods, data dependent systems (DDS) analysis and Welch's periodogram averaging method, were investigated for application to this problem. Both techniques provide smooth spectrum plots from the data. Welch's periodogram provides a coarse resolution, limited by the number of samples, and requires a choice of window to be decided subjectively by the user. The DDS analysis provides sharper spectral peaks at a much higher resolution and a much lower noise floor. A decomposition of the signal variance in terms of the frequencies is provided as well. The technique is based on an objective model adequacy criterion.

  16. Using photoconductivity to improve image-tube gating speeds

    NASA Astrophysics Data System (ADS)

    Gobby, P. L.; Yates, G. J.; Jaramillo, S. A.; Noel, B. W.; Aeby, I.

    1983-08-01

    A technique that uses the photoconductivity of semiconductor and insulator photocathodes to improve image-tube gating speeds is presented. A simple model applicable to the technique and a preliminary experiment are described.

  17. Automated Microarray Image Analysis Toolbox for MATLAB

    SciTech Connect

    White, Amanda M.; Daly, Don S.; Willse, Alan R.; Protic, Miroslava; Chandler, Darrell P.

    2005-09-01

    The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize the analysis of sets of microarray images. This tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to provide intensity estimates and to modify them easily if desired.

  18. Multispectral Image Analysis of Hurricane Gilbert

    DTIC Science & Technology

    1989-05-19

    Multispectral Image Analysis of Hurricane Gilbert (unclassified). Personal author: Thomas J. Kleespies (GL/LYS). Only fragments of this record survive: they mention cloud-top height, the components of the image in the red, green, and blue channels, and a remark that there seem to be few references to the human range of vision in multispectral image analysis of scenes.

  19. Frequency Diversity for Improving Synthetic Aperture Radar Imaging

    DTIC Science & Technology

    2009-03-01

    Synthetic aperture radar (SAR) is an active radio frequency (RF) imaging modality. Discussion of the PFA begins with the introduction of the linear frequency modulated (LFM) waveform, the most common waveform used in general radar applications [25].

  20. An improved Chebyshev distance metric for clustering medical images

    NASA Astrophysics Data System (ADS)

    Mousa, Aseel; Yusof, Yuhanis

    2015-12-01

    A metric or distance function defines a distance between elements of a set. In clustering, measuring the similarity between objects has become an important issue. In practice, various similarity measures are used, including the Euclidean, Manhattan and Minkowski distances. In this paper, an improved Chebyshev similarity measure is introduced to replace existing metrics (such as the Euclidean and standard Chebyshev) in clustering analysis. The proposed measure is then applied to the analysis of blood cancer images. Results demonstrate that the proposed measure produces the smallest objective function value and converges in the fewest iterations. Hence, it can be concluded that the proposed distance metric contributes to producing better clusters.
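
    The abstract does not give the modified formula, but the baseline metrics it compares against are easy to state. A minimal sketch (function names illustrative, NumPy assumed) of the standard Chebyshev and Euclidean distances between pixel feature vectors:

```python
import numpy as np

def chebyshev(a, b):
    """Standard Chebyshev (L-infinity) distance: the largest
    coordinate-wise difference between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.max(np.abs(a - b)))

def euclidean(a, b):
    """Euclidean (L2) distance, the most common clustering metric."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

# Two pixel feature vectors, e.g. intensity triples from a blood-cell image
p, q = [10, 20, 30], [13, 24, 30]
d_cheb, d_euc = chebyshev(p, q), euclidean(p, q)  # 4.0 and 5.0
```

    Swapping one such distance function for another inside a k-means or fuzzy clustering loop is all that "replacing the metric" amounts to; the paper's improvement modifies the Chebyshev form itself.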

  1. Improved Dot Diffusion For Image Halftoning

    DTIC Science & Technology

    1999-01-01

    The dot diffusion method for digital halftoning has the advantage of parallelism, unlike the error diffusion method. The method was recently improved by optimization of the so-called class matrix, so that the resulting halftones are comparable to error-diffused halftones. In this paper we first review the dot diffusion method. Previously, 8 x 8 class matrices were used for the dot diffusion method. A problem with this size of class matrix is

  2. Photoacoustic imaging with rotational compounding for improved signal detection

    NASA Astrophysics Data System (ADS)

    Forbrich, A.; Heinmiller, A.; Jose, J.; Needles, A.; Hirson, D.

    2015-03-01

    Photoacoustic microscopy with linear array transducers enables fast two-dimensional, cross-sectional photoacoustic imaging. Unfortunately, most ultrasound transducers are only sensitive to a very narrow angular acceptance range and preferentially detect signals along the main axis of the transducer. This often prevents photoacoustic microscopy from detecting blood vessels, which can extend in any direction. Rotational compounded photoacoustic imaging is introduced to overcome the angular dependency of detecting acoustic signals with linear array transducers. An integrated system is designed to control the image acquisition using a linear array transducer, a motorized rotational stage, and a motorized lateral stage. Images acquired at multiple angular positions are combined to form a rotational compounded image. We found that the signal-to-noise ratio improved, while sidelobe and reverberation artifacts were substantially reduced. Furthermore, the rotational compounded images of excised kidneys and hindlimb tumors of mice showed more structural information than any single image collected.

  3. An improved FCM medical image segmentation algorithm based on MMTD.

    PubMed

    Zhou, Ningning; Yang, Tingting; Zhang, Shaobai

    2014-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation. But FCM is highly vulnerable to noise because it does not consider spatial information in image segmentation. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes spatial features into account, is proposed in this paper. The experimental results show that the proposed algorithm is more noise-resistant than the standard FCM, with more certainty and less fuzziness. This makes it practicable and effective for medical image segmentation.
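
    For context, the standard FCM membership update that the paper modifies can be sketched as follows. This is the textbook form with a plain Euclidean distance; the paper replaces that distance with its MMTD-based medium similarity measure, which is not reproduced here.

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Standard FCM membership update: the membership of sample i in
    cluster k is inversely related to its distance from center k,
    with fuzzifier m controlling the softness of the assignment."""
    # d[i, k] = distance from sample i to center k
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)                 # guard against division by zero
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

X = np.array([[0.0], [0.1], [0.9], [1.0]])   # 1-D pixel intensities
centers = np.array([[0.05], [0.95]])
U = fcm_memberships(X, centers)              # each row sums to 1
```

    The noise sensitivity the abstract mentions comes from this update depending on each pixel in isolation; spatially aware variants mix in neighborhood information before or after computing d.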

  4. High resolution PET breast imager with improved detection efficiency

    DOEpatents

    Majewski, Stanislaw

    2010-06-08

    A highly efficient PET breast imager for detecting lesions in the entire breast including those located close to the patient's chest wall. The breast imager includes a ring of imaging modules surrounding the imaged breast. Each imaging module includes a slant imaging light guide inserted between a gamma radiation sensor and a photodetector. The slant light guide permits the gamma radiation sensors to be placed in close proximity to the skin of the chest wall thereby extending the sensitive region of the imager to the base of the breast. Several types of photodetectors are proposed for use in the detector modules, with compact silicon photomultipliers as the preferred choice due to their compact size. The geometry of the detector heads and the arrangement of the detector ring significantly reduce dead regions thereby improving detection efficiency for lesions located close to the chest wall.

  5. Improved color interpolation method based on Bayer image

    NASA Astrophysics Data System (ADS)

    Wang, Jin

    2012-10-01

    Image sensors are important components of lunar exploration devices. Considering volume and cost, image sensors at present generally adopt a single CCD or CMOS whose surface is covered with a layer of color filter array (CFA), usually a Bayer CFA. In the Bayer CFA, each pixel captures only one of the three primary colors, so color interpolation is necessary to obtain a full-color image. An improved Bayer image interpolation method is presented, which is novel, practical, and easy to realize. Experimental results demonstrating the effect of the interpolation are shown. Compared with classic methods, this method finds image edges more accurately, reduces the sawtooth phenomenon in edge areas, and keeps the image smooth elsewhere. This method has been applied successfully in an exploration imaging system.
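
    The abstract does not specify the improved interpolation, but the classic baseline it is compared against can be sketched. A minimal (and deliberately naive) bilinear recovery of the green plane from an RGGB mosaic, with an illustrative function name:

```python
import numpy as np

def bilinear_green(bayer):
    """Classic bilinear recovery of the green plane from an RGGB
    mosaic: at red/blue sites ((row+col) even), green is the mean of
    the four green neighbors. Border pixels are left untouched.
    A baseline only; the paper's edge-aware method is not specified."""
    h, w = bayer.shape
    g = bayer.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y + x) % 2 == 0:          # red or blue site in RGGB
                g[y, x] = (bayer[y - 1, x] + bayer[y + 1, x] +
                           bayer[y, x - 1] + bayer[y, x + 1]) / 4.0
    return g
```

    The sawtooth artifacts the abstract mentions arise precisely from this isotropic averaging across edges; edge-directed methods instead weight the horizontal and vertical neighbor pairs by local gradients.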

  6. Restriction spectrum imaging improves MRI-based prostate cancer detection

    PubMed Central

    McCammack, Kevin C.; Schenker-Ahmed, Natalie M.; White, Nathan S.; Best, Shaun R.; Marks, Robert M.; Heimbigner, Jared; Kane, Christopher J.; Parsons, J. Kellogg; Kuperman, Joshua M.; Bartsch, Hauke; Desikan, Rahul S.; Rakow-Penner, Rebecca A.; Liss, Michael A.; Margolis, Daniel J. A.; Raman, Steven S.; Shabaik, Ahmed; Dale, Anders M.; Karow, David S.

    2017-01-01

    Purpose To compare the diagnostic performance of restriction spectrum imaging (RSI) with that of conventional multi-parametric (MP) magnetic resonance imaging (MRI) for prostate cancer (PCa) detection in a blinded reader-based format. Methods Three readers independently evaluated 100 patients (67 with proven PCa) who underwent MP-MRI and RSI within 6 months of systematic biopsy (N = 67; 23 with targeting performed) or prostatectomy (N = 33). Imaging was performed at 3 Tesla using a phased-array coil. Readers used a five-point scale estimating the likelihood of PCa present in each prostate sextant. Evaluation was performed in two separate sessions, first using conventional MP-MRI alone and then immediately with MP-MRI and RSI in the same session. Four weeks later, another scoring session used RSI and T2-weighted imaging (T2WI) without conventional diffusion-weighted or dynamic contrast-enhanced imaging. Reader interpretations were then compared to prostatectomy data or biopsy results. Receiver operating characteristic analysis was performed, with the area under the curve (AUC) used to compare across groups. Results MP-MRI with RSI achieved higher AUCs than MP-MRI alone for identifying high-grade (Gleason score greater than or equal to 4 + 3 = 7) PCa (0.78 vs. 0.70 at the sextant level, P < 0.001, and 0.85 vs. 0.79 at the hemigland level, P = 0.04). RSI and T2WI alone achieved AUCs similar to MP-MRI for high-grade PCa (0.71 vs. 0.70 at the sextant level). With hemigland analysis, high-grade disease results were similar when comparing RSI + T2WI with MP-MRI, although with greater AUCs than in the sextant analysis (0.80 vs. 0.79). Conclusion Including RSI with MP-MRI improves PCa detection compared to MP-MRI alone, and RSI with T2WI achieves PCa detection similar to MP-MRI. PMID:26910114

  7. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
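
    The run-length scheme described above can be sketched as follows. This is a simplified illustration with fixed code parameters standing in for the adaptive selection, Golomb codes restricted to the power-of-two (Rice) case, and sign handling and trailing zero runs ignored for brevity:

```python
def exp_golomb(n):
    """Order-0 exponential-Golomb code for n >= 0: the binary form of
    n + 1, preceded by one '0' for every bit after its leading 1."""
    b = bin(n + 1)[2:]
    return '0' * (len(b) - 1) + b

def rice(n, k):
    """Golomb-Rice code (a Golomb code with parameter 2**k): unary
    quotient, a '0' terminator, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, '0%db' % k) if k else '')

def encode_runs(data, k=1):
    """Parse the sequence into (zero-run, nonzero value) pairs and
    code the run length with exp-Golomb and the value with Rice,
    as the abstract describes."""
    bits, run = [], 0
    for v in data:
        if v == 0:
            run += 1
        else:
            bits.append(exp_golomb(run) + rice(abs(v) - 1, k))
            run = 0
    return ''.join(bits)
```

    For example, the sequence [0, 0, 3] becomes the exp-Golomb code for run length 2 followed by the Rice code for the value.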

  8. Multimodal medical image fusion using improved multi-channel PCNN.

    PubMed

    Zhao, Yaqian; Zhao, Qinping; Hao, Aimin

    2014-01-01

    Multimodal medical image fusion is a method of integrating information from multiple image formats. Its aim is to provide useful and accurate information for doctors. The multi-channel pulse coupled neural network (m-PCNN) is a recently proposed fusion model. Compared with previous methods, this network can effectively manage various types of medical images. However, it has two drawbacks: lack of control over the feeding function and a low level of automation. The improved multi-channel PCNN proposed in this paper adjusts the impact of the feeding function through the linking strength and adaptively computes the weighting coefficients for each pixel. Experimental results demonstrate the effectiveness of the improved m-PCNN fusion model.

  9. An improved coding technique for image encryption and key management

    NASA Astrophysics Data System (ADS)

    Wu, Xu; Ma, Jie; Hu, Jiasheng

    2005-02-01

    An improved chaotic algorithm for image encryption, built on a conventional chaotic encryption algorithm, is proposed. Two keys are used in our technique. One is the private key, which is fixed and protected in the system. The other is the assistant key, which is public and transferred together with the encrypted image. For each original image a different assistant key is chosen, so that a different encryption key is obtained. The updated encryption algorithm not only resists a known-plaintext attack, but also offers an effective solution for key management. Analyses and computer simulations show that security is greatly improved, and that the scheme can easily be realized in hardware.

  10. Independent component analysis based filtering for penumbral imaging

    SciTech Connect

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-10-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and the noise components are then removed by soft thresholding (shrinkage). The proposed filter, used as a preprocessing step for the reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison with reconstruction without the noise-removing filter.
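
    The shrinkage step mentioned above is standard and easy to state; learning the ICA basis itself (e.g. with FastICA on image patches) is omitted here. A minimal sketch of soft thresholding applied to transform-domain coefficients:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft thresholding (shrinkage): coefficients with magnitude
    below t are zeroed; larger ones are shrunk toward zero by t.
    In the paper this is applied to ICA-domain coefficients of each
    image patch before transforming back to the pixel domain."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([3.0, -0.5, -2.0])
shrunk = soft_threshold(c, 1.0)   # small coefficients treated as noise
```

    The threshold t is typically tied to the estimated noise level of each component, so noise-dominated coefficients fall below it while signal-dominated ones survive.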

  11. Dual-wavelength retinal images denoising algorithm for improving the accuracy of oxygen saturation calculation

    NASA Astrophysics Data System (ADS)

    Xian, Yong-Li; Dai, Yun; Gao, Chun-Ming; Du, Rui

    2017-01-01

    Noninvasive measurement of hemoglobin oxygen saturation (SO2) in retinal vessels is based on spectrophotometry and the spectral absorption characteristics of tissue. Retinal images at 570 and 600 nm are simultaneously captured by dual-wavelength retinal oximetry based on a fundus camera. SO2 is finally measured after vessel segmentation, image registration, and calculation of the optical density ratio of the two images. However, image noise can dramatically affect subsequent image processing and the accuracy of the SO2 calculation, a problem that remains to be addressed. The purpose of this study was to improve image quality and SO2 calculation accuracy through noise analysis and a denoising algorithm for the dual-wavelength images. First, noise parameters were estimated with a mixed Poisson-Gaussian (MPG) noise model. Second, an MPG denoising algorithm, which we call variance stabilizing transform (VST) + dual-domain image denoising (DDID), was proposed based on the VST and an improved dual-domain filter. The results show that VST + DDID effectively removes MPG noise while preserving image edge details, and outperforms VST + block-matching and three-dimensional filtering, especially in preserving low-contrast details. Simulation and analysis further indicate that MPG noise in the retinal images can lead to erroneously low SO2 measurements, and that the denoised images provide more accurate grayscale values for retinal oximetry.
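
    The VST stage of such a pipeline is commonly the generalized Anscombe transform, which maps mixed Poisson-Gaussian noise to approximately unit-variance Gaussian noise so that a Gaussian denoiser (here, the dual-domain filter) can be applied and the transform inverted. A sketch under that assumption; the paper's exact transform and parameters are not given in the abstract:

```python
import numpy as np

def generalized_anscombe(z, alpha, sigma):
    """Generalized Anscombe VST for mixed Poisson-Gaussian noise with
    Poisson gain alpha and additive Gaussian std sigma (zero offset).
    After this transform the noise is approximately Gaussian with
    unit variance, which is what dual-domain filtering expects."""
    arg = alpha * z + 0.375 * alpha ** 2 + sigma ** 2
    return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))
```

    With alpha = 1 and sigma = 0 this reduces to the classical Anscombe transform 2*sqrt(z + 3/8) for pure Poisson noise, which is a useful sanity check.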

  12. An improved NAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian

    2016-10-01

    Space optical images are inevitably degraded by atmospheric turbulence, errors of the optical system, and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. First, image noise is weakened by a Contourlet denoising algorithm. Second, reliable estimation of the object support region is used to accelerate convergence; we introduce an optimal threshold segmentation technique to improve the object support region. Finally, an object construction limit and a logarithm function are added to enhance the algorithm's stability. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images, and that it converges faster than the original NAS-RIF algorithm.

  13. Retinex Preprocessing for Improved Multi-Spectral Image Classification

    NASA Technical Reports Server (NTRS)

    Thompson, B.; Rahman, Z.; Park, S.

    2000-01-01

    The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides better within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, the classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. The principal advantage that the retinex offers is that for different lighting conditions classifications derived from the retinex-preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original

  14. An improved NAS-RIF algorithm for blind image restoration

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Jiang, Yanbin; Lou, Shuntian

    2007-01-01

    Image restoration is widely applied in many areas, but when operating on images with different scales for the representation of pixel intensity levels, or with low SNR, the traditional restoration algorithm lacks validity and induces noise amplification, ringing artifacts, and poor convergence. In this paper, an improved NAS-RIF algorithm is proposed to overcome the shortcomings of the traditional algorithm. The improved algorithm uses a new cost function which adds a space-adaptive regularization term and a non-unity gain for the adaptive filter. In determining the support region, a pre-segmentation is used to form it close to the object in the image. Compared with the traditional algorithm, simulations show that the improved algorithm exhibits better convergence and noise resistance and provides a better estimate of the original image.

  15. Restoration Of MEX SRC Images For Improved Topography: A New Image Product

    NASA Astrophysics Data System (ADS)

    Duxbury, T. C.

    2012-12-01

    Surface topography is an important constraint when investigating the evolution of solar system bodies. Topography is typically obtained from stereo photogrammetric or photometric (shape from shading) analyses of overlapping / stereo images and from laser / radar altimetry data. The ESA Mars Express Mission [1] carries a Super Resolution Channel (SRC) as part of the High Resolution Stereo Camera (HRSC) [2]. The SRC can build up overlapping / stereo coverage of Mars, Phobos and Deimos by viewing the surfaces from different orbits. The derivation of high precision topography data from the SRC raw images is degraded because the camera is out of focus. The point spread function (PSF) is multi-peaked, covering tens of pixels. After registering and co-adding hundreds of star images, an accurate SRC PSF was reconstructed and is being used to restore the SRC images to near blur-free quality. The restored images offer roughly a threefold improvement in geometric accuracy and reveal the smallest features, significantly improving the stereo photogrammetric accuracy of derived digital elevation models. The difference between blurred and restored images provides a new derived image product with improved feature recognition, increasing the spatial resolution and topographic accuracy of derived elevation models. Acknowledgements: This research was funded by the NASA Mars Express Participating Scientist Program. [1] Chicarro, et al., ESA SP 1291 (2009) [2] Neukum, et al., ESA SP 1291 (2009). A raw SRC image (h4235.003) of a Martian crater within Gale crater (the MSL landing site) is shown in the upper left and the restored image is shown in the lower left. A raw image (h0715.004) of Phobos is shown in the upper right and the difference between the raw and restored images, a new derived image data product, is shown in the lower right. The lower images, resulting from an image restoration process, significantly improve feature recognition for improved derived

  16. Image analysis of Renaissance copperplate prints

    NASA Astrophysics Data System (ADS)

    Hedges, S. Blair

    2008-02-01

    From the fifteenth to the nineteenth centuries, prints were a common form of visual communication, analogous to photographs. Copperplate prints have many finely engraved black lines which were used to create the illusion of continuous tone. Line densities generally are 100-2000 lines per square centimeter and a print can contain more than a million total engraved lines 20-300 micrometers in width. Because hundreds to thousands of prints were made from a single copperplate over decades, variation among prints can have historical value. The largest variation is plate-related, which is the thinning of lines over successive editions as a result of plate polishing to remove time-accumulated corrosion. Thinning can be quantified with image analysis and used to date undated prints and books containing prints. Print-related variation, such as over-inking of the print, is a smaller but significant source. Image-related variation can introduce bias if images were differentially illuminated or not in focus, but improved imaging technology can limit this variation. The Print Index, the percentage of an area composed of lines, is proposed as a primary measure of variation. Statistical methods also are proposed for comparing and identifying prints in the context of a print database.
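
    The Print Index defined above, as the percentage of a region's area occupied by engraved (dark) lines, can be sketched with a simple global intensity threshold on an 8-bit grayscale image. The paper's actual segmentation step is not specified, so the threshold here is an assumption:

```python
import numpy as np

def print_index(gray, threshold=128):
    """Print Index: the percentage of pixels in a region that belong
    to engraved (dark) lines, estimated by counting pixels darker
    than a global intensity threshold (illustrative value)."""
    gray = np.asarray(gray)
    return 100.0 * np.count_nonzero(gray < threshold) / gray.size
```

    Because plate polishing thins the lines over successive editions, later impressions of the same plate region yield a lower Print Index, which is what makes the measure usable for dating prints.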

  17. FFDM image quality assessment using computerized image texture analysis

    NASA Astrophysics Data System (ADS)

    Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina

    2010-04-01

    Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm2 retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and offset correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View(TM) postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R2 = 0.92, P <= 0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R2 = 0.95, P <= 0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
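
    The abstract does not state the exact SNR estimator used on the raw-image ROIs; a common definition, assumed here, is the ratio of the ROI's mean signal to its standard deviation:

```python
import numpy as np

def roi_snr(roi):
    """ROI signal-to-noise ratio, taken as mean signal over standard
    deviation of the flat-field- and offset-corrected raw pixels.
    (A common convention; the paper's exact estimator is not given.)"""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()
```

    Under this definition, raising the mAs increases the mean faster than the (Poisson-dominated) noise, so SNR grows with exposure, which is the relationship the texture-feature regression is modeling.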

  18. Image analysis: a consumer's guide.

    PubMed

    Meyer, F

    1983-01-01

    The last years have seen an explosion of systems in image analysis, and it is hard for the pathologist or cytologist to make the right choice of equipment. All machines are stupid; the only valuable thing is the human work put into them. So profit from the work other people have done for you. Choose a method that is widely used on many systems and has proved fertile in many domains, not only for your specific application of today. Mathematical Morphology, to which should be added the linear convolutions present on all machines, is a strong candidate for becoming such a method. The paper illustrates a working day of an ideal system: research- and diagnosis-directed work during working hours, and automatic screening of cervical (or other) smears during the night.

  19. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion by intradermal injection of a microdose of a NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze a NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.

  20. Spreadsheet-like image analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Paul

    1992-08-01

    This report describes the design of a new software system being built by the Army to support and augment automated nondestructive inspection (NDI) on-line equipment implemented by the Army for detection of defective manufactured items. The new system recalls and post-processes (off-line) the NDI data sets archived by the on-line equipment for the purpose of verifying the correctness of the inspection analysis paradigms, of developing better analysis paradigms, and of gathering statistics on the defects of the items inspected. The design of the system is similar to that of a spreadsheet, i.e., an array of cells which may be programmed to contain functions whose arguments are data from other cells and whose resultant is the output of that cell's function. Unlike a spreadsheet, the arguments and the resultants of a cell may be a matrix, such as a two-dimensional matrix of picture elements (pixels). Functions include matrix mathematics, neural networks and image processing as well as those ordinarily found in spreadsheets. The system employs all of the common environmental supports of the Macintosh computer, which is the hardware platform. The system allows the resultant of a cell to be displayed in any of multiple formats, such as a matrix of numbers, text, an image, or a chart. Each cell is a window onto the resultant. Like a spreadsheet, if the input value of any cell is changed, its effect is cascaded into the resultants of all cells whose functions use that value directly or indirectly. The system encourages the user to play what-if games, as ordinary spreadsheets do.

  1. Improvements in interpretation of posterior capsular opacification (PCO) images

    NASA Astrophysics Data System (ADS)

    Paplinski, Andrew P.; Boyce, James F.; Barman, Sarah A.

    2000-06-01

    We present further improvements to the methods of interpretation of Posterior Capsular Opacification (PCO) images. These retro-illumination images of the back surface of the implanted lens are used to monitor the state of the patient's vision after a cataract operation. A common post-surgical complication is opacification of the posterior eye capsule caused by the growth of epithelial cells across the back surface of the capsule. Interpretation of the PCO images is based on their segmentation into transparent image areas and opaque areas, which are affected by the growth of epithelial cells and can be characterized by an increase in the local image variance. This assumption is valid in the majority of cases. However, for some of the materials used for the implanted lenses, the epithelial cells grow in a way characterized by low variance. In such cases segmentation gives a relatively large error. We describe an application of an anisotropic diffusion equation in a non-linear pre-processing of PCO images. The algorithm preserves the high-variance areas of PCO images and performs low-pass filtering of small low-variance features. The algorithm maintains the mean value of the variance, guarantees the existence of a stable solution, and improves segmentation of the PCO images.

  2. Improving face image extraction by using deep learning technique

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-iSM multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, improved the detection precision significantly. (For example, the classification accuracy for identifying whether the face regions output by the Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.

  3. Improving Self-Image of Students. ACSA School Management Digest, Series 1, Number 14. ERIC/CEM Research Analysis Series, Number 41.

    ERIC Educational Resources Information Center

    Mazzarella, Jo Ann

    Research over the last ten years provides overwhelming evidence that the most successful students have strong positive self-concepts. This booklet reviews literature on self-concept and describes many programs designed to improve student self-esteem. The paper begins by noting that although no one understands the order of the cause and effect…

  4. An improved K-means clustering algorithm in agricultural image segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Huifeng; Peng, Hui; Liu, Shanmei

    Image segmentation is the first important step in image analysis and image processing. In this paper, according to the characteristics of color crop images, we first transform the color space of the image from RGB to HSI, and then select a proper initial clustering center and cluster number by applying the mean-variance approach and rough set theory, followed by clustering calculation, so as to automatically segment the color components rapidly and extract target objects from the background accurately. This provides a reliable basis for the identification, analysis, follow-up calculation, and processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm is able to reduce the computation amount and enhance the precision and accuracy of clustering.
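As an illustration of the clustering stage only, here is a minimal k-means sketch in NumPy over pixel feature vectors (e.g. HSI components). The paper's mean-variance/rough-set initialization of centers and cluster number is not reproduced; a simple deterministic initialization stands in for it.

```python
import numpy as np

def kmeans_segment(pixels, k, n_iter=20):
    """Minimal k-means over an (N, C) array of pixel feature vectors.
    Returns (labels, centers). Initialization here is a stand-in:
    evenly spaced picks along the pixels sorted by vector norm."""
    order = np.argsort(np.linalg.norm(pixels, axis=1))
    idx = order[np.linspace(0, len(pixels) - 1, k).astype(int)]
    centers = pixels[idx].astype(float)
    for _ in range(n_iter):
        # Assign every pixel to its nearest cluster center.
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

The labels, reshaped back to the image grid, give the segmentation; target objects are then separated from background by their cluster assignment.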

  5. Naval Signal and Image Analysis Conference Report

    DTIC Science & Technology

    1998-02-26

    Arlington Hilton Hotel in Arlington, Virginia. The meeting was by invitation only and consisted of investigators in the ONR Signal and Image Analysis Program...in signal and image analysis. The conference provided an opportunity for technical interaction between academic researchers and Naval scientists and...plan future directions for the ONR Signal and Image Analysis Program as well as informal recommendations to the Program Officer.

  6. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
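For intuition about the shrink-the-ML-update idea, the classical soft-threshold shrinkage (the MAP shrinkage for a Laplacian prior under a Gaussian likelihood) can serve as an illustration. Note this is not the paper's derivation: the authors derive inverse quadratic and inverse cubic shrinkage functions for their generalized Laplacian and generalized Gaussian priors.

```python
import numpy as np

def soft_shrink(x, t):
    """Soft-threshold shrinkage: shrinks an unregularized update x
    toward zero by threshold t, the classical MAP shrinkage for a
    Laplacian prior with Gaussian noise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

In the paper's setting, `x` would play the role of the maximum likelihood update at each voxel and `t` would depend on the prior hyperparameters; small noisy values are suppressed while strong scattering-density estimates survive.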

  7. An improved dehazing algorithm of aerial high-definition image

    NASA Astrophysics Data System (ADS)

    Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying

    2016-01-01

    For unmanned aerial vehicle (UAV) images, the sensor cannot acquire high-quality images in fog and haze weather. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crude estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crude estimated transmission map into different areas and applies different guided filters to the different areas to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filter, while its average computation time is around 40% of that algorithm's, and the detection ability for UAV images in fog and haze weather is improved effectively.
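The dark channel prior underlying the algorithm can be sketched as follows. This shows only the crude transmission estimate that the paper refines; the edge-guided filtering is not reproduced, and the patch size and `omega` are conventional defaults rather than the authors' settings.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB image (H, W, 3) with values in [0, 1]:
    per-pixel minimum over color channels, then a local minimum over
    a patch x patch window."""
    h, w = img.shape[:2]
    min_rgb = img.min(axis=2)
    r = patch // 2
    padded = np.pad(min_rgb, r, mode='edge')
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, atmosphere, omega=0.95, patch=15):
    """Crude transmission estimate t = 1 - omega * dark_channel(I / A),
    where A is the (3,) atmospheric light vector."""
    return 1.0 - omega * dark_channel(img / atmosphere, patch)
```

In the improved algorithm, edges extracted from this crude map decide which areas receive which guided-filter settings before recovering the dehazed scene radiance.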

  8. Image-plane processing for improved computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    The proper combination of optical design with image-plane processing, as in the mechanism of human vision, was examined as a means of improving the performance of sensor-array imaging systems for edge detection and location. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission. It also permits control of the spatial imaging system response to trade off edge enhancement against sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal intensity transitions, from which the locations of edges are determined for raw primal sketches. Shading the lens transmittance to increase depth of field and using a hexagonal instead of a square sensor-array lattice to decrease sensitivity to edge orientation improve edge information by about 10%.

  9. Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures.

    PubMed

    Mastanduno, Michael A; Gambhir, Sanjiv S

    2016-10-01

    Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth and spectrally derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy of less than 5% error in large 3 cm diameter 2D geometries with multiple targets and within 22% error in the largest quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method was able to improve considerably on filtered back projection from the standpoint of image quality and absolute and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained especially from deep-tissue photoacoustic imaging studies.

  10. Improved target detection by IR dual-band image fusion

    NASA Astrophysics Data System (ADS)

    Adomeit, U.; Ebert, R.

    2009-09-01

    Dual-band thermal imagers acquire information simultaneously in both the 8-12 μm (long-wave infrared, LWIR) and the 3-5 μm (mid-wave infrared, MWIR) spectral range. Compared to single-band thermal imagers, they are expected to have several advantages in military applications. These advantages include the opportunity to use the best band for given atmospheric conditions (e.g., cold climate: LWIR; hot and humid climate: MWIR), the potential to better detect camouflaged targets, and an improved discrimination between targets and decoys. Most of these advantages have not yet been verified and/or quantified. It is expected that image fusion allows better exploitation of the information content available with dual-band imagers, especially with respect to detection of targets. We have developed a method for dual-band image fusion based on the apparent temperature differences in the two bands. This method showed promising results in laboratory tests. In order to evaluate its performance under operational conditions we conducted a field trial in an area with high thermal clutter. In such areas, targets are hard to detect in single-band images because they vanish in the clutter structure. The image data collected in this field trial was used for a perception experiment. This perception experiment showed an enhanced target detection range and a reduced false alarm rate for the fused images compared to the single-band images.

  11. Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures

    PubMed Central

    Mastanduno, Michael A.; Gambhir, Sanjiv S.

    2016-01-01

    Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth and spectrally derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy of less than 5% error in large 3 cm diameter 2D geometries with multiple targets and within 22% error in the largest quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method was able to improve considerably on filtered back projection from the standpoint of image quality and absolute and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained especially from deep-tissue photoacoustic imaging studies. PMID:27867695

  12. Correlative feature analysis of FFDM images

    NASA Astrophysics Data System (ADS)

    Yuan, Yading; Giger, Maryellen L.; Li, Hui; Sennett, Charlene

    2008-03-01

    Identifying the corresponding image pair of a lesion is an essential step for combining information from different views of the lesion to improve the diagnostic ability of both radiologists and CAD systems. Because of the non-rigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this study, we present a computerized framework that differentiates corresponding images from different views of a lesion from non-corresponding ones. A dual-stage segmentation method, which employs an initial radial gradient index (RGI)-based segmentation and an active contour model, was first applied to extract mass lesions from the surrounding tissues. Then various lesion features were automatically extracted from each of the two views of each lesion to quantify the characteristics of margin, shape, size, texture, and context of the lesion, as well as its distance to the nipple. We employed a two-step method to select an effective subset of features, and combined it with a Bayesian artificial neural network (BANN) to obtain a discriminant score, which yielded an estimate of the probability that the two images are of the same physical lesion. ROC analysis was used to evaluate the performance of the individual features and the selected feature subset in the task of distinguishing between corresponding and non-corresponding pairs. Using an FFDM database with 124 corresponding image pairs and 35 non-corresponding pairs, the distance feature yielded an AUC (area under the ROC curve) of 0.8 with leave-one-out evaluation by lesion, and the feature subset, which includes the distance feature, lesion size, and lesion contrast, yielded an AUC of 0.86. The improvement from using multiple features was statistically significant compared to single-feature performance (p < 0.001).

  13. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images that contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and mesoscopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible, allowing incorporation of specialized user-developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  14. Microscopy image segmentation tool: robust image data analysis.

    PubMed

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images that contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and mesoscopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible, allowing incorporation of specialized user-developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  15. Encrypting 2D/3D image using improved lensless integral imaging in Fresnel domain

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Wei; Wang, Qiong-Hua; Kim, Seok-Tae; Lee, In-Kwon

    2016-12-01

    We propose a new image encryption technique that, for the first time to our knowledge, combines the Fresnel transform with an improved lensless integral imaging technique. In this work, before encryption, the input image is first recorded into an elemental image array (EIA) using the improved lensless integral imaging technique. The recorded EIA is encrypted into random noise by use of two phase masks located in the Fresnel domain. The positions of the phase masks and the operating wavelength, as well as the integral imaging system parameters, are used as encryption keys that ensure security. Compared with previous works, the main novelty of the proposed method resides in the fact that the elemental images possess a distributed-memory characteristic, which greatly improves the robustness of the image encryption algorithm. Meanwhile, the proposed pixel averaging algorithm can effectively address the overlapping problem in the computational integral imaging reconstruction process. Numerical simulations are presented to demonstrate the feasibility and effectiveness of the proposed method. The results also indicate high robustness against data loss attacks.
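The two-phase-mask, Fresnel-domain encryption stage can be sketched with a paraxial angular-spectrum propagator. This is a generic double-random-phase-encoding sketch, not the authors' implementation: the elemental image array recording step is not modelled, and all distances, wavelength, and grid parameters below are illustrative.

```python
import numpy as np

def fresnel_prop(field, z, wavelength, dx):
    """Paraxial angular-spectrum Fresnel propagation of a complex
    field over distance z; dx is the grid spacing (same units)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function in the spatial-frequency domain.
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def encrypt(img, m1, m2, z1, z2, wl, dx):
    """Multiply by phase mask m1, propagate z1, multiply by m2,
    propagate z2; mask positions and wavelength act as keys."""
    return fresnel_prop(fresnel_prop(img * m1, z1, wl, dx) * m2, z2, wl, dx)

def decrypt(cipher, m1, m2, z1, z2, wl, dx):
    """Reverse propagation with conjugate masks recovers the image."""
    u = fresnel_prop(cipher, -z2, wl, dx) * np.conj(m2)
    return fresnel_prop(u, -z1, wl, dx) * np.conj(m1)
```

Because propagation is unitary and the masks have unit modulus, decryption with the correct keys is exact up to floating-point error; a wrong mask or wrong distance leaves noise.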

  16. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of the physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document constitute a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended, but they can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  17. Automated digital image analysis of islet cell mass using Nikon's inverted eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

    Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of the individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88), which increased to r(2) = 0.93 when islet surface area was estimated comparatively with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method versus the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this

  18. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  19. Image processing software for imaging spectrometry data analysis

    NASA Astrophysics Data System (ADS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-02-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  20. Digital Image Analysis for DETECHIP(®) Code Determination.

    PubMed

    Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby

    2012-08-01

    DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop, and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods.
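The RGB measurement underlying the codes can be sketched as a mean over a rectangular spot region. The paper performs this step interactively in GIMP, Photoshop, or ImageJ; the array shape, spot coordinates, and function name below are assumptions for the sketch.

```python
import numpy as np

def mean_rgb(img, box):
    """Mean red, green, and blue values inside a rectangular spot.

    img: (H, W, 3) array as loaded from a JPEG;
    box: (row, col, height, width) of the spot region."""
    r0, c0, h, w = box
    patch = img[r0:r0 + h, c0:c0 + w].astype(float)
    return patch.reshape(-1, 3).mean(axis=0)  # (R, G, B) means
```

Comparing such spot means between analyte and control images is the basic operation from which the DETECHIP(®) color-change codes are derived.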

  1. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
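The edge-matching score described above can be sketched as follows. Only the match fraction and an exhaustive translation search are shown; the patent's statistical criterion for identifying not-significantly-worse registration points is omitted, and the edge maps are assumed to come from any edge detector.

```python
import numpy as np

def edge_match_fraction(edges1, edges2, shift):
    """Fraction of edge pixels in edges2 that are also edge pixels in
    edges1 displaced by shift = (dy, dx); inputs are boolean maps."""
    dy, dx = shift
    shifted = np.roll(np.roll(edges1, dy, axis=0), dx, axis=1)
    n_edges = edges2.sum()
    if n_edges == 0:
        return 0.0
    return float(np.logical_and(edges2, shifted).sum()) / float(n_edges)

def best_registration(edges1, edges2, search=5):
    """Exhaustive search over a predefined region for the translation
    maximising the percentage of matched edges."""
    best, best_shift = -1.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            f = edge_match_fraction(edges1, edges2, (dy, dx))
            if f > best:
                best, best_shift = f, (dy, dx)
    return best_shift, best
```

The `np.roll` wraparound is a simplification; a production implementation would mask out pixels shifted in from the opposite border.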

  2. Hyperspectral image classification using functional data analysis.

    PubMed

    Li, Hong; Xiao, Guangrun; Xia, Tian; Tang, Y Y; Li, Luoqing

    2014-09-01

    The large number of spectral bands acquired by hyperspectral imaging sensors allows us to better distinguish many subtle objects and materials. Unlike other classical hyperspectral image classification methods in the multivariate analysis framework, in this paper a novel method using functional data analysis (FDA) for accurate classification of hyperspectral images is proposed. The central idea of FDA is to treat multivariate data as continuous functions. From this perspective, the spectral curve of each pixel in the hyperspectral images is naturally viewed as a function. This can be beneficial for making full use of the abundant spectral information. The relationship between adjacent pixels in the hyperspectral images can also be exploited. Functional principal component analysis is applied to solve the classification problem of these functions. Experimental results on three hyperspectral images show that the proposed method can achieve higher classification accuracies in comparison to some state-of-the-art hyperspectral image classification methods.
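A discrete stand-in for the functional PCA step can treat each pixel's spectral curve as a sampled function and decompose the band-by-band covariance; the basis-expansion smoothing of true FDA, and the classifier applied to the scores, are omitted from this sketch.

```python
import numpy as np

def spectral_pca(spectra, n_components):
    """PCA of pixel spectra: each row of the (N, B) array is one
    pixel's spectral curve over B bands. Returns per-pixel scores,
    the principal spectral components, and the mean spectrum."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    # Eigen-decomposition of the band-by-band covariance matrix.
    cov = centered.T @ centered / (len(spectra) - 1)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    components = vecs[:, order]               # (B, n_components)
    scores = centered @ components            # per-pixel features
    return scores, components, mean
```

The low-dimensional scores then serve as features for the downstream classifier in place of the raw band values.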

  3. A dual-view digital tomosynthesis imaging technique for improved chest imaging

    SciTech Connect

    Zhong, Yuncheng; Lai, Chao-Jen; Wang, Tianpeng; Shaw, Chris C.

    2015-09-15

    Purpose: Digital tomosynthesis (DTS) has been shown to be useful for reducing the overlapping of abnormalities with anatomical structures at various depth levels along the posterior–anterior (PA) direction in chest radiography. However, DTS provides crude three-dimensional (3D) images that have poor resolution in the lateral view and can only be displayed with reasonable quality in the PA view. Furthermore, the spillover of high-contrast objects from off-fulcrum planes generates artifacts that may impede the diagnostic use of the DTS images. In this paper, the authors describe and demonstrate the use of a dual-view DTS technique to improve the accuracy of the reconstructed volume image data for more accurate rendition of the anatomy and slice images with improved resolution and reduced artifacts, thus allowing the 3D image data to be viewed in views other than the PA view. Methods: With the dual-view DTS technique, limited angle scans are performed and projection images are acquired in two orthogonal views: PA and lateral. The dual-view projection data are used together to reconstruct 3D images using the maximum likelihood expectation maximization iterative algorithm. In this study, projection images were simulated or experimentally acquired over 360° using the scanning geometry for cone beam computed tomography (CBCT). While all projections were used to reconstruct CBCT images, selected projections were extracted and used to reconstruct single- and dual-view DTS images for comparison with the CBCT images. For realistic demonstration and comparison, a digital chest phantom derived from clinical CT images was used for the simulation study. An anthropomorphic chest phantom was imaged for the experimental study. The resultant dual-view DTS images were visually compared with the single-view DTS images and CBCT images for the presence of image artifacts and accuracy of CT numbers and anatomy and quantitatively compared with root-mean-square-deviation (RMSD) values

  4. Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence

    PubMed Central

    Beijbom, Oscar; Treibitz, Tali; Kline, David I.; Eyal, Gal; Khen, Adi; Neal, Benjamin; Loya, Yossi; Mitchell, B. Greg; Kriegman, David

    2016-01-01

    Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data-products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction of classification error-rate when using both images types compared to only using reflectance images. The improvements were large, in particular, for coral reef genera Platygyra, Acropora and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery in order to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck. PMID:27021133

  5. A watershed approach for improving medical image segmentation.

    PubMed

    Zanaty, E A; Afifi, Ashraf

    2013-01-01

    In this paper, a novel watershed approach based on seed region growing and image entropy is presented to improve medical image segmentation. The proposed algorithm incorporates the prior information of seed region growing and image entropy into its calculation. The algorithm starts by partitioning the image into several levels of intensity using a watershed multi-degree immersion process. The levels of intensity are the input to a computationally efficient seed region segmentation process which produces the initial partitioning of the image regions. These regions are fed to an entropy procedure to carry out a suitable merging, which produces the final segmentation. The latter process uses a region-based similarity representation of the image regions to decide whether regions can be merged. Each region is isolated from its level and the residual pixels are passed up to the next level, and so on; we refer to this as a multi-level process, and to the corresponding watershed as a multi-level watershed. The proposed algorithm is applied to a challenging application: grey matter-white matter segmentation in magnetic resonance images (MRIs). The established methods and the proposed approach are evaluated on this application across a variety of configurations: simulated immersion, multi-degree immersion, multi-level seed region growing, and multi-level seed region growing with entropy. It is shown that the proposed method achieves more accurate results and reduces the oversegmentation typical of watershed-based medical image segmentation.
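
    A minimal sketch of the seed-region-growing building block (not the full multi-level watershed or the entropy merging), assuming a simple running-mean homogeneity criterion:

```python
import numpy as np
from collections import deque

def seeded_region_growing(img, seed, tol=10):
    """Grow a region from `seed`, adding 4-connected neighbors whose
    intensity differs from the running region mean by at most `tol`."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(img[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    q.append((nr, nc))
    return mask

# Bright square on dark background: seed inside the square.
img = np.zeros((8, 8)); img[2:6, 2:6] = 100
region = seeded_region_growing(img, (3, 3), tol=10)
```

    The grown regions would then be candidates for the entropy-based merging step described in the abstract.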

  6. Improvements to Color HRSC+OMEGA Image Mosaics of Mars

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Audouard, J.; Dumke, A.; Dunker, T.; Gross, C.; Kneissl, T.; Michael, G.; Ody, A.; Poulet, F.; Schreiner, B.; van Gasselt, S.; Walter, S. H. G.; Wendt, L.; Zuschneid, W.

    2015-10-01

    The High Resolution Stereo Camera (HRSC) on the Mars Express (MEx) orbiter has acquired 3640 images (with 'preliminary level 4' processing as described in [1]) of the Martian surface since arriving in orbit in 2003, covering over 90% of the planet [2]. At resolutions that can reach 10 meters/pixel, these MEx/HRSC images [3-4] are constructed in a pushbroom manner from 9 different CCD line sensors, including a panchromatic nadir-looking (Pan) channel, 4 color channels (R, G, B, IR), and 4 other panchromatic channels for stereo imaging or photometric imaging. In [5], we discussed our first approach towards mosaicking hundreds of the MEx/HRSC RGB or Pan images together. The images were acquired under different atmospheric conditions over the entire mission and under different observation/illumination geometries. Therefore, the main challenge that we have addressed is the color (or gray-scale) matching of these images, which have varying colors (or gray scales) due to the different observing conditions. Using this first approach, our best results for a semiglobal mosaic consist of adding a high-pass-filtered version of the HRSC mosaic to a low-pass-filtered version of the MEx/OMEGA [6] global mosaic. Herein, we will present our latest results using a new, improved, second approach for mosaicking MEx/HRSC images [7], but focusing on the RGB Color processing when using this new second approach. Currently, when the new second approach is applied to Pan images, we match local spatial averages of the Pan images to the local spatial averages of a mosaic made from the images acquired by the Mars Global Surveyor TES bolometer. Since these MGS/TES images have already been atmospherically-corrected, this matching allows us to bootstrap the process of mosaicking the HRSC images without actually atmospherically correcting the HRSC images. In this work, we will adapt this technique of MEx/HRSC Pan images being matched with the MGS/TES mosaic, so that instead, MEx/HRSC RGB images
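
    The described combination, adding a high-pass-filtered HRSC layer to a low-pass-filtered OMEGA (or TES) layer, can be sketched as follows; the box filter and window size are illustrative stand-ins for whatever low-pass filter the mosaicking pipeline actually uses:

```python
import numpy as np

def box_blur(img, k=5):
    """Simple low-pass: k x k box filter with edge padding (pure NumPy)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dr in range(k):
        for dc in range(k):
            out += p[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (k * k)

def blend_mosaics(high_res, low_res_ref, k=5):
    """Combine the high-pass detail of one mosaic with the low-pass
    (large-scale brightness) of a reference mosaic."""
    high_pass = high_res - box_blur(high_res, k)   # fine detail only
    low_pass = box_blur(low_res_ref, k)            # large-scale tones
    return high_pass + low_pass

# Constant images: the blend inherits the reference's brightness level.
flat_hi = np.full((10, 10), 7.0)
flat_ref = np.full((10, 10), 3.0)
blended = blend_mosaics(flat_hi, flat_ref)
```

    The point of the decomposition is that large-scale brightness (affected by atmosphere and illumination) comes from the already-corrected reference, while fine spatial detail comes from the higher-resolution images.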

  7. Sparse Superpixel Unmixing for Hyperspectral Image Analysis

    NASA Technical Reports Server (NTRS)

    Castano, Rebecca; Thompson, David R.; Gilmore, Martha

    2010-01-01

    Software was developed that automatically detects minerals that are present in each pixel of a hyperspectral image. An algorithm based on sparse spectral unmixing with Bayesian Positive Source Separation is used to produce mineral abundance maps from hyperspectral images. A superpixel segmentation strategy enables efficient unmixing in an interactive session. The algorithm computes statistically likely combinations of constituents based on a set of possible constituent minerals whose abundances are uncertain. A library of source spectra from laboratory experiments or previous remote observations is used. A superpixel segmentation strategy improves analysis time by orders of magnitude, permitting incorporation into an interactive user session (see figure). Mineralogical search strategies can be categorized as supervised or unsupervised. Supervised methods use a detection function, developed on previous data by hand or statistical techniques, to identify one or more specific target signals. Purely unsupervised results are not always physically meaningful, and may ignore subtle or localized mineralogy since they aim to minimize reconstruction error over the entire image. This algorithm offers advantages of both methods, providing meaningful physical interpretations and sensitivity to subtle or unexpected minerals.
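
    Sparse unmixing against a spectral library can be sketched as a nonnegative least-squares solve per superpixel; the projected-gradient solver and the tiny 4-band, 3-endmember library below are illustrative stand-ins, not the Bayesian Positive Source Separation method itself:

```python
import numpy as np

def nnls_pg(A, b, n_iter=5000):
    """Nonnegative least squares by projected gradient descent:
    a simple stand-in for a sparsity-constrained unmixing solver."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/L for gradient stability
    for _ in range(n_iter):
        x = np.maximum(0.0, x - step * (A.T @ (A @ x - b)))
    return x

# Library of 3 "mineral" spectra (columns) over 4 bands.
library = np.array([[1.0, 0.0, 0.5],
                    [0.0, 1.0, 0.5],
                    [1.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
abund_true = np.array([0.7, 0.0, 0.3])       # sparse abundances
pixel = library @ abund_true                  # superpixel mean spectrum
abund = nnls_pg(library, pixel)
```

    Solving per superpixel rather than per pixel is what makes the interactive-session speedup possible: the number of solves drops by orders of magnitude.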

  8. Spatial image modulation to improve performance of computed tomography imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor)

    2010-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having patterns for imposing spatial structure are provided. The pattern may be imposed either directly on the object scene being imaged or at the field stop aperture. The use of the pattern improves the accuracy of the captured spatial and spectral information.

  9. "Multimodal Contrast" from the Multivariate Analysis of Hyperspectral CARS Images

    NASA Astrophysics Data System (ADS)

    Tabarangao, Joel T.

    The typical contrast mechanism employed in multimodal CARS microscopy involves the use of other nonlinear imaging modalities, such as two-photon excitation fluorescence (TPEF) microscopy and second harmonic generation (SHG) microscopy, to produce a molecule-specific pseudocolor image. In this work, I explore the use of unsupervised multivariate statistical analysis tools such as Principal Component Analysis (PCA) and Vertex Component Analysis (VCA) to provide better contrast using the hyperspectral CARS data alone. Using simulated CARS images, I investigate the effects of the quadratic dependence of the CARS signal on concentration on pixel clustering and classification, and I find that a normalization step is necessary to improve pixel color assignment. Using an atherosclerotic rabbit aorta test image, I show that the VCA algorithm provides pseudocolor contrast that is comparable to multimodal imaging, demonstrating that much of the information gleaned from a multimodal approach can be extracted from the CARS hyperspectral stack itself.
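
    The normalize-then-PCA step can be sketched as follows; normalizing each spectrum to unit norm (one plausible choice) removes concentration-dependent intensity scaling before the principal components are computed:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """PCA of pixel spectra via SVD of the mean-centered data matrix.

    spectra : (n_pixels, n_bands) hyperspectral stack, flattened.
    """
    # Normalize each spectrum to unit norm so that signal strength
    # (e.g. concentration-dependent intensity) does not dominate shape.
    norms = np.linalg.norm(spectra, axis=1, keepdims=True)
    X = spectra / np.maximum(norms, 1e-12)
    X = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T   # scores on the leading components

# Two spectral "species" at different concentrations (scaled copies).
a = np.array([1.0, 0.2, 0.0]); b = np.array([0.0, 0.3, 1.0])
spectra = np.vstack([2 * a, 5 * a, 1 * b, 4 * b])
scores = pca_scores(spectra, 1)
```

    After normalization, scaled copies of the same spectrum collapse to one point, so clustering in score space groups pixels by spectral shape rather than by concentration.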

  10. A Mathematical Framework for Image Analysis

    DTIC Science & Technology

    1991-08-01

    The results reported here were derived from the research project 'A Mathematical Framework for Image Analysis' supported by the Office of Naval Research, contract N00014-88-K-0289 to Brown University. A common theme for the work reported is the use of probabilistic methods for problems in image analysis and image reconstruction. Five areas of research are described: rigid body recognition using a decision tree/combinatorial approach; nonrigid

  11. An improved level set method for vertebra CT image segmentation

    PubMed Central

    2013-01-01

    Background Clinical diagnosis and therapy for lumbar disc herniation require accurate vertebra segmentation. The complex anatomical structure and the degenerative deformations of the vertebrae make their segmentation challenging. Methods An improved level set method, namely the edge- and region-based level set method (ERBLS), is proposed for vertebra CT image segmentation. By considering the gradient information and local region characteristics of images, the proposed model can efficiently segment images with intensity inhomogeneity and blurry or discontinuous boundaries. To reduce the dependency on manual initialization common to many active contour models and to enable automatic segmentation, a simple initialization method for the level set function is built, which utilizes the Otsu threshold. In addition, the need for the costly re-initialization procedure is completely eliminated. Results Experimental results on both synthetic and real images demonstrated that the proposed ERBLS model is very robust and efficient. Compared with the well-known local binary fitting (LBF) model, our method is much more computationally efficient and much less sensitive to the initial contour. The proposed method has also been applied to 56 patient data sets and produced very promising results. Conclusions An improved level set method suitable for vertebra CT image segmentation is proposed. It has the flexibility to segment vertebra CT images with blurry or discontinuous edges and internal inhomogeneity, with no need for re-initialization. PMID:23714300

  12. Resolution enhancement for ISAR imaging via improved statistical compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Wang, Hongxian; Qiao, Zhi-jun

    2016-12-01

    Developments in compressed sensing (CS) theory reveal that optimal reconstruction of an unknown signal can be achieved from very limited observations by utilizing signal sparsity. For inverse synthetic aperture radar (ISAR), the image of a target of interest is generally composed of a limited number of strong scattering centers, representing strong spatial sparsity. Such prior sparsity intrinsically paves the way to improved ISAR imaging performance. In this paper, we develop a super-resolution algorithm for forming ISAR images from limited observations. When the amplitude of the target scattered field follows an identical Laplace probability distribution, the approach converts super-resolution imaging into sparsity-driven optimization in the Bayesian statistics sense. We show that improved performance is achievable by taking advantage of the meaningful spatial structure of the scattered field. Further, we use a nonidentical Laplace distribution with small scale on strong signal components and large scale on noise to discriminate strong scattering centers from noise. A maximum likelihood estimator combined with a bandwidth extrapolation technique is also developed to estimate the scale parameters. Processing of real measured data indicates that the proposal can reconstruct a high-resolution image from only a limited number of pulses, even at low SNR, which shows advantages over current super-resolution imaging methods.

  13. SAR image segmentation using MSER and improved spectral clustering

    NASA Astrophysics Data System (ADS)

    Gui, Yang; Zhang, Xiaohu; Shang, Yang

    2012-12-01

    A novel approach is presented for synthetic aperture radar (SAR) image segmentation. By incorporating the advantages of the maximally stable extremal regions (MSER) algorithm and the spectral clustering (SC) method, the proposed approach provides effective and robust segmentation. First, the input image is transformed from a pixel-based to a region-based model by using the MSER algorithm; after the MSER procedure the image is composed of disjoint regions. The regions are then treated as nodes in the image plane, and a graph structure is applied to represent them. Finally, the improved SC is used to perform globally optimal clustering, by which the result of image segmentation can be generated. To avoid incorrect partitioning when each region is considered as one graph node, we assign different numbers of nodes to represent the regions according to the area ratios among the regions. In addition, K-harmonic means is applied instead of K-means in the improved SC procedure in order to improve its stability and performance. Experimental results show that the proposed approach is effective for SAR image segmentation and has the advantage of fast computation.

  14. Image Reconstruction Using Analysis Model Prior

    PubMed Central

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator to the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum number of measurements which are lower than those in cases without the analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171

  15. Improved QD-BRET conjugates for detection and imaging

    PubMed Central

    Xing, Yun; So, Min-kyung; Koh, Ai-leen; Sinclair, Robert; Rao, Jianghong

    2008-01-01

    Self-illuminating quantum dots, also known as QD-BRET conjugates, are a new class of quantum dot bioconjugates which do not need external light for excitation. Instead, light emission relies on bioluminescence resonance energy transfer from the attached Renilla luciferase enzyme, which emits light upon the oxidation of its substrate. QD-BRET combines the advantages of QDs (such as superior brightness and photostability, tunable emission, and multiplexing) with the high sensitivity of bioluminescence imaging, and thus holds promise for improved deep-tissue in vivo imaging. Although studies have demonstrated the superior sensitivity and deep-tissue imaging potential, the stability of QD-BRET conjugates in biological environments needs to be improved for long-term imaging studies such as in vivo cell trafficking. In this study, we seek to improve the stability of QD-BRET probes through polymeric encapsulation with a polyacrylamide gel. Results show that encapsulation caused some activity loss, but significantly improved both the in vitro serum stability and the in vivo stability when subcutaneously injected into the animal. Stable QD-BRET probes should further facilitate their applications for both in vitro testing and in vivo cell tracking studies. PMID:18468518

  16. X-ray holographic microscopy: Improved images of zymogen granules

    SciTech Connect

    Jacobsen, C.; Howells, M.; Kirz, J.; McQuaid, K.; Rothman, S.

    1988-10-01

    Soft x-ray holography has long been considered as a technique for x-ray microscopy. It has been only recently, however, that sub-micron resolution has been obtained in x-ray holography. This paper will concentrate on recent progress we have made in obtaining reconstructed images of improved quality. 15 refs., 6 figs.

  17. Improved QD-BRET conjugates for detection and imaging

    SciTech Connect

    Xing Yun; So, Min-kyung; Koh, Ai Leen; Sinclair, Robert; Rao Jianghong

    2008-08-01

    Self-illuminating quantum dots, also known as QD-BRET conjugates, are a new class of quantum dot bioconjugates which do not need external light for excitation. Instead, light emission relies on the bioluminescence resonance energy transfer from the attached Renilla luciferase enzyme, which emits light upon the oxidation of its substrate. QD-BRET combines the advantages of the QDs (such as superior brightness and photostability, tunable emission, multiplexing) as well as the high sensitivity of bioluminescence imaging, thus holding the promise for improved deep tissue in vivo imaging. Although studies have demonstrated the superior sensitivity and deep tissue imaging potential, the stability of the QD-BRET conjugates in biological environments needs to be improved for long-term imaging studies such as in vivo cell tracking. In this study, we seek to improve the stability of QD-BRET probes through polymeric encapsulation with a polyacrylamide gel. Results show that encapsulation caused some activity loss, but significantly improved both the in vitro serum stability and in vivo stability when subcutaneously injected into the animal. Stable QD-BRET probes should further facilitate their applications for both in vitro testing as well as in vivo cell tracking studies.

  18. Analyzing training information from random forests for improved image segmentation.

    PubMed

    Mahapatra, Dwarikanath

    2014-04-01

    Labeled training data are used for challenging medical image segmentation problems to learn different characteristics of the relevant domain. In this paper, we examine random forest (RF) classifiers, their learned knowledge during training and ways to exploit it for improved image segmentation. Apart from learning discriminative features, RFs also quantify their importance in classification. Feature importance is used to design a feature selection strategy critical for high segmentation and classification accuracy, and also to design a smoothness cost in a second-order MRF framework for graph cut segmentation. The cost function combines the contribution of different image features like intensity, texture, and curvature information. Experimental results on medical images show that this strategy leads to better segmentation accuracy than conventional graph cut algorithms that use only intensity information in the smoothness cost.
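
    The impurity-decrease quantity that random forests accumulate into feature importances can be sketched for a single feature as follows; this toy example illustrates the importance measure only, not the paper's MRF/graph-cut pipeline:

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_importance(x, y):
    """Best achievable Gini impurity decrease for a single feature,
    the quantity tree ensembles accumulate into feature importances."""
    best = 0.0
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        w = len(left) / len(y)
        decrease = gini(y) - (w * gini(left) + (1 - w) * gini(right))
        best = max(best, decrease)
    return best

# Feature 0 separates the classes; feature 1 is noise.
rng = np.random.default_rng(1)
X = np.column_stack([np.r_[np.zeros(50), np.ones(50)], rng.random(100)])
y = np.r_[np.zeros(50), np.ones(50)]
scores = [split_importance(X[:, j], y) for j in range(X.shape[1])]
```

    Ranking features by such scores is one way to drive the feature selection strategy the abstract describes.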

  19. Image analysis for denoising full-field frequency-domain fluorescence lifetime images.

    PubMed

    Spring, B Q; Clegg, R M

    2009-08-01

    Video-rate fluorescence lifetime-resolved imaging microscopy (FLIM) is a quantitative imaging technique for measuring dynamic processes in biological specimens. FLIM offers valuable information in addition to simple fluorescence intensity imaging; for instance, the fluorescence lifetime is sensitive to the microenvironment of the fluorophore allowing reliable differentiation between concentration differences and dynamic quenching. Homodyne FLIM is a full-field frequency-domain technique for imaging fluorescence lifetimes at every pixel of a fluorescence image simultaneously. If a single modulation frequency is used, video-rate image acquisition is possible. Homodyne FLIM uses a gain-modulated image intensified charge-coupled device (ICCD) detector, which unfortunately is a major contribution to the noise of the measurement. Here we introduce image analysis for denoising homodyne FLIM data. The denoising routine is fast, improves the extraction of the fluorescence lifetime value(s) and increases the sensitivity and fluorescence lifetime resolving power of the FLIM instrument. The spatial resolution (especially the high spatial frequencies not related to noise) of the FLIM image is preserved, because the denoising routine does not blur or smooth the image. By eliminating the random noise known to be specific to photon noise and from the intensifier amplification, the fidelity of the spatial resolution is improved. The polar plot projection, a rapid FLIM analysis method, is used to demonstrate the effectiveness of the denoising routine with exemplary data from both physical and complex biological samples. We also suggest broader impacts of the image analysis for other fluorescence microscopy techniques (e.g. super-resolution imaging).
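
    The polar plot projection mentioned above maps each pixel's phase and modulation to coordinates g = m cos(phi), s = m sin(phi). A minimal sketch using the standard single-exponential frequency-domain relations (not the authors' denoising routine):

```python
import numpy as np

def polar_coordinates(phase, modulation):
    """Polar (phasor) plot projection: g = m cos(phi), s = m sin(phi)."""
    return modulation * np.cos(phase), modulation * np.sin(phase)

def single_exp_phasor(tau, freq):
    """Phase and modulation of a single-exponential decay at `freq` (Hz)."""
    omega = 2 * np.pi * freq
    phase = np.arctan(omega * tau)
    modulation = 1.0 / np.sqrt(1.0 + (omega * tau) ** 2)
    return phase, modulation

# A 4 ns lifetime measured at 80 MHz modulation frequency.
phase, mod = single_exp_phasor(4e-9, 80e6)
g, s = polar_coordinates(phase, mod)
# Single-exponential decays fall on the universal semicircle:
# (g - 1/2)^2 + s^2 = 1/4.
```

    Deviations from the semicircle indicate multi-exponential decays, which is what makes the polar plot a rapid, fit-free FLIM analysis.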

  20. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  1. [Study of color blood image segmentation based on two-stage-improved FCM algorithm].

    PubMed

    Wang, Bin; Chen, Huaiqing; Huang, Hua; Rao, Jie

    2006-04-01

    This paper introduces a new method for color blood cell image segmentation based on the FCM algorithm. By transforming the original blood microscopy image to an indexed image and clustering its colormap, a fuzzy approach that avoids direct clustering of the image pixel values, the quantity of data to be processed and analyzed is enormously compressed. In accordance with the inherent features of color blood cell images, the segmentation process is divided into two stages: (1) determining the number of clusters and the initial cluster centers; (2) altering the distance measure by means of a distance weighting matrix in order to improve the clustering accuracy. In this way, the convergence difficulties of the FCM algorithm are resolved, the number of iterations to convergence is reduced, the execution time of the algorithm is decreased, and correct segmentation of the components of color blood cell images is achieved.
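
    The underlying fuzzy c-means iteration can be sketched as follows; this is plain FCM, without the paper's two-stage initialization or distance weighting matrix:

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: alternate center and membership updates.

    X : (n_samples, n_features) data (e.g. colormap entries).
    Returns (centers, memberships) with memberships of shape (c, n_samples).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.maximum(d, 1e-12)
        U = 1.0 / (d ** (2 / (m - 1)))       # u_ik ~ d_ik^(-2/(m-1))
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated 1-D "color" clusters.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fcm(X, c=2)
```

    Clustering the colormap entries instead of the raw pixels, as the paper does, shrinks the input to FCM from the number of pixels to the number of palette colors.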

  2. Improving RLRN Image Splicing Detection with the Use of PCA and Kernel PCA

    PubMed Central

    Jalab, Hamid A.; Md Noor, Rafidah

    2014-01-01

    Digital image forgery is becoming easier to perform because of the rapid development of various manipulation tools. Image splicing is one of the most prevalent techniques. Digital images have lost their trustworthiness, and researchers have exerted considerable effort to restore it, focusing mostly on algorithms. However, most of the proposed algorithms are incapable of handling high dimensionality and redundancy in the extracted features. Moreover, existing algorithms are limited by high computational time. This study focuses on improving one of the image splicing detection algorithms, namely the run length run number algorithm (RLRN), by applying two dimension reduction methods: principal component analysis (PCA) and kernel PCA. A support vector machine is used to distinguish between authentic and spliced images. Results show that kernel PCA, a nonlinear dimension reduction method, has the best effect on the R, G, B, and Y channels and on gray-scale images. PMID:25295304

  3. Improving RLRN image splicing detection with the Use of PCA and kernel PCA.

    PubMed

    Moghaddasi, Zahra; Jalab, Hamid A; Md Noor, Rafidah; Aghabozorgi, Saeed

    2014-01-01

    Digital image forgery is becoming easier to perform because of the rapid development of various manipulation tools. Image splicing is one of the most prevalent techniques. Digital images have lost their trustworthiness, and researchers have exerted considerable effort to restore it, focusing mostly on algorithms. However, most of the proposed algorithms are incapable of handling high dimensionality and redundancy in the extracted features. Moreover, existing algorithms are limited by high computational time. This study focuses on improving one of the image splicing detection algorithms, namely the run length run number algorithm (RLRN), by applying two dimension reduction methods: principal component analysis (PCA) and kernel PCA. A support vector machine is used to distinguish between authentic and spliced images. Results show that kernel PCA, a nonlinear dimension reduction method, has the best effect on the R, G, B, and Y channels and on gray-scale images.

  4. A Meta-Analytic Review of Stand-Alone Interventions to Improve Body Image

    PubMed Central

    Alleva, Jessica M.; Sheeran, Paschal; Webb, Thomas L.; Martijn, Carolien; Miles, Eleanor

    2015-01-01

    Objective Numerous stand-alone interventions to improve body image have been developed. The present review used meta-analysis to estimate the effectiveness of such interventions, and to identify the specific change techniques that lead to improvement in body image. Methods The inclusion criteria were that (a) the intervention was stand-alone (i.e., solely focused on improving body image), (b) a control group was used, (c) participants were randomly assigned to conditions, and (d) at least one pretest and one posttest measure of body image was taken. Effect sizes were meta-analysed and moderator analyses were conducted. A taxonomy of 48 change techniques used in interventions targeted at body image was developed; all interventions were coded using this taxonomy. Results The literature search identified 62 tests of interventions (N = 3,846). Interventions produced a small-to-medium improvement in body image (d+ = 0.38), a small-to-medium reduction in beauty ideal internalisation (d+ = -0.37), and a large reduction in social comparison tendencies (d+ = -0.72). However, the effect size for body image was inflated by bias both within and across studies, and was reliable but of small magnitude once corrections for bias were applied. Effect sizes for the other outcomes were no longer reliable once corrections for bias were applied. Several features of the sample, intervention, and methodology moderated intervention effects. Twelve change techniques were associated with improvements in body image, and three techniques were contra-indicated. Conclusions The findings show that interventions engender only small improvements in body image, and underline the need for large-scale, high-quality trials in this area. The review identifies effective techniques that could be deployed in future interventions. PMID:26418470

  5. Image analysis by integration of disparate information

    NASA Technical Reports Server (NTRS)

    Lemoigne, Jacqueline

    1993-01-01

    Image analysis often starts with some preliminary segmentation which provides a representation of the scene needed for further interpretation. Segmentation can be performed in several ways, categorized as pixel-based, edge-based, and region-based. Each of these approaches is affected differently by various factors, and the final result may be improved by integrating several or all of these methods, thus taking advantage of their complementary nature. In this paper, we propose an approach that integrates pixel-based and edge-based results by utilizing an iterative relaxation technique. This approach has been implemented on a massively parallel computer and tested on remotely sensed imagery from the Landsat Thematic Mapper (TM) sensor.

  6. Improved space object detection via scintillated short exposure image data

    NASA Astrophysics Data System (ADS)

    Cain, Stephen

    2016-09-01

    Since telescopes were first trained on the heavens, nearly every attempt to locate new objects in the sky has been accomplished using long-exposure images (images taken with exposures longer than one second). For many decades astronomers have utilized short-exposure images to achieve high spatial resolution and mitigate the effect of Earth's atmosphere, because short-exposure images contain higher spatial frequency content than their long-exposure counterparts. New objects, such as dim companions of binary stars, have been revealed using these techniques, thus achieving some discoveries via short-exposure images, but always in the context of a detailed analysis of a specific area of the sky, not through a synoptic search of the heavens such as those carried out by asteroid research programs like Space Watch, PAN-STARRS, or the Catalina Sky Survey. In this paper simulated short-exposure images are processed to detect simulated stars in the presence of strong atmospheric turbulence. The strong turbulence condition produces long exposures with wide star patterns, while the short-exposure images feature better concentration of the available photons but large random position variance. A new processing strategy is utilized, involving a matched-filter technique with a short-exposure atmospheric impulse response model. The results from this process are compared to efforts to detect stars in long-exposure images taken over the same interval in which the short-exposure images are collected. The two techniques are compared, and the short-exposure imaging technique is found to produce the higher probability of detection.
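
    A matched-filter detection step of the kind described can be sketched as follows; the 3x3 spot below is an illustrative stand-in for the short-exposure atmospheric impulse response model:

```python
import numpy as np

def matched_filter_detect(img, psf, threshold):
    """Correlate the image with the expected point-spread function and
    flag pixels whose response exceeds `threshold` (pure-NumPy, FFT-based)."""
    # Zero-mean the template so a flat background gives zero response.
    t = psf - psf.mean()
    T = np.zeros_like(img, dtype=float)
    kr, kc = t.shape
    T[:kr, :kc] = t
    # Circular cross-correlation via FFT: resp[n] = sum_k img[k+n] * T[k].
    resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(T))))
    return resp > threshold, resp

# A point source blurred into a 3x3 spot, on a flat background.
psf = np.array([[0.0, 1.0, 0.0],
                [1.0, 4.0, 1.0],
                [0.0, 1.0, 0.0]])
img = np.full((16, 16), 2.0)
img[5:8, 5:8] += psf                     # star with top-left corner at (5, 5)
mask, resp = matched_filter_detect(img, psf, threshold=10.0)
```

    Concentrating the photons into a compact template, as short exposures do, is exactly what makes the matched-filter response peak stand out above the background.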

  7. Description, Recognition and Analysis of Biological Images

    SciTech Connect

    Yu Donggang; Jin, Jesse S.; Luo Suhuai; Pham, Tuan D.; Lai Wei

    2010-01-25

    Description, recognition and analysis biological images plays an important role for human to describe and understand the related biological information. The color images are separated by color reduction. A new and efficient linearization algorithm is introduced based on some criteria of difference chain code. A series of critical points is got based on the linearized lines. The series of curvature angle, linearity, maximum linearity, convexity, concavity and bend angle of linearized lines are calculated from the starting line to the end line along all smoothed contours. The useful method can be used for shape description and recognition. The analysis, decision, classification of the biological images are based on the description of morphological structures, color information and prior knowledge, which are associated each other. The efficiency of the algorithms is described based on two applications. One application is the description, recognition and analysis of color flower images. Another one is related to the dynamic description, recognition and analysis of cell-cycle images.

  8. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
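
    A minimal real-valued GA of the kind described, evolving coefficient vectors by selection, crossover, and mutation, can be sketched as follows; the quadratic stand-in fitness replaces the satellite-image reconstruction error the paper actually scores:

```python
import numpy as np

rng = np.random.default_rng(2)

def evolve(fitness, n_genes, pop_size=40, n_gen=200, sigma=0.1):
    """Minimal real-valued GA: truncation selection, uniform crossover,
    Gaussian mutation. `fitness` is minimized."""
    pop = rng.normal(0, 1, (pop_size, n_genes))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[:pop_size // 2]]    # keep best half
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        mask = rng.random((pop_size, n_genes)) < 0.5       # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        pop += rng.normal(0, sigma, pop.shape)             # mutation
        pop[0] = elite[0]                                  # elitism
    return pop[0]

# Stand-in fitness: distance of a 4-tap coefficient vector from a target
# filter (the real work scores reconstructed satellite images instead).
target = np.array([0.5, 0.5, -0.5, 0.5])
best = evolve(lambda w: np.sum((w - target) ** 2), n_genes=4)
```

    The paper's approach evolves matched forward/inverse coefficient pairs at each MRA level, so the genome is correspondingly longer, but the selection-crossover-mutation loop is the same shape.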

  9. Improving the performance of image classification by Hahn moment invariants.

    PubMed

    Sayyouri, Mhamed; Hmimid, Abdeslam; Qjidaa, Hassan

    2013-11-01

    The discrete orthogonal moments are powerful descriptors for image analysis and pattern recognition. However, the computation of these moments is a time-consuming procedure. To solve this problem, a new approach that permits the fast computation of Hahn's discrete orthogonal moments is presented in this paper. The proposed method is based, on the one hand, on the computation of Hahn's discrete orthogonal polynomials using the recurrence relation with respect to the variable x instead of the order n, together with the symmetry property of Hahn's polynomials, and, on the other hand, on an innovative image representation where the image is described by a number of homogeneous rectangular blocks instead of individual pixels. The paper also proposes a new set of Hahn's moment invariants under translation, scaling, and rotation of the image. This set of invariant moments is computed as a linear combination of invariant geometric moments from a finite number of image intensity slices. Several experiments are performed to validate the effectiveness of our descriptors in terms of the acceleration of computation time, the reconstruction of the image, the invariability, and the classification. The performance of Hahn's moment invariants used as pattern features for a pattern classification application is compared with Hu [IRE Trans. Inform. Theory 8, 179 (1962)] and Krawtchouk [IEEE Trans. Image Process. 12, 1367 (2003)] moment invariants.
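As a minimal illustration of the moment-invariant idea (using plain geometric moments rather than Hahn's orthogonal polynomials, which require the recurrence machinery the paper describes), normalized central moments are invariant to translation and, approximately on a discrete grid, to scaling:

```python
import numpy as np

def central_moment(img, p, q):
    # Geometric central moments: centring coordinates on the image
    # centroid gives translation invariance.
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def normalized_moment(img, p, q):
    # Dividing by mu00^(1 + (p+q)/2) adds scale invariance.
    mu00 = central_moment(img, 0, 0)
    return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2)

# A rectangular blob and a translated copy give identical normalized moments.
img = np.zeros((64, 64))
img[10:20, 14:30] = 1.0
shifted = np.roll(img, (17, 9), axis=(0, 1))
eta = normalized_moment(img, 2, 0)
eta_shift = normalized_moment(shifted, 2, 0)
```

Rotation invariants (Hu's set, and the Hahn set the paper proposes) are then built as combinations of such normalized moments.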

  10. Improving image accuracy of region-of-interest in cone-beam CT using prior image.

    PubMed

    Lee, Jiseoc; Kim, Jin Sung; Cho, Seungryong

    2014-03-06

    In diagnostic follow-ups of diseases, such as calcium scoring in the kidney or fat content assessment in the liver using repeated CT scans, quantitatively accurate and consistent CT values are desirable at a low cost of radiation dose to the patient. The region-of-interest (ROI) imaging technique is considered a reasonable dose reduction method in CT scans because of its shielding geometry outside the ROI. However, image artifacts in the reconstructed images caused by missing data outside the ROI may degrade overall image quality and, more importantly, can substantially decrease image accuracy inside the ROI. In this study, we propose a method to increase image accuracy of the ROI and to reduce imaging radiation dose by utilizing data outside the ROI from prior scans in repeated CT applications. We performed both numerical and experimental studies to validate the proposed method. In a numerical study, we used an XCAT phantom whose liver and stomach change size from one scan to another. The proposed method improved image accuracy of the liver, decreasing the error from 44.4 HU to -0.1 HU compared to an existing method that extrapolates data to compensate for the missing data outside the ROI. Repeated cone-beam CT (CBCT) images of a patient who underwent daily CBCT scans for radiation therapy were also used to demonstrate the performance of the proposed method experimentally. The results showed improved image accuracy inside the ROI: the magnitude of the error decreased from -73.2 HU to 18 HU, and image artifacts were effectively reduced throughout the entire image.

  11. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an oculist making a rational pathological diagnosis for patients with ocular diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files transformed from medical images acquired with the anterior-chamber optical coherence tomographer (AC-OCT) and its corresponding image-processing software. The algorithms for image analysis and ACV calculation are implemented in VC++, a series of anterior chamber images of typical patients is analyzed, and the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, measures should be taken to simplify the manual preprocessing of the images.

  12. Improved image fusion method based on NSCT and accelerated NMF.

    PubMed

    Wang, Juan; Lai, Siyu; Li, Mingdong

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to get the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. Simulated experiments show that our method indeed improves performance when compared to PCA, NSCT-based, NMF-based and weighted-NMF-based algorithms.
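For reference, the standard multiplicative update rules that accelerated NMF variants like the paper's ANMF build on can be sketched as follows; the ANMF-specific optimizations and the NSCT stage are not reproduced here, and the rank, iteration count, and test matrix are illustrative.

```python
import numpy as np

def nmf(V, rank, iters=300, eps=1e-9):
    # Lee-Seung multiplicative updates for V ~= W @ H with W, H >= 0.
    # Accelerated variants optimize updates like these so that fewer
    # iterations are needed; this is the unaccelerated baseline.
    rng = np.random.default_rng(1)
    W = rng.random((V.shape[0], rank)) + 0.1
    H = rng.random((rank, V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Exactly factorable non-negative test data (true rank 4).
rng = np.random.default_rng(2)
V = rng.random((32, 4)) @ rng.random((4, 32))
W, H = nmf(V, rank=8)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In the fusion pipeline, V would be a low-frequency NSCT sub-image (or a stack of them) and the fused low-pass coefficients are assembled from the factors.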

  13. Improved Image Fusion Method Based on NSCT and Accelerated NMF

    PubMed Central

    Wang, Juan; Lai, Siyu; Li, Mingdong

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to get the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. Simulated experiments show that our method indeed improves performance when compared to PCA, NSCT-based, NMF-based and weighted-NMF-based algorithms. PMID:22778618

  14. Optical Analysis of Microscope Images

    NASA Astrophysics Data System (ADS)

    Biles, Jonathan R.

    Microscope images were analyzed with coherent and incoherent light using analog optical techniques. These techniques were found to be useful for analyzing large numbers of nonsymbolic, statistical microscope images. In the first part, phase-coherent transparencies having 20-100 human multiple myeloma nuclei were simultaneously photographed at 100 power magnification using high-resolution holographic film developed to high contrast. An optical transform was obtained by focusing the laser onto each nuclear image and allowing the diffracted light to propagate onto a one-dimensional photosensor array. This method reduced the data to the position of the first two intensity minima and the intensity of successive maxima. These values were utilized to estimate the four most important cancer detection clues of nuclear size, shape, darkness, and chromatin texture. In the second part, the geometric and holographic methods of phase-incoherent optical processing were investigated for pattern recognition of real-time, diffuse microscope images. The theory and implementation of these processors were discussed in view of their mutual problems of dimness, image bias, and detector resolution. The dimness problem was solved by using either a holographic correlator or a speckle-free laser microscope. The latter was built using a spinning tilted mirror which caused the speckle to change so quickly that it averaged out during the exposure. To solve the bias problem, low-image-bias templates were generated by four techniques: microphotography of samples, creation of typical shapes with a computer graphics editor, transmission holography of photoplates of samples, and spatially coherent color image bias removal. The first of these templates was used to perform correlations with bacteria images. The aperture bias was successfully removed from the correlation with a video frame subtractor. To overcome the limited detector resolution it is necessary to discover some analog nonlinear intensity

  15. Geopositioning Precision Analysis of Multiple Image Triangulation Using Lro Nac Lunar Images

    NASA Astrophysics Data System (ADS)

    Di, K.; Xu, B.; Liu, B.; Jia, M.; Liu, Z.

    2016-06-01

    This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang'e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations, with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least-squares fitting using a vast number of virtual control points generated from the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions of using all 9 images are 0.60 m, 0.50 m, and 1.23 m in the along-track, cross-track, and height directions, which are better than those of most combinations of two or more images. However, triangulation with fewer, carefully selected images can produce better precision than using all the images.
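The RPC-fitting step can be illustrated with a first-order rational model fitted by linear least squares to virtual control points. Here a simple projective mapping stands in for the rigorous collinearity-equation sensor model, and all coefficient values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Virtual control points from a stand-in "rigorous" model: a simple
# projective mapping plays the role of the collinearity equations.
X = rng.uniform(-1, 1, (500, 3))                 # normalized lat/lon/height
num = 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * X[:, 2] + 0.05
den = 1 + 0.02 * X[:, 0] + 0.01 * X[:, 2]
r = num / den                                    # image-space coordinate

# First-order RPC fit: r = P(X) / Q(X), with Q's constant term fixed to 1.
# Linearize as P_terms . p - r * Q_terms . q = r and solve by least squares.
P_terms = np.column_stack([np.ones(len(X)), X])  # 1, x, y, z
Q_terms = X                                      # x, y, z  (Q = 1 + ...)
A = np.hstack([P_terms, -r[:, None] * Q_terms])
coef, *_ = np.linalg.lstsq(A, r, rcond=None)
p, q = coef[:4], coef[4:]
r_fit = (P_terms @ p) / (1 + Q_terms @ q)
rmse = np.sqrt(np.mean((r - r_fit) ** 2))
```

Production RPCs use third-order polynomials in both numerator and denominator, but the fitting principle (linearize the ratio, solve a linear system over many virtual control points) is the same.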

  16. Imaging flow cytometry for phytoplankton analysis.

    PubMed

    Dashkova, Veronika; Malashenkov, Dmitry; Poulton, Nicole; Vorobjev, Ivan; Barteneva, Natasha S

    2017-01-01

    This review highlights the concepts and instrumentation of imaging flow cytometry technology and in particular its use for phytoplankton analysis. Imaging flow cytometry, a hybrid technology combining the speed and statistical capabilities of flow cytometry with the imaging features of microscopy, is rapidly advancing as a cell imaging platform that overcomes many of the limitations of current techniques and has contributed significantly to the advancement of phytoplankton analysis in recent years. This review presents the various instruments relevant to the field and currently used for assessing the composition and abundance of complex phytoplankton communities, size structure determination, biovolume estimation, detection of harmful algal bloom species, evaluation of viability and metabolic activity, and other applications. We also present our data on viability and metabolic assessment of Aphanizomenon sp. cyanobacteria using the ImageStream X Mark II imaging cytometer. Herein, we highlight the immense potential of imaging flow cytometry for microalgal research, but also discuss limitations and future developments.

  17. Improved diagonal queue medical image steganography using Chaos theory, LFSR, and Rabin cryptosystem.

    PubMed

    Jain, Mamta; Kumar, Anil; Choudhary, Rishabh Charan

    2016-09-09

    In this article, we propose an improved diagonal queue medical image steganography scheme for the transmission of a patient's secret medical data using the chaotic standard map, a linear feedback shift register, and the Rabin cryptosystem, improving on a previous technique (Jain and Lenka, Brain Inform. 3:39-51, 2016). The proposed algorithm comprises four stages: generation of pseudo-random sequences (by the linear feedback shift register and the standard chaotic map), permutation and XORing using the pseudo-random sequences, encryption using the Rabin cryptosystem, and steganography using the improved diagonal queues. A security analysis has been carried out. Performance is evaluated using MSE, PSNR, and maximum embedding capacity, as well as by histogram analysis between various brain-disease stego and cover images.
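Of the four stages, the linear feedback shift register is the easiest to sketch. This hypothetical 16-bit Galois LFSR (taps 0xB400, a standard maximal-length choice, not necessarily the authors' configuration) produces the kind of pseudo-random bit stream used for permutation and XOR masking:

```python
def lfsr_step(state, taps=0xB400):
    # One Galois-LFSR step over GF(2): shift right, and XOR in the tap
    # mask whenever the bit shifted out is 1.
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= taps
    return state

def lfsr_bits(seed, n):
    # Pseudo-random bit stream for permutation / XOR masking.
    out, state = [], seed
    for _ in range(n):
        out.append(state & 1)
        state = lfsr_step(state)
    return out

bits = lfsr_bits(0xACE1, 16)

# A maximal-length 16-bit LFSR revisits its seed only after 2**16 - 1 steps.
state, period = 0xACE1, 0
while True:
    state = lfsr_step(state)
    period += 1
    if state == 0xACE1:
        break
```

The period of 65535 (every nonzero 16-bit state visited once) is what makes such registers useful as lightweight keystream generators inside the scheme.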

  18. Materials characterization through quantitative digital image analysis

    SciTech Connect

    J. Philliber; B. Antoun; B. Somerday; N. Yang

    2000-07-01

    A digital image analysis system has been developed to allow advanced quantitative measurement of microstructural features. This capability is maintained as part of the microscopy facility at Sandia, Livermore. The system records images digitally, eliminating the use of film. Images obtained from other sources may also be imported into the system. Subsequent digital image processing enhances image appearance through contrast and brightness adjustments. The system measures a variety of user-defined microstructural features, including area fraction, particle size and spatial distributions, grain sizes, and orientations of elongated particles. These measurements are made in a semi-automatic mode through the use of macro programs and a computer-controlled translation stage. A routine has been developed to create large montages of 50+ separate images. Individual image frames are matched to the nearest pixel to create seamless montages. Results from three different studies are presented to illustrate the capabilities of the system.
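A minimal sketch of the kind of measurement described (area fraction and particle sizes from a thresholded image); the synthetic micrograph and the threshold value are illustrative, not Sandia's system:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)

# Synthetic micrograph: dark matrix with two bright "particles".
img = np.zeros((100, 100))
img[10:20, 10:25] = 1.0    # particle 1: 10 x 15 px
img[60:75, 40:50] = 1.0    # particle 2: 15 x 10 px
img += 0.05 * rng.standard_normal(img.shape)

binary = img > 0.5                               # brightness threshold
labels, n = ndimage.label(binary)                # connected-component particles
sizes = ndimage.sum(binary.astype(float), labels, index=range(1, n + 1))
area_fraction = binary.mean()                    # area fraction of second phase
```

Real systems add calibration (pixels to microns), shape and orientation statistics, and stage-driven montaging, but thresholding plus connected-component labeling is the core of the semi-automatic measurements listed above.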

  19. Imaging System and Method for Biomedical Analysis

    DTIC Science & Technology

    2013-03-11

    compressive decoding of sparse objects" by A. Coskun et al., 136 Analyst No. 17, pp. 3512-3518 (7 September 2011). Their fluorescent microscopy lensless ... prior art concept is a lensless imaging system proposed in a research paper entitled "Lensless wide-field fluorescent imaging on a chip using ... analysis. [0008] An article published by Sang Jun Moon et al., "Integrating Micro-fluidics and Lensless Imaging for Point-of-Care," 24 Biosens

  20. Theory of Image Analysis and Recognition.

    DTIC Science & Technology

    1983-01-24

    Narendra Ahuja, Image models; Ramalingam Chellappa, Image models; Matti Pietikainen, Texture analysis; David G. Morgenthaler, 3D digital geometry; Angela Y. Wu ... Restoration Parameter Choice: A Quantitative Guide," TR-965, October 1980. 70. Matti Pietikainen, "On the Use of Hierarchically Computed 'Mexican Hat' ... 81. Matti Pietikainen and Azriel Rosenfeld, "Image Segmentation by Texture Using Pyramid Node Linking," TR-1008, February 1981. 82. David G.

  1. Analysis of dynamic brain imaging data.

    PubMed Central

    Mitra, P P; Pesaran, B

    1999-01-01

    Modern imaging techniques for probing brain function, including functional magnetic resonance imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for analysis and visualization of such imaging data to separate the signal from the noise and characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging, and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: "noise" characterization and suppression, and "signal" characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for nonstationarity in the data. Of particular note are 1) the development of a decomposition technique (space-frequency singular value decomposition) that is shown to be a useful means of characterizing the image data, and 2) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources. PMID:9929474
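The multitaper estimator at the heart of these protocols can be sketched with SciPy's DPSS (Slepian) windows. This is a generic single-channel multitaper PSD estimate, not the paper's full pipeline (no space-frequency SVD or moving windows); the test signal, time-bandwidth product NW, and sampling rate are illustrative.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, NW=4, fs=1.0):
    # Multitaper estimate: average periodograms computed with K = 2*NW - 1
    # orthogonal DPSS tapers, trading a controlled bandwidth for reduced
    # variance relative to a single-taper periodogram.
    N = len(x)
    K = 2 * NW - 1
    tapers = dpss(N, NW, K)                      # shape (K, N)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(N, 1 / fs)
    return freqs, spectra.mean(axis=0) / fs

fs = 100.0
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 12.5 * t) + 0.5 * np.random.default_rng(5).standard_normal(1024)
freqs, psd = multitaper_psd(x, NW=4, fs=fs)
peak = freqs[np.argmax(psd)]                     # should sit near 12.5 Hz
```

In the paper's two-stage protocols, estimates like this feed both the noise characterization (e.g. locating periodic physiological artifacts) and the signal visualization stages.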

  2. Improved 3D cellular imaging by multispectral focus assessment

    NASA Astrophysics Data System (ADS)

    Zhao, Tong; Xiong, Yizhi; Chung, Alice P.; Wachman, Elliot S.; Farkas, Daniel L.

    2005-03-01

    Biological specimens are three-dimensional structures. However, when capturing their images through a microscope, only one plane in the field of view is in focus, and out-of-focus portions of the specimen affect image quality in the in-focus plane. It is well established that the microscope's point spread function (PSF) can be used for blur quantitation and the restoration of real images. However, this is an ill-posed problem, with no unique solution and high computational complexity. In this work, instead of estimating and using the PSF, we studied focus quantitation in multi-spectral image sets. A gradient map we designed was used to evaluate the degree of sharpness of each pixel, in order to identify blurred areas not to be considered. Experiments with realistic multi-spectral Pap smear images showed that measurement of their sharp gradients can provide depth information roughly comparable to human perception (through a microscope), while avoiding PSF estimation. Spectrum- and morphometrics-based statistical analysis for abnormal cell detection can then be implemented in an image database where the axial structure has been refined.
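The gradient-map idea can be sketched as a locally averaged gradient magnitude; the window size and the sharp-versus-blurred test pattern below are illustrative, not the authors' exact map.

```python
import numpy as np
from scipy import ndimage

def sharpness_map(img, win=9):
    # Per-pixel sharpness: gradient magnitude, averaged over a local
    # window. In-focus regions show strong local gradients, blurred
    # (out-of-focus) regions weak ones.
    gy, gx = np.gradient(img.astype(float))
    return ndimage.uniform_filter(np.hypot(gx, gy), size=win)

# Test scene: a striped pattern (sharp) next to a heavily blurred copy.
tile = (np.indices((64, 64)).sum(0) % 8 < 4).astype(float)
blurred = ndimage.gaussian_filter(tile, sigma=4)
scene = np.hstack([tile, blurred])
s = sharpness_map(scene)
left, right = s[:, :64].mean(), s[:, 64:].mean()   # sharp vs blurred halves
```

Thresholding such a map flags blurred pixels to exclude, which is exactly the role the paper assigns to its gradient map before the spectral and morphometric analysis.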

  3. Technique for image fusion based on nonsubsampled shearlet transform and improved pulse-coupled neural network

    NASA Astrophysics Data System (ADS)

    Kong, Weiwei; Liu, Jianping

    2013-01-01

    A new technique for image fusion based on the nonsubsampled shearlet transform (NSST) and an improved pulse-coupled neural network (PCNN) is proposed. NSST, as a novel multiscale geometric analysis tool, can be optimally efficient in representing images and capturing the geometric features of multidimensional data. As a result, NSST is introduced into the area of image fusion to decompose the source images at any scale and in any direction. The basic PCNN model is then refined into an improved PCNN (IPCNN), which is more concise and more effective. IPCNN adopts the contrast of each pixel in the images as the linking strength β, and the time matrix T of the subimages can be obtained via the synchronous pulse-burst property. Using IPCNN, the fused subimages can be obtained. Finally, the final fused image is obtained by applying the inverse NSST. Numerical experiments demonstrate that the new technique is competitive in the field of image fusion in terms of both fusion performance and computational efficiency.

  4. Improved image method for a holographic description of conical defects

    NASA Astrophysics Data System (ADS)

    Aref'eva, I. Ya.; Khramtsov, M. A.; Tikhanovskaya, M. D.

    2016-11-01

    The geodesics prescription in the holographic approach in the Lorentzian signature is applicable only for geodesics connecting spacelike-separated points at the boundary because there are no timelike geodesics that reach the boundary. Also, generally speaking, there is no direct analytic Euclidean continuation for a general background, such as a moving particle in the AdS space. We propose an improved geodesic image method for two-point Lorentzian correlators that is applicable for arbitrary time intervals when the space-time is deformed by point particles. We show that such a prescription agrees with the case where the analytic continuation exists and also with the previously used prescription for quasigeodesics. We also discuss some other applications of the improved image method: holographic entanglement entropy and multiple particles in the AdS3 space.

  5. Improving the efficiency of image guided brachytherapy in cervical cancer

    PubMed Central

    Franklin, Adrian; Ajaz, Mazhar; Stewart, Alexandra

    2016-01-01

    Brachytherapy is an essential component of the treatment of locally advanced cervical cancers. It enables the dose to the tumor to be boosted whilst allowing relative sparing of the normal tissues. Traditionally, cervical brachytherapy was prescribed to point A, but since the GEC-ESTRO guidelines were published in 2005, there has been a move towards prescribing the dose to a 3D volume. Image guided brachytherapy has been shown to reduce local recurrence and improve survival, and is optimally predicated on magnetic resonance imaging. Radiological studies, patient workflow, operative parameters, and intensive therapy planning can represent a challenge to clinical resources. This article explores the ways in which 3D conformal brachytherapy can be implemented and draws findings from recent literature and a well-developed hospital practice in order to suggest ways to improve the efficiency and efficacy of a brachytherapy service. Finally, we discuss relatively underexploited translational research opportunities. PMID:28115963

  6. Improved Starting Materials for Back-Illuminated Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2009-01-01

    An improved type of starting material for the fabrication of silicon-based imaging integrated circuits that include back-illuminated photodetectors has been conceived, and a process for making these starting materials is undergoing development. These materials are intended to enable reductions in dark currents and increases in quantum efficiencies, relative to those of comparable imagers made from prior silicon-on-insulator (SOI) starting materials. Some background information is prerequisite to a meaningful description of the improved starting materials and process. A prior SOI starting material, depicted in the upper part of the figure, includes: a) A device layer on the front side, typically between 2 and 20 µm thick, made of p-doped silicon (that is, silicon lightly doped with an electron acceptor, which is typically boron); b) A buried oxide (BOX) layer (that is, a buried layer of oxidized silicon) between 0.2 and 0.5 µm thick; and c) A silicon handle layer (also known as a handle wafer) on the back side, between about 600 and 650 µm thick. After fabrication of the imager circuitry in and on the device layer, the handle wafer is etched away, the BOX layer acting as an etch stop. In subsequent operation of the imager, light enters from the back, through the BOX layer. The advantages of back illumination over front illumination have been discussed in prior NASA Tech Briefs articles.

  7. An improved corner detection algorithm for image sequence

    NASA Astrophysics Data System (ADS)

    Yan, Minqi; Zhang, Bianlian; Guo, Min; Tian, Guangyuan; Liu, Feng; Huo, Zeng

    2014-11-01

    A SUSAN corner detection algorithm for image sequences is proposed in this paper. A correlation matching algorithm is used for coarse positioning of the detection area; after that, SUSAN corner detection is used to obtain interest points of the target. The SUSAN detector has been improved: because points within a small area are often incorrectly detected as corners, a neighbor direction filter is applied to reduce the error rate. Experimental results show that the algorithm enhances noise robustness and improves detection accuracy.
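The underlying SUSAN principle (without the paper's correlation pre-positioning or neighbor direction filter) can be sketched as follows; the mask radius, brightness threshold, and geometric threshold are conventional illustrative values.

```python
import numpy as np

def susan_response(img, t=0.1, radius=3):
    # USAN: for each nucleus pixel, count circular-mask pixels whose
    # brightness is within t of the nucleus. Small USAN areas signal
    # corners (SUSAN = "Smallest Univalue Segment Assimilating Nucleus").
    H, W = img.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    offsets = np.argwhere(yy ** 2 + xx ** 2 <= radius ** 2) - radius
    usan = np.zeros((H, W))
    for dy, dx in offsets:
        shifted = np.roll(img, (-dy, -dx), axis=(0, 1))
        usan += np.abs(shifted - img) < t
    g = len(offsets) / 2                  # geometric threshold (half mask area)
    return np.where(usan < g, g - usan, 0)

# Bright square on a dark background: response peaks at its four corners,
# while straight edges and flat regions are suppressed.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
resp = susan_response(img)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

At a corner the USAN covers roughly a quarter of the mask, well under the half-area threshold; along an edge it covers about half, so edges produce no response, which is the property the improved detector refines further.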

  8. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging.

    PubMed

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-05-07

    Forward scatter radar (FSR), a specially configured bistatic radar, offers target recognition and classification capabilities through Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of the radio holographic signal (RHS), an important procedure in the signal processing of FSR SISAR imaging. Based on an analysis of the signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of the mainlobe RHS. In addition, a quantitative analysis of the method's applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method's advantages in improving the accuracy of RHS reconstruction and imaging.
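As a generic illustration of segment-wise Hilbert processing (the paper's segmental transformation and mainlobe reconstruction details are not reproduced here), an analytic-signal envelope can be computed per segment; the segment length and test signal are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def segmental_envelope(x, seg_len=256):
    # Hilbert transform applied per segment rather than over the whole
    # record; each segment yields its own analytic signal, whose
    # magnitude recovers the local envelope.
    env = np.empty_like(x, dtype=float)
    for start in range(0, len(x), seg_len):
        seg = x[start:start + seg_len]
        env[start:start + seg_len] = np.abs(hilbert(seg))
    return env

fs = 1000.0
t = np.arange(2048) / fs
carrier = np.cos(2 * np.pi * 80 * t)
amplitude = 1 + 0.5 * np.sin(2 * np.pi * 2 * t)   # slow modulation to recover
env = segmental_envelope(amplitude * carrier)
err = np.mean(np.abs(env[100:-100] - amplitude[100:-100]))
```

Segmenting bounds the FFT size and limits edge artifacts to segment boundaries, at the cost of small ripples there; the paper's contribution is in how those segments and the mainlobe are handled for RHS specifically.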

  9. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging

    PubMed Central

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), a specially configured bistatic radar, offers target recognition and classification capabilities through Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of the radio holographic signal (RHS), an important procedure in the signal processing of FSR SISAR imaging. Based on an analysis of the signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of the mainlobe RHS. In addition, a quantitative analysis of the method's applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method's advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114

  10. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

    We propose a texture image segmentation algorithm for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adopted to classify the image based on a set of texture features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network are constructed in two parts: first, to reduce the effects of image noise on the features, a set of sigmoid functions is chosen depending on the type of feature; second, to enhance the contrast of the features, fuzzy mapping functions are adopted. The number of clusters in the output layer can be increased constantly by an auto-growing mechanism whenever a new pattern occurs. Experimental results and original and segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.

  11. Improvement of material decomposition and image quality in dual-energy radiography by reducing image noise

    NASA Astrophysics Data System (ADS)

    Lee, D.; Kim, Y.-s.; Choi, S.; Lee, H.; Choi, S.; Jo, B. D.; Jeon, P.-H.; Kim, H.; Kim, D.; Kim, H.; Kim, H.-J.

    2016-08-01

    Although digital radiography has been widely used for screening human anatomical structures in clinical situations, it has several limitations due to anatomical overlap. To resolve this problem, dual-energy imaging techniques, which provide a method for decomposing overlying anatomical structures, have been suggested as alternative imaging techniques. Previous studies have reported several dual-energy techniques, each resulting in different image quality. In this study, we compared three dual-energy techniques: simple log subtraction (SLS), simple smoothing of a high-energy image (SSH), and anti-correlated noise reduction (ACNR), with respect to material thickness quantification and image quality. To evaluate dual-energy radiography, we conducted Monte Carlo simulation and experimental phantom studies. The Geant4 Application for Tomographic Emission (GATE) v6.0 and tungsten anode spectral model using interpolation polynomials (TASMIP) codes were used for the simulation studies, and digital radiography and human chest phantoms were used for the experimental studies. The results of the simulation study showed improved image contrast-to-noise ratio (CNR) and coefficient of variation (COV) values and bone thickness estimation accuracy with the ACNR and SSH methods. Furthermore, the chest phantom images showed better image quality with the SSH and ACNR methods than with the SLS method. In particular, the bone texture characteristics were well described by the SSH and ACNR methods. In conclusion, the SSH and ACNR methods improved the accuracy of material quantification and image quality in dual-energy radiography compared to SLS. Our results can contribute to better diagnostic capabilities of dual-energy images and accurate material quantification in various clinical situations.
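The SLS and ACNR ideas can be sketched on a toy log-domain dual-energy pair. The attenuation values, noise level, and smoothing scale below are illustrative, and this is a generic two-material model rather than the study's GATE/TASMIP setup.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)

# Toy dual-energy pair (log domain): a bone disc over uniform soft tissue,
# with independent quantum noise in the low- and high-energy images.
yy, xx = np.mgrid[:128, :128]
bone = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)
sigma = 0.05
low = 2.0 * bone + 1.0 + sigma * rng.standard_normal(bone.shape)
high = 0.8 * bone + 0.6 + sigma * rng.standard_normal(bone.shape)

# SLS: weighted subtraction that cancels soft tissue, leaving bone.
bone_img = low - (1.0 / 0.6) * high
# The complementary combination cancels bone; its noise is strongly
# correlated with the bone image's noise (the basis of ACNR).
tissue_img = low - (2.0 / 0.8) * high

# ACNR: estimate the correlated high-frequency noise from the tissue
# image and subtract its projection from the bone image.
hp = lambda im: im - ndimage.gaussian_filter(im, 2.0)
hp_b, hp_t = hp(bone_img), hp(tissue_img)
c = (hp_b * hp_t).mean() / (hp_t ** 2).mean()
acnr_img = bone_img - c * hp_t

bg = (slice(0, 20), slice(0, 20))                # background patch, no bone
noise_sls, noise_acnr = bone_img[bg].std(), acnr_img[bg].std()
```

Because the tissue image carries no bone structure, subtracting its high-pass component suppresses the shared noise without disturbing the bone signal, which is why ACNR improves CNR over plain SLS.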

  12. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamic protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance for unveiling the mechanisms involved in cell adaptation to any type of stress. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. Although there exists a considerable number of image-based analytical tools for image processing and analysis, most of them are unfit to cope with these challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image and a noise/texture part; (ii) a multi-scale line detector is applied to the 'cartoon' image, coupled with (iii) a quasi-straight filament merging algorithm for fiber extraction. The proposed robust actin filament image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters, such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts
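Step (i), the cartoon/texture decomposition, can be sketched crudely with a Gaussian lowpass standing in for the more faithful variational decompositions typically used; the filament image, noise level, and smoothing scale are illustrative.

```python
import numpy as np
from scipy import ndimage

def cartoon_texture_split(img, sigma=2.0):
    # Crude stand-in for the decomposition step: a Gaussian lowpass as
    # the 'cartoon' part and the residual as the noise/texture part.
    # (Total-variation-based splits are the usual, more faithful choice.)
    cartoon = ndimage.gaussian_filter(img.astype(float), sigma)
    return cartoon, img - cartoon

rng = np.random.default_rng(7)
img = np.zeros((64, 64))
img[:, 30:34] = 1.0                          # a vertical "filament"
noisy = img + 0.2 * rng.standard_normal(img.shape)
cartoon, texture = cartoon_texture_split(noisy)
```

By construction the two parts sum back to the input; the line detector of step (ii) then runs on the cartoon part, where the filament ridge survives while most pixel noise has moved to the texture part.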

  13. On image analysis in fractography (Methodological Notes)

    NASA Astrophysics Data System (ADS)

    Shtremel', M. A.

    2015-10-01

    As in other areas of image analysis, fractography has no universal method for information convolution. An effective characteristic of an image is found by analyzing the essence and origin of every class of objects. As follows from the geometric definition of a fractal curve, its projection onto any straight line covers a certain segment many times; therefore, neither a time series (a single-valued function of time) nor an image (a single-valued function of the plane) can be a fractal. For applications, multidimensional multiscale characteristics of an image are necessary. "Full" wavelet series break the law of conservation of information.

  14. Retinal image analysis: concepts, applications and potential.

    PubMed

    Patton, Niall; Aslam, Tariq M; MacGillivray, Thomas; Deary, Ian J; Dhillon, Baljean; Eikelboom, Robert H; Yogesan, Kanagasingam; Constable, Ian J

    2006-01-01

    As digital imaging and computing power increasingly develop, so too does the potential to use these technologies in ophthalmology. Image processing, analysis and computer vision techniques are increasing in prominence in all fields of medical science, and are especially pertinent to modern ophthalmology, which is heavily dependent on visually oriented signs. The retinal microvasculature is unique in that it is the only part of the human circulation that can be directly visualised non-invasively in vivo, readily photographed and subjected to digital image analysis. Exciting developments in image processing relevant to ophthalmology over the past 15 years include the progress being made towards automated diagnostic systems for conditions such as diabetic retinopathy, age-related macular degeneration and retinopathy of prematurity. These diagnostic systems offer the potential to be used in large-scale screening programs, promising significant resource savings as well as freedom from observer bias and fatigue. In addition, quantitative measurements of retinal vascular topography using digital image analysis from retinal photography have been used as research tools to better understand the relationship between the retinal microvasculature and cardiovascular disease. Furthermore, advances in electronic media transmission increase the relevance of using image processing in 'teleophthalmology' as an aid in clinical decision-making, with particular relevance to large rural-based communities. In this review, we outline the principles upon which retinal digital image analysis is based. We discuss current techniques used to automatically detect landmark features of the fundus, such as the optic disc, fovea and blood vessels. We review the use of image analysis in the automated diagnosis of pathology (with particular reference to diabetic retinopathy). We also review its role in defining and performing quantitative measurements of vascular topography.

  15. Edge enhanced morphology for infrared image analysis

    NASA Astrophysics Data System (ADS)

    Bai, Xiangzhi; Liu, Haonan

    2017-01-01

    Edge information is critical for infrared images. Morphological operators have been widely used for infrared image analysis; however, the edge information in infrared images is weak, and conventional morphological operators cannot make full use of it. To strengthen the edge information in morphological operators, edge enhanced morphology is proposed in this paper. Firstly, the edge enhanced dilation and erosion operators are given and analyzed. Secondly, pseudo operators derived from the edge enhanced dilation and erosion operators are defined. Finally, applications to infrared image analysis are presented to verify the effectiveness of the proposed edge enhanced morphological operators. The proposed operators are useful for applications related to edge features and could be extended to a wide range of applications.
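    The paper's exact operator definitions are not reproduced here, but one plausible reading of edge-enhanced dilation — boosting the image by a weighted gradient-magnitude edge map before taking the neighbourhood maximum — can be sketched in NumPy. The `lam` weight and the 3x3 structuring element are assumptions:

    ```python
    import numpy as np

    def dilate3(img):
        """3x3 grayscale dilation (max over the 8-neighbourhood plus centre)."""
        h, w = img.shape
        p = np.pad(img, 1, mode="edge")
        shifts = [p[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)]
        return np.max(shifts, axis=0)

    def edge_magnitude(img):
        """Gradient-magnitude edge map."""
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)

    def edge_enhanced_dilate(img, lam=0.5):
        """Hypothetical edge-enhanced dilation: strengthen edge pixels
        before the neighbourhood maximum (lam is an illustrative weight)."""
        return dilate3(img.astype(float) + lam * edge_magnitude(img))
    ```

    Because dilation is monotone and the added edge term is non-negative, the edge-enhanced result never falls below plain dilation; the boost concentrates where edges are, which matches the stated goal of strengthening weak infrared edges.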

  16. A new ultrasonic transducer for improved contrast nonlinear imaging.

    PubMed

    Bouakaz, Ayache; Cate, Folkert ten; de Jong, Nico

    2004-08-21

    Second harmonic imaging has provided significant improvement in contrast detection over fundamental imaging. This improvement results from the higher contrast-to-tissue ratio (CTR) achievable at the second harmonic frequency. Nevertheless, differentiating between contrast and tissue at the second harmonic frequency is still cumbersome in many situations, and contrast detection remains one of the main challenges, especially in the capillaries. The reduced CTR is mainly caused by the generation of second harmonic energy through nonlinear propagation effects in tissue, which obscures the echoes from contrast bubbles. In a previous study, we demonstrated theoretically that the CTR increases with the harmonic number. The purpose of our study was therefore to increase the CTR by selectively looking at the higher harmonic frequencies. In order to receive these high-frequency components (third up to fifth harmonic), a new ultrasonic phased-array transducer has been constructed. The main advantage of the new design is its wide frequency bandwidth. The array contains two different types of elements arranged in an interleaved pattern (odd and even elements), enabling separate transmission and reception modes. The odd elements operate at 2.8 MHz with 80% bandwidth, whereas the even elements have a centre frequency of 900 kHz with a bandwidth of 50%. The probe is connected to a Vivid 5 system (GE-Vingmed), and dedicated driving software was developed. The total bandwidth of the transducer is estimated to be more than 150%, which enables higher harmonic imaging at adequate sensitivity and signal-to-noise ratio compared with standard medical array transducers. We describe in this paper the design and fabrication of the array transducer. Moreover, its acoustic properties are measured and its performance for nonlinear contrast imaging is evaluated in vitro and in vivo.
The preliminary results demonstrate the advantages of

  17. Validation of an improved 'diffeomorphic demons' algorithm for deformable image registration in image-guided radiation therapy.

    PubMed

    Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao

    2014-01-01

    Deformable image registration (DIR) is widely used in radiation therapy, for example in automatic contour generation, dose accumulation, and tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm was proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey-value gradient similarity term and a transformation error term were added to the demons energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function so that the iteration number could be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the proposed registration method achieves higher precision and faster convergence. Because scanning conditions can differ across radiotherapy fractions, the density ranges of the treatment image and the planning image may differ; in such cases, the improved demons algorithm can still achieve fast and accurate registration.
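    For orientation, the classic demons force that the improved energy function builds on can be written in a few lines. This sketch shows only the basic Thirion update field; the paper's contributions (the gradient-constancy term, the ESM-derived update formula and the L-BFGS optimisation) are not shown:

    ```python
    import numpy as np

    def demons_update(fixed, moving):
        """One classic Thirion demons update field for 2D images:
        u = (M - F) * grad(F) / (|grad(F)|^2 + (M - F)^2).
        Returns the (uy, ux) displacement components."""
        diff = moving.astype(float) - fixed.astype(float)
        gy, gx = np.gradient(fixed.astype(float))
        denom = gx ** 2 + gy ** 2 + diff ** 2
        denom[denom == 0] = 1.0  # avoid division by zero on flat, matched pixels
        return diff * gy / denom, diff * gx / denom
    ```

    In the full algorithm this force is smoothed, composed with the current transformation, and iterated until the energy stops decreasing — which is where the paper's automatically determined iteration count comes in.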

  18. Image Restoration Based on Statistic PSF Modeling for Improving the Astrometry of Space Debris

    NASA Astrophysics Data System (ADS)

    Sun, Rongyu; Jia, Peng

    2017-04-01

    Space debris is a special class of fast-moving near-Earth objects and is also an interesting topic in time-domain astronomy. Optical survey is the main technique for observing space debris, contributing much to studies of the space environment. However, due to the motion of the registered objects, image degradation is critical in optical space debris observations, as it affects the efficiency of data reduction and lowers the precision of astrometry. Therefore, image restoration in the form of deconvolution can be applied to improve data quality and reduction accuracy. To improve the image processing and optimize the reduction, the image degradation across the field of view is modeled statistically with principal component analysis, and the effective mean point-spread function (PSF) is derived from the raw images, which is then used in the image restoration. To test efficiency and reliability, trial observations were made for both low-Earth-orbit and high-Earth-orbit objects. The positions of all targets were measured using our novel approach and compared with the reference positions, and the performance of image restoration employing our estimated PSF was compared with several competitive approaches. The proposed image restoration outperformed the others: the influence of image degradation was distinctly reduced, resulting in a higher signal-to-noise ratio and more precise astrometric measurements.
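    One hedged sketch of the statistical PSF-modelling idea: stack normalised star cutouts, run a principal component analysis, and average low-rank reconstructions to obtain an effective mean PSF. The paper's actual pipeline differs in detail; the unit-flux normalisation and the number of retained components here are assumptions:

    ```python
    import numpy as np

    def mean_psf_pca(stamps, n_comp=3):
        """Estimate an effective mean PSF from star cutouts via PCA:
        normalise each stamp to unit flux, keep the first n_comp
        principal components of the stamp ensemble, and average the
        low-rank reconstructions (illustrative sketch only)."""
        X = np.stack([s.ravel() / s.sum() for s in stamps])  # rows = stamps
        mu = X.mean(axis=0)
        U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
        recon = mu + (U[:, :n_comp] * S[:n_comp]) @ Vt[:n_comp]
        psf = recon.mean(axis=0).reshape(stamps[0].shape)
        return psf / psf.sum()
    ```

    Truncating to a few components suppresses stamp-to-stamp noise while keeping the degradation structure shared across the field of view, which is the property the subsequent deconvolution relies on.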

  19. Image guided dose escalated prostate radiotherapy: still room to improve

    PubMed Central

    Martin, Jarad M; Bayley, Andrew; Bristow, Robert; Chung, Peter; Gospodarowicz, Mary; Menard, Cynthia; Milosevic, Michael; Rosewall, Tara; Warde, Padraig R; Catton, Charles N

    2009-01-01

    Background: Prostate radiotherapy (RT) dose escalation has been reported to result in improved biochemical control at the cost of greater late toxicity. We report on the application of 79.8 Gy in 42 fractions of prostate image-guided RT (IGRT). The primary objective was to assess 5-year biochemical control and potential prognostic factors by the Phoenix definition. Secondary endpoints included acute and late toxicity by the Radiation Therapy Oncology Group (RTOG) scoring scales. Methods: Between October 2001 and June 2003, 259 men were treated, with at least 2 years of follow-up. 59 patients had low-, 163 intermediate- and 37 high-risk disease. 43 received adjuvant hormonal therapy (HT), mostly for high-risk or multiple-risk-factor intermediate-risk disease (n = 25). They received either 3-dimensional conformal RT (3DCRT, n = 226) or intensity-modulated RT (IMRT), including daily on-line IGRT with intraprostatic fiducial markers. Results: Median follow-up was 67.8 months (range 24.4-84.7). There was no severe (grade 3-4) acute toxicity, and grade 2 acute gastrointestinal (GI) toxicity was unusual (10.1%). The 5-year incidence of grade 2-3 late GI and genitourinary (GU) toxicity was 13.7% and 12.1%, with corresponding grade 3 figures of 3.5% and 2.0%, respectively. HT was associated with an increased risk of grade 2-3 late GI toxicity (11% vs 21%, p = 0.018). Using the Phoenix definition for biochemical failure, the 5-year bNED was 88.4%, 76.5% and 77.9% for low-, intermediate- and high-risk patients, respectively. On univariate analysis, T-category and Gleason grade correlated with Phoenix bNED (p = 0.006 and 0.039, respectively). Hormonal therapy was not a significant prognostic factor on uni- or multivariate analysis. Men with positive prostate biopsies following RT had a lower chance of bNED at 5 years (34.4% vs 64.3%; p = 0.147). Conclusion: IGRT to 79.8 Gy results in favourable rates of late toxicity compared with published non-IGRT treated cohorts. Future avenues of investigation for

  20. Missile placement analysis based on improved SURF feature matching algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kaida; Zhao, Wenjie; Li, Dejun; Gong, Xiran; Sheng, Qian

    2015-03-01

    Precise battle damage assessment from video images for analyzing missile placement is a new study area. This article proposes an improved speeded-up robust features algorithm, named restricted speeded-up robust features (RSURF), which combines the combat application of TV-command-guided missiles with the characteristics of video images. Its restrictions are mainly reflected in two aspects: the first is to restrict the extraction area of feature points; the second is to restrict the number of feature points. The process of missile placement analysis based on video images was designed, and video splicing and random sample consensus (RANSAC) purification were implemented. The RSURF algorithm is shown to have good real-time performance while guaranteeing accuracy.

  1. An improved hyperspectral image classification approach based on ISODATA and SKR method

    NASA Astrophysics Data System (ADS)

    Hong, Pu; Ye, Xiao-feng; Yu, Hui; Zhang, Zhi-jie; Cai, Yu-fei; Tang, Xin; Tang, Wei; Wang, Chensheng

    2016-11-01

    Hyperspectral images provide not only spatial information but also a wealth of spectral information. A short list of applications includes environmental mapping, global change research, geological research, wetlands mapping, assessment of trafficability, plant and mineral identification and abundance estimation, crop analysis, and bathymetry. A crucial aspect of hyperspectral image analysis is the identification of the materials present in an object or scene being imaged. Classifying a hyperspectral image sequence amounts to identifying which pixels contain the various spectrally distinct materials that have been specified by the user. Several techniques for classifying multi-/hyperspectral pixels have been used, from minimum-distance and maximum-likelihood classifiers to correlation matched-filter-based approaches such as spectral signature matching and the spectral angle mapper. In this paper, an improved hyperspectral image classification algorithm is proposed. In the proposed method, an improved similarity measure is applied in which both spectrum similarity and spatial similarity are considered. We use two different weighting matrices to estimate the spectrum similarity and spatial similarity between two pixels, respectively, and then determine whether the two pixels represent the same material. To reduce the computational cost, the wavelet transform is applied before extracting the spectral and spatial features. The proposed method is tested using hyperspectral imagery collected by the National Aeronautics and Space Administration Jet Propulsion Laboratory. Experimental results demonstrate the efficiency of the new method on hyperspectral images associated with space-object material identification.
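    The combined spectrum/space similarity can be illustrated schematically. The sketch below uses a spectral-angle cosine and an exponential spatial kernel with scalar weights in place of the paper's learned weighting matrices; the weights and the spatial scale are illustrative:

    ```python
    import numpy as np

    def combined_similarity(cube, p, q, w_spec=0.7, w_spat=0.3, scale=5.0):
        """Similarity between pixels p and q of an (H, W, bands) cube,
        combining spectral-angle cosine with spatial proximity.
        Scalar weights stand in for the paper's weighting matrices."""
        a, b = cube[p], cube[q]
        spec = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        dist = np.hypot(p[0] - q[0], p[1] - q[1])
        spat = np.exp(-dist / scale)
        return w_spec * spec + w_spat * spat
    ```

    Thresholding such a score is one way to decide whether two pixels represent the same material: spectrally identical neighbours score high, while spectrally different, distant pixels score low.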

  2. EPA guidance on improving the image of psychiatry.

    PubMed

    Möller-Leimkühler, A M; Möller, H-J; Maier, W; Gaebel, W; Falkai, P

    2016-03-01

    This paper explores causes, explanations and consequences of the negative image of psychiatry and develops recommendations for improvement. It is primarily based on a WPA guidance paper on how to combat the stigmatization of psychiatry and psychiatrists and a Medline search on related publications since 2010. Furthermore, focussing on potential causes and explanations, the authors performed a selective literature search regarding additional image-related issues such as mental health literacy and diagnostic and treatment issues. Underestimation of psychiatry results from both unjustified prejudices of the general public, mass media and healthcare professionals and psychiatry's own unfavourable coping with external and internal concerns. Issues related to unjustified devaluation of psychiatry include overestimation of coercion, associative stigma, lack of public knowledge, need to simplify complex mental issues, problem of the continuum between normality and psychopathology, competition with medical and non-medical disciplines and psychopharmacological treatment. Issues related to psychiatry's own contribution to being underestimated include lack of a clear professional identity, lack of biomarkers supporting clinical diagnoses, limited consensus about best treatment options, lack of collaboration with other medical disciplines and low recruitment rates among medical students. Recommendations are proposed for creating and representing a positive self-concept with different components. The negative image of psychiatry is not only due to unfavourable communication with the media, but is basically a problem of self-conceptualization. Much can be improved. However, psychiatry will remain a profession with an exceptional position among the medical disciplines, which should be seen as its specific strength.

  3. Systematic infrared image quality improvement using deep learning based techniques

    NASA Astrophysics Data System (ADS)

    Zhang, Huaizhong; Casaseca-de-la-Higuera, Pablo; Luo, Chunbo; Wang, Qi; Kitchin, Matthew; Parmley, Andrew; Monge-Alvarez, Jesus

    2016-10-01

    Infrared thermography (IRT, or thermal video) uses thermographic cameras to detect and record radiation in the long-wavelength infrared range of the electromagnetic spectrum. It allows sensing environments beyond the limitations of visual perception and has therefore been widely used in many civilian and military applications. Even though current thermal cameras can provide high-resolution, high-bit-depth images, significant challenges remain in specific applications, such as poor contrast and low target signature resolution. This paper addresses quality improvement in IRT images for object recognition. A systematic approach based on image bias correction and deep learning is proposed to increase target signature resolution and optimise the baseline quality of inputs for object recognition. Our main objective is to maximise the useful information on the object to be detected even when the number of pixels on target is adversely small. The experimental results show that our approach can significantly improve target resolution and thus helps make object recognition more efficient in automatic target detection/recognition (ATD/R) systems.

  4. Visible and infrared reflectance imaging spectroscopy of paintings: pigment mapping and improved infrared reflectography

    NASA Astrophysics Data System (ADS)

    Delaney, John K.; Zeibel, Jason G.; Thoury, Mathieu; Littleton, Roy; Morales, Kathryn M.; Palmer, Michael; de la Rie, E. René

    2009-07-01

    Reflectance imaging spectroscopy, the collection of images in narrow spectral bands, has been developed for remote sensing of the Earth. In this paper we present findings on the use of imaging spectroscopy to identify and map artist pigments as well as to improve the visualization of preparatory sketches. Two novel hyperspectral cameras, one operating from the visible to near-infrared (VNIR) and the other in the shortwave infrared (SWIR), have been used to collect diffuse reflectance spectral image cubes on a variety of paintings. The resulting image cubes (VNIR 417 to 973 nm, 240 bands, and SWIR 970 to 1650 nm, 85 bands) were calibrated to reflectance and the resulting spectra compared with results from a fiber optics reflectance spectrometer (350 to 2500 nm). The results show good agreement between the spectra acquired with the hyperspectral cameras and those from the fiber reflectance spectrometer. For example, the primary blue pigments and their distribution in Picasso's Harlequin Musician (1924) are identified from the reflectance spectra and agree with results from X-ray fluorescence data and dispersed sample analysis. False color infrared reflectograms, obtained from the SWIR hyperspectral images, of extensively reworked paintings such as Picasso's The Tragedy (1903) are found to give improved visualization of changes made by the artist. These results show that including the NIR and SWIR spectral regions along with the visible provides for a more robust identification and mapping of artist pigments than using visible imaging spectroscopy alone.

  5. Improving reliability of live/dead cell counting through automated image mosaicing.

    PubMed

    Piccinini, Filippo; Tesei, Anna; Paganelli, Giulia; Zoli, Wainer; Bevilacqua, Alessandro

    2014-12-01

    Cell counting is one of the basic needs of most biological experiments. Numerous methods and systems have been studied to improve the reliability of counting. However, at present, manual cell counting performed with a hemocytometer still represents the gold standard, despite several problems limiting the reproducibility and repeatability of the counts and ultimately jeopardizing their reliability. We present our own approach, based on image processing techniques, to improve counting reliability. It works in two stages: first building a high-resolution image of the hemocytometer's grid, then counting the live and dead cells by tagging the image with flags of different colours. In particular, we introduce GridMos (http://sourceforge.net/p/gridmos), a fully automated mosaicing method to obtain a mosaic representing the whole hemocytometer's grid. In addition to offering more significant statistics, the mosaic "freezes" the culture status, thus permitting analysis by more than one operator. Finally, the mosaic can be tagged using an image editor, markedly improving counting reliability. The experiments performed confirm the improvements brought about by the proposed counting approach in terms of both reproducibility and repeatability, suggesting the use of a mosaic of an entire hemocytometer's grid, labelled through an image editor, as the best candidate for a new gold standard in cell counting.

  6. Improving image classification in a complex wetland ecosystem through image fusion techniques

    NASA Astrophysics Data System (ADS)

    Kumar, Lalit; Sinha, Priyakant; Taylor, Subhashni

    2014-01-01

    The aim of this study was to evaluate the impact of image fusion techniques on vegetation classification accuracies in a complex wetland system. Fusion of panchromatic (PAN) and multispectral (MS) Quickbird satellite imagery was undertaken using four image fusion techniques: Brovey, hue-saturation-value (HSV), principal components (PC), and Gram-Schmidt (GS) spectral sharpening. These four fusion techniques were compared in terms of their mapping accuracy to a normal MS image using maximum-likelihood classification (MLC) and support vector machine (SVM) methods. Gram-Schmidt fusion technique yielded the highest overall accuracy and kappa value with both MLC (67.5% and 0.63, respectively) and SVM methods (73.3% and 0.68, respectively). This compared favorably with the accuracies achieved using the MS image. Overall, improvements of 4.1%, 3.6%, 5.8%, 5.4%, and 7.2% in overall accuracies were obtained in case of SVM over MLC for Brovey, HSV, GS, PC, and MS images, respectively. Visual and statistical analyses of the fused images showed that the Gram-Schmidt spectral sharpening technique preserved spectral quality much better than the principal component, Brovey, and HSV fused images. Other factors, such as the growth stage of species and the presence of extensive background water in many parts of the study area, had an impact on classification accuracies.
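    Of the four fusion techniques compared, the Brovey transform is the simplest to state: each multispectral band is scaled by the ratio of the panchromatic band to the sum of the MS bands. A minimal sketch, assuming the MS cube has already been resampled to the PAN grid:

    ```python
    import numpy as np

    def brovey_fuse(ms, pan):
        """Brovey pan-sharpening: scale each MS band by pan / sum(MS bands).
        ms has shape (bands, H, W); pan has shape (H, W); co-registered."""
        total = ms.sum(axis=0)
        total = np.where(total == 0, 1.0, total)  # guard empty pixels
        return ms * (pan / total)
    ```

    The ratio injects the PAN band's spatial detail while preserving the relative band proportions at each pixel — which is also why Brovey distorts spectral content more than Gram-Schmidt sharpening does, consistent with the accuracy ranking reported above.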

  7. Improving PET spatial resolution and detectability for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Bal, H.; Guerin, L.; Casey, M. E.; Conti, M.; Eriksson, L.; Michel, C.; Fanti, S.; Pettinato, C.; Adler, S.; Choyke, P.

    2014-08-01

    Prostate cancer, one of the most common forms of cancer among men, can benefit from recent improvements in positron emission tomography (PET) technology. In particular, better spatial resolution, lower noise and higher detectability of small lesions could be greatly beneficial for early diagnosis and could provide a strong support for guiding biopsy and surgery. In this article, the impact of improved PET instrumentation with superior spatial resolution and high sensitivity are discussed, together with the latest development in PET technology: resolution recovery and time-of-flight reconstruction. Using simulated cancer lesions, inserted in clinical PET images obtained with conventional protocols, we show that visual identification of the lesions and detectability via numerical observers can already be improved using state of the art PET reconstruction methods. This was achieved using both resolution recovery and time-of-flight reconstruction, and a high resolution image with 2 mm pixel size. Channelized Hotelling numerical observers showed an increase in the area under the LROC curve from 0.52 to 0.58. In addition, a relationship between the simulated input activity and the area under the LROC curve showed that the minimum detectable activity was reduced by more than 23%.

  8. MRI Image Processing Based on Fractal Analysis

    PubMed

    Marusina, Mariya Y; Mochalina, Alexandra P; Frolova, Ekaterina P; Satikov, Valentin I; Barchuk, Anton A; Kuznetcov, Vladimir I; Gaidukov, Vadim S; Tarakanov, Segrey A

    2017-01-01

    Background: Cancer is one of the most common causes of human mortality, with about 14 million new cases and 8.2 million deaths reported in 2012. Early diagnosis of cancer through screening allows interventions to reduce mortality. Fractal analysis of medical images may be useful for this purpose. Materials and Methods: In this study, we examined magnetic resonance (MR) images of healthy livers and livers containing metastases from colorectal cancer. The fractal dimension and the Hurst exponent were chosen as diagnostic features for tomographic imaging, using the ImageJ software package for image processing, with the FracLac plugin applied for fractal analysis of a 120x150 pixel area. Calculations of the fractal dimensions of pathological and healthy tissue samples were performed using the box-counting method. Results: In pathological cases (foci formation), the Hurst exponent was less than 0.5 (the region of unstable statistical characteristics). For healthy tissue, the Hurst exponent was greater than 0.5 (the zone of stable characteristics). Conclusions: The study indicated the possibility of employing rapid fractal analysis for the detection of focal lesions of the liver. The Hurst exponent can be used as an important diagnostic characteristic for the analysis of medical images.
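    The box-counting method itself is straightforward to sketch: count occupied boxes at dyadic scales and fit the slope of log(count) against log(1/size). A minimal version for a square binary mask with power-of-two side (a simplification; FracLac applies this over grids and offsets):

    ```python
    import numpy as np

    def box_count_dimension(mask):
        """Box-counting fractal dimension of a square binary mask whose
        side is a power of two: count boxes containing any foreground
        pixel at each dyadic scale, then fit log(count) vs log(1/size)."""
        n = mask.shape[0]
        sizes, counts = [], []
        s = n
        while s >= 2:
            count = 0
            for i in range(0, n, s):
                for j in range(0, n, s):
                    if mask[i:i + s, j:j + s].any():
                        count += 1
            sizes.append(s)
            counts.append(count)
            s //= 2
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope
    ```

    A filled region yields a dimension near 2 and a straight curve near 1; focal lesions with irregular boundaries fall in between, which is what makes the measure diagnostically interesting.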

  9. Retinal imaging analysis based on vessel detection.

    PubMed

    Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila

    2017-03-13

    With increasing advances in digital imaging and computing power, computationally intelligent technologies are in high demand in ophthalmological care and treatment. In the current research, Retina Image Analysis (RIA) was developed for optometrists at the Eye Care Center in Management and Science University. This research aims to analyze the retina through vessel detection. The RIA assists in the analysis of retinal images, and specialists are offered various options, such as saving, processing and analyzing retinal images, through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation and length, and displaying the detected vessels on the retina. The Agile Unified Process was adopted as the development methodology. To conclude, Retina Image Analysis may help optometrists gain a better understanding when analyzing a patient's retina. The Retina Image Analysis procedure was developed using MATLAB (R2011b). Promising results are attained that are comparable with the state of the art.

  10. Conducting a SWOT Analysis for Program Improvement

    ERIC Educational Resources Information Center

    Orr, Betsy

    2013-01-01

    A SWOT (strengths, weaknesses, opportunities, and threats) analysis of a teacher education program, or any program, can be the driving force for implementing change. A SWOT analysis is used to assist faculty in initiating meaningful change in a program and to use the data for program improvement. This tool is useful in any undergraduate or degree…

  11. Rock fracture image acquisition and analysis

    NASA Astrophysics Data System (ADS)

    Wang, W.; Zongpu, Jia; Chen, Liwan

    2007-12-01

    As a cooperation project between Sweden and China, this paper presents rock fracture image acquisition and analysis. Rock fracture images are acquired using UV illumination and visible optical illumination. To represent the fracture network reasonably, we set up models to characterize the network; based on these models, we used the best-fit Feret method to automatically determine fracture zones and then skeletonized the fractures to obtain endpoints, junctions, holes, particles, and branches. Based on the new parameters and a set of common parameters, the fracture network density, porosity, connectivity and complexity can be obtained, and the fracture network characterized. In the following, we first present basic considerations and basic parameters for fractures (a primary study of the characteristics of rock fractures), then set up a model for fracture network analysis, subsequently use the model to analyze fracture networks in different images (two-dimensional fracture network analysis based on slices), and finally give conclusions and suggestions.

  12. Embedded signal approach to image texture reproduction analysis

    NASA Astrophysics Data System (ADS)

    Burns, Peter D.; Baxter, Donald

    2014-01-01

    Since image processing aimed at reducing image noise can also remove important texture, standard methods for evaluating the capture and retention of image texture are currently being developed. Concurrently, the evolution of the intelligence and performance of camera noise-reduction (NR) algorithms poses a challenge for these protocols. Many NR algorithms are 'content-aware', which can lead to different levels of NR being applied to various regions within the same digital image. We review the requirements for improved texture measurement. The challenge is to evaluate image signal (texture) content without having a test signal interfere with the processing of the natural scene. We describe an approach to texture reproduction analysis that uses periodic test signals embedded within image texture regions, using a target that combines natural image texture with a low-amplitude multi-frequency periodic signal. Two approaches for embedding periodic test signals in image texture are described. The stacked sine-wave method uses a single combined, or stacked, region with several frequency components. The second method uses a low-amplitude version of the IEC-61146-1 sine-wave multi-burst chart combined with image texture; a 3x3 grid of smaller regions, each with a single frequency, constitutes the test target. Both methods were evaluated using a simulated digital camera capture path that included detector noise and optical MTF, for a range of camera exposure/ISO settings. Two types of image texture were used with the method: natural grass and a computed 'dead-leaves' region composed of random circles. The embedded-signal methods were tested for accuracy with respect to image noise over a wide range of levels, and then further evaluated with adaptive noise-reduction image processing.

  13. Actual drawing of histological images improves knowledge retention.

    PubMed

    Balemans, Monique C M; Kooloos, Jan G M; Donders, A Rogier T; Van der Zee, Catharina E E M

    2016-01-01

    Medical students have to process a large amount of information during the first years of their study, which has to be retained over long periods of nonuse. Therefore, it would be beneficial if knowledge were gained in a way that promotes long-term retention. Paper-and-pencil drawing for learning the form-function relationships of basic tissues has long been a teaching tool, but now seems redundant with virtual microscopy on computer screens and printers everywhere. Several studies have claimed that, apart from learning from pictures, actually drawing images significantly improves knowledge retention. However, these studies applied only immediate post-tests. We investigated the effects of actually drawing histological images, using a randomized cross-over design and different retention periods. The first part of the study concerned esophageal and tracheal epithelium, with 384 medical and biomedical sciences students randomly assigned to either the drawing or the nondrawing group. For the second part of the study, concerning heart muscle cells, students from the previous drawing group were assigned to the nondrawing group and vice versa. One, four, and six weeks after the experimental intervention, the students were given a free recall test and a questionnaire or drawing exercise to determine the amount of knowledge retention. The data from this study showed that knowledge retention was significantly improved in the drawing groups compared with the nondrawing groups, even after four or six weeks. This suggests that actually drawing histological images can be used as a tool to improve long-term knowledge retention.

  14. Single particle raster image analysis of diffusion.

    PubMed

    Longfils, M; Schuster, E; Lorén, N; Särkkä, A; Rudemo, M

    2017-04-01

    As a complement to the standard RICS method of analysing Raster Image Correlation Spectroscopy images by estimating the image correlation function, we introduce SPRIA, Single Particle Raster Image Analysis. Here, we start by identifying individual particles and estimating the diffusion coefficient of each particle by a maximum likelihood method. Averaging over the particles gives a diffusion coefficient estimate for the whole image. In examples with both simulated and experimental data, we show that the new method gives accurate estimates. It also directly provides standard-error estimates. It should be possible to extend the method to study heterogeneous materials and systems of particles with varying diffusion coefficients, as demonstrated in a simple simulation example. A requirement for applying the SPRIA method is that the particle concentration is low enough that the individual particles can be identified. We also describe a bootstrap method for estimating the standard error of standard RICS.
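    The per-particle estimation step can be sketched as follows. For pure 2D Brownian motion the steps are Gaussian with variance 2*D*dt per axis, so the maximum-likelihood estimate of D from one track is the mean squared step length divided by 4*dt; averaging over particles and taking the spread gives the whole-image estimate and its standard error. This is a minimal sketch of the statistical idea, not the SPRIA implementation (which works on raster-scanned images):

```python
import numpy as np

rng = np.random.default_rng(1)
D_true, dt, n_steps, n_particles = 2.0, 0.01, 500, 50

def mle_diffusion(track, dt):
    """MLE of D for one 2D Brownian track: mean squared step / (4*dt)."""
    steps = np.diff(track, axis=0)
    return (steps ** 2).sum() / (4 * dt * len(steps))

# simulate independent 2D tracks and estimate D for each particle
sigma = np.sqrt(2 * D_true * dt)
tracks = np.cumsum(rng.normal(0, sigma, size=(n_particles, n_steps, 2)), axis=1)
d_hat = np.array([mle_diffusion(t, dt) for t in tracks])

D_image = d_hat.mean()                          # whole-image estimate
se = d_hat.std(ddof=1) / np.sqrt(n_particles)   # direct standard-error estimate
```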

  15. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before an optical character recognition (OCR) operation. In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg [1], which is also available in his open-source Leptonica library [2]. The modifications result in significant improvements, achieving better segmentation accuracy than the original algorithm on the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and on circuit-diagram datasets.
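    The core multiresolution-morphology idea, separating large non-text elements from small text components with a coarse morphological opening, can be sketched with plain binary morphology. The actual Bloomberg/Leptonica algorithm uses threshold reduction and several refinement stages; the toy page and structuring-element size below are illustrative:

```python
import numpy as np

def erode(img, k=3):
    """Binary erosion with a k x k square structuring element (minimal numpy version)."""
    p = k // 2
    padded = np.pad(img, p, constant_values=0)
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element."""
    p = k // 2
    padded = np.pad(img, p, constant_values=0)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# toy page: small 2x2 "text" marks plus one 20x20 "image" block
page = np.zeros((64, 64), dtype=bool)
for y in range(4, 60, 8):
    page[y:y + 2, 4:6] = True       # small text-like components
page[30:50, 30:50] = True           # large non-text element

# opening with an element larger than any text stroke keeps only non-text
nontext = dilate(erode(page, k=7), k=7)
text = page & ~nontext
```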

  16. Improved proton computed tomography by dual modality image reconstruction

    SciTech Connect

    Hansen, David C. Bassler, Niels; Petersen, Jørgen Breede Baltzer; Sørensen, Thomas Sangild

    2014-03-15

    Purpose: Proton computed tomography (CT) is a promising imaging modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm in water of most commercial proton therapy systems limits the sites that can be scanned with a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan, with a constrained nonlinear conjugate gradient algorithm that minimizes total variation and the deviation from the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone beam CT prior was retained for the fully sampled case and the 90° interval case, with the spatial frequency at MTF = 0.5 (modulation transfer function) ranging from 5.22 to 5.65 line pairs/cm. In the 45° interval case, it dropped to 3.91 line pairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of a prior image in proton CT greatly improves the resolution and stopping power estimate, but does not fully achieve the quality of a 360

  17. Functional data analysis in brain imaging studies.

    PubMed

    Tian, Tian Siva

    2010-01-01

    Functional data analysis (FDA) considers the continuity of curves or functions and is a topic of increasing interest in the statistics community. FDA is commonly applied to time-series and spatial-series studies. The development of functional brain imaging techniques in recent years has made it possible to study the relationship between brain and mind over time. Consequently, an enormous amount of functional data is collected and needs to be analyzed, and functional techniques designed for these data are in strong demand. This paper discusses three statistically challenging problems in which FDA techniques are utilized in functional brain imaging analysis: dimension reduction (or feature extraction), spatial classification in functional magnetic resonance imaging studies, and the inverse problem in magnetoencephalography studies. The application of FDA to these issues is relatively new but has been shown to be considerably effective. Future efforts can further explore the potential of FDA in functional brain imaging studies.

  18. Particle Pollution Estimation Based on Image Analysis

    PubMed Central

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images and used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict the PM2.5 index. The results demonstrate that the image analysis method provides good predictions of PM2.5 indexes, and that different features have different significance levels in the prediction. PMID:26828757

  19. Particle Pollution Estimation Based on Image Analysis.

    PubMed

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images and used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict the PM2.5 index. The results demonstrate that the image analysis method provides good predictions of PM2.5 indexes, and that different features have different significance levels in the prediction.
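    The feature-plus-regression pipeline described above can be sketched with synthetic data. The three features and the haze model below are hypothetical stand-ins (the paper uses six image features plus sun position, time, and weather data), and the predictor here is ordinary least squares rather than the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

def image_features(img):
    """Hypothetical scalar features of an outdoor image: brightness, contrast, sharpness."""
    return np.array([img.mean(), img.std(), np.abs(np.diff(img, axis=1)).mean()])

# synthetic data: hazier scenes are brighter, flatter, and less sharp
n = 200
pm = rng.uniform(10, 300, n)                       # "ground truth" PM2.5 index
imgs = [np.clip(rng.normal(0.3 + 0.002 * p, 0.15 - 0.0004 * p, (32, 32)), 0, 1)
        for p in pm]
X = np.array([image_features(im) for im in imgs])
X = np.column_stack([X, np.ones(n)])               # add an intercept column

coef, *_ = np.linalg.lstsq(X, pm, rcond=None)      # linear predictor of PM2.5
pred = X @ coef
r = np.corrcoef(pred, pm)[0, 1]                    # prediction quality
```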

  20. The electronic image stabilization technology research based on improved optical-flow motion vector estimation

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Ji, Ming; Zhang, Ying; Jiang, Wentao; Lu, Xiaoyan; Wang, Jiaoying; Yang, Heng

    2016-01-01

    Electronic image stabilization based on an improved optical-flow motion vector estimation technique can effectively correct abnormal image motion such as jitter and rotation. First, ORB features are extracted from the image and a set of regions is built around these features. Second, the optical-flow vectors are computed in the feature regions; to reduce the computational complexity, a multiresolution pyramid strategy is used to calculate the motion vector of each frame. Finally, a qualitative and quantitative analysis of the algorithm's effect is carried out. The results show that the proposed algorithm has better stability than image stabilization based on traditional optical-flow motion vector estimation.
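    The core motion-vector step can be sketched with a single Lucas-Kanade iteration that recovers a global translation between two frames. The paper's method additionally restricts the computation to ORB feature regions and uses a coarse-to-fine pyramid for large motions; the synthetic blob frames below are illustrative:

```python
import numpy as np

def lk_translation(I0, I1):
    """One Lucas-Kanade step: global translation (u, v) between two frames."""
    Iy, Ix = np.gradient(I0)                 # np.gradient returns (d/dy, d/dx)
    It = I1 - I0
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    return np.linalg.solve(A, b)             # least-squares motion vector

# synthetic frames: a smooth blob shifted by (0.6, -0.4) pixels
y, x = np.mgrid[0:64, 0:64].astype(float)
blob = lambda dx, dy: np.exp(-((x - 32 - dx) ** 2 + (y - 32 - dy) ** 2) / 60.0)
u, v = lk_translation(blob(0, 0), blob(0.6, -0.4))
```

    Stabilization then warps each frame by the negative of the estimated vector; the pyramid variant repeats this estimate from coarse to fine resolution so larger shifts stay within the linearization's validity.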

  1. Quantitative analysis of qualitative images

    NASA Astrophysics Data System (ADS)

    Hockney, David; Falco, Charles M.

    2005-03-01

    We show optical evidence demonstrating that artists as early as Jan van Eyck and Robert Campin (c1425) used optical projections as aids for producing their paintings. We have also found optical evidence within works by later artists, including Bermejo (c1475), Lotto (c1525), Caravaggio (c1600), de la Tour (c1650), Chardin (c1750) and Ingres (c1825), demonstrating a continuum in the use of optical projections by artists, along with an evolution in the sophistication of that use. However, even for paintings where we have been able to extract unambiguous, quantitative evidence of the direct use of optical projections for producing certain features, this does not mean that the paintings are effectively photographs. Because the hand and mind of the artist are intimately involved in the creation process, understanding these complex images requires more than can be obtained from only applying the equations of geometrical optics.

  2. On Two-Dimensional ARMA Models for Image Analysis.

    DTIC Science & Technology

    1980-03-24

    2-D ARMA models for image analysis. Particular emphasis is placed on restoration of noisy images using 2-D ARMA models. Computer results are ... is concluded that the models are very effective linear models for image analysis. (Author)

  3. VAICo: visual analysis for image comparison.

    PubMed

    Schmidt, Johanna; Gröller, M Eduard; Bruckner, Stefan

    2013-12-01

    Scientists, engineers, and analysts are confronted with ever larger and more complex sets of data, whose analysis poses special challenges. In many situations it is necessary to compare two or more datasets; hence there is a need for comparative visualization tools to help analyze differences or similarities among datasets. In this paper an approach to comparative visualization for sets of images is presented. Well-established techniques for comparing images frequently place them side-by-side; a major drawback of such approaches is that they do not scale well. Other image comparison methods encode differences in images by abstract parameters like color, in which case information about the underlying image data is lost. This paper introduces a new method for visualizing differences and similarities in large sets of images which preserves contextual information but also allows detailed analysis of subtle variations. Our approach identifies local changes and applies cluster analysis techniques to embed them in a hierarchy. The results of this process are then presented in an interactive web application which allows users to rapidly explore the space of differences and drill down on particular features. We demonstrate the flexibility of our approach by applying it to multiple distinct domains.

  4. From Image Analysis to Computer Vision: Motives, Methods, and Milestones.

    DTIC Science & Technology

    1998-07-01

    images. Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial ... photographs; but by the 1960s, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence ... scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision.

  5. Improving image contrast and material discrimination with nonlinear response in bimodal atomic force microscopy.

    PubMed

    Forchheimer, Daniel; Forchheimer, Robert; Haviland, David B

    2015-02-10

    Atomic force microscopy has recently been extended to bimodal operation, where increased image contrast is achieved through excitation and measurement of two cantilever eigenmodes. This enhanced material contrast is advantageous in the analysis of complex heterogeneous materials with phase separation on the micro- or nanometre scale. Here we show that much greater image contrast results from analysis of the nonlinear response to the bimodal drive, at harmonics and mixing frequencies. The amplitude and phase of up to 17 frequencies are simultaneously measured in a single scan. Using a machine-learning algorithm we demonstrate an almost threefold improvement in the ability to separate the material components of a polymer blend when this nonlinear response is included. Beyond the statistical analysis performed here, analysis of nonlinear response could be used to obtain quantitative material properties at high speeds and with enhanced resolution.

  6. Improving image contrast and material discrimination with nonlinear response in bimodal atomic force microscopy

    PubMed Central

    Forchheimer, Daniel; Forchheimer, Robert; Haviland, David B.

    2015-01-01

    Atomic force microscopy has recently been extended to bimodal operation, where increased image contrast is achieved through excitation and measurement of two cantilever eigenmodes. This enhanced material contrast is advantageous in the analysis of complex heterogeneous materials with phase separation on the micro- or nanometre scale. Here we show that much greater image contrast results from analysis of the nonlinear response to the bimodal drive, at harmonics and mixing frequencies. The amplitude and phase of up to 17 frequencies are simultaneously measured in a single scan. Using a machine-learning algorithm we demonstrate an almost threefold improvement in the ability to separate the material components of a polymer blend when this nonlinear response is included. Beyond the statistical analysis performed here, analysis of nonlinear response could be used to obtain quantitative material properties at high speeds and with enhanced resolution. PMID:25665933

  7. Improving image contrast and material discrimination with nonlinear response in bimodal atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Forchheimer, Daniel; Forchheimer, Robert; Haviland, David B.

    2015-02-01

    Atomic force microscopy has recently been extended to bimodal operation, where increased image contrast is achieved through excitation and measurement of two cantilever eigenmodes. This enhanced material contrast is advantageous in the analysis of complex heterogeneous materials with phase separation on the micro- or nanometre scale. Here we show that much greater image contrast results from analysis of the nonlinear response to the bimodal drive, at harmonics and mixing frequencies. The amplitude and phase of up to 17 frequencies are simultaneously measured in a single scan. Using a machine-learning algorithm we demonstrate an almost threefold improvement in the ability to separate the material components of a polymer blend when this nonlinear response is included. Beyond the statistical analysis performed here, analysis of nonlinear response could be used to obtain quantitative material properties at high speeds and with enhanced resolution.

  8. Selecting an image analysis minicomputer system

    NASA Technical Reports Server (NTRS)

    Danielson, R.

    1981-01-01

    Factors to be weighed when selecting a minicomputer system as the basis for an image analysis computer facility vary depending on whether the user organization procures a new computer or selects an existing facility to serve as the image analysis host. Some conditions not directly related to hardware or software should be considered, such as the flexibility of the computer center staff, their encouragement of innovation, and the availability of the host processor to a broad spectrum of potential user organizations. Particular attention must be given to image analysis software capability; the facilities of a potential host installation; the central processing unit; the operating system and languages; main memory; disk storage; tape drives; hardcopy output; and other peripherals. The operational environment, accessibility, resource limitations, and operational support are also important, and the charges made for program execution and data storage must be examined.

  9. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but spatially-low-pass, spectrally-high-pass subbands are also further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing the modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
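    The mean-subtraction method and the first stage of a 3D decomposition can be sketched as follows. The Haar transform below is a stand-in for the actual wavelet used, and the synthetic cube is illustrative; the point is that removing each band's spatial mean before the transform costs only one value per band and is exactly invertible:

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic hyperspectral cube: 8 bands x 16 x 16 pixels with distinct band means
cube = rng.normal(0, 1, size=(8, 16, 16)) + np.linspace(100, 200, 8)[:, None, None]

# mean-subtraction method: remove each spectral band's spatial mean before the
# wavelet transform; the means travel in the bit stream and are added back later
band_means = cube.mean(axis=(1, 2))
centered = cube - band_means[:, None, None]

def haar_step(a, axis):
    """One Haar analysis step along an axis: (low-pass, high-pass) halves."""
    ev = a.take(np.arange(0, a.shape[axis], 2), axis=axis)
    od = a.take(np.arange(1, a.shape[axis], 2), axis=axis)
    return (ev + od) / np.sqrt(2), (ev - od) / np.sqrt(2)

# first stage of a 3D (Mallat-style) decomposition: transform every axis once,
# keeping only the all-low-pass subband here for brevity
low = centered
for ax in range(3):
    low, _ = haar_step(low, ax)

# decoder side: add the means back after reconstruction
restored = centered + band_means[:, None, None]
```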

  10. Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.

    1991-01-01

    Digital image processing techniques are typically used either to produce improved digital images through the application of successive enhancement techniques or to generate quantitative data about the objects within an image. In support of researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count the objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in the three main phases of the application, categorized as image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed, as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.
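    The object-recognition phase, counting objects in a binary image so their geometric properties can be computed, can be sketched with a flood-fill connected-component labeler (the thesis's actual algorithms may differ; the toy image is illustrative):

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labeling of a binary image via BFS flood fill."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(binary)):
        if labels[sy, sx]:
            continue                      # pixel already belongs to an object
        current += 1
        queue = deque([(sy, sx)])
        labels[sy, sx] = current
        while queue:
            yc, xc = queue.popleft()
            for ny, nx in ((yc - 1, xc), (yc + 1, xc), (yc, xc - 1), (yc, xc + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

# quantitative analysis: object count and area per object
img = np.zeros((20, 20), dtype=bool)
img[2:5, 2:5] = True          # object 1: 3x3 square
img[10:12, 14:19] = True      # object 2: 2x5 bar
labels, count = label_components(img)
areas = [int((labels == i).sum()) for i in range(1, count + 1)]
```

    Centroids, bounding boxes, and other geometric properties follow directly from the label array in the same way.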

  11. Measurements and analysis of active/passive multispectral imaging

    NASA Astrophysics Data System (ADS)

    Grönwall, Christina; Hamoir, Dominique; Steinvall, Ove; Larsson, Hâkan; Amselem, Elias; Lutzmann, Peter; Repasi, Endre; Göhler, Benjamin; Barbé, Stéphane; Vaudelin, Olivier; Fracès, Michel; Tanguy, Bernard; Thouin, Emmanuelle

    2013-10-01

    This paper describes a data collection on passive and active imaging and the preliminary analysis. It is part of ongoing work on active and passive imaging for target identification using different wavelength bands. We focus on data collection at NIR-SWIR wavelengths, but we also include the visible and the thermal regions. Active imaging in NIR-SWIR will support passive imaging by eliminating shadows during daytime and will allow night operation. Among the most likely applications for active multispectral imaging, we focus on long-range human target identification. We also study the combination of active and passive sensing. The target scenarios of interest include persons carrying different objects and their associated activities. We investigated laser imaging for target detection and classification up to 1 km, assuming that another cueing sensor - passive EO and/or radar - is available for target acquisition and detection. Broadband or multispectral operation will reduce the effects of target speckle and atmospheric turbulence. Longer wavelengths will improve performance in low-visibility conditions due to haze, clouds and fog. We are currently performing indoor and outdoor tests to further investigate the target/background phenomena that are emphasized at these wavelengths. We also investigate how these effects can be used for target identification and image fusion. The field tests performed and the results of the preliminary data analysis are reported.

  12. Multispectral Imager With Improved Filter Wheel and Optics

    NASA Technical Reports Server (NTRS)

    Bremer, James C.

    2007-01-01

    Figure 1 schematically depicts an improved multispectral imaging system of the type that utilizes a filter wheel containing multiple discrete narrow-band-pass filters, rotated at a constant high speed to acquire images in rapid succession in the corresponding spectral bands. The improvement, relative to prior systems of this type, consists of the measures taken to prevent the exposure of a focal-plane array (FPA) of photodetectors to light in more than one spectral band at any given time and to prevent exposure of the array to any light during readout. In prior systems, these measures have included, variously, the use of mechanical shutters or the incorporation of wide opaque sectors (equivalent to mechanical shutters) into filter wheels. These measures introduce substantial dead times into each operating cycle: intervals during which image information cannot be collected, so that incoming light is wasted. In contrast, the present improved design does not involve shutters or wide opaque sectors, and it reduces dead times substantially. The improved multispectral imaging system is preceded by an afocal telescope and includes a filter wheel positioned so that its rotation brings each filter, in turn, into the exit pupil of the telescope. The filter wheel contains an even number of narrow-band-pass filters separated by narrow, spoke-like opaque sectors. The geometric width of each filter exceeds the cross-sectional width of the light beam coming out of the telescope. The light transmitted by the sequence of narrow-band filters is incident on a dichroic beam splitter that reflects in a broad shorter-wavelength spectral band containing half of the narrow bands and transmits in a broad longer-wavelength spectral band containing the other half of the narrow bands.
The filters are arranged on the wheel so that if the pass band of a given filter is in the reflection band of the dichroic beam splitter, then the pass band of the adjacent filter

  13. Note: An improved 3D imaging system for electron-electron coincidence measurements

    SciTech Connect

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-15

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast-frame complementary metal-oxide-semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, the system achieves an unprecedentedly small dead time (<0.7 ns) and dead space (<1 mm) when detecting two electrons. True zero-dead-time detection is also demonstrated.

  14. Creating the "desired mindset": Philip Morris's efforts to improve its corporate image among women.

    PubMed

    McDaniel, Patricia A; Malone, Ruth E

    2009-01-01

    Through an analysis of tobacco company documents, we explored how and why Philip Morris sought to enhance its corporate image among American women. Philip Morris regarded women as an influential political group. To improve its image among women, while keeping tobacco off their organizational agendas, the company sponsored women's groups and programs. It also sought to appeal to women it defined as "active moms" by advertising its commitment to domestic violence victims. The company was more successful in securing women's organizations as allies than in reaching active moms. Increasing tobacco's visibility as a global women's health issue may require addressing industry influence.

  15. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    SciTech Connect

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-15

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by the varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases also showed improvement compared with previous methods for correcting batch effects. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image-processing pipeline.
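    The two ICHE stages can be sketched as follows. This is a loose approximation under stated assumptions: the centering here rescales each image so its mean intensity hits a common target, and plain global histogram equalization stands in for the modified CLAHE step; the target value and synthetic "slides" are illustrative:

```python
import numpy as np

def normalize_iche_like(img, target_center=128):
    """Sketch of intensity centering followed by histogram equalization."""
    img = img.astype(float)
    centered = img * (target_center / img.mean())          # intensity centering
    centered = np.clip(centered, 0, 255).astype(np.uint8)

    hist = np.bincount(centered.ravel(), minlength=256)
    cdf = hist.cumsum() / hist.sum()
    return (cdf[centered] * 255).astype(np.uint8)          # histogram equalization

rng = np.random.default_rng(4)
# two "slides" of the same tissue scanned under different staining conditions
a = np.clip(rng.normal(90, 20, (64, 64)), 0, 255)
b = np.clip(rng.normal(160, 35, (64, 64)), 0, 255)
na, nb = normalize_iche_like(a), normalize_iche_like(b)
gap_before = abs(a.mean() - b.mean())                      # batch effect before
gap_after = abs(float(na.mean()) - float(nb.mean()))       # batch effect after
```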

  16. Automated eXpert Spectral Image Analysis

    SciTech Connect

    Keenan, Michael R.

    2003-11-25

    AXSIA performs automated factor analysis of hyperspectral images. In such images, a complete spectrum is collected at each point in a 1-, 2- or 3-dimensional spatial array. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful information. Multivariate factor analysis techniques have proven effective for extracting the essential information from high dimensional data sets into a limited number of factors that describe the spectral characteristics and spatial distributions of the pure components comprising the sample. AXSIA provides tools to estimate different types of factor models including Singular Value Decomposition (SVD), Principal Component Analysis (PCA), PCA with factor rotation, and Alternating Least Squares-based Multivariate Curve Resolution (MCR-ALS). As part of the analysis process, AXSIA can automatically estimate the number of pure components that comprise the data and can scale the data to account for Poisson noise. The data analysis methods are fundamentally based on eigenanalysis of the data cross-product matrix coupled with orthogonal eigenvector rotation and constrained alternating least squares refinement. A novel method for automatically determining the number of significant components, based on the eigenvalues of the cross-product matrix, has also been devised and implemented. The data can be compressed spectrally via PCA and spatially through wavelet transforms, and algorithms have been developed that perform factor analysis in the transform domain while retaining full spatial and spectral resolution in the final result. These latter innovations enable the analysis of larger-than-core-memory spectrum images. AXSIA was designed to perform automated chemical phase analysis of spectrum images acquired by a variety of chemical imaging techniques.
Successful applications include Energy Dispersive X-ray Spectroscopy, X-ray Fluorescence
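    The eigenanalysis step, PCA of a spectrum-image via SVD with an eigenvalue-based count of significant components, can be sketched on synthetic data. The two-component cube and the 1% variance threshold are illustrative assumptions, not AXSIA's actual criterion:

```python
import numpy as np

rng = np.random.default_rng(5)
ny, nx, nchan = 16, 16, 32

# synthetic spectrum-image: two pure components with distinct Gaussian spectra
k = np.arange(nchan)
s1 = np.exp(-0.5 * ((k - 8) / 2.0) ** 2)
s2 = np.exp(-0.5 * ((k - 22) / 3.0) ** 2)
conc1 = np.linspace(0, 1, nx)[None, :].repeat(ny, axis=0)   # left-to-right gradient
cube = conc1[..., None] * s1 + (1 - conc1)[..., None] * s2
cube += rng.normal(0, 0.01, cube.shape)                     # detector noise

# PCA via SVD of the mean-centered data matrix (pixels x channels)
D = cube.reshape(-1, nchan)
D = D - D.mean(axis=0)
U, s, Vt = np.linalg.svd(D, full_matrices=False)

# eigenvalue-based estimate of the number of significant components;
# after centering, a two-component mixture varies along one direction
explained = s ** 2 / (s ** 2).sum()
n_sig = int((explained > 0.01).sum())
```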

  17. Objective facial photograph analysis using imaging software.

    PubMed

    Pham, Annette M; Tollefson, Travis T

    2010-05-01

    Facial analysis is an integral part of the surgical planning process. Clinical photography has long been an invaluable tool in the surgeon's practice, not only for accurate facial analysis but also for enhancing communication between the patient and surgeon, evaluating postoperative results, medicolegal documentation, and educational and teaching opportunities. From 35-mm slide film to the digital technology of today, clinical photography has benefited greatly from technological advances. With the development of computer imaging software, objective facial analysis becomes easier to perform and less time consuming. Thus, while the original purpose of facial analysis remains the same, the process becomes much more efficient and allows for some objectivity. Although clinical judgment and artistry of technique are never compromised, the ability to perform objective facial photograph analysis using imaging software may become the standard in facial plastic surgery practices in the future.

  18. Methods of Improving a Digital Image Having White Zones

    NASA Technical Reports Server (NTRS)

    Wodell, Glenn A. (Inventor); Jobson, Daniel J. (Inventor); Rahman, Zia-Ur (Inventor)

    2005-01-01

    The present invention is a method of processing a digital image that is initially represented by digital data indexed to represent positions on a display. The digital data are indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value in accordance with

        sum_{n=1}^{N} W_n ( log I_i(x,y) - log[ I_i(x,y) * F_n(x,y) ] ),   i = 1, ..., S,

    where W_n is a weighting factor, "*" is the convolution operator, and S is the total number of unique spectral bands. For each n, the function F_n(x,y) is a unique surround function applied to each position (x,y), and N is the total number of unique surround functions. Each unique surround function is scaled to improve some aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band of the image is then filtered with a filter function to generate a filtered intensity value R_i(x,y). To prevent graying of white zones in the image, the maximum of the original intensity value I_i(x,y) and the filtered intensity value R_i(x,y) is selected for display.
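    The adjustment formula and the white-zone guard can be sketched as follows. This is a loose multiscale-retinex-style sketch under stated assumptions: a box-filter surround stands in for the patent's surround functions F_n, and a simple min-max rescaling stands in for the filter function that produces R_i; the scales, weights, and test image are illustrative:

```python
import numpy as np

def box_surround(img, k):
    """Box-filter surround standing in for a surround function F_n."""
    kern = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode='same'), 0, tmp)

def retinex_white_safe(I, scales=(5, 15, 45)):
    """Weighted sum of log-ratios over N surrounds, then the max(I, R) guard."""
    I = I.astype(float) + 1e-6                     # avoid log(0)
    W = [1.0 / len(scales)] * len(scales)          # equal weights W_n
    R = sum(w * (np.log(I) - np.log(box_surround(I, k) + 1e-6))
            for w, k in zip(W, scales))
    R = (R - R.min()) / (R.max() - R.min() + 1e-12)  # rescale R to [0, 1]
    return np.maximum(I - 1e-6, R)                 # keep white zones from graying

rng = np.random.default_rng(6)
img = np.clip(rng.normal(0.4, 0.1, (64, 64)), 0, 1)
img[20:30, 20:30] = 1.0            # a white zone that must not gray out
out = retinex_white_safe(img)
```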

  19. Measurements and analysis in imaging for biomedical applications

    NASA Astrophysics Data System (ADS)

    Hoeller, Timothy L.

    2009-02-01

    A Total Quality Management (TQM) approach can be used to analyze data from biomedical optical and imaging platforms of tissues. A shift from individuals to teams, partnerships, and total participation is necessary in health care groups for improved prognostics using measurement analysis. Proprietary measurement analysis software is available for calibrated, pixel-to-pixel measurements of angles and distances in digital images. Feature size, count, and color are determinable on an absolute and comparative basis. Although changes in images of histomics are based on complex and numerous factors, the variation of changes in imaging analysis can be correlated with the time, extent, and progression of illness. Statistical methods are preferred. Applications of the proprietary measurement software are available for any imaging platform. Quantification of results provides improved categorization of illness towards better health. As health care practitioners try to use quantified measurement data for patient diagnosis, the techniques reported can be used to track and isolate causes better. Comparisons, norms, and trends are available from processing of the measurement data, which is obtained easily and quickly with the scientific software and methods. Example results for the class actions of Preventative and Corrective Care in Ophthalmology and Dermatology, respectively, are provided. Improved and quantified diagnosis can lead to better health and lower costs associated with health care. Systems support improvements towards Lean and Six Sigma affecting all branches of biology and medicine. As an example of the use of statistics, the major types of variation involved in a study of Bone Mineral Density (BMD) are examined. Typically, special causes in medicine relate to illness and activities, whereas common causes are known to be associated with gender, race, size, and genetic make-up.
Such a strategy of Continuous Process Improvement (CPI) involves comparison of patient results
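The calibrated, pixel-to-pixel distance and angle measurements described above can be illustrated with a minimal sketch; the `calibrate` helper and the 10 px/mm scale are hypothetical, not the proprietary software's actual API:

```python
import math

def calibrate(px_per_mm):
    """Return helpers for calibrated image measurements (hypothetical API)."""
    def distance_mm(p, q):
        # Euclidean pixel distance converted to millimetres
        return math.dist(p, q) / px_per_mm

    def angle_deg(a, vertex, b):
        # angle at `vertex` formed by points a and b, in degrees
        v1 = (a[0] - vertex[0], a[1] - vertex[1])
        v2 = (b[0] - vertex[0], b[1] - vertex[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        return math.degrees(math.acos(dot / norm))

    return distance_mm, angle_deg

dist, ang = calibrate(px_per_mm=10.0)
print(dist((0, 0), (30, 40)))       # 5.0 (a 50 px span at 10 px/mm)
print(ang((1, 0), (0, 0), (0, 1)))  # ~90.0 (right angle)
```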

  20. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    DTIC Science & Technology

    2007-11-02

    The Dynamic Chest Image Analysis project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18,5,17,6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for

  1. Improving sensitivity in nonlinear Raman microspectroscopy imaging and sensing

    PubMed Central

    Arora, Rajan; Petrov, Georgi I.; Liu, Jian; Yakovlev, Vladislav V.

    2011-01-01

    Nonlinear Raman microspectroscopy based on broadband coherent anti-Stokes Raman scattering is an emerging technique for noninvasive, chemically specific, microscopic analysis of tissues and large populations of cells and particles. The sensitivity of this imaging is a critical aspect of a number of the proposed biomedical applications. It is shown that the incident laser power is the major parameter controlling this sensitivity. By carefully optimizing the laser system, high-quality vibrational spectra can be acquired at multi-kHz rates. PMID:21361677

  2. Photoacoustic Image Analysis for Cancer Detection and Building a Novel Ultrasound Imaging System

    NASA Astrophysics Data System (ADS)

    Sinha, Saugata

    Photoacoustic (PA) imaging is a rapidly emerging non-invasive soft tissue imaging modality which has the potential to detect tissue abnormality at an early stage. Photoacoustic images map the spatially varying optical absorption property of tissue. In multiwavelength photoacoustic imaging, the soft tissue is imaged with different wavelengths, tuned to the absorption peaks of the specific light-absorbing tissue constituents or chromophores, to obtain images with different contrasts of the same tissue sample. From those images, the spatially varying concentration of the chromophores can be recovered. As multiwavelength PA images can provide important physiological information related to the function and molecular composition of the tissue, they can be used for diagnosis of cancer lesions and differentiation of malignant tumors from benign tumors. In this research, a number of parameters have been extracted from multiwavelength 3D PA images of freshly excised human prostate and thyroid specimens, imaged at five different wavelengths. Using marked histology slides as ground truth, regions of interest (ROIs) corresponding to cancer, benign and normal regions have been identified in the PA images. The extracted parameters belong to different categories, namely chromophore concentration, frequency parameters and PA image pixels, and they represent different physiological and optical properties of the tissue specimens. Statistical analysis has been performed to test whether the extracted parameters are significantly different between cancer, benign and normal regions. A multidimensional [29-dimensional] feature set, built with the extracted parameters from the 3D PA images, has been divided randomly into training and testing sets. The training set has been used to train support vector machine (SVM) and neural network (NN) classifiers, while the performance of the classifiers in differentiating different tissue pathologies has been determined by the testing dataset. Using the NN

  3. Motion Analysis From Television Images

    NASA Astrophysics Data System (ADS)

    Silberberg, George G.; Keller, Patrick N.

    1982-02-01

    The Department of Defense ranges have relied on photographic instrumentation for gathering data of firings for all types of ordnance. A large inventory of cameras is available on the market that can be used for these tasks. A new set of optical instrumentation is beginning to appear which, in many cases, can directly replace photographic cameras for a great deal of the work being performed now. These are television cameras modified so they can stop motion, see in the dark, perform under hostile environments, and provide real-time information. This paper discusses techniques for modifying television cameras so they can be used for motion analysis.

  4. An improved viscous characteristics analysis program

    NASA Technical Reports Server (NTRS)

    Jenkins, R. V.

    1978-01-01

    An improved two dimensional characteristics analysis program is presented. The program is built upon the foundation of a FORTRAN program entitled Analysis of Supersonic Combustion Flow Fields With Embedded Subsonic Regions. The major improvements are described and a listing of the new program is provided. The subroutines and their functions are given as well as the input required for the program. Several applications of the program to real problems are qualitatively described. Three runs obtained in the investigation of a real problem are presented to provide insight for the input and output of the program.

  5. Improving transient analysis technology for aircraft structures

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Chargin, Mladen

    1989-01-01

    Aircraft dynamic analyses are demanding of computer simulation capabilities. The modeling complexities of semi-monocoque construction, irregular geometry, high-performance materials, and high-accuracy analysis are present. At issue are the safety of the passengers and the integrity of the structure for a wide variety of flight-operating and emergency conditions. The technology which supports engineering of aircraft structures using computer simulation is examined. Available computer support is briefly described and improvement of accuracy and efficiency are recommended. Improved accuracy of simulation will lead to a more economical structure. Improved efficiency will result in lowering development time and expense.

  6. Multilocus Genetic Analysis of Brain Images

    PubMed Central

    Hibar, Derrek P.; Kohannim, Omid; Stein, Jason L.; Chiang, Ming-Chang; Thompson, Paul M.

    2011-01-01

    The quest to identify genes that influence disease is now being extended to find genes that affect biological markers of disease, or endophenotypes. Brain images, in particular, provide exquisitely detailed measures of anatomy, function, and connectivity in the living brain, and have identified characteristic features for many neurological and psychiatric disorders. The emerging field of imaging genomics is discovering important genetic variants associated with brain structure and function, which in turn influence disease risk and fundamental cognitive processes. Statistical approaches for testing genetic associations are not straightforward to apply to brain images because the data in brain images is spatially complex and generally high dimensional. Neuroimaging phenotypes typically include 3D maps across many points in the brain, fiber tracts, shape-based analyses, and connectivity matrices, or networks. These complex data types require new methods for data reduction and joint consideration of the image and the genome. Image-wide, genome-wide searches are now feasible, but they can be greatly empowered by sparse regression or hierarchical clustering methods that isolate promising features, boosting statistical power. Here we review the evolution of statistical approaches to assess genetic influences on the brain. We outline the current state of multivariate statistics in imaging genomics, and future directions, including meta-analysis. We emphasize the power of novel multivariate approaches to discover reliable genetic influences with small effect sizes. PMID:22303368

  7. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.

  8. Curvelet Based Offline Analysis of SEM Images

    PubMed Central

    Shirazi, Syed Hamad; Haq, Nuhman ul; Hayat, Khizar; Naz, Saeeda; Haque, Ihsan ul

    2014-01-01

    Manual offline analysis of a scanning electron microscopy (SEM) image is a time-consuming process and requires continuous human intervention and effort. This paper presents an image-processing-based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step, aimed at noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state-of-the-art curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm, for fractal dimension (FD) calculations, with the ultimate goal of measuring parameters such as surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known watershed segmentation algorithm. PMID:25089617
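The box-counting step for fractal dimension estimation can be sketched in a few lines; the binary mask and box sizes below are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a square binary mask by box counting:
    count occupied boxes N(s) at each box size s, then fit log N(s) ~ -D log s."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # tile the image into s x s blocks; a block is occupied if any pixel is set
        blocks = mask[:n - n % s, :n - n % s].reshape(n // s, s, -1, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# sanity check: a filled square is (near) 2-dimensional
square = np.ones((64, 64), dtype=bool)
print(round(box_count_dimension(square), 2))  # 2.0
```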

  9. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary-media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constitutive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.
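The transfer-function view can be sketched numerically: an input field is filtered by a synthesized transfer function in the Fourier domain. The low-pass H below is an arbitrary illustration, not a specific cloak design:

```python
import numpy as np

# Treat a device as a linear system: output field = IFFT( H * FFT(input) ).
x = np.linspace(-1, 1, 256)
field = (np.abs(x) < 0.1).astype(float)        # input field: a slit aperture
F = np.fft.fft(field)
freqs = np.fft.fftfreq(field.size, d=x[1] - x[0])
H = (np.abs(freqs) < 20).astype(float)         # synthesized transfer function
out = np.fft.ifft(H * F).real                  # field after the device
print(out.shape == field.shape)                # True
```

Because H passes the DC component unchanged, the total field energy at zero frequency is preserved, which is easy to verify numerically.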

  10. Novel driver method to improve ordinary CCD frame rate for high-speed imaging diagnosis

    NASA Astrophysics Data System (ADS)

    Luo, Tong-Ding; Li, Bin-Kang; Yang, Shao-Hua; Guo, Ming-An; Yan, Ming

    2016-06-01

    The use of ordinary charge-coupled device (CCD) imagers for the analysis of fast physical phenomena is restricted because of the low-speed performance resulting from their long readout times. Even though the intensified CCD (ICCD) form, coupled with a gated image intensifier, has extended their use for high-speed imaging, a deficiency remains: an ICCD can record only one image in a single shot. This paper presents a novel driver method designed to significantly improve the burst frame rate of an ordinary interline CCD for high-speed photography. The method uses the vertical registers as storage, so that a small number of additional frames, comprised of reduced-spatial-resolution images obtained via a specific sampling operation, can be buffered. Hence, the interval time of the received series of images is related to the exposure and vertical transfer times only and, thus, the burst frame rate can be increased significantly. A prototype camera based on this method is designed as part of this study, exhibiting a burst rate of up to 250,000 frames per second (fps) and a capacity to record three continuous images. This device exhibits a speed enhancement of approximately 16,000 times compared with the conventional speed, with a spatial resolution reduction of only 1/4.
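The frame-interval arithmetic implied above can be checked directly; the timing values below are assumed for illustration, not taken from the prototype:

```python
# With in-register buffering, the frame interval is exposure time plus
# vertical transfer time only (values below are assumed, not measured).
t_exposure = 2e-6   # seconds
t_transfer = 2e-6   # seconds
burst_fps = 1.0 / (t_exposure + t_transfer)
print(round(burst_fps))                      # 250000

# A conventional readout-limited CCD at ~15.6 fps (assumed) is then
# roughly 16,000 times slower than the burst mode.
conventional_fps = 15.6
print(round(burst_fps / conventional_fps))   # on the order of 16,000
```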

  11. Imaging spectroscopic analysis at the Advanced Light Source

    SciTech Connect

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-05-12

    One of the major advances at the high-brightness third-generation synchrotrons is the dramatic improvement of imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV and infrared spectroscopic analysis on a spatial scale from a few microns to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS. The following table lists all the currently available microscopes at the ALS. This article will describe some of the microscopes and some of the early applications.

  12. Low drive field amplitude for improved image resolution in magnetic particle imaging

    PubMed Central

    Croft, Laura R.; Goodwill, Patrick W.; Konkle, Justin J.; Arami, Hamed; Price, Daniel A.; Li, Ada X.; Saritas, Emine U.; Conolly, Steven M.

    2016-01-01

    Purpose: Magnetic particle imaging (MPI) is a new imaging technology that directly detects superparamagnetic iron oxide nanoparticles. The technique has potential medical applications in angiography, cell tracking, and cancer detection. In this paper, the authors explore how nanoparticle relaxation affects image resolution. Historically, researchers have analyzed nanoparticle behavior by studying the time constant of the nanoparticle physical rotation. In contrast, in this paper, the authors focus instead on how the time constant of nanoparticle rotation affects the final image resolution, and this reveals nonobvious conclusions for tailoring MPI imaging parameters for optimal spatial resolution. Methods: The authors first extend x-space systems theory to include nanoparticle relaxation. The authors then measure the spatial resolution and relative signal levels in an MPI relaxometer and a 3D MPI imager at multiple drive field amplitudes and frequencies. Finally, these image measurements are used to estimate relaxation times and nanoparticle phase lags. Results: The authors demonstrate that spatial resolution, as measured by full-width at half-maximum, improves at lower drive field amplitudes. The authors further determine that relaxation in MPI can be approximated as a frequency-independent phase lag. These results enable the authors to accurately predict MPI resolution and sensitivity across a wide range of drive field amplitudes and frequencies. Conclusions: To balance resolution, signal-to-noise ratio, specific absorption rate, and magnetostimulation requirements, the drive field can be a low amplitude and high frequency. Continued research into how the MPI drive field affects relaxation and its adverse effects will be crucial for developing new nanoparticles tailored to the unique physics of MPI. Moreover, this theory informs researchers how to design scanning sequences to minimize relaxation-induced blurring for better spatial resolution or to exploit
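The qualitative effect of relaxation on resolution can be mimicked with a toy model: convolve an assumed relaxation-free point spread function with a one-sided exponential blur whose spatial extent grows with drive-field amplitude. Both the Gaussian PSF and the linear amplitude scaling are illustrative assumptions, not the x-space theory developed in the paper:

```python
import numpy as np

def fwhm(y, x):
    """Full-width at half-maximum of a unimodal profile y(x)."""
    above = x[y >= y.max() / 2]
    return above[-1] - above[0]

x = np.linspace(-5, 5, 2001)
ideal_psf = np.exp(-x**2 / (2 * 0.5**2))   # relaxation-free PSF (assumed Gaussian)

def blurred_fwhm(drive_amplitude):
    # model relaxation as a one-sided exponential blur whose extent
    # grows with drive-field amplitude (illustrative assumption)
    tau = 0.4 * drive_amplitude
    kernel = np.where(x >= 0, np.exp(-x / tau), 0.0)
    kernel /= kernel.sum()
    return fwhm(np.convolve(ideal_psf, kernel, mode="same"), x)

# lower drive amplitude -> narrower blur -> better spatial resolution
print(blurred_fwhm(0.5) < blurred_fwhm(2.0))  # True
```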

  13. A grayscale image color transfer method based on region texture analysis using GLCM

    NASA Astrophysics Data System (ADS)

    Zhao, Yuanmeng; Wang, Lingxue; Jin, Weiqi; Luo, Yuan; Li, Jiakun

    2011-08-01

    In order to improve the performance of grayscale image colorization based on color transfer, this paper proposes a novel method by which pixels are matched accurately between images through region texture analysis using the Gray Level Co-occurrence Matrix (GLCM). This method consists of six steps: reference image selection, color space transformation, grayscale linear transformation and compression, texture analysis using the GLCM, pixel matching through texture value comparison, and color value transfer between pixels. We applied this method to various kinds of grayscale images, and they gained a natural color appearance like the reference images. Experimental results proved that this method is more effective than conventional methods in accurately transferring color to grayscale images.
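The GLCM texture-analysis step can be sketched with a minimal numpy implementation for a single pixel offset; the 8-level quantization and the contrast feature are common choices, not necessarily the paper's exact configuration:

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Gray Level Co-occurrence Matrix for one pixel offset (minimal sketch)."""
    q = (gray.astype(float) / gray.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()   # normalize to co-occurrence probabilities

def contrast(p):
    """Classic GLCM contrast feature: sum of (i-j)^2 * p(i,j)."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

flat = np.full((16, 16), 100, dtype=np.uint8)                        # uniform texture
noisy = np.random.default_rng(0).integers(0, 255, (16, 16)).astype(np.uint8)
print(contrast(glcm(flat)) < contrast(glcm(noisy)))                  # True
```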

  14. Image Quality Improvement in Adaptive Optics Scanning Laser Ophthalmoscopy Assisted Capillary Visualization Using B-spline-based Elastic Image Registration

    PubMed Central

    Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa

    2013-01-01

    Purpose To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. After nonlinear image registration, the image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results All AO-SLO videos were successfully stabilized by elastic image registration. The CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of the CNR in images with elastic image registration to the CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. The improvement of image quality was also supported by expert comparison. Conclusions The use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
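The CNR metric used for the objective comparison can be sketched as follows; this is one common definition (vessel-background contrast over background noise), and the paper's exact formula may differ:

```python
import numpy as np

def cnr(image, vessel_mask, background_mask):
    """Contrast-to-noise ratio between vessel and background regions
    (one common definition; the paper's exact formula may differ)."""
    signal = image[vessel_mask].mean()
    background = image[background_mask].mean()
    noise = image[background_mask].std()
    return abs(signal - background) / noise

# synthetic test image: bright "vessel" half plus unit-variance noise
img = np.zeros((8, 8))
img[:, :4] = 10.0
img += np.random.default_rng(1).normal(0, 1, img.shape)
vessel = np.zeros((8, 8), bool)
vessel[:, :4] = True
print(cnr(img, vessel, ~vessel) > 5)   # True: strong vessel contrast
```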

  15. Improved site contamination through time-lapse complex resistivity imaging

    NASA Astrophysics Data System (ADS)

    Flores Orozco, Adrian; Kemna, Andreas; Cassiani, Giorgio; Binley, Andrew

    2016-04-01

    In the framework of the EU FP7 project ModelPROBE, time-lapse complex resistivity (CR) measurements were conducted at a test site close to Trecate (NW Italy). The objective was to investigate the capabilities of the CR imaging method to delineate the geometry and dynamics of a subsurface hydrocarbon contaminant plume which resulted from a crude oil spill in 1994. This requires discriminating the electrical signal associated with static subsurface properties (i.e., lithology) from that associated with dynamic changes, the latter driven by significant seasonal groundwater fluctuations. Previous studies have demonstrated the benefits of the CR method for gaining information which is not accessible with common electrical resistivity tomography. However, field applications are still rare, and neither the analysis of the data error for CR time-lapse measurements nor the inversion itself has received enough attention. While the ultimate objective at the site is contaminant characterization, here we address the discrimination of the lithological and hydrological controls on the IP response by considering data collected in an uncontaminated area of the site. In this study we demonstrate that an adequate error description of CR measurements provides images that are free of artifacts and quantitatively superior to those obtained with previous approaches. Based on this approach, differential images computed for time-lapse data exhibited anomalies well correlated with seasonal fluctuations in the groundwater level. The proposed analysis may be useful in the characterization of the fate and transport of the hydrocarbon contaminants relevant to the site, which presents areas contaminated with crude oil.

  16. Measuring toothbrush interproximal penetration using image analysis

    NASA Astrophysics Data System (ADS)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.
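The area-removal measurement described above can be sketched as masked thresholding of before/after images; the threshold, image values and `interproximal_clearance` helper are illustrative, not the study's actual software:

```python
import numpy as np

def interproximal_clearance(before, after, roi_mask, stain_threshold=128):
    """Fraction of stained pixels inside the interproximal ROI removed by
    brushing (pixels darker than the threshold count as stained); illustrative."""
    stained_before = (before < stain_threshold) & roi_mask
    stained_after = (after < stain_threshold) & roi_mask
    removed = stained_before & ~stained_after
    return removed.sum() / stained_before.sum()

before = np.full((10, 10), 50, dtype=np.uint8)   # fully stained surface
after = before.copy()
after[:, :6] = 200                               # brushing cleared 60% of the area
roi = np.ones((10, 10), bool)                    # whole image is the ROI here
print(interproximal_clearance(before, after, roi))  # 0.6
```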

  17. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.
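The idea of recovering non-orthogonal independent abundance sources from linearly mixed pixels can be demonstrated with a numpy-only FastICA sketch (symmetric fixed-point iteration with a tanh nonlinearity, a standard ICA variant, not the paper's specific learning algorithm). Note that the mixing matrix below is deliberately non-orthogonal:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "abundance" sources: independent and non-Gaussian
n = 2000
S = rng.uniform(0, 1, (2, n)) ** 2          # skewed, independent sources
A = np.array([[0.9, 0.5], [0.4, 1.0]])      # non-orthogonal mixing (spectra)
X = A @ S                                   # observed mixed pixels

# whiten the observations
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Z = (E @ np.diag(d ** -0.5) @ E.T) @ Xc

# symmetric FastICA: w_new = E[z g(w.z)] - E[g'(w.z)] w, then decorrelate
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = G @ Z.T / n - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                              # symmetric decorrelation
S_est = W @ Z

# each recovered component should match one true source up to sign/scale
corr = np.abs(np.corrcoef(np.vstack([S - S.mean(1, keepdims=True), S_est]))[:2, 2:])
print((corr.max(axis=1) > 0.9).all())       # True
```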

  18. Digital image analysis of haematopoietic clusters.

    PubMed

    Benzinou, A; Hojeij, Y; Roudot, A-C

    2005-02-01

    Counting and differentiating cell clusters is a tedious task when performed with a light microscope. Moreover, biased counts and interpretations are difficult to avoid because of the difficulty of evaluating the boundaries between different types of clusters. Presented here is a computer-based application able to solve these problems. The image analysis system is entirely automatic, from the stage screening to the statistical analysis of the results of each experimental plate. Good correlations are found with measurements made by a specialised technician.
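The core counting task can be sketched as connected-component labeling on a binary image; the flood-fill below is a generic illustration, not the paper's system:

```python
def count_clusters(grid):
    """Count 4-connected clusters of nonzero pixels via flood fill
    (a minimal sketch of automated cluster counting)."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    clusters = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not seen[y][x]:
                clusters += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and grid[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return clusters

image = [[1, 1, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 0, 1],
         [1, 0, 0, 0]]
print(count_clusters(image))  # 3
```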

  19. Development of a Reference Image Collection Library for Histopathology Image Processing, Analysis and Decision Support Systems Research.

    PubMed

    Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris

    2017-01-12

    Histopathology image processing, analysis and computer-aided diagnosis have been shown to be effective assisting tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (HICL-Histology Image Collection Library) comprising 3831 histological images of three different diseases, for fostering research in histopathology image processing, analysis and computer-aided diagnosis. Raw data comprised 93, 116 and 55 cases of brain, breast and laryngeal cancer respectively, collected from the archives of the University Hospital of Patras, Greece. The 3831 images were generated from the most representative regions of the pathology, specified by an experienced histopathologist. The HICL Image Collection is free for access under an academic license at http://medisp.bme.teiath.gr/hicl/ . Potential exploitations of the proposed library span a broad spectrum, such as image processing to improve visualization, segmentation for nuclei detection, decision support systems for second opinion consultations, statistical analysis for the investigation of potential correlations between clinical annotations and imaging findings and, generally, fostering research on histopathology image processing and analysis. To the best of our knowledge, the HICL constitutes the first attempt towards the creation of a reference image collection library in the field of traditional histopathology, publicly and freely available to the scientific community.

  20. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory
    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  1. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  2. Visualization of parameter space for image analysis.

    PubMed

    Pretorius, A Johannes; Bray, Mark-Anthony P; Carpenter, Anne E; Ruddle, Roy A

    2011-12-01

    Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step--initialization of sampling--and the last step--visual analysis of output. This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler--a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach.
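The sampling-initialization step can be illustrated with a simple parameter grid; the parameter names are hypothetical and Paramorama's actual sampling scheme may differ:

```python
import itertools

def sample_parameter_space(param_ranges):
    """Exhaustive grid sampling over algorithm parameters (the general idea
    behind a sampling step; the tool's actual scheme may differ)."""
    names = list(param_ranges)
    for combo in itertools.product(*param_ranges.values()):
        yield dict(zip(names, combo))

# hypothetical segmentation parameters and candidate values
ranges = {
    "threshold": [0.2, 0.5, 0.8],
    "smoothing_sigma": [1, 2],
    "min_object_size": [10, 50],
}
settings = list(sample_parameter_space(ranges))
print(len(settings))   # 12 combinations to run and inspect visually
print(settings[0])
```

Each of the twelve settings would then be run through the image-analysis pipeline once, and the outputs laid out for the visual comparison the paper describes.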

  3. Co-registration of ultrasound and frequency-domain photoacoustic radar images and image improvement for tumor detection

    NASA Astrophysics Data System (ADS)

    Dovlo, Edem; Lashkari, Bahman; Choi, Sung soo Sean; Mandelis, Andreas

    2015-03-01

    This paper demonstrates the co-registration of ultrasound (US) and frequency-domain photoacoustic radar (FD-PAR) images, with significant image improvement from applying image normalization, filtering and amplification techniques. Achieving PA imaging functionality on a commercial ultrasound instrument could accelerate clinical acceptance and use. Experimental results presented demonstrate live animal testing and show enhancements in signal-to-noise ratio (SNR), contrast and spatial resolution. The co-registered image produced from the US and phase PA images provides more information than either image independently.

  4. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  5. Good relationships between computational image analysis and radiological physics

    NASA Astrophysics Data System (ADS)

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-01

    Good relationships between computational image analysis and radiological physics have been constructed for increasing the accuracy of medical diagnostic imaging and radiation therapy in radiological physics. Computational image analysis has been established based on applied mathematics, physics, and engineering. This review paper will introduce how computational image analysis is useful in radiation therapy with respect to radiological physics.

  6. Good relationships between computational image analysis and radiological physics

    SciTech Connect

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-30

    Good relationships between computational image analysis and radiological physics have been constructed for increasing the accuracy of medical diagnostic imaging and radiation therapy in radiological physics. Computational image analysis has been established based on applied mathematics, physics, and engineering. This review paper will introduce how computational image analysis is useful in radiation therapy with respect to radiological physics.

  7. “Lucky Averaging”: Quality improvement on Adaptive Optics Scanning Laser Ophthalmoscope Images

    PubMed Central

    Huang, Gang; Zhong, Zhangyi; Zou, Weiyao; Burns, Stephen A.

    2012-01-01

    Adaptive optics (AO) has greatly improved retinal image resolution. However, even with AO, temporal and spatial variations in image quality still occur due to wavefront fluctuations, intra-frame focus shifts and other factors. As a result, aligning and averaging images can produce a mean image that has lower resolution or contrast than the best images within a sequence. To address this, we propose an image post-processing scheme called “lucky averaging”, analogous to lucky imaging (Fried, 1978), based on computing the best local contrast over time. Results from eye data demonstrate improvements in image quality. PMID:21964097
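The frame-selection idea can be sketched by ranking frames on a contrast metric and averaging only the best ones; using a single global RMS-contrast score per frame is a simplification of the per-region selection described above:

```python
import numpy as np

def lucky_average(frames, keep_fraction=0.5):
    """Average only the frames with the highest contrast (global RMS contrast
    here, a simplification of the paper's local-contrast selection)."""
    contrasts = [f.std() / max(f.mean(), 1e-9) for f in frames]
    k = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(contrasts)[-k:]
    return np.mean([frames[i] for i in best], axis=0)

rng = np.random.default_rng(2)
sharp = np.tile([0.0, 1.0], (8, 4))              # high-contrast frame pattern
blurry = np.full((8, 8), 0.5)                    # washed-out frame
frames = [sharp + rng.normal(0, 0.01, (8, 8)) for _ in range(3)] + [blurry] * 3
result = lucky_average(frames, keep_fraction=0.5)
print(result.std() > 0.4)   # True: blurry frames were excluded, contrast kept
```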

  8. Collimator application for microchannel plate image intensifier resolution improvement

    DOEpatents

    Thomas, Stanley W.

    1996-02-27

    A collimator is included in a microchannel plate image intensifier (MCPI). Collimators can be useful in improving resolution of MCPIs by eliminating the scattered electron problem and by limiting the transverse energy of electrons reaching the screen. Due to its optical absorption, a collimator will also increase the extinction ratio of an intensifier by approximately an order of magnitude. Additionally, the smooth surface of the collimator will permit a higher focusing field to be employed in the MCP-to-collimator region than is currently permitted in the MCP-to-screen region by the relatively rough and fragile aluminum layer covering the screen. Coating the MCP and collimator surfaces with aluminum oxide appears to permit additional significant increases in the field strength, resulting in better resolution.

  9. Collimator application for microchannel plate image intensifier resolution improvement

    DOEpatents

    Thomas, S.W.

    1996-02-27

    A collimator is included in a microchannel plate image intensifier (MCPI). Collimators can be useful in improving resolution of MCPIs by eliminating the scattered electron problem and by limiting the transverse energy of electrons reaching the screen. Due to its optical absorption, a collimator will also increase the extinction ratio of an intensifier by approximately an order of magnitude. Additionally, the smooth surface of the collimator will permit a higher focusing field to be employed in the MCP-to-collimator region than is currently permitted in the MCP-to-screen region by the relatively rough and fragile aluminum layer covering the screen. Coating the MCP and collimator surfaces with aluminum oxide appears to permit additional significant increases in the field strength, resulting in better resolution. 2 figs.

  10. An estimation method of MR signal parameters for improved image reconstruction in unilateral scanner

    NASA Astrophysics Data System (ADS)

    Bergman, Elad; Yeredor, Arie; Nevo, Uri

    2013-12-01

    Unilateral NMR devices are used in various applications including non-destructive testing and well logging, but are not used routinely for imaging. This is mainly due to the inhomogeneous magnetic field (B0) in these scanners. This inhomogeneity results in low sensitivity and further forces the use of the slow single point imaging scan scheme. Improving the measurement sensitivity is therefore an important factor as it can improve image quality and reduce imaging times. Short imaging times can facilitate the use of this affordable and portable technology for various imaging applications. This work presents a statistical signal-processing method, designed to fit the unique characteristics of imaging with a unilateral device. The method improves the imaging capabilities by improving the extraction of image information from the noisy data. This is done by the use of redundancy in the acquired MR signal and by the use of the noise characteristics. Both types of data were incorporated into a Weighted Least Squares estimation approach. The method performance was evaluated with a series of imaging acquisitions applied on phantoms. Images were extracted from each measurement with the proposed method and were compared to the conventional image reconstruction. All measurements showed a significant improvement in image quality based on the MSE criterion with respect to gold-standard reference images. An integration of this method with further improvements may lead to a prominent reduction in imaging times, aiding the use of such scanners in imaging applications.
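
    The estimation idea (redundant measurements weighted by their noise characteristics) can be sketched in its simplest form: a weighted least squares estimate of one underlying signal value from repeated noisy acquisitions, each weighted by its inverse noise variance. This is a toy illustration of the WLS principle, not the paper's full reconstruction pipeline.

```python
def wls_estimate(measurements, variances):
    """WLS estimate of a single value from redundant noisy measurements:
    each sample is weighted by the inverse of its noise variance, so
    low-noise acquisitions dominate the estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * y for w, y in zip(weights, measurements)) / sum(weights)
```

    With equal variances this reduces to the plain mean; a very noisy acquisition is effectively ignored.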

  11. Gun bore flaw image matching based on improved SIFT descriptor

    NASA Astrophysics Data System (ADS)

    Zeng, Luan; Xiong, Wei; Zhai, You

    2013-01-01

    In order to increase the operation speed and matching ability of the SIFT algorithm, the SIFT descriptor and matching strategy are improved. First, a method of constructing the feature descriptor based on sector areas is proposed. By computing the gradient histograms of location bins partitioned into 6 sector areas, a descriptor with 48 dimensions is constituted. This reduces the dimension of the feature vector and decreases the complexity of constructing the descriptor. Second, a strategy is introduced that partitions the circular region into 6 identical sector areas starting from the dominant orientation. Consequently, the computational complexity is reduced because the rotation operation for the area is no longer needed. The experimental results indicate that, compared with the OpenCV SIFT implementation, the average matching speed of the new method increases by about 55.86%. The matching accuracy is maintained even under variations of viewpoint, illumination, rotation, scale, and defocus. The new method gives satisfactory results in gun bore flaw image matching. Keywords: Metrology, Flaw image matching, Gun bore, Feature descriptor
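
    A rough sketch of how such a 6-sector × 8-bin (48-dimensional) descriptor might be accumulated. The input representation (sample points as offsets, gradient magnitude, and gradient angle relative to the keypoint) and all parameter choices are assumptions for illustration; the paper's exact binning and weighting are not specified in the abstract.

```python
import math

def sector_descriptor(points, dominant, n_sectors=6, n_bins=8):
    """48-D descriptor: partition the circular patch into n_sectors sectors
    starting at the dominant orientation; per sector, accumulate an
    n_bins-bin histogram of gradient orientations (also measured relative
    to the dominant orientation, giving rotation invariance for free).
    `points` is a list of (dx, dy, grad_mag, grad_angle) tuples."""
    two_pi = 2 * math.pi
    desc = [0.0] * (n_sectors * n_bins)
    for dx, dy, mag, ang in points:
        # Sector index from the sample's angular position around the keypoint.
        sector = int(((math.atan2(dy, dx) - dominant) % two_pi) / (two_pi / n_sectors))
        # Orientation bin from the gradient direction, relative to dominant.
        bin_ = int(((ang - dominant) % two_pi) / (two_pi / n_bins))
        desc[sector * n_bins + bin_] += mag
    norm = sum(v * v for v in desc) ** 0.5 or 1.0
    return [v / norm for v in desc]
```

    Measuring both indices relative to the dominant orientation is what removes the explicit patch-rotation step.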

  12. Improved cancer therapy and molecular imaging with multivalent, multispecific antibodies.

    PubMed

    Sharkey, Robert M; Rossi, Edmund A; Chang, Chien-Hsing; Goldenberg, David M

    2010-02-01

    Antibodies are highly versatile proteins with the ability to be used to target diverse compounds, such as radionuclides for imaging and therapy, or drugs and toxins for therapy, but also can be used unconjugated to elicit therapeutically beneficial responses, usually with minimal toxicity. This update describes a new procedure for forming multivalent and/or multispecific proteins, known as the dock-and-lock (DNL) technique. Developed as a procedure for preparing bispecific antibodies capable of binding divalently to a tumor antigen and monovalently to a radiolabeled hapten-peptide for pretargeted imaging and therapy, this methodology has the flexibility to create a number of other biologic agents of therapeutic interest. A variety of constructs, based on anti-CD20 and CD22 antibodies, have been made, with results showing that multispecific antibodies have very different properties from the respective parental monospecific antibodies. The technique is not restricted to antibody combination, but other biologics, such as interferon-alpha2b, have been prepared. These types of constructs not only allow small biologics to be sustained in the blood longer, but also to be selectively targeted. Thus, DNL technology is a highly flexible platform that can be used to prepare many different types of agents that could further improve cancer detection and therapy.

  13. Improved Cancer Therapy and Molecular Imaging with Multivalent, Multispecific Antibodies

    PubMed Central

    Rossi, Edmund A.; Chang, Chien-Hsing; Goldenberg, David M.

    2010-01-01

    Summary: Antibodies are highly versatile proteins with the ability to be used to target diverse compounds, such as radionuclides for imaging and therapy, or drugs and toxins for therapy, but also can be used unconjugated to elicit therapeutically beneficial responses, usually with minimal toxicity. This update describes a new procedure for forming multivalent and/or multispecific proteins, known as the dock-and-lock (DNL) technique. Developed as a procedure for preparing bispecific antibodies capable of binding divalently to a tumor antigen and monovalently to a radiolabeled hapten-peptide for pretargeted imaging and therapy, this methodology has the flexibility to create a number of other biologic agents of therapeutic interest. A variety of constructs, based on anti-CD20 and CD22 antibodies, have been made, with results showing that multispecific antibodies have very different properties from the respective parental monospecific antibodies. The technique is not restricted to antibody combination, but other biologics, such as interferon-α2b, have been prepared. These types of constructs not only allow small biologics to be sustained in the blood longer, but also to be selectively targeted. Thus, DNL technology is a highly flexible platform that can be used to prepare many different types of agents that could further improve cancer detection and therapy. PMID:20187791

  14. Bispecific Antibody Pretargeting for Improving Cancer Imaging and Therapy

    SciTech Connect

    Sharkey, Robert M.

    2005-02-04

    The main objective of this project was to evaluate pretargeting systems that use a bispecific antibody (bsMAb) to improve the detection and treatment of cancer. A bsMAb has specificity to a tumor antigen, which is used to bind the tumor, while the other specificity is to a peptide that can be radiolabeled. Pretargeting is the process by which the unlabeled bsMAb is given first; after sufficient time (1-2 days) for it to localize in the tumor and clear from the blood, a small-molecular-weight radiolabeled peptide is given. According to a dynamic imaging study using a 99mTc-labeled peptide, the radiolabeled peptide localizes in the tumor in less than 1 hour, with > 80% of it clearing from the blood and body within this same time. Tumor/nontumor targeting ratios that are nearly 50 times better than those with a directly radiolabeled Fab fragment have been observed (Sharkey et al., "Signal amplification in molecular imaging by a multivalent bispecific nanobody", submitted). The bsMAbs used in this project have been composed of 3 antibodies that target antigens found in colorectal and pancreatic cancers (CEA, CSAp, and MUC1). For the "peptide binding moiety" of the bsMAb, we initially examined an antibody directed to DOTA, but subsequently focused on another antibody directed against a novel compound, HSG (histamine-succinyl-glycine).

  15. Image classification based on scheme of principal node analysis

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Ma, Zheng; Xie, Mei

    2016-11-01

    This paper presents a scheme of principal node analysis (PNA) with the aim of improving the representativeness of the learned codebook so as to enhance the classification rate of scene images. Original images are normalized into grayscale and the scale-invariant feature transform (SIFT) descriptors are extracted from each image in the preprocessing stage. Then, the PNA-based scheme is applied to the SIFT descriptors with iteration and selection algorithms. The principal nodes of each image are selected through spatial analysis of the SIFT descriptors with Manhattan distance (L1 norm) and Euclidean distance (L2 norm) in order to increase the representativeness of the codebook. To evaluate the performance of our scheme, the feature vector of the image is calculated by two baseline methods after the codebook is constructed. The L1-PNA- and L2-PNA-based baseline methods are tested and compared with different scales of codebooks over three public scene image databases. The experimental results show the effectiveness of the proposed scheme of PNA with a higher categorization rate.
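
    The abstract does not give the node-selection algorithm in detail. The following hypothetical sketch shows one greedy, medoid-style way to pick representative "principal nodes" from a set of descriptors under an L1 or L2 distance; the actual PNA iteration and selection rules may differ.

```python
def principal_nodes(descriptors, k, norm=1):
    """Greedy medoid-style selection of k representative 'principal nodes'
    under L1 (norm=1) or L2 (norm=2) distance. Illustrative only."""
    def dist(a, b):
        if norm == 1:
            return sum(abs(x - y) for x, y in zip(a, b))
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    remaining = [list(d) for d in descriptors]
    nodes = []
    while remaining and len(nodes) < k:
        # Pick the descriptor with the smallest total distance to the rest:
        # the most central (representative) point still available.
        best = min(remaining, key=lambda d: sum(dist(d, o) for o in remaining))
        nodes.append(best)
        # Drop the chosen node (and exact duplicates) before the next round.
        remaining = [o for o in remaining if dist(o, best) > 0]
    return nodes
```

    On two well-separated clusters, the first two selected nodes land one per cluster, which is the representativeness property the codebook construction relies on.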

  16. Automated retinal image analysis over the internet.

    PubMed

    Tsai, Chia-Ling; Madore, Benjamin; Leotta, Matthew J; Sofka, Michal; Yang, Gehua; Majerovics, Anna; Tanenbaum, Howard L; Stewart, Charles V; Roysam, Badrinath

    2008-07-01

    Retinal clinicians and researchers make extensive use of images, and the current emphasis is on digital imaging of the retinal fundus. The goal of this paper is to introduce a system, known as retinal image vessel extraction and registration system, which provides the community of retinal clinicians, researchers, and study directors an integrated suite of advanced digital retinal image analysis tools over the Internet. The capabilities include vasculature tracing and morphometry, joint (simultaneous) montaging of multiple retinal fields, cross-modality registration (color/red-free fundus photographs and fluorescein angiograms), and generation of flicker animations for visualization of changes from longitudinal image sequences. Each capability has been carefully validated in our previous research work. The integrated Internet-based system can enable significant advances in retina-related clinical diagnosis, visualization of the complete fundus at full resolution from multiple low-angle views, analysis of longitudinal changes, research on the retinal vasculature, and objective, quantitative computer-assisted scoring of clinical trials imagery. It could pave the way for future screening services from optometry facilities.

  17. Digital imaging analysis to assess scar phenotype.

    PubMed

    Smith, Brian J; Nidey, Nichole; Miller, Steven F; Moreno Uribe, Lina M; Baum, Christian L; Hamilton, Grant S; Wehby, George L; Dunnwald, Martine

    2014-01-01

    In order to understand the link between the genetic background of patients and wound clinical outcomes, it is critical to have a reliable method to assess the phenotypic characteristics of healed wounds. In this study, we present a novel imaging method that provides reproducible, sensitive, and unbiased assessments of postsurgical scarring. We used this approach to investigate the possibility that genetic variants in orofacial clefting genes are associated with suboptimal healing. Red-green-blue digital images of postsurgical scars of 68 patients, following unilateral cleft lip repair, were captured using the 3dMD imaging system. Morphometric and colorimetric data of repaired regions of the philtrum and upper lip were acquired using ImageJ software, and the unaffected contralateral regions were used as patient-specific controls. Repeatability of the method was high with intraclass correlation coefficient score > 0.8. This method detected a very significant difference in all three colors, and for all patients, between the scarred and the contralateral unaffected philtrum (p ranging from 1.20 × 10^-5 to 1.95 × 10^-14). Physicians' clinical outcome ratings from the same images showed high interobserver variability (overall Pearson coefficient = 0.49) as well as low correlation with digital image analysis results. Finally, we identified genetic variants in TGFB3 and ARHGAP29 associated with suboptimal healing outcome.
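
    The patient-specific control idea can be sketched as a per-channel comparison between the scar region and the contralateral unaffected region. The pixel representation (lists of (r, g, b) tuples for each ROI) is an assumption for illustration; the study's actual measurements come from ImageJ.

```python
def mean_rgb(region):
    """Per-channel mean of a region given as a list of (r, g, b) pixels."""
    n = len(region)
    return tuple(sum(p[c] for p in region) / n for c in range(3))

def scar_color_shift(scar_pixels, control_pixels):
    """Per-channel color difference between a scar ROI and the patient's
    own contralateral control ROI, so each patient is their own baseline."""
    s, c = mean_rgb(scar_pixels), mean_rgb(control_pixels)
    return tuple(si - ci for si, ci in zip(s, c))
```

    A positive red shift with depressed green/blue, for example, would quantify the redder appearance of scar tissue relative to the unaffected side.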

  18. Digital imaging analysis to assess scar phenotype

    PubMed Central

    Smith, Brian J.; Nidey, Nichole; Miller, Steven F.; Moreno, Lina M.; Baum, Christian L.; Hamilton, Grant S.; Wehby, George L.; Dunnwald, Martine

    2015-01-01

    In order to understand the link between the genetic background of patients and wound clinical outcomes, it is critical to have a reliable method to assess the phenotypic characteristics of healed wounds. In this study, we present a novel imaging method that provides reproducible, sensitive and unbiased assessments of post-surgical scarring. We used this approach to investigate the possibility that genetic variants in orofacial clefting genes are associated with suboptimal healing. Red-green-blue (RGB) digital images of post-surgical scars of 68 patients, following unilateral cleft lip repair, were captured using the 3dMD image system. Morphometric and colorimetric data of repaired regions of the philtrum and upper lip were acquired using ImageJ software and the unaffected contralateral regions were used as patient-specific controls. Repeatability of the method was high with intraclass correlation coefficient score > 0.8. This method detected a very significant difference in all three colors, and for all patients, between the scarred and the contralateral unaffected philtrum (P ranging from 1.20 × 10^-5 to 1.95 × 10^-14). Physicians’ clinical outcome ratings from the same images showed high inter-observer variability (overall Pearson coefficient = 0.49) as well as low correlation with digital image analysis results. Finally, we identified genetic variants in TGFB3 and ARHGAP29 associated with suboptimal healing outcome. PMID:24635173

  19. Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation

    NASA Astrophysics Data System (ADS)

    Sakamoto, M.; Honda, Y.; Kondo, A.

    2016-06-01

    Over the last decade, multi-scale image segmentation has attracted particular interest and is being used in practice for object-based image analysis. In this study, we address issues in multi-scale image segmentation, in particular improving the validity of merging and the variety of derived region shapes. Firstly, we introduce constraints on the application of the spectral criterion that suppress excessive merging between dissimilar regions. Secondly, we extend the evaluation of the smoothness criterion by modifying the definition of the extent of the object, which controls shape diversity. Thirdly, we develop a new shape criterion called aspect ratio. This criterion improves the reproducibility of object shapes so that they match the actual objects of interest; it constrains the aspect ratio of the object's bounding box while keeping the properties controlled by conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results that follow the shape characteristics of the target of interest. Furthermore, we also investigate a technique for quantitative and automatic parameterization in multi-scale image segmentation. This is achieved by comparing segmentation results with training areas specified in advance, either maximizing the average area of derived objects or satisfying the evaluation index called the F-measure. It has thus been possible to automate the parameterization suited to the objectives, especially with respect to shape reproducibility.
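
    A minimal sketch of a bounding-box aspect-ratio criterion of the kind described, assuming a segment is given as a set of (row, col) pixel coordinates. How the criterion is weighted against the spectral and smoothness criteria is not specified in the abstract.

```python
def aspect_ratio(pixels):
    """Aspect ratio of a region's axis-aligned bounding box
    (longer side / shorter side, so it is always >= 1)."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    h = max(rows) - min(rows) + 1
    w = max(cols) - min(cols) + 1
    return max(h, w) / min(h, w)
```

    Constraining this value during merging penalizes implausibly elongated regions while leaving compact ones unaffected.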

  20. Multi-Scale Fusion for Improved Localization of Malicious Tampering in Digital Images.

    PubMed

    Korus, Paweł; Huang, Jiwu

    2016-03-01

    A sliding window-based analysis is a prevailing mechanism for tampering localization in passive image authentication. It uses existing forensic detectors, originally designed for a full-frame analysis, to obtain the detection scores for individual image regions. One of the main problems with a window-based analysis is its impractically low localization resolution stemming from the need to use relatively large analysis windows. While decreasing the window size can improve the localization resolution, the classification results tend to become unreliable due to insufficient statistics about the relevant forensic features. In this paper, we investigate a multi-scale analysis approach that fuses multiple candidate tampering maps, resulting from the analysis with different windows, to obtain a single, more reliable tampering map with better localization resolution. We propose three different techniques for multi-scale fusion, and verify their feasibility against various reference strategies. We consider a popular tampering scenario with mode-based first digit features to distinguish between singly and doubly compressed regions. Our results clearly indicate that the proposed fusion strategies can successfully combine the benefits of small-scale and large-scale analyses and improve the tampering localization performance.
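
    As a naive stand-in for the paper's three fusion strategies, a per-pixel weighted average of the candidate tampering maps illustrates the basic idea of combining small-window (fine localization) and large-window (reliable statistics) analyses. The weighting scheme is an assumption; the proposed strategies are more elaborate.

```python
def fuse_maps(maps, weights=None):
    """Per-pixel weighted average of candidate tampering maps produced with
    different analysis window sizes. Each map is a 2D grid of scores in
    [0, 1]; equal weights are used unless specified."""
    if weights is None:
        weights = [1.0] * len(maps)
    total = sum(weights)
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(w * m[r][c] for w, m in zip(weights, maps)) / total
             for c in range(cols)] for r in range(rows)]
```

    Regions flagged at every scale keep a high fused score, while spurious single-scale detections are attenuated.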

  1. ALISA: adaptive learning image and signal analysis

    NASA Astrophysics Data System (ADS)

    Bock, Peter

    1999-01-01

    ALISA (Adaptive Learning Image and Signal Analysis) is an adaptive statistical learning engine that may be used to detect and classify the surfaces and boundaries of objects in images. The engine has been designed, implemented, and tested at both the George Washington University and the Research Institute for Applied Knowledge Processing in Ulm, Germany over the last nine years with major funding from Robert Bosch GmbH and Lockheed-Martin Corporation. The design of ALISA was inspired by the multi-path cortical-column architecture and adaptive functions of the mammalian visual cortex.

  2. Characterization of microrod arrays by image analysis

    NASA Astrophysics Data System (ADS)

    Hillebrand, Reinald; Grimm, Silko; Giesa, Reiner; Schmidt, Hans-Werner; Mathwig, Klaus; Gösele, Ulrich; Steinhart, Martin

    2009-04-01

    The uniformity of the properties of array elements was evaluated by statistical analysis of microscopic images of array structures, assuming that the brightness of the array elements correlates quantitatively or qualitatively with a microscopically probed quantity. Derivatives and autocorrelation functions of cumulative frequency distributions of the object brightnesses were used to quantify variations in object properties throughout arrays. Thus, different specimens, the same specimen at different stages of its fabrication or use, and different imaging conditions can be compared systematically. As an example, we analyzed scanning electron micrographs of microrod arrays and calculated the percentage of broken microrods.
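
    The cumulative-frequency analysis can be sketched as follows: build the cumulative distribution of element brightnesses, then take its discrete derivative to recover the brightness density, whose peaks flag sub-populations (e.g. broken vs. intact microrods). The brightness levels used here are illustrative.

```python
def cumulative_frequency(brightness, levels):
    """Cumulative frequency distribution: fraction of array elements whose
    brightness is at or below each level."""
    n = len(brightness)
    return [sum(1 for b in brightness if b <= lv) / n for lv in levels]

def derivative(cdf, levels):
    """Discrete derivative of the cumulative distribution, i.e. the
    brightness density; distinct peaks indicate distinct populations
    of array elements."""
    return [(cdf[i + 1] - cdf[i]) / (levels[i + 1] - levels[i])
            for i in range(len(cdf) - 1)]
```

    Comparing these curves between specimens, or between fabrication stages of one specimen, gives the systematic comparison described in the abstract.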

  3. Recent Advances in Morphological Cell Image Analysis

    PubMed Central

    Chen, Shengyong; Zhao, Mingzhu; Wu, Guang; Yao, Chunyan; Zhang, Jianwei

    2012-01-01

    This paper summarizes the recent advances in image processing methods for morphological cell analysis. The topic of morphological analysis has received much attention with the increasing demands in both bioinformatics and biomedical applications. Among the many factors that affect the diagnosis of a disease, morphological cell analysis and statistics have contributed greatly to the physician's findings. Morphological cell analysis covers cellular shape, cellular regularity, classification, statistics, diagnosis, and so forth. In the last 20 years, about 1000 publications have reported the use of morphological cell analysis in biomedical research. Relevant solutions encompass a rather wide application area, such as cell clump segmentation, morphological characteristic extraction, 3D reconstruction, abnormal cell identification, and statistical analysis. These reports are summarized in this paper to enable easy referral to suitable methods for practical solutions. Representative contributions and future research trends are also addressed. PMID:22272215

  4. Diagnostic improvement based on image processing in lower-extremity inflammation scintigraphy

    NASA Astrophysics Data System (ADS)

    Lyra, M.; Kordolaimi, S.; Salvara, A. L.

    2011-09-01

    The purpose of this study is to improve the evaluation of inflammation extent in scintigraphic imaging by utilizing statistical indices (Inflammation Projection Ratio - IPR, skewness, kurtosis, and Mean Pixel Value - MPV). Image analysis was performed by means of an Interactive Data Language (IDL) tool. Twelve patients were referred for a radionuclide (Tc99m-Leukoscan) scan, by a GE Healthcare gamma camera, on the suspicion of an infectious lesion in the extremities. The findings of the study are that pathological tissues have a higher IPR index (3.12 to 4.32) compared to normal tissue (~ 1). Furthermore, MPV, skewness, and kurtosis differ significantly (> 5%) between normal and inflamed extremities. In conclusion, image processing provides effective structural information and facilitates semi-quantitative diagnosis.
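
    The statistical indices can be sketched directly. The IPR definition used here (mean counts in the suspect region over mean counts in a normal reference region) is an assumption inferred from the reported values (~1 for normal tissue); the moment-based indices follow the standard definitions.

```python
def moments(pixels):
    """Mean Pixel Value (MPV), skewness, and excess kurtosis of an ROI."""
    n = len(pixels)
    mpv = sum(pixels) / n
    m2 = sum((p - mpv) ** 2 for p in pixels) / n
    m3 = sum((p - mpv) ** 3 for p in pixels) / n
    m4 = sum((p - mpv) ** 4 for p in pixels) / n
    return mpv, m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

def inflammation_projection_ratio(lesion_roi, normal_roi):
    """IPR sketch: mean counts in the suspect region over mean counts in a
    normal reference region (~1 for healthy tissue, >3 for inflammation,
    per the reported ranges)."""
    return (sum(lesion_roi) / len(lesion_roi)) / (sum(normal_roi) / len(normal_roi))
```

    A lesion ROI with three times the mean counts of the reference region yields IPR = 3, inside the pathological range the study reports.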

  5. Guided Wave Annular Array Sensor Design for Improved Tomographic Imaging

    NASA Astrophysics Data System (ADS)

    Koduru, Jaya Prakash; Rose, Joseph L.

    2009-03-01

    Guided wave tomography for structural health monitoring is fast emerging as a reliable tool for the detection and monitoring of hotspots in a structure, for any defects arising from corrosion, crack growth, etc. To date, guided wave tomography has been successfully tested on aircraft wings, pipes, pipe elbows, and weld joints. Practically deployed structures are subjected to harsh environments such as exposure to rain and changes in temperature and humidity. A reliable tomography system should take these environmental factors into account to avoid false alarms. The lack of mode control with piezoceramic disk sensors makes them very sensitive to traces of water, leading to false alarms. In this study, we explore the design of annular array sensors to provide mode control for improved structural tomography, in particular addressing the false-alarm potential of water loading. Clearly defined actuation lines in the phase velocity dispersion curve space are calculated. A dominant in-plane displacement point is found to provide a solution to the water loading problem. The improvement in the tomographic images with the annular array sensors in the presence of water traces is clearly illustrated with a series of experiments. An annular array design philosophy for other problems in NDE/SHM is also discussed.

  6. Improved Signal Chains for Readout of CMOS Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata; Hancock, Bruce; Cunningham, Thomas

    2009-01-01

    An improved generic design has been devised for implementing signal chains involved in readout from complementary metal oxide/semiconductor (CMOS) image sensors and for other readout integrated circuits (ICs) that perform equivalent functions. The design applies to any such IC in which output signal charges from the pixels in a given row are transferred simultaneously into sampling capacitors at the bottoms of the columns, then voltages representing individual pixel charges are read out in sequence by sequentially turning on column-selecting field-effect transistors (FETs) in synchronism with source-follower- or operational-amplifier-based amplifier circuits. The improved design affords the best features of prior source-follower- and operational-amplifier-based designs while overcoming the major limitations of those designs. The limitations can be summarized as follows: a) For a source-follower-based signal chain, the ohmic voltage drop associated with DC bias current flowing through the column-selection FET causes unacceptable voltage offset, nonlinearity, and reduced small-signal gain. b) For an operational-amplifier-based signal chain, the required bias current and the output noise increase superlinearly with the size of the pixel array because of a corresponding increase in the effective capacitance of the row bus used to couple the sampled column charges to the operational amplifier. The effect of the bus capacitance is to simultaneously slow down the readout circuit and increase noise through the Miller effect.

  7. Improvement of X-ray Imaging Crystal Spectrometers for KSTAR

    NASA Astrophysics Data System (ADS)

    Lee, Sang Gon; Bitter, M.; Nam, U. W.; Moon, M. K.

    2005-10-01

    The X-ray imaging crystal spectrometers for the KSTAR tokamak will provide spatially and temporally resolved spectra of the resonance line of helium-like argon (or krypton) and the associated satellites from multiple lines of sight parallel and perpendicular to the horizontal mid-plane for measurements of the profiles of the ion and electron temperatures, plasma rotation velocity, and ionization equilibrium. Each spectrometer consists of a spherically bent quartz crystal and a large (10 cm x 30 cm) 2D position-sensitive multi-wire proportional counter. A 2D detector with delay-line readout and supporting electronics has been fabricated and tested on the NSTX tokamak at PPPL. The position resolution and count rate capability of the 2D detector still need to be improved to meet the requirements. Hence, a segmented version of the 2D detector is under development to satisfy the requirements. The experimental results from the improved 2D detector will be presented.

  8. Improving Prediction of Prostate Cancer Recurrence using Chemical Imaging

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Kajdacsy-Balla, André; Macias, Virgilia; Walsh, Michael; Sinha, Saurabh; Bhargava, Rohit

    2015-03-01

    Precise outcome prediction is crucial to providing optimal cancer care across the spectrum of solid cancers. Clinically useful tools to predict the risk of adverse events (metastases, recurrence), however, remain deficient. Here, we report an approach to predict the risk of prostate cancer recurrence, at the time of initial diagnosis, using a combination of emerging chemical imaging, a diagnostic protocol that focuses simultaneously on the tumor and its microenvironment, and data analysis of frequent patterns in molecular expression. Fourier transform infrared (FT-IR) spectroscopic imaging was employed to record the structure and molecular content of prostatectomy tumor samples. We analyzed data from a mid-grade-dominant patient cohort, the largest such cohort in the modern era and one in whom prognostic methods are largely ineffective. Our approach outperforms the two widely used tools, the Kattan nomogram and the CAPRA-S score, in a head-to-head comparison for predicting risk of recurrence. Importantly, the approach provides a histologic basis for the prediction, identifying chemical and morphologic features in the tumor microenvironment that are independent of conventional clinical information, opening the door to similar advances in other solid tumors.

  9. Towards production-level cardiac image analysis with grids.

    PubMed

    Maheshwari, Ketan; Glatard, Tristan; Schaerer, Joël; Delhay, Bertrand; Camarasu-Pop, Sorina; Clarysse, Patrick; Montagnat, Johan

    2009-01-01

    Production exploitation of cardiac image analysis tools is hampered by the lack of proper IT infrastructure in health institutions, the non-trivial integration of heterogeneous codes in coherent analysis procedures, and the need to achieve complete automation of these methods. HealthGrids are promising technologies to address these difficulties. This paper details how they can be complemented by high-level problem-solving environments such as workflow managers to improve the performance of applications both in terms of execution time and robustness of results. Two of the most important cardiac image analysis tasks are considered, namely myocardium segmentation and motion estimation in a 4D sequence. Results are shown on the corresponding pipelines, using two different execution environments on the EGEE grid production infrastructure.

  10. Multispectral image compression methods for improvement of both colorimetric and spectral accuracy

    NASA Astrophysics Data System (ADS)

    Liang, Wei; Zeng, Ping; Xiao, Zhaolin; Xie, Kun

    2016-07-01

    We propose that both colorimetric and spectral distortion in compressed multispectral images can be reduced by a composite model, named OLCP(W)-X (OptimalLeaders_Color clustering-PCA-W weighted-X coding). In the model, the spectral-colorimetric clustering is first designed for a sparse equivalent representation by generating a spatial basis. Principal component analysis (PCA) is subsequently applied to the spatial basis for spectral redundancy removal. An error compensation mechanism is then used to produce a predicted difference image, which is finally combined with the visual characteristic matrix W, and the created image is compressed by traditional multispectral image coding schemes. We introduce four model-based algorithms to demonstrate the model's validity. The first two algorithms are OLCPWKWS (OLC-PCA-W-KLT-WT-SPIHT) and OLCPKWS, in which the Karhunen-Loeve transform, wavelet transform, and set partitioning in hierarchical trees coding are applied to compress the created image. The latter two methods are OLCPW-JPEG2000-MCT and OLCP-JPEG2000-MCT. Experimental results show that, compared with the corresponding traditional coding, the proposed OLCPW-X schemes can significantly improve the colorimetric accuracy of the rebuilt images under various illumination conditions and generally achieve a satisfactory peak signal-to-noise ratio at the same compression ratio. The OLCP-X methods consistently ensure superior spectrum reconstruction. Furthermore, our model also performs well with user interaction.
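
    The PCA step for spectral redundancy removal can be sketched with a power-iteration estimate of the leading principal component of a set of spectra. This toy version keeps only one component; the model retains as many as needed, and the clustering and error-compensation stages are omitted.

```python
def first_pc(spectra, iters=100):
    """Leading principal component of mean-centered spectra via power
    iteration on X^T X (no external linear-algebra library needed)."""
    n, d = len(spectra), len(spectra[0])
    mean = [sum(s[j] for s in spectra) / n for j in range(d)]
    x = [[s[j] - mean[j] for j in range(d)] for s in spectra]
    v = [1.0] * d
    for _ in range(iters):
        scores = [sum(r[j] * v[j] for j in range(d)) for r in x]   # X v
        v = [sum(scores[i] * x[i][j] for i in range(n)) for j in range(d)]  # X^T (X v)
        norm = sum(c * c for c in v) ** 0.5 or 1.0
        v = [c / norm for c in v]
    return mean, v

def project_reconstruct(spectrum, mean, v):
    """Rank-1 reconstruction of one spectrum from its first-PC score:
    a single coefficient replaces the full band vector."""
    score = sum((spectrum[j] - mean[j]) * v[j] for j in range(len(v)))
    return [mean[j] + score * v[j] for j in range(len(v))]
```

    When the spectra are highly correlated across bands, the rank-1 reconstruction is nearly lossless, which is exactly the redundancy the coding scheme exploits.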

  11. Compact Microscope Imaging System With Intelligent Controls Improved

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    The Compact Microscope Imaging System (CMIS) with intelligent controls is a diagnostic microscope analysis tool with intelligent controls for use in space, industrial, medical, and security applications. This compact miniature microscope, which can perform tasks usually reserved for conventional microscopes, has unique advantages in the fields of microscopy, biomedical research, inline process inspection, and space science. Its unique approach integrates a machine vision technique with an instrumentation and control technique that provides intelligence via the use of adaptive neural networks. The CMIS system was developed at the NASA Glenn Research Center specifically for interface detection used for colloid hard spheres experiments; biological cell detection for patch clamping, cell movement, and tracking; and detection of anode and cathode defects for laboratory samples using microscope technology.

  12. Automated quantitative image analysis of nanoparticle assembly

    NASA Astrophysics Data System (ADS)

    Murthy, Chaitanya R.; Gao, Bo; Tao, Andrea R.; Arya, Gaurav

    2015-05-01

    The ability to characterize higher-order structures formed by nanoparticle (NP) assembly is critical for predicting and engineering the properties of advanced nanocomposite materials. Here we develop a quantitative image analysis software to characterize key structural properties of NP clusters from experimental images of nanocomposites. This analysis can be carried out on images captured at intermittent times during assembly to monitor the time evolution of NP clusters in a highly automated manner. The software outputs averages and distributions in the size, radius of gyration, fractal dimension, backbone length, end-to-end distance, anisotropic ratio, and aspect ratio of NP clusters as a function of time along with bootstrapped error bounds for all calculated properties. The polydispersity in the NP building blocks and biases in the sampling of NP clusters are accounted for through the use of probabilistic weights. This software, named Particle Image Characterization Tool (PICT), has been made publicly available and could be an invaluable resource for researchers studying NP assembly. To demonstrate its practical utility, we used PICT to analyze scanning electron microscopy images taken during the assembly of surface-functionalized metal NPs of differing shapes and sizes within a polymer matrix. PICT is used to characterize and analyze the morphology of NP clusters, providing quantitative information that can be used to elucidate the physical mechanisms governing NP assembly.

  13. A chaos-based digital image encryption scheme with an improved diffusion strategy.

    PubMed

    Fu, Chong; Chen, Jun-jie; Zou, Hao; Meng, Wei-hong; Zhan, Yong-feng; Yu, Ya-wen

    2012-01-30

    Chaos-based image ciphers have been widely investigated over the last decade or so to meet the increasing demand for real-time secure image transmission over public networks. In this paper, an improved diffusion strategy is proposed to improve the efficiency of the most widely investigated permutation-diffusion type of image cipher. By using the novel bidirectional diffusion strategy, the spreading process is significantly accelerated, and hence the same level of security can be achieved with fewer overall encryption rounds. Moreover, to further enhance the security of the cryptosystem, a plaintext-related chaotic orbit turbulence mechanism is introduced into the diffusion procedure by perturbing the control parameter of the employed chaotic system according to the cipher-pixel. Extensive cryptanalysis has been performed on the proposed scheme using differential analysis, key space analysis, various statistical analyses and key sensitivity analysis. The results indicate that the new scheme has a satisfactory security level with low computational complexity, which renders it a good candidate for real-time secure image transmission applications.
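A minimal sketch of a bidirectional diffusion round, assuming a logistic-map keystream (the paper's actual chaotic system, parameters, and plaintext-related perturbation are not reproduced here): one forward and one backward additive pass, which together spread any single-pixel change over the entire image in one round.

```python
import numpy as np

def keystream(x0, r, n):
    """Logistic-map keystream of n bytes (toy stand-in for the paper's
    chaotic system; x0 and r are illustrative, not the authors')."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return np.array(out, dtype=np.uint8)

def diffuse(pixels, key, reverse=False):
    """One additive diffusion pass: c[i] = p[i] + c[prev] + k[i] mod 256."""
    idx = range(len(pixels) - 1, -1, -1) if reverse else range(len(pixels))
    out = np.zeros_like(pixels)
    prev = 0
    for i in idx:
        prev = (int(pixels[i]) + prev + int(key[i])) % 256
        out[i] = prev
    return out

def undiffuse(cipher, key, reverse=False):
    """Inverse of diffuse: p[i] = c[i] - c[prev] - k[i] mod 256."""
    idx = range(len(cipher) - 1, -1, -1) if reverse else range(len(cipher))
    out = np.zeros_like(cipher)
    prev = 0
    for i in idx:
        out[i] = (int(cipher[i]) - prev - int(key[i])) % 256
        prev = int(cipher[i])
    return out

rng = np.random.default_rng(1)
plain = rng.integers(0, 256, 64).astype(np.uint8)
key = keystream(0.3456, 3.99, plain.size)
# Bidirectional diffusion: forward pass, then backward pass.
cipher = diffuse(diffuse(plain, key), key, reverse=True)
recovered = undiffuse(undiffuse(cipher, key, reverse=True), key)
```

Decryption simply undoes the two passes in reverse order, which is what makes the scheme usable as a cipher component.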

  14. Developing an efficient technique for satellite image denoising and resolution enhancement for improving classification accuracy

    NASA Astrophysics Data System (ADS)

    Thangaswamy, Sree Sharmila; Kadarkarai, Ramar; Thangaswamy, Sree Renga Raja

    2013-01-01

    Satellite images are corrupted by noise during image acquisition and transmission. Removing noise by attenuating the high-frequency image components removes important details as well. In order to retain the useful information, improve the visual appearance, and accurately classify an image, an effective denoising technique is required. We discuss three important steps for improving accuracy on a noisy image: image denoising, resolution enhancement, and classification. An effective denoising technique, hybrid directional lifting, is proposed to retain the important details of the images and improve their visual appearance. A discrete wavelet transform-based interpolation is developed for enhancing the resolution of the denoised image. The image is then classified using a support vector machine, which is superior to other neural network classifiers. Quantitative performance measures such as peak signal-to-noise ratio and classification accuracy show the significance of the proposed techniques.
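The peak signal-to-noise ratio mentioned above is computed directly from the mean squared error; a minimal sketch, with an arbitrary synthetic image and noise level:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    denoised/estimated image with the given peak intensity."""
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(6)
clean = rng.random((64, 64))                      # reference image in [0, 1]
noisy = clean + 0.05 * rng.normal(size=clean.shape)
value = psnr(clean, noisy)                        # roughly 26 dB here
```

A denoiser that raises this value while preserving edges is what the classification stage downstream benefits from.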

  15. BioImage Suite: An integrated medical image analysis suite: An update.

    PubMed

    Papademetris, Xenophon; Jackowski, Marcel P; Rajeevan, Nallakkandi; DiStasio, Marcello; Okuda, Hirohito; Constable, R Todd; Staib, Lawrence H

    2006-01-01

    BioImage Suite is an NIH-supported medical image analysis software suite developed at Yale. It leverages both the Visualization Toolkit (VTK) and the Insight Toolkit (ITK), and it includes many additional algorithms for image analysis, especially in the areas of segmentation, registration, diffusion-weighted image processing and fMRI analysis. BioImage Suite has a user-friendly interface developed in the Tcl scripting language. A final beta version is freely available for download.

  16. Tracking of Laboratory Debris Flow Fronts with Image Analysis

    NASA Astrophysics Data System (ADS)

    Queiroz de Oliveira, Gustavo; Kulisch, Helmut; Fischer, Jan-Thomas; Scheidl, Christian; Pudasaini, Shiva P.

    2015-04-01

    Image analysis techniques are applied to track the time evolution of rapid debris flow fronts and their velocities in laboratory experiments. These experiments are part of the project avaflow.org, which intends to develop a GIS-based open-source computational tool to describe a wide spectrum of rapid geophysical mass flows, including avalanches and real two-phase debris flows down complex natural slopes. The laboratory model consists of a large rectangular channel, 1.4 m wide and 10 m long, with adjustable inclination and other flow configurations. The setup allows the investigation of different two-phase material compositions, including large fluid fractions, and its large size makes it possible to transfer the results to large-scale natural events while providing increased measurement accuracy. The images are captured by a standard high-speed digital camera. The fronts are tracked in the captured frames to obtain data on the debris flow experiments: a reflectance analysis detects the debris front in every image frame, since the front's presence changes the reflectance at a given pixel location during the flow. The accuracy of the measurements was improved with a camera calibration procedure. The systematic distortions of the camera lens, one of the great problems in imaging and analysis, are described in terms of radial and tangential parameters, and the calibration procedure estimates optimal values for these parameters. This allows us to obtain physically correct, undistorted image pixels. We then map the images onto the physical model geometry using projective photogrammetry, in which the image coordinates are connected with the object-space coordinates of the flow. Finally, the mapping is rewritten in the direct linear transformation form, which allows conversion from one coordinate system to the other. With our approach, the debris front position can then be estimated by combining the reflectance analysis, the calibration and the linear transformation. The consecutive debris front
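The direct linear transformation step can be illustrated with a standard planar DLT fit: estimate the 3x3 projective mapping from image pixel coordinates to object-space coordinates from point correspondences. The ground-truth matrix and the points below are fabricated for the demonstration; they are not from the experiments.

```python
import numpy as np

def dlt_homography(img_pts, obj_pts):
    """Estimate the 3x3 projective (DLT) mapping from image coordinates
    to object-space coordinates from >= 4 correspondences."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, obj_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    # Homogeneous least squares: smallest right singular vector of A.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pt):
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

# Known ground-truth mapping used to fabricate exact correspondences.
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
img = [(0, 0), (100, 0), (100, 80), (0, 80), (50, 40)]
obj = [apply_h(H_true, p) for p in img]
H_est = dlt_homography(img, obj)
errs = [np.linalg.norm(apply_h(H_est, p) - q) for p, q in zip(img, obj)]
```

With exact correspondences the estimated mapping reprojects every point onto its object-space coordinates to machine precision.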

  17. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, which give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and the spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.
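The basis-expansion idea can be shown with a toy linear model: represent a reflectance curve as a weighted sum of a few smooth basis functions and recover the weights by least squares. The polynomial basis here is a stand-in; in practice the basis would be derived from measured reflectance and illuminant datasets.

```python
import numpy as np

# Hypothetical smooth basis over the visible range: low-order
# polynomials in a normalized wavelength coordinate.
wavelengths = np.linspace(400, 700, 31)            # nm
t = (wavelengths - 550.0) / 150.0                  # normalized coordinate
basis = np.stack([t ** 0, t, t ** 2], axis=1)      # (31, 3) basis matrix

# A reflectance curve that lies exactly in the span of the basis.
w_true = np.array([0.5, 0.2, -0.3])
reflectance = basis @ w_true

# Analysis step: recover the expansion weights by least squares.
w_est, *_ = np.linalg.lstsq(basis, reflectance, rcond=None)
```

Because natural reflectances are highly correlated across wavelengths, a handful of weights like these can stand in for the full 31-sample curve, which is what makes the encoding efficient.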

  18. An improved method for polarimetric image restoration in interferometry

    NASA Astrophysics Data System (ADS)

    Pratley, Luke; Johnston-Hollitt, Melanie

    2016-11-01

    Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually, as if they were independent scalar images. However, here we demonstrate that for the case of linear polarization P, this approach fails to properly account for the complex vector nature of the emission, resulting in a process which depends on the axes under which the deconvolution is performed. We present an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dewdney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components, with fewer spurious detections and lower computational cost due to reduced iterations, than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources as compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys, and in particular that the complex version of an SDI CLEAN should be used.
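The core idea, peak-finding and subtraction on the complex field P = Q + iU rather than on Q and U separately, can be sketched with a minimal Högbom-style loop. This is an illustrative toy, not the published Generalized Complex CLEAN implementation; the Gaussian beam, gain and threshold are arbitrary choices.

```python
import numpy as np

def hogbom_complex_clean(dirty, beam, gain=0.1, niter=300, thresh=1e-3):
    """Minimal Högbom-style CLEAN on a complex image: peaks are located
    on |P| and the complex peak value is subtracted, so the loop is
    invariant under rotations of the (Q, U) axes. `beam` must be a
    square, odd-sized array with its peak at the centre."""
    h = beam.shape[0] // 2
    res = np.pad(dirty.astype(complex), h)        # padded residual
    comps = np.zeros(dirty.shape, dtype=complex)
    for _ in range(niter):
        core = res[h:h + dirty.shape[0], h:h + dirty.shape[1]]
        iy, ix = np.unravel_index(np.argmax(np.abs(core)), core.shape)
        peak = core[iy, ix]
        if np.abs(peak) < thresh:
            break
        comps[iy, ix] += gain * peak              # accumulate component
        # Subtract a scaled, shifted copy of the beam from the residual.
        res[iy:iy + beam.shape[0], ix:ix + beam.shape[1]] -= gain * peak * beam
    return comps, res[h:h + dirty.shape[0], h:h + dirty.shape[1]]

# Toy test case: one complex point source convolved with a Gaussian beam.
yy, xx = np.mgrid[-4:5, -4:5]
beam = np.exp(-(xx ** 2 + yy ** 2) / 4.0)         # 9x9, peak 1 at centre
dirty = np.zeros((32, 32), dtype=complex)
dirty[12:21, 12:21] = (2.0 + 1.0j) * beam         # source at (16, 16)
comps, resid = hogbom_complex_clean(dirty, beam, gain=0.2)
```

For this single source the loop recovers the complex flux 2 + i at the right pixel and drives the residual below the threshold, illustrating why the complex treatment needs no preferred Q/U axis.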

  19. Image analysis for measuring rod network properties

    NASA Astrophysics Data System (ADS)

    Kim, Dongjae; Choi, Jungkyu; Nam, Jaewook

    2015-12-01

    In recent years, metallic nanowires have been attracting significant attention as next-generation flexible transparent conductive films. The performance of such films depends on the network structure created by the nanowires. Gaining an understanding of that structure, such as the connectivity, coverage, and alignment of the nanowires, requires knowledge of the individual nanowires inside microscopic images taken from the film. Although nanowires are flexible to a certain extent, they are usually depicted as rigid rods in many analytical and computational studies. Herein, we propose a simple and straightforward algorithm based on filtering in the frequency domain for detecting rod-shaped objects inside binary images. The proposed algorithm uses a specially designed filter in the frequency domain to detect image segments, namely, the connected components aligned in a certain direction. These components are post-processed and combined, under a given merging rule, into single rod objects. In this study, the microscopic properties of rod networks relevant to the opto-electric performance of transparent conductive films, namely the alignment distribution, length distribution, and area fraction, were measured. To verify the algorithm and find its optimum parameters, numerical experiments were performed on synthetic images with predefined properties. By selecting proper parameters, the algorithm was used to investigate silver nanowire transparent conductive films fabricated by the dip coating method.
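The flavor of the frequency-domain approach can be demonstrated with an angular wedge filter: a rod at orientation θ concentrates its spectral energy along the direction perpendicular to θ, so summing the power inside a wedge picks out rods of a chosen alignment. The wedge width and the energy measure below are illustrative choices, not the parameters of the proposed algorithm.

```python
import numpy as np

def directional_energy(binary_img, angle_deg, half_width_deg=15.0):
    """Fraction of non-DC spectral power inside an angular wedge
    perpendicular to rods at angle_deg (degrees from the x-axis)."""
    F = np.fft.fftshift(np.fft.fft2(binary_img.astype(float)))
    n, m = binary_img.shape
    fy, fx = np.mgrid[-(n // 2):(n + 1) // 2, -(m // 2):(m + 1) // 2]
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0
    target = (angle_deg + 90.0) % 180.0   # spectrum is perpendicular to the rod
    diff = np.abs(theta - target)
    diff = np.minimum(diff, 180.0 - diff)
    power = np.abs(F) ** 2
    power[n // 2, m // 2] = 0.0           # drop the DC term
    return power[diff <= half_width_deg].sum() / power.sum()

img_h = np.zeros((64, 64))
img_h[32, 16:48] = 1.0                    # a single horizontal rod
img_v = img_h.T                           # the same rod, vertical
e_hh = directional_energy(img_h, 0.0)     # horizontal detector, horizontal rod
e_hv = directional_energy(img_h, 90.0)    # vertical detector, horizontal rod
```

Sweeping `angle_deg` over 0-180 degrees on a real binary image would give an alignment distribution of the kind the study measures.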

  20. Computerized image analysis of digitized infrared images of breasts from a scanning infrared imaging system

    NASA Astrophysics Data System (ADS)

    Head, Jonathan F.; Lipari, Charles A.; Elliot, Robert L.

    1998-10-01

    Infrared imaging of the breasts has been shown to be of value in risk assessment, detection, diagnosis and prognosis of breast cancer. However, infrared imaging has not been widely accepted, for a variety of reasons, including the lack of standardization of the subjective visual analysis method. The subjective nature of the standard visual analysis makes it difficult to achieve equivalent results with different equipment and different interpreters of the infrared patterns of the breasts. Therefore, this study was undertaken to develop more objective analysis methods for infrared images of the breasts, creating objective semiquantitative and quantitative analyses from computer-assisted image analysis of mean temperatures of whole breasts and of breast quadrants. When using objective quantitative data on whole breasts (comparing differences in the means of the left and right breasts), semiquantitative data on quadrants of the breasts (determining an index by summation of scores for each quadrant), or a summation of quantitative data on quadrants of the breasts, there was a decrease in the number of abnormal patterns (positives) in patients being screened for breast cancer and an increase in the number of abnormal patterns (true positives) in the breast cancer patients. It is hoped that the decrease in positives in women being screened for breast cancer will translate into a decrease in false positives, but larger numbers of women with longer follow-up will be needed to clarify this. A much larger group of breast cancer patients will also need to be studied in order to see whether there is a true increase in the percentage of breast cancer patients presenting with abnormal infrared images of the breasts under these objective image analysis methods.
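A minimal sketch of the quadrant-based quantitative comparison: split each temperature map into four quadrants, take the mean temperature of each, and score the left-right differences. The quadrant split and the asymmetry index below are hypothetical simplifications of the semiquantitative scoring described above, and the temperature maps are fabricated.

```python
import numpy as np

def quadrant_means(temps):
    """Mean temperature of the four quadrants of a 2D temperature map."""
    h, w = temps.shape
    return np.array([temps[:h // 2, :w // 2].mean(),   # upper left
                     temps[:h // 2, w // 2:].mean(),   # upper right
                     temps[h // 2:, :w // 2].mean(),   # lower left
                     temps[h // 2:, w // 2:].mean()])  # lower right

def asymmetry_index(left_breast, right_breast):
    """Sum of absolute quadrant-mean differences (hypothetical index)."""
    return float(np.abs(quadrant_means(left_breast)
                        - quadrant_means(right_breast)).sum())

rng = np.random.default_rng(7)
base = 33.0 + 0.1 * rng.normal(size=(64, 64))          # left breast
normal_right = 33.0 + 0.1 * rng.normal(size=(64, 64))
abnormal_right = normal_right.copy()
abnormal_right[:32, :32] += 1.5                        # a 1.5 degree hot quadrant
idx_normal = asymmetry_index(base, normal_right)
idx_abnormal = asymmetry_index(base, abnormal_right)
```

A threshold on such an index is one way an objective method can replace the subjective visual reading of the thermal patterns.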

  1. Pain related inflammation analysis using infrared images

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact and radiation-free imaging modality for the assessment of abnormal, painful inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently occurring form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, an inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms has been captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms are pre-processed, and areas of interest are extracted for further processing. The investigation of the spread of inflammation is performed along with a statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain diseases; ii) analysis of the spread of the inflammation related to RA and OA using K-means clustering; iii) first- and second-order statistical analysis of the pre-processed thermograms. The conclusion reflects that, in most cases, RA-oriented inflammation affects both knees, whereas inflammation related to OA is present in a single knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
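The K-means step for separating warmer inflamed regions from cooler normal tissue can be sketched on scalar temperature values. The two-cluster toy data below are fabricated, and the study's pre-processing and region extraction are not reproduced.

```python
import numpy as np

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain k-means on scalar values, e.g. thermogram temperatures.
    Returns sorted cluster centres and per-sample labels."""
    rng = np.random.default_rng(seed)
    centers = np.array(rng.choice(values, size=k, replace=False), dtype=float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    centers = np.sort(centers)
    labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
    return centers, labels

rng = np.random.default_rng(8)
# Fabricated temperatures: a cooler normal region and a hotter inflamed one.
temps = np.concatenate([rng.normal(31.0, 0.2, 200), rng.normal(35.0, 0.2, 200)])
centers, labels = kmeans_1d(temps, 2)
```

On real thermograms the spatial extent of the hotter cluster is what indicates how far the inflammation has spread.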

  2. Improved elastic medical image registration using mutual information

    NASA Astrophysics Data System (ADS)

    Ens, Konstantin; Schumacher, Hanno; Franz, Astrid; Fischer, Bernd

    2007-03-01

    One of the future-oriented areas of medical image processing is the development of fast and exact algorithms for image registration. By joining multi-modal images we are able to compensate for the disadvantages of one imaging modality with the advantages of another. For instance, a Computed Tomography (CT) image containing the anatomy can be combined with the metabolic information of a Positron Emission Tomography (PET) image. It is quite conceivable that a patient will not have the same position in both imaging systems. Furthermore, some regions, for instance in the abdomen, can vary in shape and position due to different filling of the rectum. So a multi-modal image registration is needed to calculate a deformation field for one image in order to maximize the similarity between the two images, described by a so-called distance measure. In this work, we present a method to adapt a multi-modal distance measure, here mutual information (MI), with weighting masks. These masks are used to enhance relevant image structures and suppress image regions which would otherwise disturb the registration process. The performance of our method is tested on phantom data and real medical images.
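A histogram-based mutual information measure with optional per-pixel weights (playing the role of the weighting masks described above) can be sketched as follows; the images are synthetic stand-ins rather than CT/PET data.

```python
import numpy as np

def mutual_information(a, b, bins=32, weights=None):
    """MI between two images from a (optionally weighted) joint
    histogram; `weights` emphasizes relevant structures, as the
    paper's masks do."""
    hist, _, _ = np.histogram2d(
        a.ravel(), b.ravel(), bins=bins,
        weights=None if weights is None else weights.ravel())
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64))
# A "second modality": same structure, different intensity mapping.
related = img * 2.0 + 0.1 * rng.normal(size=img.shape)
shuffled = rng.permutation(img.ravel()).reshape(img.shape)
mi_related = mutual_information(img, related)
mi_shuffled = mutual_information(img, shuffled)
```

A registration loop would deform one image to maximize this quantity; the weighted variant simply biases the histogram toward the masked structures.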

  3. Ultrasound window-modulated compounding Nakagami imaging: Resolution improvement and computational acceleration for liver characterization.

    PubMed

    Ma, Hsiang-Yang; Lin, Ying-Hsiu; Wang, Chiao-Yin; Chen, Chiung-Nien; Ho, Ming-Chih; Tsui, Po-Hsiang

    2016-08-01

    Ultrasound Nakagami imaging is an attractive method for visualizing changes in envelope statistics. Window-modulated compounding (WMC) Nakagami imaging was reported to improve image smoothness. The sliding window technique is typically used for constructing ultrasound parametric and Nakagami images. Using a large window overlap ratio may improve the WMC Nakagami image resolution but reduces computational efficiency. The objectives of this study therefore include: (i) exploring the effects of the window overlap ratio on the resolution and smoothness of WMC Nakagami images; (ii) proposing a fast algorithm based on the convolution operator (FACO) to accelerate WMC Nakagami imaging. Computer simulations and preliminary clinical tests on liver fibrosis samples (n=48) were performed to validate the FACO-based WMC Nakagami imaging. The results demonstrated that the width of the autocorrelation function and the parameter distribution of the WMC Nakagami image narrow as the window overlap ratio increases. One-pixel shifting (i.e., sliding the window over the image data in steps of one pixel for parametric imaging), the maximum overlap ratio, significantly improves WMC Nakagami image quality. Concurrently, the proposed FACO method, combined with a computational platform that optimizes the matrix computation, can accelerate WMC Nakagami imaging, allowing the detection of liver fibrosis-induced changes in envelope statistics. FACO-accelerated WMC Nakagami imaging is a new-generation Nakagami imaging technique with improved image quality and fast computation.
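The moment-based Nakagami parameter map and the convolution-style acceleration can be sketched together: window sums computed with an integral image (two cumulative sums) replace per-window loops, which is the spirit of FACO though not its actual implementation. The window size and the Rayleigh test data are illustrative; a Rayleigh envelope corresponds to m = 1.

```python
import numpy as np

def boxsum(x, w):
    """Sliding w-by-w window sum via an integral image (two cumsums),
    avoiding an explicit loop over window positions."""
    c = np.cumsum(np.cumsum(x, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]

def nakagami_m(envelope, w):
    """Moment (inverse normalized variance) estimate of the Nakagami m
    parameter over every w-by-w window of an envelope image."""
    n = w * w
    e2 = envelope.astype(float) ** 2
    m1 = boxsum(e2, w) / n              # local mean of R^2
    m2 = boxsum(e2 ** 2, w) / n         # local mean of R^4
    var = m2 - m1 ** 2
    return m1 ** 2 / np.maximum(var, 1e-12)

rng = np.random.default_rng(3)
# Rayleigh-distributed envelope (fully developed speckle) -> m near 1.
env = rng.rayleigh(scale=1.0, size=(128, 128))
m_map = nakagami_m(env, 32)
```

Because the cost of `boxsum` is independent of the window step, a one-pixel shift (the maximum overlap ratio) costs no more than a coarse one, which is the efficiency argument made above.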

  4. Discriminating enumeration of subseafloor life using automated fluorescent image analysis

    NASA Astrophysics Data System (ADS)

    Morono, Y.; Terada, T.; Masui, N.; Inagaki, F.

    2008-12-01

    Enumeration of microbial cells in marine subsurface sediments has provided fundamental information for understanding the extent of life and the deep biosphere on Earth. The microbial population has been evaluated manually by direct cell counting under the microscope, because automated recognition of cell-derived fluorescent signals has been extremely difficult. Here, we improved the conventional method by removing non-specific fluorescent backgrounds and enumerated the cell population in sediments using a newly developed automated microscopic imaging system. Although SYBR Green I is known to bind specifically to double-stranded DNA (Lunau et al., 2005), we still observed some SYBR-stainable particulate matter (SYBR-SPAMs) in heat-sterilized control sediments (450°C, 6 h), assumed to be silicates or mineralized organic matter. A newly developed acid-wash treatment with hydrofluoric acid (HF), followed by image analysis, successfully removed these background objects and yielded artifact-free microscopic images. To obtain statistically meaningful fluorescent images, we constructed a computer-assisted automated cell counting system. Given the comparative data set of cell abundance in acid-washed marine sediments evaluated by SYBR Green I and acridine orange (AO) staining, with and without image analysis, our protocol can provide statistically meaningful absolute numbers of discriminated cell-derived fluorescent signals.

  5. Machine learning for medical images analysis.

    PubMed

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods.

  6. Global Methods for Image Motion Analysis

    DTIC Science & Technology

    1992-10-01

    ...of images to determine egomotion and to extract information from the scene. Research in motion analysis has been focussed on the problems of

  7. Improved metrology of implant lines on static images of textured silicon wafers using line integral method

    NASA Astrophysics Data System (ADS)

    Shah, Kuldeep; Saber, Eli; Verrier, Kevin

    2015-02-01

    In solar wafer manufacturing processes, the measurement of implant mask wear over time is important for maintaining the quality of wafers and the overall yield. Mask wear can be estimated by measuring the width of the lines it implants on the substrate. Previously proposed image analysis methods for detecting and measuring these lines have been shown to perform well on polished wafers. Although it is easier to capture images of textured wafers, the contrast between the foreground and background is extremely low. In this paper, an improved technique to detect and measure implant line widths on textured solar wafers is proposed. As a pre-processing step, a fast non-local means method is used to denoise the image, owing to the presence of repeated patterns of textured lines in the image. Following image enhancement, the previously proposed line integral method is used to extract the position of each line in the image. A Full-Width One-Third-Maximum approximation is then used to estimate the line widths in pixel units. The conversion of these widths into real-world metric units is done using a photogrammetric approach involving the Sampling Distance. The proposed technique is evaluated using real images of textured wafers and compared with the state of the art using identical synthetic images, to which varying amounts of noise were added. Precision, recall and F-measure values are calculated to benchmark the proposed technique. The proposed method is found to be more robust to noise, with the critical SNR value reduced by 10 dB in comparison with the existing method.
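The width-estimation step can be illustrated on a 1D intensity profile: locate the peak and linearly interpolate the two crossings at one-third of the maximum. The Gaussian test profile is synthetic, and the pixel-to-metric conversion via the sampling distance is omitted.

```python
import numpy as np

def width_at_fraction(profile, frac=1.0 / 3.0):
    """Full width of a single-peaked 1D profile at `frac` of its maximum,
    with linear interpolation at the two level crossings."""
    level = frac * profile.max()
    peak = int(np.argmax(profile))
    # Walk left from the peak to the first sample below the level.
    i = peak
    while i > 0 and profile[i - 1] >= level:
        i -= 1
    if i > 0:    # crossing lies between samples i-1 and i
        left = (i - 1) + (level - profile[i - 1]) / (profile[i] - profile[i - 1])
    else:
        left = 0.0
    # Walk right from the peak to the first sample below the level.
    j = peak
    while j < len(profile) - 1 and profile[j + 1] >= level:
        j += 1
    if j < len(profile) - 1:    # crossing between samples j and j+1
        right = j + (profile[j] - level) / (profile[j] - profile[j + 1])
    else:
        right = float(j)
    return right - left

# Gaussian line profile: analytic full width at 1/3 max is 2*sigma*sqrt(2*ln 3).
x = np.arange(101)
sigma = 8.0
line = np.exp(-(x - 50.0) ** 2 / (2.0 * sigma ** 2))
w = width_at_fraction(line)
```

For a Gaussian line the interpolated estimate matches the closed-form width to a small fraction of a pixel, which is the precision regime where the 10 dB noise-robustness comparison matters.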

  8. Improved B-spline image registration between exhale and inhale lung CT images based on intensity and gradient orientation information

    NASA Astrophysics Data System (ADS)

    Nam, Woo Hyun; Oh, Jihun; Yi, Jonghyon; Park, Yongsup; Cho, Hansu; Kim, Sukjin

    2016-03-01

    Registration of lung CT images acquired at different respiratory phases is clinically relevant in many applications, such as follow-up analysis, lung function analysis based on mechanical elasticity, or pulmonary airflow analysis. In order to find an accurate and reliable transformation for registration, a proper choice of dissimilarity measure is important. Even though various intensity-based measures have been introduced for precise registration, registration performance may be limited because they mainly take intensity values into account without effectively considering useful spatial information. In this paper, we attempt to improve the non-rigid registration accuracy between exhale and inhale CT images of the lung by proposing a new dissimilarity measure based on gradient orientation, representing the spatial information, in addition to vessel-weighted intensity and normalized intensity information. Since it is necessary to develop a non-rigid registration that can account for large lung deformations, the B-spline free-form deformation (FFD) is adopted as the transformation model. Experimental tests on six clinical datasets show that the proposed method provides more accurate registration results than competing registration methods.
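One simple form of a gradient-orientation term is the mean squared cosine between local gradient directions; squaring makes it insensitive to contrast reversal between acquisitions. This is a generic illustration, not the paper's vessel-weighted dissimilarity measure or its B-spline FFD optimization, and the test images are synthetic.

```python
import numpy as np

def orientation_similarity(a, b, eps=1e-8):
    """Mean squared cosine of the angle between the local gradients of
    two images (1 = perfectly aligned edge orientations)."""
    gy_a, gx_a = np.gradient(a.astype(float))
    gy_b, gx_b = np.gradient(b.astype(float))
    dot = gx_a * gx_b + gy_a * gy_b
    denom = np.hypot(gx_a, gy_a) * np.hypot(gx_b, gy_b) + eps
    return float(((dot / denom) ** 2).mean())

yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(xx / 5.0) + np.cos(yy / 7.0)
aligned = 2.0 * img + 5.0               # same structure, remapped intensities
shifted = np.roll(img, 8, axis=1)       # misaligned copy
sim_aligned = orientation_similarity(img, aligned)
sim_shifted = orientation_similarity(img, shifted)
```

A registration driven by such a term rewards aligned edge directions even when the intensity relationship between the two images is unknown, which is exactly where pure intensity measures struggle.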

  9. Analysis of variance in spectroscopic imaging data from human tissues.

    PubMed

    Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit

    2012-01-17

    The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researchers' efforts to improve the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse data set. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. By estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and identify the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines for designing statistically valid studies in the spectroscopic analysis of tissue.
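The "portion of explained variation" in a one-way layout is the classic eta-squared: between-group sum of squares over total sum of squares. The groups below are synthetic stand-ins for, say, cell-type classes of one spectral metric.

```python
import numpy as np

def explained_variance(groups):
    """One-way ANOVA eta-squared: the fraction of total variance in a
    metric explained by group membership."""
    allv = np.concatenate(groups)
    grand = allv.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_total = ((allv - grand) ** 2).sum()
    return ss_between / ss_total

rng = np.random.default_rng(4)
# A factor with a strong effect: well-separated group means.
strong = [rng.normal(m, 1.0, 100) for m in (0.0, 5.0, 10.0)]
# A factor with no real effect: identical distributions.
weak = [rng.normal(0.0, 1.0, 100) for _ in range(3)]
eta_strong = explained_variance(strong)
eta_weak = explained_variance(weak)
```

Ranking spectral metrics by such explained-variance fractions is one way to find the most discriminating ones, as described above.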

  10. Tomographic spectral imaging: analysis of localized corrosion.

    SciTech Connect

    Michael, Joseph Richard; Kotula, Paul Gabriel; Keenan, Michael Robert

    2005-02-01

    Microanalysis is typically performed to analyze the near surface of materials. There are many instances where chemical information about the third spatial dimension is essential to the solution of materials analyses. The majority of 3D analyses, however, focus on limited spectral acquisition and/or analysis. For truly comprehensive 3D chemical characterization, 4D spectral images (a complete spectrum from each volume element of a region of a specimen) are needed. Furthermore, a robust statistical method is needed to extract the maximum amount of chemical information from that extremely large amount of data. In this paper, an example of the acquisition and multivariate statistical analysis of 4D (3 spatial and 1 spectral dimension) x-ray spectral images is described. The method of utilizing a single- or dual-beam FIB (without or with SEM) to access 3D chemistry has been described by others with respect to secondary-ion mass spectrometry. The basic methodology described in those works has been modified for comprehensive x-ray microanalysis in a dual-beam FIB/SEM (FEI Co. DB-235). In brief, the FIB is used to serially section a site-specific region of a sample, and then the electron beam is rastered over the exposed surfaces, with x-ray spectral images being acquired at each section. All this is performed without rotating or tilting the specimen between FIB cutting and SEM imaging/x-ray spectral image acquisition. The resultant 4D spectral image is then unfolded (number of volume elements by number of channels) and subjected to the same multivariate curve resolution (MCR) approach that has proven successful for the analysis of lower-dimension x-ray spectral images. The TSI data sets can be in excess of 4 Gbytes. This problem has been overcome (for now), and images up to 6 Gbytes have been analyzed in this work. The method for analyzing such large spectral images is described in this presentation.
A comprehensive 3D chemical analysis was performed on several corrosion specimens
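The unfolding step and a rank check can be sketched as follows. Actual MCR adds constraints (e.g. non-negativity) and rotates the abstract factors into physically meaningful spectra, which is not attempted here; the toy cube built from two component spectra is fabricated.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy 4D spectral image: 8x8x4 volume elements, 64 spectral channels,
# generated from two underlying component spectra plus a little noise,
# standing in for FIB/SEM x-ray data.
nx, ny, nz, nch = 8, 8, 4, 64
spectra = rng.random((2, nch))                 # two component spectra
abundance = rng.random((nx, ny, nz, 2))        # per-voxel abundances
cube = abundance @ spectra + 0.001 * rng.normal(size=(nx, ny, nz, nch))

# Unfold: (number of volume elements) x (number of channels), as done
# before applying MCR to the 4D spectral image.
unfolded = cube.reshape(-1, nch)

# The singular value spectrum reveals how many chemical components the
# data support; MCR would then factor the matrix under its constraints.
s = np.linalg.svd(unfolded, compute_uv=False)
```

Two singular values stand far above the noise floor here, matching the two components used to build the cube; on real multi-gigabyte TSI data the same unfolded layout is what makes a memory-efficient streaming factorization possible.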

  11. Microscopy image resolution improvement by deconvolution of complex fields.

    PubMed

    Cotte, Yann; Toy, M Fatih; Pavillon, Nicolas; Depeursinge, Christian

    2010-09-13

    Based on truncated inverse filtering, a theory for the deconvolution of complex fields is studied. The validity of the theory is verified by comparison with experimental data from digital holographic microscopy (DHM) using a high-NA system (NA=0.95). Comparison with standard intensity deconvolution reveals that only complex deconvolution deals correctly with coherent cross-talk. With improved image resolution, complex deconvolution is demonstrated to exceed the Rayleigh limit. The gain in resolution arises from accessing the object's complex field, containing the information encoded in the phase, and deconvolving it with the reconstructed complex transfer function (CTF). Synthetic amplitude point spread functions (APSF), based on Debye theory modeled with the experimental parameters of the microscope objective (MO), and experimental APSFs are used for the CTF reconstruction and compared. Thus, the optical system used for microscopy is characterized quantitatively by its APSF. The role of noise is discussed in the context of complex field deconvolution. As a further result, we demonstrate that complex deconvolution does not require any additional optics in the DHM setup while extending the limit of resolution with coherent illumination by a factor of at least 1.64.
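The truncated inverse filter at the heart of the method can be sketched for a known transfer function: divide the image spectrum by the OTF wherever its magnitude exceeds a threshold, and zero it elsewhere. The Gaussian PSF and complex point source here are synthetic; the paper instead reconstructs the complex transfer function from measured or modeled APSFs.

```python
import numpy as np

def truncated_inverse_deconv(image, psf, eps=1e-3):
    """Deconvolve a (possibly complex-valued) field by truncated inverse
    filtering: spectrum / OTF where |OTF| > eps, zero elsewhere."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))   # PSF centre moved to origin
    spec = np.fft.fft2(image)
    out = np.zeros_like(spec)
    good = np.abs(otf) > eps                   # truncation mask
    out[good] = spec[good] / otf[good]
    return np.fft.ifft2(out)

# Synthetic complex point source blurred by a centred Gaussian PSF.
yy, xx = np.mgrid[-32:32, -32:32]
psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * 2.0 ** 2))
psf /= psf.sum()
truth = np.zeros((64, 64), dtype=complex)
truth[20, 30] = 1.0 + 0.5j                     # complex field value
otf = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.fft.ifft2(np.fft.fft2(truth) * otf)
recovered = truncated_inverse_deconv(blurred, psf)
```

The truncation threshold `eps` is what keeps the division stable where the transfer function (and hence the signal-to-noise ratio) collapses, which is the role noise plays in the discussion above.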

  12. Improving GPR image resolution in lossy ground using dispersive migration

    USGS Publications Warehouse

    Oden, C.P.; Powers, M.H.; Wright, D.L.; Olhoeft, G.R.

    2007-01-01

    As a compact wave packet travels through a dispersive medium, it becomes dilated and distorted. As a result, ground-penetrating radar (GPR) surveys over conductive and/or lossy soils often result in poor image resolution. A dispersive migration method is presented that combines an inverse dispersion filter with frequency-domain migration. The method requires a fully characterized GPR system including the antenna response, which is a function of the local soil properties for ground-coupled antennas. The GPR system response spectrum is used to stabilize the inverse dispersion filter. Dispersive migration restores attenuated spectral components when the signal-to-noise ratio is adequate. Applying the algorithm to simulated data shows that the improved spatial resolution is significant when data are acquired with a GPR system having 120 dB or more of dynamic range, and when the medium has a loss tangent of 0.3 or more. Results also show that dispersive migration provides no significant advantage over conventional migration when the loss tangent is less than 0.3, or when using a GPR system with a small dynamic range. © 2007 IEEE.
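The stabilized inverse dispersion filter can be illustrated on a single trace (a hedged sketch: the per-frequency attenuation model and the noise floor below are hypothetical stand-ins for the full system response used in the paper):

```python
import numpy as np

def stabilized_inverse_filter(trace, attenuation, noise_floor=1e-2):
    """Apply a stabilized inverse dispersion filter to one GPR trace.

    `attenuation` is the per-frequency loss of the medium (0 < A(f) <= 1);
    components are only boosted where the attenuation sits above
    `noise_floor`, mimicking the use of the system response spectrum to
    stabilize the inverse."""
    spec = np.fft.rfft(trace)
    gain = np.where(np.abs(attenuation) > noise_floor, 1.0 / attenuation, 0.0)
    return np.fft.irfft(spec * gain, n=len(trace))

# Toy check: attenuate a wavelet's high frequencies, then restore them.
t = np.linspace(0, 1, 128, endpoint=False)
wavelet = np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 0.5) ** 2) / 0.005)
atten = np.exp(-np.linspace(0, 3, 65))          # stronger loss at high f
degraded = np.fft.irfft(np.fft.rfft(wavelet) * atten, n=128)
restored = stabilized_inverse_filter(degraded, atten)
assert np.allclose(restored, wavelet, atol=1e-8)
```

This also shows why dynamic range matters: components attenuated below the noise floor are zeroed rather than restored, so a small dynamic range leaves little to recover.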

  13. Image analysis from root system pictures

    NASA Astrophysics Data System (ADS)

    Casaroli, D.; Jong van Lier, Q.; Metselaar, K.

    2009-04-01

    Root research has been hampered by a lack of good methods and by the amount of time involved in making measurements. In general, root system studies are made with either the monolith or the minirhizotron method; the latter is used as a quantitative tool but requires comparison with conventional destructive methods. This work aimed to analyze root system images of different crops, obtained from a root atlas book, in order to find the root length and root length density and correlate them with the literature. Images of five crops (Zea mays, Secale cereale, Triticum aestivum, Medicago sativa and Panicum miliaceum) were divided into horizontal and vertical layers. Root length distribution was analyzed for horizontal as well as vertical layers. In order to obtain the root length density, each part of the image was assumed to correspond to a cuboidal volume. The results from regression analyses showed root length distributions according to horizontal or vertical layers. It was possible to find the root length distribution for single horizontal layers as a function of vertical layers, and also for single vertical layers as a function of horizontal layers. Regression analysis showed good fits when the root length distributions were grouped in horizontal layers according to the distance from the root center. When root length distributions were grouped according to soil horizons, the fits worsened. The resulting root length density estimates were lower than those commonly found in the literature, possibly because (1) the crop images depicted single plants, while the analyzed field experiments had more than one plant; (2) root overlapping may occur in the field; (3) root experiments, both in the field and in image analyses as performed here, are subject to sampling errors; and (4) the (hand drawn) images used in this study may have omitted some of the smallest roots.
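The cuboidal-volume assumption used to convert traced root length into root length density amounts to a one-line calculation (the cell dimensions below are hypothetical):

```python
def root_length_density(root_length_cm, width_cm, height_cm, depth_cm=1.0):
    """Root length density (cm/cm^3) for one image cell, assuming the
    2-D cell extends a nominal `depth_cm` into the page (cuboid volume)."""
    volume = width_cm * height_cm * depth_cm
    return root_length_cm / volume

# e.g. 12 cm of root traced in a 5 cm x 4 cm cell with 1 cm nominal depth:
rld = root_length_density(12.0, 5.0, 4.0)  # -> 0.6 cm/cm^3
```

The choice of the nominal depth directly scales every density estimate, which is one reason drawn single-plant images can underestimate field values.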

  14. Image analysis applied to luminescence microscopy

    NASA Astrophysics Data System (ADS)

    Maire, Eric; Lelievre-Berna, Eddy; Fafeur, Veronique; Vandenbunder, Bernard

    1998-04-01

    We have developed a novel approach to study luminescent light emission during the migration of living cells by low-light imaging techniques. The equipment consists of an anti-vibration table with a hole for a direct output under the frame of an inverted microscope. The image is directly captured by an ultra-low-light-level photon-counting camera equipped with an image intensifier coupled by an optical fiber to a CCD sensor. This installation is dedicated to measuring, in a dynamic manner, the effect of SF/HGF (Scatter Factor/Hepatocyte Growth Factor) both on the activation of gene promoter elements and on cell motility. Epithelial cells were stably transfected with promoter elements containing Ets transcription factor-binding sites driving a luciferase reporter gene. Luminescent light emitted by individual cells was measured by image analysis. Images of luminescent spots were acquired with a high-aperture objective and exposure times of 10-30 min in photon-counting mode. The sensitivity of the camera was adjusted to a high value, which required the use of a segmentation algorithm dedicated to eliminating the background noise. Hence, image segmentation and treatments by mathematical morphology were particularly indicated in these experimental conditions. In order to estimate the orientation of cells during their migration, we used a dedicated skeleton algorithm applied to the oblong spots of variable intensities emitted by the cells. Kinetic changes of luminescent sources, and the distance and speed of migration, were recorded and then correlated with cellular morphological changes for each spot. Our results highlight the usefulness of mathematical morphology to quantify kinetic changes in luminescence microscopy.

  15. Analysis on short-range millimetre wave scattering imaging system

    NASA Astrophysics Data System (ADS)

    Zhu, Li; Li, Xing-Guo; Lou, Guo-Wei; Zhang, Chao

    2011-08-01

    A system for short-range millimetre wave (MMW) active imaging was developed, including the transceiver antenna, scanning system, transceiver front-end, and signal processing. A target within a few meters or even a few centimeters can be imaged. The overall structure of the imaging system and the imaging method were investigated. The short-range scattering imaging formula was derived from the point of view of the spectral distribution shift, which simplifies the method. A phase compensation factor was introduced to improve the imaging resolution. The relationship between the sampling frequency and the scanning speed was analyzed to optimize the system parameters, which can improve image quality and system efficiency.

  16. Improved image decompression for reduced transform coding artifacts

    NASA Technical Reports Server (NTRS)

    Orourke, Thomas P.; Stevenson, Robert L.

    1994-01-01

    The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
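The projected-gradient iteration described above can be sketched in 1-D, with an orthonormal DCT and a simple quadratic smoothness prior standing in for the non-Gaussian MRF model (all parameters are illustrative, not the authors'):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis; rows are basis vectors."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def decompress_map(coeffs, q, n_iter=50, step=0.1):
    """Projected-gradient sketch: descend on a smoothness prior, then
    project each DCT coefficient back into its quantization cell
    [c - q/2, c + q/2] so the estimate stays consistent with the data."""
    n = len(coeffs)
    C = dct_matrix(n)
    x = C.T @ coeffs                      # start at the cell centroids
    for _ in range(n_iter):
        # (half-)gradient of sum_i (x_i - x_{i-1})^2, up to boundary terms
        grad = np.convolve(x, [-1, 2, -1], mode="same")
        x = x - step * grad
        c = np.clip(C @ x, coeffs - q / 2, coeffs + q / 2)  # project
        x = C.T @ c
    return x

# Coarsely quantize the DCT of a smooth signal, then reconstruct.
n = 16
C = dct_matrix(n)
smooth = np.sin(np.linspace(0, np.pi, n))
q = 0.5
coeffs = np.round((C @ smooth) / q) * q   # quantized coefficients (centroids)
x_hat = decompress_map(coeffs, q)
```

The final projection guarantees the reconstruction still lies in the quantization cells defined by the compressed data, which is the convex constraint the paper exploits.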

  17. On improving the use of OCT imaging for detecting glaucomatous damage

    PubMed Central

    Hood, Donald C; Raza, Ali S

    2014-01-01

    Aims To describe two approaches for improving the detection of glaucomatous damage seen with optical coherence tomography (OCT). Methods The two approaches described were: first, visual analysis of high-quality OCT circle scans; and second, comparison of local visual field sensitivity loss to local OCT retinal ganglion cell plus inner plexiform layer (RGC+) and retinal nerve fibre layer (RNFL) thinning. OCT images were obtained from glaucoma patients and suspects using a spectral domain OCT machine and commercially available scanning protocols. A high-quality peripapillary circle scan (average of 50), a three-dimensional (3D) scan of the optic disc, and a 3D scan of the macula were obtained. RGC+ and RNFL thickness and probability plots were generated from the 3D scans. Results A close visual analysis of a high-quality circle scan can help avoid both false positive and false negative errors. Similarly, to avoid these errors, the location of abnormal visual field points should be compared to regions of abnormal RGC+ and RNFL thickness. Conclusions To improve the sensitivity and specificity of OCT imaging, high-quality images should be visually scrutinised and topographical information from visual fields and OCT scans combined. PMID:24934219

  18. Improving Intelligence Analysis With Decision Science.

    PubMed

    Dhami, Mandeep K; Mandel, David R; Mellers, Barbara A; Tetlock, Philip E

    2015-11-01

    Intelligence analysis plays a vital role in policy decision making. Key functions of intelligence analysis include accurately forecasting significant events, appropriately characterizing the uncertainties inherent in such forecasts, and effectively communicating those probabilistic forecasts to stakeholders. We review decision research on probabilistic forecasting and uncertainty communication, drawing attention to findings that could be used to reform intelligence processes and contribute to more effective intelligence oversight. We recommend that the intelligence community (IC) regularly and quantitatively monitor its forecasting accuracy to better understand how well it is achieving its functions. We also recommend that the IC use decision science to improve these functions (namely, forecasting and communication of intelligence estimates made under conditions of uncertainty). In the case of forecasting, decision research offers suggestions for improvement that involve interventions on data (e.g., transforming forecasts to debias them) and behavior (e.g., via selection, training, and effective team structuring). In the case of uncertainty communication, the literature suggests that current intelligence procedures, which emphasize the use of verbal probabilities, are ineffective. The IC should, therefore, leverage research that points to ways in which verbal probability use may be improved, and explore the use of numerical probabilities wherever feasible.
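The recommendation to monitor forecasting accuracy quantitatively is commonly operationalized with a proper scoring rule; the Brier score is one such rule (the article itself does not prescribe a specific metric):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and binary
    outcomes (0 = did not occur, 1 = occurred); lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A well-calibrated forecaster beats a hedging one on the same events:
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # ~0.02
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25
```

Tracking such a score over time would let an organization see whether debiasing, selection, or training interventions actually improve its forecasts.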

  19. Automated rejection of contaminated surface measurements for improved surface registration in image guided neurosurgery.

    PubMed

    Bucholz, R; Macneil, W; Fewings, P; Ravindra, A; McDurmont, L; Baumann, C

    2000-01-01

    Most image-guided neurosurgery employs adhesively mounted external fiducials for registration of medical images to the surgical workspace. Due to the high logistical costs associated with these artificial landmarks, we strive to eliminate the need for these markers. At our institution, we developed a handheld laser-stripe triangulation device to capture the surface contours of the patient's head while oriented for surgery. Anatomical surface registration algorithms rely on the assumption that the patient's anatomy bears the same geometry as the 3D model of the patient constructed from the imaging modality employed. Between the time the patient is imaged and the time the patient is placed in the Mayfield head clamp in the operating room, the skin of the head bulges at the pin sites, and the skull fixation equipment itself optically interferes with the image-capture laser. We have developed software to reject points belonging to objects of known geometry while calculating the registration. During the course of development of the laser scanning unit, we have acquired surface contours of 13 patients and 2 cadavers. Initial analysis revealed that this automated rejection of points improved the registrations in all cases, but the accuracy of the fiducial method was not surpassed. Only points belonging to the offending instrument are removed. Skin bulges caused by the clamps and instruments remain in the data. We anticipate that careful removal of the points in these skin bulges will yield registrations that at least match the accuracy of the fiducial method.
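The rejection of points belonging to objects of known geometry can be sketched as a distance test against a model of the instrument (a minimal illustration; the tolerance and point sets below are hypothetical):

```python
import numpy as np

def reject_instrument_points(scan_pts, instrument_pts, tol=2.0):
    """Drop scanned surface points lying within `tol` (e.g. mm) of a
    point model of the head clamp / instrument, so they cannot bias
    the surface registration."""
    d = np.linalg.norm(scan_pts[:, None, :] - instrument_pts[None, :, :], axis=2)
    return scan_pts[d.min(axis=1) > tol]

head = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
clamp = np.array([[100.0, 0.0, 0.0]])       # known instrument geometry
scan = np.vstack([head, clamp + 0.5])       # one scan point sits on the clamp
kept = reject_instrument_points(scan, clamp)
assert len(kept) == 3                        # the clamp point was rejected
```

As the abstract notes, this removes only the instrument itself; skin bulges near the pins would need a separate criterion.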

  20. Error analysis of two methods for range-images registration

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoli; Yin, Yongkai; Li, Ameng; He, Dong; Peng, Xiang

    2010-08-01

    With the improvements in range image registration techniques, this paper focuses on an error analysis of two registration methods generally applied in industrial metrology, covering algorithm comparison, matching error, computational complexity, and application areas. One method is iterative closest points (ICP), which achieves accurate matching with little error; however, some limitations restrict its application in automatic and fast metrology. The other method is based on landmarks. We also present an algorithm for registering multiple range images with non-coding landmarks, including automatic landmark identification and sub-pixel location, 3D rigid motion, point pattern matching, and global iterative optimization techniques. The registration results of the two methods are illustrated and a thorough error analysis is performed.
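The landmark-based method ultimately needs a least-squares 3D rigid motion between matched landmark sets; the standard Kabsch/Procrustes solution is one way to compute it (a sketch, not the authors' exact pipeline):

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid motion (R, t) with R @ p + t ~ q for matched
    landmark sets P, Q of shape (n, 3) (Kabsch / Procrustes solution)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

# Recover a known rotation (30 deg about z) and translation exactly.
rng = np.random.default_rng(1)
a = np.pi / 6
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = rng.standard_normal((5, 3))
Q = P @ R_true.T + t_true
R, t = rigid_registration(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

With noisy landmark locations the same formula gives the least-squares fit, and the residuals are one natural input to the paper's error analysis.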

  1. Hand-held optical imager (Gen-2): improved instrumentation and target detectability.

    PubMed

    Gonzalez, Jean; Decerce, Joseph; Erickson, Sarah J; Martinez, Sergio L; Nunez, Annie; Roman, Manuela; Traub, Barbara; Flores, Cecilia A; Roberts, Seigbeh M; Hernandez, Estrella; Aguirre, Wenceslao; Kiszonas, Richard; Godavarty, Anuradha

    2012-08-01

    Hand-held optical imagers have been developed by various researchers towards reflectance-based spectroscopic imaging of breast cancer. Recently, a Gen-1 hand-held optical imager was developed with capabilities to perform two-dimensional (2-D) spectroscopic as well as three-dimensional (3-D) tomographic imaging studies. However, the imager was bulky, with poor surface contact (~30%) along curved tissues and limited sensitivity to detect targets consistently. Herein, a Gen-2 hand-held optical imager that overcomes the above limitations of the Gen-1 imager has been developed, and its instrumentation is described. The Gen-2 hand-held imager is less bulky, portable, and has improved surface contact (~86%) on curved tissues. Additionally, the forked probe head design is capable of simultaneous bilateral reflectance imaging of both breast tissues, and also of transillumination imaging of a single breast tissue. Experimental studies were performed on tissue phantoms to demonstrate the improved sensitivity in detecting targets using the Gen-2 imager. The improved instrumentation of the Gen-2 imager allowed detection of targets independent of their location with respect to the illumination points, unlike the Gen-1 imager. The developed imager has potential for future clinical breast imaging with enhanced sensitivity, via both reflectance and transillumination imaging.

  2. An Analysis of Web Image Queries for Search.

    ERIC Educational Resources Information Center

    Pu, Hsiao-Tieh

    2003-01-01

    Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)

  3. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect: the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial-intelligence-based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data, it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  4. Hyperspectral imaging technology for pharmaceutical analysis

    NASA Astrophysics Data System (ADS)

    Hamilton, Sara J.; Lodder, Robert A.

    2002-06-01

    The sensitivity and spatial resolution of hyperspectral imaging instruments are tested in this paper using pharmaceutical applications. The first experiment tested the hypothesis that a near-IR tunable-diode-based remote sensing system is capable of monitoring the degradation of hard gelatin capsules at a relatively long distance. Spectra from the capsules were used to differentiate among capsules exposed to different atmospheres. Imaging spectrometry of tablets permits the identification and composition of multiple individual tablets to be determined simultaneously. A near-IR camera was used to collect thousands of spectra simultaneously from a field of blister-packaged tablets. The number of tablets that a typical near-IR camera can currently analyze simultaneously was estimated to be approximately 1300. The bootstrap error-adjusted single-sample technique chemometric-imaging algorithm was used to draw probability-density contour plots that revealed tablet composition. The single-capsule analysis provides an indication of how far apart the sample and instrumentation can be and still maintain adequate S/N, while the multiple-sample imaging experiment gives an indication of how many samples can be analyzed simultaneously while maintaining an adequate S/N and pixel coverage on each sample.

  5. Markov Random Fields, Stochastic Quantization and Image Analysis

    DTIC Science & Technology

    1990-01-01

    Markov random fields based on the lattice Z2 have been extensively used in image analysis in a Bayesian framework as a-priori models for the...of Image Analysis can be given some fundamental justification then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.

  6. Improving information retrieval in functional analysis.

    PubMed

    Rodriguez, Juan C; González, Germán A; Fresno, Cristóbal; Llera, Andrea S; Fernández, Elmer A

    2016-12-01

    Transcriptome analysis is essential to understand the mechanisms regulating key biological processes and functions. The first step usually consists of identifying candidate genes; to find out which pathways are affected by those genes, however, functional analysis (FA) is mandatory. The most frequently used strategies for this purpose are Gene Set and Singular Enrichment Analysis (GSEA and SEA) over Gene Ontology. Several statistical methods have been developed and compared in terms of computational efficiency and/or statistical appropriateness. However, whether their results are similar or complementary, the sensitivity to parameter settings, or possible bias in the analyzed terms has not been addressed so far. Here, two GSEA and four SEA methods and their parameter combinations were evaluated in six datasets by comparing two breast cancer subtypes with well-known differences in genetic background and patient outcomes. We show that GSEA and SEA lead to different results depending on the chosen statistic, model and/or parameters. Both approaches provide complementary results from a biological perspective. Hence, an Integrative Functional Analysis (IFA) tool is proposed to improve information retrieval in FA. It provides a common gene expression analytic framework that grants a comprehensive and coherent analysis. Only a minimal user parameter setting is required, since the best SEA/GSEA alternatives are integrated. IFA utility was demonstrated by evaluating four prostate cancer and the TCGA breast cancer microarray datasets, which showed its biological generalization capabilities.
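The SEA side of such tools typically reduces to a hypergeometric (Fisher-type) enrichment test per Gene Ontology term; a minimal version, with made-up counts, looks like this:

```python
from math import comb

def enrichment_p(n_genes, n_in_term, n_selected, n_overlap):
    """One-sided hypergeometric p-value as used in Singular Enrichment
    Analysis: probability of seeing >= n_overlap term genes among the
    selected candidate genes purely by chance."""
    total = comb(n_genes, n_selected)
    return sum(
        comb(n_in_term, k) * comb(n_genes - n_in_term, n_selected - k)
        for k in range(n_overlap, min(n_in_term, n_selected) + 1)
    ) / total

# Toy background of 10 genes, 4 annotated to a term, 5 candidates, 3 overlap:
p = enrichment_p(10, 4, 5, 3)   # = 66/252, i.e. ~0.26 -> not enriched
```

GSEA, by contrast, scores the whole ranked gene list rather than a selected subset, which is one reason the two approaches give complementary results.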

  7. Self-assessed performance improves statistical fusion of image labels

    PubMed Central

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-01-01

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance
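Voting weighted by self-assessed performance can be sketched as follows (a simplified illustration; real statistical fusion algorithms such as STAPLE model rater performance far more richly):

```python
import numpy as np

def confidence_weighted_vote(labels, confidence):
    """Fuse per-rater binary label maps, weighting each rater's vote by
    the self-assessed confidence reported for the slice."""
    labels = np.asarray(labels, dtype=float)           # (raters, voxels)
    w = np.asarray(confidence, dtype=float)[:, None]
    return (w * labels).sum(axis=0) / w.sum() > 0.5

# Three raters, two voxels; a simple majority would reject voxel 2,
# but the highly confident rater outweighs the two unsure ones.
votes = [[1, 1], [1, 0], [0, 0]]
conf = [1.0, 0.6, 0.2]
fused = confidence_weighted_vote(votes, conf)          # [True, True]
```

Replacing the self-assessed weights with automatically estimated rater performance gives the statistical-fusion variant the study compares against.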

  8. Self-assessed performance improves statistical fusion of image labels

    SciTech Connect

    Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.

    2014-03-15

    Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance

  9. Simple Low Level Features for Image Analysis

    NASA Astrophysics Data System (ADS)

    Falcoz, Paolo

    As human beings, we perceive the world around us mainly through our eyes, and give what we see the status of “reality”; as such, we have historically tried to create ways of recording this reality so we could augment or extend our memory. From early attempts in photography, like the image produced in 1826 by the French inventor Nicéphore Niépce (Figure 2.1), to the latest high-definition camcorders, the number of recorded pieces of reality has increased exponentially, posing the problem of managing all that information. Most of the raw video material produced today has lost its memory-augmentation function, as it will hardly ever be viewed by any human; pervasive CCTVs are an example. They generate an enormous amount of data each day, but there is not enough “human processing power” to view them. Therefore the need for effective automatic image analysis tools is great, and a lot of effort has been put into them, both from academia and industry. In this chapter, a review of some of the most important image analysis tools is presented.

  10. Nursing image: an evolutionary concept analysis.

    PubMed

    Rezaei-Adaryani, Morteza; Salsali, Mahvash; Mohammadi, Eesa

    2012-12-01

    A long-term challenge to the nursing profession is the concept of image. In this study, we used Rodgers' evolutionary concept analysis approach to analyze the concept of nursing image (NI). The aim of this concept analysis was to clarify the attributes, antecedents, consequences, and implications associated with the concept. We performed an integrative internet-based literature review to retrieve English literature published from 1980 to 2011. Findings showed that NI is a multidimensional, all-inclusive, paradoxical, dynamic, and complex concept. The media, invisibility, clothing style, nurses' behaviors, gender issues, and professional organizations are the most important antecedents of the concept. We found that NI is pivotal in staff recruitment and nursing shortage, resource allocation to nursing, nurses' job performance, workload, burnout and job dissatisfaction, violence against nurses, public trust, and salaries available to nurses. An in-depth understanding of the NI concept would assist nurses to eliminate negative stereotypes and build a more professional image for the nurse and the profession.

  11. Mammographic quantitative image analysis and biologic image composition for breast lesion characterization and classification

    SciTech Connect

    Drukker, Karen Giger, Maryellen L.; Li, Hui; Duewer, Fred; Malkov, Serghei; Joe, Bonnie; Kerlikowske, Karla; Shepherd, John A.; Flowers, Chris I.; Drukteinis, Jennifer S.

    2014-03-15

    Purpose: To investigate whether the biologic image composition of mammographic lesions can improve upon existing mammographic quantitative image analysis (QIA) in estimating the probability of malignancy. Methods: The study population consisted of 45 breast lesions imaged with dual-energy mammography prior to breast biopsy, with final diagnoses of 10 invasive ductal carcinomas, 5 ductal carcinomas in situ, 11 fibroadenomas, and 19 other benign diagnoses. Analysis was threefold: (1) the raw low-energy mammographic images were analyzed with an established in-house QIA method, “QIA alone”; (2) the three-compartment breast (3CB) composition measures of water, lipid, and protein thickness, derived from the dual-energy mammography, were assessed, “3CB alone”; and (3) information from QIA and 3CB was combined, “QIA + 3CB.” Analysis was initiated from radiologist-indicated lesion centers and was otherwise fully automated. The steps of the QIA and 3CB methods were lesion segmentation, characterization, and subsequent classification for malignancy in leave-one-case-out cross-validation. Performance assessment included box plots, Bland-Altman plots, and Receiver Operating Characteristic (ROC) analysis. Results: The area under the ROC curve (AUC) for distinguishing between benign and malignant lesions (invasive and DCIS) was 0.81 (standard error 0.07) for the “QIA alone” method, 0.72 (0.07) for the “3CB alone” method, and 0.86 (0.04) for “QIA + 3CB” combined. The difference in AUC was 0.043 between “QIA + 3CB” and “QIA alone” but failed to reach statistical significance (95% confidence interval [-0.17 to +0.26]). Conclusions: In this pilot study analyzing the new 3CB imaging modality, knowledge of the composition of breast lesions and their periphery appeared additive in combination with existing mammographic QIA methods for the distinction between different benign and malignant lesion types.
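The AUC values reported here can be computed directly from case-level classifier scores via the Wilcoxon-Mann-Whitney formulation (the scores below are made up):

```python
def auc(malignant_scores, benign_scores):
    """AUC as the Wilcoxon-Mann-Whitney statistic: the probability that a
    randomly chosen malignant case scores above a benign one (ties = 1/2)."""
    pairs = [(m, b) for m in malignant_scores for b in benign_scores]
    wins = sum(1.0 if m > b else 0.5 if m == b else 0.0 for m, b in pairs)
    return wins / len(pairs)

auc([0.9, 0.4], [0.1, 0.7])  # 3 of 4 pairs correctly ordered -> 0.75
```

Comparing two AUCs on the same 45 cases, as done above for "QIA + 3CB" versus "QIA alone", additionally requires a correlated-ROC test or bootstrap confidence interval, since the two classifiers share cases.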

  12. Bayesian Analysis of Hmi Images and Comparison to Tsi Variations and MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.; Tran, T. V.

    2015-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich, et al, 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high-resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from June, 2010 to December, 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations, and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Ulrich, R.K., Parker, D, Bertello, L. and

  13. Bayesian Analysis Of HMI Solar Image Observables And Comparison To TSI Variations And MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.

    2014-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes in a two-step process: (1) it finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) it applies the class definitions thus found to new data sets to automatically identify in them the classes found in the sample set. HMI high-resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from May 2010 to June 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations, and we compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to automatically categorize surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Ulrich, R.K., Parker, D, Bertello, L. and

  14. Exploring new quantitative CT image features to improve assessment of lung cancer prognosis

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Qian, Wei; Kang, Yan; Guan, Yubao; Lure, Fleming; Zheng, Bin

    2015-03-01

    Due to the promotion of lung cancer screening, more Stage I non-small-cell lung cancers (NSCLC) are currently detected, and these usually have a favorable prognosis. However, a high percentage of patients have cancer recurrence after surgery, which reduces the overall survival rate. To achieve optimal efficacy in treating and managing Stage I NSCLC patients, it is important to develop more accurate and reliable biomarkers or tools to predict cancer prognosis. The purpose of this study is to investigate a new quantitative image analysis method to predict the risk of lung cancer recurrence in Stage I NSCLC patients after lung cancer surgery using conventional chest computed tomography (CT) images, and to compare the prediction result with a popular genetic biomarker, namely the protein expression of the excision repair cross-complementing 1 (ERCC1) gene. In this study, we developed and tested a new computer-aided detection (CAD) scheme to segment lung tumors and initially compute 35 tumor-related morphologic and texture features from CT images. By applying a machine learning based feature selection method, we identified a set of 8 effective and non-redundant image features. Using these features, we trained a naïve Bayesian network based classifier to predict the risk of cancer recurrence. When applied to a test dataset with 79 Stage I NSCLC cases, the computed areas under the ROC curves were 0.77±0.06 and 0.63±0.07 when using the quantitative image based classifier and ERCC1, respectively. The study results demonstrated the feasibility of improving the accuracy of predicting cancer prognosis or recurrence risk using a CAD-based quantitative image analysis method.
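The pipeline described (select a small feature set, then train a naïve Bayesian classifier on it) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's code: the eight "tumor features", the case counts and the class separation are all stand-in assumptions, and a plain Gaussian naive Bayes model is used as a simple form of the naïve Bayesian network classifier the abstract names.

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Estimate per-class means, variances and priors for Gaussian naive Bayes."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(y))
    return params

def predict_proba(X, params):
    """Posterior class probabilities under the fitted naive Bayes model."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mu, var, prior = params[c]
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)
        scores.append(log_lik + np.log(prior))
    scores = np.array(scores).T
    scores -= scores.max(axis=1, keepdims=True)   # stabilize before exponentiating
    p = np.exp(scores)
    return p / p.sum(axis=1, keepdims=True), classes

rng = np.random.default_rng(0)
# synthetic "tumor features": recurrence class shifted from non-recurrence class
X0 = rng.normal(0.0, 1.0, size=(40, 8))
X1 = rng.normal(1.0, 1.0, size=(40, 8))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)

params = fit_gaussian_nb(X, y)
proba, classes = predict_proba(X, params)
pred = np.array(classes)[proba.argmax(axis=1)]
accuracy = float((pred == y).mean())
```

In a real setting the features would come from the CAD segmentation stage and performance would be reported as area under the ROC curve on held-out cases, not training accuracy.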

  15. Analysis on enhanced depth of field for integral imaging microscope.

    PubMed

    Lim, Young-Tae; Park, Jae-Hyeung; Kwon, Ki-Chul; Kim, Nam

    2012-10-08

    The depth of field of the integral imaging microscope is studied. In the integral imaging microscope, 3-D information is encoded in the form of elemental images. The distance between the intermediate plane and an object point determines the number of elemental images and the depth of field of the integral imaging microscope. From the analysis, it is found that the depth of field of the depth-plane image reconstructed by computational integral imaging reconstruction is longer than the depth of field of an optical microscope. Based on the analyzed relationship, an experiment using integral imaging microscopy and conventional microscopy was also performed to confirm the enhanced depth of field of integral imaging microscopy.

  16. Practical guidelines for radiographers to improve computed radiography image quality.

    PubMed

    Pongnapang, N

    2005-10-01

    Computed Radiography (CR) has become a major digital imaging modality in the modern radiological department. A CR system changes the workflow from the conventional film/screen approach by employing photostimulable phosphor plate technology. This shifts the technical, artefact and quality control perspectives in radiology departments. Guidelines for better image quality in a digital medical enterprise include professional guidelines for users and a quality control programme specifically designed to deliver the best quality of clinical images. Radiographers who understand the technological shift from the conventional method to CR can optimize CR images. Proper anatomic collimation and exposure techniques for each radiographic projection are crucial steps in producing quality digital images. Matching image processing to the specific anatomy is another important factor that radiographers should realise. A successful shift from a conventional to a fully digitised radiology department requires skilful radiographers who utilise the technology, and a successful quality control programme built on teamwork in the department.

  17. Stem Cell Imaging: Tools to Improve Cell Delivery and Viability

    PubMed Central

    Wang, Junxin; Jokerst, Jesse V.

    2016-01-01

    Stem cell therapy (SCT) has shown very promising preclinical results in a variety of regenerative medicine applications. Nevertheless, the complete utility of this technology remains unrealized. Imaging is a potent tool used in multiple stages of SCT, and this review describes the role that imaging plays in cell harvest, cell purification, and cell implantation, and discusses how imaging can be used to assess outcome in SCT. We close with some perspective on potential growth in the field. PMID:26880997

  18. Covariance of lucky images: performance analysis

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Cagigas, Miguel A.; Villó-Pérez, Isidro; Colodro-Conde, Carlos; Ginski, C.; Mugrauer, M.; Seeliger, M.

    2017-01-01

    The covariance of ground-based lucky images is a robust and easy-to-use algorithm that allows us to detect faint companions surrounding a host star. In this paper, we analyse the influence of the number of processed frames, the frames' quality, the atmospheric conditions and the detection noise on the companion detectability. This analysis has been carried out using both experimental and computer-simulated imaging data. Although the technique allows the detection of faint companions, the camera detection noise and the use of a limited number of frames limit the minimum detectable companion intensity to around 1000 times fainter than that of the host star when placed at an angular distance corresponding to the first few Airy rings. The reachable contrast could be even larger when detecting companions with the assistance of an adaptive optics system.
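The covariance idea the abstract relies on can be illustrated with a toy simulation: the host star and any companion flicker together from frame to frame, while detector noise does not, so the covariance of every pixel with the host-star pixel, taken across many short-exposure frames, peaks again at the companion's position. The frame count, companion contrast and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
n_frames, N = 500, 16
host, comp = (8, 8), (8, 12)
frames = np.empty((n_frames, N, N))
for k in range(n_frames):
    amp = rng.uniform(0.5, 1.5)            # frame-to-frame scintillation
    f = rng.normal(0, 0.05, (N, N))        # camera detection noise
    f[host] += amp                         # host star
    f[comp] += 0.1 * amp                   # companion, 10x fainter, same flicker
    frames[k] = f

# covariance of every pixel with the host-star pixel, taken across frames
ref = frames[:, host[0], host[1]]
cov_map = ((frames - frames.mean(0)) * (ref - ref.mean())[:, None, None]).mean(0)

# after masking the host itself, the covariance peak marks the companion
cm = cov_map.copy()
cm[host] = -np.inf
```

Because the background covariance shrinks as the number of frames grows, a faint companion that is invisible in any single frame emerges in the covariance map.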

  19. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper, the authors explain these tasks, grouped into three categories: image compression, image enhancement and restoration, and measurement extraction. These are illustrated with examples such as signature comparison, counterfeit currency comparison and footwear sole impressions, using the software Canvas and Corel Draw.

  20. Machine Learning Interface for Medical Image Analysis.

    PubMed

    Zhang, Yi C; Kagen, Alexander C

    2016-10-11

    TensorFlow is a second-generation open-source machine learning software library with a built-in framework for implementing neural networks in a wide variety of perceptual tasks. Although TensorFlow usage is well established with computer vision datasets, the TensorFlow interface with DICOM formats for medical imaging remains to be established. Our goal is to extend the TensorFlow API to accept raw DICOM images as input; 1513 DaTscan DICOM images were obtained from the Parkinson's Progression Markers Initiative (PPMI) database. DICOM pixel intensities were extracted and shaped into tensors, or n-dimensional arrays, to populate the training, validation, and test input datasets for machine learning. A simple neural network was constructed in TensorFlow to classify images into normal or Parkinson's disease groups. Training was executed over 1000 iterations for each cross-validation set. The gradient descent optimization and Adagrad optimization algorithms were used to minimize cross-entropy between the predicted and ground-truth labels. Cross-validation was performed ten times to produce a mean accuracy of 0.938 ± 0.047 (95 % CI 0.908-0.967). The mean sensitivity was 0.974 ± 0.043 (95 % CI 0.947-1.00) and the mean specificity was 0.822 ± 0.207 (95 % CI 0.694-0.950). We extended the TensorFlow API to enable DICOM compatibility in the context of DaTscan image analysis. We implemented a neural network classifier that produces diagnostic accuracies on par with excellent results from previous machine learning models. These results indicate the potential role of TensorFlow as a useful adjunct diagnostic tool in the clinical setting.
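The training loop described (pixel intensities shaped into tensors, cross-entropy loss, gradient descent over 1000 iterations) can be sketched without TensorFlow itself. The sketch below uses plain NumPy with random data standing in for the flattened DaTscan intensities, and a single-layer logistic model replaces the paper's network, so it only illustrates the optimization mechanics, not the reported accuracy.

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-in for DICOM pixel intensities flattened into feature tensors
X = rng.normal(size=(200, 16))
w_true = rng.normal(size=16)
y = (X @ w_true > 0).astype(float)   # synthetic labels: 0 = normal, 1 = disease

w, b, lr = np.zeros(16), 0.0, 0.5
for _ in range(1000):                     # 1000 iterations, as in the abstract
    z = np.clip(X @ w + b, -30, 30)       # clip to keep exp() well behaved
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid prediction
    grad_w = X.T @ (p - y) / len(y)       # gradient of mean cross-entropy
    grad_b = float((p - y).mean())
    w -= lr * grad_w
    b -= lr * grad_b

pred = (1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30))) > 0.5).astype(float)
train_accuracy = float((pred == y).mean())
```

In TensorFlow the same loop would be expressed as a computation graph with a softmax cross-entropy loss and an optimizer object; the DICOM-reading step (e.g. via a library such as pydicom) is what the paper's API extension supplies.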

  1. Three-dimensional freehand ultrasound: image reconstruction and volume analysis.

    PubMed

    Barry, C D; Allott, C P; John, N W; Mellor, P M; Arundel, P A; Thomson, D S; Waterton, J C

    1997-01-01

    A system is described that rapidly produces a regular 3-dimensional (3-D) data block suitable for processing by conventional image analysis and volume measurement software. The system uses electromagnetic spatial location of 2-dimensional (2-D) freehand-scanned ultrasound B-mode images, custom-built signal-conditioning hardware, UNIX-based computer processing and an efficient 3-D reconstruction algorithm. Utilisation of images from multiple angles of insonation, "compounding," reduces speckle contrast, improves structure coherence within the reconstructed grey-scale image and enhances the ability to detect structure boundaries and to segment and quantify features. Volume measurements using a series of water-filled latex and cylindrical foam rubber phantoms with volumes down to 0.7 mL show that a high degree of accuracy, precision and reproducibility can be obtained. Extension of the technique to handle in vivo data sets by allowing physiological criteria to be taken into account in selecting the images used for reconstruction is also illustrated.
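Reduced to its core, the reconstruction step bins each spatially located 2-D pixel into a regular voxel grid and averages the samples that land in the same voxel (the averaging is what "compounding" amounts to). The sketch below assumes sample positions already transformed into voxel coordinates and uses a synthetic sphere in place of scanned anatomy; the phantom-style volume estimate then falls out by summing voxel values.

```python
import numpy as np

def reconstruct_volume(points, values, shape):
    """Average irregularly placed samples into a regular voxel grid.
    points: (n, 3) float coordinates already scaled to voxel units."""
    idx = np.clip(np.round(points).astype(int), 0, np.array(shape) - 1)
    vol = np.zeros(shape)
    count = np.zeros(shape)
    np.add.at(vol, tuple(idx.T), values)    # unbuffered accumulation per voxel
    np.add.at(count, tuple(idx.T), 1)
    return np.divide(vol, count, out=np.zeros(shape), where=count > 0), count

rng = np.random.default_rng(9)
# samples from tilted "B-mode planes": value = 1 inside a sphere of radius 5
pts = rng.uniform(0, 15, size=(20000, 3))
vals = (((pts - 7.5) ** 2).sum(axis=1) < 25).astype(float)
vol, count = reconstruct_volume(pts, vals, (16, 16, 16))

# crude volume estimate: sum of voxel values (each voxel is 1 unit^3)
est = float(vol.sum())
```

A production system would interpolate rather than bin to the nearest voxel and would fill gaps between sparsely scanned planes, but the averaging principle is the same.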

  2. Error analysis of large aperture static interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zhang, Guo

    2015-12-01

    Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux and wide spectral range, which overcomes the contradiction between high flux and high stability and therefore has important value in scientific studies and applications. However, because its imaging style differs from that of traditional imaging spectrometers, the errors in the LASIS imaging process follow different laws and, correspondingly, its data processing is complicated. In order to improve the accuracy of spectrum detection and to serve quantitative analysis and monitoring of topographic surface features, the error laws of LASIS imaging must be characterized. In this paper, LASIS errors are classified as interferogram error, radiometric correction error and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS under combined temporal and spatial modulation is experimentally analyzed, together with the errors from the radiometric correction and spectral inversion processes.

  3. Learning from Monet: A Fundamentally New Approach to Image Analysis

    NASA Astrophysics Data System (ADS)

    Falco, Charles M.

    2009-03-01

    The hands and minds of artists are intimately involved in the creative process, intrinsically making paintings complex images to analyze. In spite of this difficulty, several years ago the painter David Hockney and I identified optical evidence within a number of paintings demonstrating that artists as early as Jan van Eyck (c1425) used optical projections as aids for producing portions of their images. In the course of making those discoveries, Hockney and I developed new insights that are now being applied in a fundamentally new approach to image analysis. Very recent results from this new approach include identifying, from Impressionist paintings by Monet, Pissarro, Renoir and others, the precise locations where the artists stood when making a number of their paintings. The specific deviations we find when accurately comparing these examples with photographs taken from the same locations provide us with key insights into how the artists' visual skills informed their choices in representing these two-dimensional images of three-dimensional scenes to viewers. As will be discussed, these results also have implications for improving the representation of certain scientific data. Acknowledgment: I am grateful to David Hockney for the many invaluable insights into imaging gained from him in our collaboration.

  4. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
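The feature-extraction idea, summarizing texture by wavelet subband responses, can be sketched with a single-level Haar transform, a simplified stand-in for the wavelet frames used in the paper; the maximum-likelihood classification step over the three texture categories is omitted.

```python
import numpy as np

def haar_subbands(img):
    """One level of the 2-D Haar transform: approximation + 3 detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # coarse approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def texture_features(img):
    """Subband energies used as a simple texture signature."""
    return [float((s ** 2).mean()) for s in haar_subbands(img)]

rng = np.random.default_rng(2)
smooth = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)  # coarse texture
rough = rng.normal(size=(64, 64))                                  # fine texture
f_smooth = texture_features(smooth / smooth.std())
f_rough = texture_features(rough / rough.std())
```

A fine-grained (sandy-looking) surface puts relatively more energy into the detail subbands than a smooth one, which is the property a maximum-likelihood classifier can exploit.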

  5. Improved structural similarity metric for the visible quality measurement of images

    NASA Astrophysics Data System (ADS)

    Lee, Daeho; Lim, Sungsoo

    2016-11-01

    The visible quality assessment of images is important for evaluating the performance of image processing methods such as image correction, compression, and enhancement. The structural similarity is widely used to determine visible quality; however, existing structural similarity metrics cannot correctly assess the perceived quality of images that have been slightly geometrically transformed or that have undergone significant regional distortion. We propose an improved structural similarity metric that is closer to human visual evaluation. Compared with the existing metrics, the proposed method can more correctly evaluate the similarity between an original image and various distorted images.
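For reference, the baseline the authors improve on, the structural similarity (SSIM) index of Wang et al., can be computed in its simplest single-window form as below; the abstract does not specify the proposed modification, so only the standard metric is sketched.

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window SSIM: luminance, contrast and structure terms combined.
    L is the dynamic range; C1, C2 are the usual stabilizing constants."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()       # cross-covariance
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / (
        (mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(3)
img = rng.uniform(0, 255, size=(32, 32))
noisy = np.clip(img + rng.normal(0, 25, size=img.shape), 0, 255)
```

The practical SSIM metric averages this quantity over local sliding windows; that local form is exactly what fails under small geometric shifts, since windows no longer align, which motivates the paper's improvement.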

  6. A novel chaotic map and an improved chaos-based image encryption scheme.

    PubMed

    Zhang, Xianhan; Cao, Yang

    2014-01-01

    In this paper, we present a novel approach to creating a new chaotic map and propose an improved image encryption scheme based on it. Compared with traditional classic one-dimensional chaotic maps like the Logistic Map and Tent Map, this newly created chaotic map demonstrates many better chaotic properties for encryption, implied by a much larger maximal Lyapunov exponent. Furthermore, an image encryption method based on the new chaotic map and Arnold's Cat Map is designed and shown to be robust. The simulation results and security analysis indicate that the method not only meets the requirements of image encryption, but also achieves preferable effectiveness and security, making it usable for general applications.
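The two ingredients named in the abstract, a chaotic keystream and Arnold's Cat Map, can be combined in a toy encryption sketch. The classic logistic map stands in for the authors' new (unspecified) chaotic map, and the parameters and image size below are illustrative assumptions.

```python
import numpy as np

def arnold_cat(img, iterations=1):
    """Scramble a square image with Arnold's cat map: (x, y) -> (x+y, x+2y) mod N."""
    N = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        nx, ny = (x + y) % N, (x + 2 * y) % N
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]        # bijective pixel permutation
        out = scrambled
    return out

def logistic_keystream(x0, r, n):
    """Keystream bytes from iterating the logistic map x -> r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return (xs * 256).astype(np.uint8)

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
# confusion (permutation) followed by diffusion (XOR with chaotic keystream)
cipher = arnold_cat(img, 3) ^ logistic_keystream(0.3, 3.99, 64).reshape(8, 8)
```

Decryption runs the steps in reverse: XOR with the same keystream, then apply the inverse cat map (x, y) -> (2x - y, y - x) mod N the same number of times.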

  7. A Novel Chaotic Map and an Improved Chaos-Based Image Encryption Scheme

    PubMed Central

    2014-01-01

    In this paper, we present a novel approach to creating a new chaotic map and propose an improved image encryption scheme based on it. Compared with traditional classic one-dimensional chaotic maps like the Logistic Map and Tent Map, this newly created chaotic map demonstrates many better chaotic properties for encryption, implied by a much larger maximal Lyapunov exponent. Furthermore, an image encryption method based on the new chaotic map and Arnold's Cat Map is designed and shown to be robust. The simulation results and security analysis indicate that the method not only meets the requirements of image encryption, but also achieves preferable effectiveness and security, making it usable for general applications. PMID:25143990

  8. Remote-sensing image classification based on an improved probabilistic neural network.

    PubMed

    Zhang, Yudong; Wu, Lenan; Neggaz, Nabil; Wang, Shuihua; Wei, Geng

    2009-01-01

    This paper proposes a hybrid classifier for polarimetric SAR images. The feature sets consist of the span image, the H/A/α decomposition, and GLCM-based texture features. A probabilistic neural network (PNN) was then adopted for classification, and a novel algorithm was proposed to enhance its performance. Principal component analysis (PCA) was chosen to reduce feature dimensions, random division to reduce the number of neurons, and Brent's search (BS) to find the optimal bias values. The results on the San Francisco and Flevoland sites are compared to those using a 3-layer BPNN to demonstrate the validity of our algorithm in terms of confusion matrix and overall accuracy. In addition, the importance of each improvement of the algorithm was proven.

  9. Remote-Sensing Image Classification Based on an Improved Probabilistic Neural Network

    PubMed Central

    Zhang, Yudong; Wu, Lenan; Neggaz, Nabil; Wang, Shuihua; Wei, Geng

    2009-01-01

    This paper proposes a hybrid classifier for polarimetric SAR images. The feature sets consist of the span image, the H/A/α decomposition, and GLCM-based texture features. A probabilistic neural network (PNN) was then adopted for classification, and a novel algorithm was proposed to enhance its performance. Principal component analysis (PCA) was chosen to reduce feature dimensions, random division to reduce the number of neurons, and Brent's search (BS) to find the optimal bias values. The results on the San Francisco and Flevoland sites are compared to those using a 3-layer BPNN to demonstrate the validity of our algorithm in terms of confusion matrix and overall accuracy. In addition, the importance of each improvement of the algorithm was proven. PMID:22400006
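The dimensionality-reduction step (PCA) used ahead of the PNN can be sketched in a few lines. The data here are synthetic, with ten raw features driven by two latent directions, standing in for the span/H/A/α/texture feature stack the paper assembles.

```python
import numpy as np

def pca_reduce(X, k):
    """Project features onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)                      # center the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                       # reduced-dimension features
    variances = S ** 2 / (len(X) - 1)            # per-component variance
    return scores, variances

rng = np.random.default_rng(5)
# 10 raw features, but only 2 underlying directions of variation
latent = rng.normal(size=(100, 2))
X = latent @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(100, 10))
Z, var = pca_reduce(X, 2)
explained = float(var[:2].sum() / var.sum())
```

Feeding the PNN the reduced scores `Z` instead of the raw features shrinks the pattern layer and speeds up classification, which is exactly why the paper pairs PCA with the PNN.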

  10. Research on automatic human chromosome image analysis

    NASA Astrophysics Data System (ADS)

    Ming, Delie; Tian, Jinwen; Liu, Jian

    2007-11-01

    Human chromosome karyotyping is one of the essential tasks in cytogenetics, especially in genetic syndrome diagnoses. In this paper, an automatic procedure is introduced for human chromosome image analysis. According to the different states of touching and overlapping chromosomes, several segmentation methods are proposed to achieve the best results. The medial axis is extracted by the middle point algorithm. The chromosome band is enhanced by an algorithm based on multiscale B-spline wavelets, extracted by the average gray profile, gradient profile and shape profile, and described by WDD (Weighted Density Distribution) descriptors. A multilayer classifier is used in classification. Experimental results demonstrate that the algorithms perform well.

  11. Mapping Fire Severity Using Imaging Spectroscopy and Kernel Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Cui, M.; Zhang, Y.; Veraverbeke, S.

    2014-12-01

    Improved spatial representation of within-burn heterogeneity after wildfires is paramount to effective land management decisions and more accurate fire emission estimates. In this work, we demonstrate the feasibility and efficacy of airborne imaging spectroscopy (hyperspectral imagery) for quantifying wildfire burn severity, using kernel-based image analysis techniques. Two airborne hyperspectral datasets, acquired over the 2011 Canyon and 2013 Rim fires in California using the Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) sensor, were used in this study. The Rim Fire, covering parts of Yosemite National Park, started on August 17, 2013, and was the third largest fire in California's history. The Canyon Fire occurred in the Tehachapi Mountains and started on September 4, 2011. In addition to post-fire data for both fires, half of the Rim fire was also covered with pre-fire images. Fire severity was measured in the field using the Geo Composite Burn Index (GeoCBI). The field data were utilized to train and validate our models, wherein the trained models, in conjunction with imaging spectroscopy data, were used for GeoCBI estimation over wide geographical regions. This work presents an approach for using remotely sensed imagery combined with GeoCBI field data to map fire scars based on non-linear (kernel-based) epsilon-Support Vector Regression (e-SVR), which was used to learn the relationship between spectra and GeoCBI in a kernel-induced feature space. Classification of healthy vegetation versus fire-affected areas based on morphological multi-attribute profiles was also studied. The availability of pre- and post-fire imaging spectroscopy data over the Rim Fire provided a unique opportunity to evaluate the performance of bi-temporal imaging spectroscopy for assessing post-fire effects. This type of data is currently constrained because of limited airborne acquisitions before a fire, but will become widespread with future spaceborne sensors such as those on
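The regression step, learning GeoCBI from spectra through a kernel, can be illustrated with kernel ridge regression, a close cousin of the e-SVR the authors use: it shares the RBF kernel trick but substitutes a squared loss for the epsilon-insensitive loss. The spectra, severity values and kernel parameters below are synthetic stand-ins.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between two sets of row vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, gamma=1.0, lam=1e-3):
    """Kernel ridge regression: solve (K + lam*I) alpha = y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

rng = np.random.default_rng(6)
spectra = rng.uniform(size=(60, 5))        # stand-in for band reflectances
geocbi = np.sin(spectra.sum(axis=1))       # synthetic smooth severity response
alpha = fit_krr(spectra, geocbi)
pred = rbf_kernel(spectra, spectra, 1.0) @ alpha
rmse = float(np.sqrt(((pred - geocbi) ** 2).mean()))
```

In the paper's workflow, the field-measured GeoCBI plays the role of `geocbi` here, the fitted model is validated on held-out plots, and predictions are then mapped across the full hyperspectral scene.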

  12. ImageJ-MATLAB: a bidirectional framework for scientific image analysis interoperability.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2016-10-26

    ImageJ-MATLAB is a lightweight Java library facilitating bi-directional interoperability between MATLAB and ImageJ. By defining a standard for translation between matrix and image data structures, researchers are empowered to select the best tool for their image-analysis tasks.

  13. Improving the Performance of Three-Mirror Imaging Systems with Freeform Optics

    NASA Technical Reports Server (NTRS)

    Howard, Joseph M.; Wolbach, Steven

    2013-01-01

    The image quality improvement for three-mirror systems by Freeform Optics is surveyed over various f-number and field specifications. Starting with the Korsch solution, we increase the surface shape degrees of freedom and record the improvements.

  14. Soil Surface Roughness through Image Analysis

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Saa-Requejo, A.; Valencia, J. L.; Moratiel, R.; Paz-Gonzalez, A.; Agro-Environmental Modeling

    2011-12-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depend on several factors, one of them being surface micro-topography, usually quantified through soil surface roughness (SSR). SSR greatly affects surface sealing and runoff generation, yet little information is available about the effect of roughness on the spatial distribution of runoff and on flow concentration. The methods commonly used to measure SSR involve measuring point elevation using a pin roughness meter or laser, both of which are labor intensive and expensive. Lately, a simple and inexpensive technique based on the percentage of shadow in soil surface images has been developed to determine SSR in the field, in order to obtain measurements suitable for widespread application. One of the first steps in this technique is image de-noising and thresholding to estimate the percentage of black pixels in the studied area. In this work, a series of soil surface images have been analyzed by applying several wavelet-based de-noising and thresholding algorithms to study the variation in the percentage of shadows and the shadow size distribution. Funding provided by the Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. AGL2010-21501/AGR and by Xunta de Galicia through project no. INCITE08PXIB1621 is greatly appreciated.
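The core measurement, the percentage of shadow pixels after thresholding, reduces to a one-liner. The fixed threshold and the synthetic "rough" surface with shadowed rows below are illustrative assumptions; the paper's pipeline de-noises with wavelets and thresholds adaptively before counting.

```python
import numpy as np

def shadow_fraction(img, threshold):
    """Fraction of pixels darker than the threshold, i.e. classified as shadow."""
    return float((img < threshold).mean())

rng = np.random.default_rng(7)
flat = np.full((64, 64), 200.0) + rng.normal(0, 5, (64, 64))  # smooth, evenly lit
rough = flat.copy()
rough[::4, :] = 30.0   # every 4th row shadowed by surface relief
```

A rougher surface casts more shadow at a given sun angle, so this fraction rises with SSR, which is the calibration relationship the field technique exploits.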

  15. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image-intensified and long-wave infrared imagery to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well candidate IQ features correlate with actual human performance, which is measured by a perception study. The paper demonstrates how monotonic correlation can identify worthy features that could be overlooked by traditional correlation values.
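The abstract does not specify its novel monotonic correlation coefficient, but the idea of scoring a monotone (possibly non-linear) relationship can be illustrated with the standard Spearman rank correlation, which reaches 1 for any strictly increasing relationship that plain Pearson correlation would under-score.

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (no ties assumed)."""
    rx = np.argsort(np.argsort(x)).astype(float)   # rank of each element
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

iq = np.array([0.1, 0.3, 0.4, 0.7, 0.9])   # hypothetical IQ feature values
perf = np.exp(iq)                           # monotone but non-linear human performance
```

Because `perf` is a strictly increasing function of `iq`, the rank correlation is exactly 1 even though the relationship is non-linear, which is the kind of feature a purely linear correlation screen could overlook.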

  16. Theoretical foundations for joint digital-optical analysis of electro-optical imaging systems.

    PubMed

    Stork, David G; Robinson, M Dirk

    2008-04-01

    We describe the mathematical and conceptual foundations for a novel methodology for jointly optimizing the design and analysis of the optics, detector, and digital image processing for imaging systems. Our methodology is based on the end-to-end merit function of predicted average pixel sum-squared error to find the optical and image processing parameters that minimize this merit function. Our approach offers several advantages over the traditional principles of optical design, such as improved imaging performance, expanded operating capabilities, and improved as-built performance.
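The end-to-end merit function idea, scoring the whole optics-plus-processing chain by predicted mean-square pixel error, can be sketched in one dimension: blur a scene, add detector noise, restore with a regularized inverse filter, and pick the processing parameter that minimizes the pixel error. The blur kernel, noise level and regularization grid are illustrative assumptions, not the authors' design variables.

```python
import numpy as np

rng = np.random.default_rng(10)
x = rng.normal(size=512)                     # scene (1-D signal for brevity)
h = np.array([0.25, 0.5, 0.25])              # assumed optical blur kernel
H = np.fft.rfft(h, 512)                      # blur transfer function
y = np.fft.irfft(H * np.fft.rfft(x), 512) + rng.normal(0, 0.05, 512)  # detector output

def restore(lam):
    """Regularized inverse filter: the digital image-processing stage."""
    G = np.conj(H) / (np.abs(H) ** 2 + lam)
    return np.fft.irfft(G * np.fft.rfft(y), 512)

# end-to-end merit: mean squared pixel error of the restored image,
# swept over the processing parameter
lams = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]
mse = [float(((restore(l) - x) ** 2).mean()) for l in lams]
best = lams[int(np.argmin(mse))]
```

The paper's point is that the optical parameters (here frozen inside `h`) should be optimized jointly with the processing parameter against this single merit function, rather than designing the optics for sharpness in isolation.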

  17. Image Compression Algorithm Altered to Improve Stereo Ranging

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    2008-01-01

    A report discusses a modification of the ICER image-data-compression algorithm to increase the accuracy of ranging computations performed on compressed stereoscopic image pairs captured by cameras aboard the Mars Exploration Rovers. (ICER and variants thereof were discussed in several prior NASA Tech Briefs articles.) Like many image compressors, ICER was designed to minimize a mean-square-error measure of distortion in reconstructed images as a function of the compressed data volume. The present modification of ICER was preceded by formulation of an alternative error measure, an image-quality metric that focuses on stereoscopic-ranging quality and takes account of image-processing steps in the stereoscopic-ranging process. This metric was used in empirical evaluation of bit planes of wavelet-transform subbands that are generated in ICER. The present modification, which is a change in a bit-plane prioritization rule in ICER, was adopted on the basis of this evaluation. This modification changes the order in which image data are encoded, such that when ICER is used for lossy compression, better stereoscopic-ranging results are obtained as a function of the compressed data volume.

  18. Computerized PET/CT image analysis in the evaluation of tumour response to therapy

    PubMed Central

    Wang, J; Zhang, H H

    2015-01-01

    Current cancer therapy strategies are mostly population based; however, there are large differences in tumour response among patients. It is therefore important for treating physicians to know the individual tumour response. In recent years, many studies have proposed the use of computerized positron emission tomography/CT image analysis in the evaluation of tumour response. Results showed that computerized analysis overcame some major limitations of current qualitative and semiquantitative analysis and led to improved accuracy. In this review, we summarize these studies across the four steps of the analysis: image registration, tumour segmentation, image feature extraction and response evaluation. Future work is proposed and challenges are described. PMID:25723599

  19. Evaluation of improvement of diffuse optical imaging of brain function by high-density probe arrangements and imaging algorithms

    NASA Astrophysics Data System (ADS)

    Sakakibara, Yusuke; Kurihara, Kazuki; Okada, Eiji

    2016-04-01

    Diffuse optical imaging has been applied to measure the localized hemodynamic responses to brain activation. One of the serious problems with diffuse optical imaging is the limited spatial resolution caused by the sparse probe arrangement and the broadened spatial sensitivity profile of each probe pair. High-density probe arrangements and an image reconstruction algorithm that accounts for the broadening of the spatial sensitivity can improve the spatial resolution of the image. In this study, diffuse optical imaging of an absorption change in the brain is simulated to evaluate the effect of high-density probe arrangements and imaging methods. The localization error, equivalent full width at half maximum and circularity of the absorption change in images obtained by the mapping and reconstruction methods from data measured with five probe arrangements are compared to quantitatively evaluate the imaging methods and probe arrangements. The simple mapping method is sufficient for measurement-point densities up to the double-density probe arrangement. The image reconstruction method considering the broadening of the spatial sensitivity of the probe pairs can effectively improve the spatial resolution of images obtained from probe arrangements of quadruple density or higher, in which the distance between neighboring measurement points is 10.6 mm.

  20. Depth resolution improvement of streak tube imaging lidar using optimal signal width

    NASA Astrophysics Data System (ADS)

    Ye, Guangchao; Fan, Rongwei; Lu, Wei; Dong, Zhiwei; Li, Xudong; He, Ping; Chen, Deying

    2016-10-01

    Streak tube imaging lidar (STIL) is an active imaging system that has a high depth resolution with the use of a pulsed laser transmitter and streak tube receiver to produce three-dimensional (3-D) range images. This work investigates the optimal signal width of the lidar system, which is helpful to improve the depth resolution based on the centroid algorithm. Theoretical analysis indicates that the signal width has a significant effect on the depth resolution and the optimal signal width can be determined for a given STIL system, which is verified by both the simulation and experimental results. An indoor experiment with a planar target was carried out to validate the relation that the range error decreases first and then increases with the signal width, resulting in an optimal signal width of 8.6 pixels. Finer 3-D range images of a cartoon model were acquired by using the optimal signal width and a minimum range error of 5.5 mm was achieved in a daylight environment.
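    The centroid algorithm referenced in this abstract can be sketched in a few lines: the sub-pixel peak position is the intensity-weighted mean of the pixel indices. The Gaussian signal shape and pixel values below are illustrative assumptions, not the authors' data.

```python
import numpy as np

def centroid_position(signal):
    """Estimate the sub-pixel peak position as the intensity-weighted centroid."""
    signal = np.asarray(signal, dtype=float)
    idx = np.arange(signal.size)
    return np.sum(idx * signal) / np.sum(signal)

# Synthetic streak-camera return: Gaussian pulse centred at pixel 12.3,
# signal width (standard deviation) of 3 pixels
x = np.arange(32)
pulse = np.exp(-0.5 * ((x - 12.3) / 3.0) ** 2)
print(round(centroid_position(pulse), 1))  # ~ 12.3
```

    In the paper, the range error first decreases and then increases with the signal width; too narrow a pulse quantizes the centroid, while too wide a pulse admits more noise into the weighted sum.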

  1. Linked color imaging application for improving the endoscopic diagnosis accuracy: a pilot study

    PubMed Central

    Sun, Xiaotian; Dong, Tenghui; Bi, Yiliang; Min, Min; Shen, Wei; Xu, Yang; Liu, Yan

    2016-01-01

    Endoscopy has been widely used in diagnosing gastrointestinal mucosal lesions. However, there is still a lack of objective endoscopic criteria. Linked color imaging (LCI) is a newly developed endoscopic technique which enhances color contrast. Thus, we investigated the clinical application of LCI and further analyzed pixel brightness for the RGB color model. All the lesions were observed by white light endoscopy (WLE), LCI and blue laser imaging (BLI). Matlab software was used to calculate pixel brightness for red (R), green (G) and blue (B). Of the endoscopic images for lesions, LCI had significantly higher R compared with BLI but higher G compared with WLE (all P < 0.05). R/(G + B) was significantly different among the 3 techniques and qualified as a composite LCI marker. Our correlation analysis of endoscopic diagnosis with pathology revealed that LCI was quite consistent with pathological diagnosis (P = 0.000) and the color could predict certain kinds of lesions. The ROC curve demonstrated that at the cutoff of R/(G + B) = 0.646, the area under the curve was 0.646, and the sensitivity and specificity were 0.514 and 0.773, respectively. Taken together, LCI could improve the efficiency and accuracy of diagnosing gastrointestinal mucosal lesions and benefit target biopsy. R/(G + B) based on pixel brightness may be introduced as an objective criterion for evaluating endoscopic images. PMID:27641243
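    The R/(G + B) composite marker described above reduces to a ratio of mean per-channel pixel brightnesses. A minimal sketch, using a synthetic image and a helper name of our own rather than the authors' Matlab code:

```python
import numpy as np

def rgb_ratio(image):
    """Composite marker R/(G + B) from mean per-channel pixel brightness.

    image: H x W x 3 array with channels ordered R, G, B.
    """
    r, g, b = (image[..., c].mean() for c in range(3))
    return r / (g + b)

# Synthetic 4x4 image: R = 120, G = 90, B = 60 everywhere -> 120 / 150 = 0.8
img = np.zeros((4, 4, 3))
img[..., 0], img[..., 1], img[..., 2] = 120, 90, 60
print(rgb_ratio(img))  # 0.8
```

    On a real endoscopic frame the means would be taken over a region of interest around the lesion rather than over the full image.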

  2. Linked color imaging application for improving the endoscopic diagnosis accuracy: a pilot study.

    PubMed

    Sun, Xiaotian; Dong, Tenghui; Bi, Yiliang; Min, Min; Shen, Wei; Xu, Yang; Liu, Yan

    2016-09-19

    Endoscopy has been widely used in diagnosing gastrointestinal mucosal lesions. However, there is still a lack of objective endoscopic criteria. Linked color imaging (LCI) is a newly developed endoscopic technique which enhances color contrast. Thus, we investigated the clinical application of LCI and further analyzed pixel brightness for the RGB color model. All the lesions were observed by white light endoscopy (WLE), LCI and blue laser imaging (BLI). Matlab software was used to calculate pixel brightness for red (R), green (G) and blue (B). Of the endoscopic images for lesions, LCI had significantly higher R compared with BLI but higher G compared with WLE (all P < 0.05). R/(G + B) was significantly different among the 3 techniques and qualified as a composite LCI marker. Our correlation analysis of endoscopic diagnosis with pathology revealed that LCI was quite consistent with pathological diagnosis (P = 0.000) and the color could predict certain kinds of lesions. The ROC curve demonstrated that at the cutoff of R/(G + B) = 0.646, the area under the curve was 0.646, and the sensitivity and specificity were 0.514 and 0.773, respectively. Taken together, LCI could improve the efficiency and accuracy of diagnosing gastrointestinal mucosal lesions and benefit target biopsy. R/(G + B) based on pixel brightness may be introduced as an objective criterion for evaluating endoscopic images.

  3. Second Harmonic Imaging improves Echocardiograph Quality on board the International Space Station

    NASA Technical Reports Server (NTRS)

    Garcia, Kathleen; Sargsyan, Ashot; Hamilton, Douglas; Martin, David; Ebert, Douglas; Melton, Shannon; Dulchavsky, Scott

    2008-01-01

    Ultrasound (US) capabilities have been part of the Human Research Facility (HRF) on board the International Space Station (ISS) since 2001. The US equipment on board the ISS includes a first-generation Tissue Harmonic Imaging (THI) option. Harmonic imaging (HI) is the second harmonic response of the tissue to the ultrasound beam and produces robust tissue detail and signal. Since this is a first-generation THI, there are inherent limitations in tissue penetration. As a breakthrough technology, HI extensively advanced the field of ultrasound. In cardiac applications, it drastically improves endocardial border detection and has become a common imaging modality. US images were captured and stored as JPEG stills from the ISS video downlink. US images with and without the harmonic imaging option were randomized and provided to volunteers without medical education or US skills for identification of the endocardial border. The results were processed and analyzed using applicable statistical calculations. Measurements in US images using HI showed improved consistency and reproducibility among observers when compared to fundamental imaging. HI has been embraced by the imaging community at large as it improves the quality and data validity of US studies, especially in difficult-to-image cases. Even with the limitations of first-generation THI, HI improved the quality and measurability of many of the downlinked images from the ISS and should be an option utilized with cardiac imaging on board the ISS in all future space missions.

  4. Encoder: A Connectionist Model of How Learning to Visually Encode Fixated Text Images Improves Reading Fluency

    ERIC Educational Resources Information Center

    Martin, Gale L.

    2004-01-01

    This article proposes that visual encoding learning improves reading fluency by widening the span over which letters are recognized from a fixated text image so that fewer fixations are needed to cover a text line. Encoder is a connectionist model that learns to convert images like the fixated text images human readers encode into the…

  5. Nonlinear analysis for image stabilization in IR imaging system

    NASA Astrophysics Data System (ADS)

    Xie, Zhan-lei; Lu, Jin; Luo, Yong-hong; Zhang, Mei-sheng

    2009-07-01

    In order to acquire a stabilized image from an IR imaging system, an image stabilization system is required. Linear methods are often used in current research on such systems, and a simple PID controller can meet the demands of common users. In fact, an image stabilization system is a structure with nonlinear characteristics such as structural errors, friction and disturbances. In an upgraded IR imaging system, even an optimally designed conventional PID controller cannot meet the demands of higher accuracy and fast response when disturbances are present. To obtain a high-quality stabilized image, these nonlinear effects must be rejected. Friction and gear clearance are key factors and play an important role in the image stabilization system. Friction induces static error in the system. When the system runs at low speed, stick-slip and creeping induced by friction not only decrease resolution and repeating accuracy, but also increase the tracking error and the steady-state error. The accuracy of the system is also limited by gear clearance, and serious clearance brings on self-excited vibration. In this paper, the effects of different nonlinearities on image stabilization precision are analyzed, including friction and gear clearance. After analyzing the characteristics and influence of friction and gear clearance, a friction model is established with the MATLAB Simulink toolbox, composed of static friction, Coulomb friction and viscous friction, and a gear clearance nonlinearity model is built, providing a theoretical basis for future engineering practice.
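    The static/Coulomb/viscous friction model described above (which the authors build in MATLAB Simulink) can be sketched in plain code. The coefficient values below are illustrative assumptions, not the paper's identified parameters:

```python
def friction_torque(velocity, applied,
                    f_static=1.0, f_coulomb=0.8, f_viscous=0.1, eps=1e-6):
    """Classical friction model: static (stiction), Coulomb, and viscous terms."""
    if abs(velocity) < eps:
        # Stiction: friction cancels the applied torque, up to the static limit
        return -max(-f_static, min(f_static, applied))
    # Moving: Coulomb friction opposes motion, viscous friction grows with speed
    sign = 1.0 if velocity > 0 else -1.0
    return -(f_coulomb * sign + f_viscous * velocity)

print(friction_torque(0.0, 0.5))  # -0.5: stiction cancels the small applied torque
print(friction_torque(2.0, 0.5))  # -(0.8 + 0.1 * 2.0) = -1.0
```

    The stiction branch is what produces the stick-slip and creeping behaviour at low speed mentioned in the abstract: while the applied torque stays below the static limit, the net torque is zero and the axis does not move.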

  6. Percent area coverage through image analysis

    NASA Astrophysics Data System (ADS)

    Wong, Chung M.; Hong, Sung M.; Liu, De-Ling

    2016-09-01

    The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. Due to the lack of detailed particle data, PAC has been conventionally calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving the set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and, therefore, may introduce error in the PAC calculation. Other errors may also be introduced by using the log-normal surface particle size distribution function, which highly depends on the environmental cleanliness and cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallouts from a fabric material after vibration testing. PAC calculations were performed through analysis of microscope images and compared to values derived through the MIL-STD-1246C method. Our results showed that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.
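    The image-analysis route to PAC amounts to summing particle pixel areas over the inspected area. A simplified binary-threshold sketch (our simplification, not the authors' microscope pipeline, which would also calibrate pixel size and reject imaging artifacts):

```python
import numpy as np

def percent_area_coverage(gray, threshold):
    """PAC (%) = fraction of pixels darker than the threshold (particles) x 100."""
    particles = gray < threshold
    return 100.0 * particles.sum() / particles.size

# Synthetic 100x100 witness-wafer image: bright background, one dark particle
img = np.full((100, 100), 200, dtype=np.uint8)
img[10:15, 20:24] = 30                   # one 5x4-pixel particle
print(percent_area_coverage(img, 128))   # 20 / 10000 pixels -> 0.2
```

    Unlike the MIL-STD-1246C coefficient method, this measures the actual projected area of each particle, which is why irregular particle shapes do not bias the result.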

  7. The Scientific Image in Behavior Analysis.

    PubMed

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press.

  8. Image stretching on a curved surface to improve satellite gridding

    NASA Technical Reports Server (NTRS)

    Ormsby, J. P.

    1975-01-01

    A method for substantially reducing gridding errors due to satellite roll, pitch and yaw is given. A gimbal-mounted curved screen, scaled to 1:7,500,000, is used to stretch the satellite image whereby visible landmarks coincide with a projected map outline. The resulting rms position errors averaged 10.7 km as compared with 25.6 and 34.9 km for two samples of satellite imagery upon which image stretching was not performed.

  9. Toward Improved Hyperspectral Analysis in Semiarid Systems

    NASA Astrophysics Data System (ADS)

    Glenn, N. F.; Mitchell, J.

    2012-12-01

    Idaho State University's Boise Center Aerospace Laboratory (BCAL) has processed and applied hyperspectral data for a variety of biophysical sciences in semiarid systems over the past 10 years. HyMap hyperspectral data have been used in most of these studies, along with AVIRIS, CASI, and PIKA-II data. Our studies began with the detection of individual weed species, such as leafy spurge, corroborated with extensive field analysis, including spectrometer data. Early contributions to the field of hyperspectral analysis included the use of time-series datasets and classification threshold methods for target detection, and subpixel analysis for characterizing weed invasions and post-fire vegetation and soil conditions. Subsequent studies optimized subpixel unmixing performance using spectral subsetting and vegetation abundance investigations. More recent studies have extended the application of hyperspectral data from individual plant species detection to identification of biochemical constituents. We demonstrated field and airborne hyperspectral nitrogen absorption in sagebrush using combinations of data reduction and spectral transformation techniques (i.e., continuum removal, derivative analysis, partial least squares regression). In spite of these and many other successful demonstrations, gaps still exist in effective species-level discrimination due to the high complexity of soil and nonlinear mixing in semiarid shrubland. BCAL studies are currently focusing on complementing narrowband vegetation indices with LiDAR (light detection and ranging, both airborne and ground-based) derivatives to improve vegetation cover predictions. Future combinations of LiDAR and hyperspectral data will involve exploring the full range of spectral information and serve as an integral step in scaling shrub biomass estimates from plot to landscape and regional scales.

  10. Temporal image stacking for noise reduction and dynamic range improvement

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Nash, James; Goma, Sergio; Ramachandra, Vikas; Siddiqui, Hasib

    2013-03-01

    The dynamic range of an imager is determined by the ratio of the pixel well capacity to the noise floor. As the scene dynamic range becomes larger than the imager dynamic range, the choices are to saturate some parts of the scene or "bury" others in noise. In this paper we propose an algorithm that produces high dynamic range images by "stacking" sequentially captured frames which reduces the noise and creates additional bits. The frame stacking is done by frame alignment subject to a projective transform and temporal anisotropic diffusion. The noise sources contributing to the noise floor are the sensor heat noise, the quantization noise, and the sensor fixed pattern noise. We demonstrate that by stacking images the quantization and heat noise are reduced and the decrease is limited only by the fixed pattern noise. As the noise is reduced, the resulting cleaner image enables the use of adaptive tone mapping algorithms which render HDR images in an 8-bit container without significant noise increase.
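    The noise reduction from temporal stacking can be demonstrated directly: averaging N aligned frames reduces zero-mean noise by roughly sqrt(N). This sketch uses synthetic frames and skips the frame alignment and temporal anisotropic diffusion stages described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(50, 200, size=(64, 64))        # "true" scene radiance
frames = [scene + rng.normal(0, 10, scene.shape)   # sensor noise, sigma = 10
          for _ in range(16)]

stacked = np.mean(frames, axis=0)                  # temporal stack of 16 frames
noise_single = np.std(frames[0] - scene)
noise_stacked = np.std(stacked - scene)
print(round(noise_single / noise_stacked, 1))      # ~ sqrt(16) = 4
```

    The averaged values also fall between the original quantization steps, which is the source of the "additional bits" of dynamic range; fixed pattern noise is identical in every frame, so stacking cannot remove it, as the paper notes.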

  11. Quantifying turbulence microstructure for improvement of underwater imaging

    NASA Astrophysics Data System (ADS)

    Woods, Sarah; Hou, Weilin; Goode, Wesley; Jarosz, Ewa; Weidemann, Alan

    2011-06-01

    Enhancing visibility through scattering media is important in many fields for gaining information from the scattering medium. In the ocean, in particular, enhancement of imaging and visibility is important for divers, navigation, robotics, and target and mine detection and classification. Light scattering from particulates and turbulence in the ocean strongly affects underwater visibility. The magnitude of this degrading effect depends upon the underwater environment, and can rapidly degrade the quality of underwater imaging under certain conditions. To facilitate study of the impact of turbulence upon underwater imaging and to check against our previously developed model, quantified observation of the image degradation concurrent with characterization of the turbulent flow is necessary, spanning a variety of turbulent strengths. Therefore, we present field measurements of turbulence microstructure from the July 2010 Skaneateles Optical Turbulence Exercise (SOTEX), during which images of a target were collected over a 5 m path length at various depths in the water column, concurrent with profiles of the turbulent strength, optical properties, temperature, and conductivity. Turbulence was characterized by the turbulent kinetic energy dissipation (TKED) and thermal dissipation (TD) rates, which were obtained using both a Rockland Scientific Vertical Microstructure Profiler (VMP) and a Nortek Vector velocimeter in combination with a PME CT sensor. While the two instrumental setups demonstrate reasonable agreement, some irregularities highlight the spatial and temporal variability of the turbulence field. Supplementary measurements with the Vector/CT in a controlled laboratory convective tank will shed additional light on the quantitative relationship between image degradation and turbulence strength.

  12. Superfast robust digital image correlation analysis with parallel computing

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Tian, Long

    2015-03-01

    Existing digital image correlation (DIC) using the robust reliability-guided displacement tracking (RGDT) strategy for full-field displacement measurement is a path-dependent process that can only be executed sequentially. This path-dependent tracking strategy not only limits the potential of DIC for further improvement of its computational efficiency but also wastes the parallel computing power of modern computers with multicore processors. To maintain the robustness of the existing RGDT strategy and to overcome its deficiency, an improved RGDT strategy using a two-section tracking scheme is proposed. In the improved RGDT strategy, the calculated points with correlation coefficients higher than a preset threshold are all taken as reliably computed points and given the same priority to extend the correlation analysis to their neighbors. Thus, DIC calculation is first executed in parallel at multiple points by separate independent threads. Then for the few calculated points with correlation coefficients smaller than the threshold, DIC analysis using existing RGDT strategy is adopted. Benefiting from the improved RGDT strategy and the multithread computing, superfast DIC analysis can be accomplished without sacrificing its robustness and accuracy. Experimental results show that the presented parallel DIC method performed on a common eight-core laptop can achieve about a 7 times speedup.
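    The improved RGDT scheduling idea — analyse all high-confidence points in parallel, then re-process the few low-confidence points sequentially — can be sketched independently of the correlation math. The `correlate` stub below is a placeholder assumption, not real subset correlation:

```python
from concurrent.futures import ThreadPoolExecutor

def correlate(point):
    """Placeholder for subset correlation; returns (point, correlation coeff)."""
    return point, 0.99 if point % 7 else 0.5   # every 7th point is 'difficult'

def two_section_dic(points, threshold=0.9, workers=8):
    # Section 1: analyse all points concurrently; each point above the
    # threshold counts as reliably computed
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = dict(pool.map(correlate, points))
    # Section 2: low-confidence points are re-processed sequentially,
    # standing in for the path-dependent RGDT pass
    retry = [p for p, c in results.items() if c < threshold]
    for p in retry:
        results[p] = correlate(p)[1]
    return results, retry

results, retry = two_section_dic(range(20))
print(len(retry))  # points 0, 7 and 14 fell below the threshold -> 3
```

    Note that in CPython a thread pool only yields a true multicore speedup if the correlation kernel releases the GIL (e.g. NumPy or C code); the pool here just illustrates the two-section scheduling.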

  13. Improvement of Rocket Engine Plume Analysis Techniques

    NASA Technical Reports Server (NTRS)

    Smith, S. D.

    1982-01-01

    A nozzle plume flow field code was developed. The RAMP code which was chosen as the basic code is of modular construction and has the following capabilities: two phase with two phase transonic solution; a two phase, reacting gas (chemical equilibrium reaction kinetics), supersonic inviscid nozzle/plume solution; and is operational for inviscid solutions at both high and low altitudes. The following capabilities were added to the code: a direct interface with JANNAF SPF code; shock capturing finite difference numerical operator; two phase, equilibrium/frozen, boundary layer analysis; a variable oxidizer to fuel ratio transonic solution; an improved two phase transonic solution; and a two phase real gas semiempirical nozzle boundary layer expansion.

  14. Improved information analysis - views and actions

    SciTech Connect

    Sheely, K.B.; Manatt, D.R.

    1995-07-01

    The IAEA continues to assess, develop, and test recommendations for strengthening the cost effectiveness of safeguards. The IAEA's investigation has focused on ways to increase Agency access to data, including access to sites and site data, export/import data, environmental data, open-source data, and other expanded data sources. Although the acquisition of this raw data is essential to strengthening safeguards, the effectiveness of the system is going to be judged by what the Agency does with the data once they have acquired it. Therefore, the IAEA must have the capability to organize, analyze and present the data in a timely manner for internal management evaluation and external dissemination. The United States Department of Energy has established a Safeguards Information Management Systems (SIMS) Initiative to provide support and equipment which will improve the IAEA's capability to utilize this expanded data to analyze States' nuclear activities. This paper will present views on steps to improve information analysis and discuss the status of actions undertaken by the SIMS initiative.

  15. Application of Digital Image Correlation Method to Improve the Accuracy of Aerial Photo Stitching

    NASA Astrophysics Data System (ADS)

    Tung, Shih-Heng; Jhou, You-Liang; Shih, Ming-Hsiang; Hsiao, Han-Wei; Sung, Wen-Pei

    2016-04-01

    Satellite images and traditional aerial photos have long been used in remote sensing. However, there are some problems with these images: the resolution of satellite images is insufficient, the cost of obtaining traditional aerial photos is relatively high, and traditional flight also carries a human safety risk. These factors limit the application of such images. In recent years, control technology for unmanned aerial vehicles (UAVs) has developed rapidly, making UAVs widely used for obtaining aerial photos. Compared to satellite images and traditional aerial photos, aerial photos obtained using a UAV have the advantages of higher resolution and lower cost. Because there is no crew on board, it is still possible to take aerial photos using a UAV under unstable weather conditions. Images first have to be orthorectified and their distortion corrected. Then, with the help of image matching techniques and control points, these images can be stitched or used to establish a DEM of the ground surface. These images or DEM data can be used to monitor landslides or estimate landslide volume. For image matching, methods such as the Harris corner detector, SIFT or SURF can be used to extract and match feature points. However, the matching accuracy of these methods is at the pixel or sub-pixel level, whereas the accuracy of the digital image correlation method (DIC) during image matching can reach about 0.01 pixel. Therefore, this study applies the digital image correlation method to match extracted feature points, and the stitched images are then examined to judge the improvement. This study uses aerial photos of a reservoir area, stitched with and without the help of DIC. The results show that the misplacement in the stitched image using DIC to match feature points is significantly improved.
This shows that the use of DIC to match feature points can actually improve the accuracy of

  16. High speed image correlation for vibration analysis

    NASA Astrophysics Data System (ADS)

    Siebert, T.; Wood, R.; Splitthof, K.

    2009-08-01

    Digital speckle correlation techniques have already been successfully proven to be an accurate displacement analysis tool for a wide range of applications. With the use of two cameras, three dimensional measurements of contours and displacements can be carried out. With a simple setup it opens a wide range of applications. Rapid new developments in the field of digital imaging and computer technology opens further applications for these measurement methods to high speed deformation and strain analysis, e.g. in the fields of material testing, fracture mechanics, advanced materials and component testing. The high resolution of the deformation measurements in space and time opens a wide range of applications for vibration analysis of objects. Since the system determines the absolute position and displacements of the object in space, it is capable of measuring high amplitudes and even objects with rigid body movements. The absolute resolution depends on the field of view and is scalable. Calibration of the optical setup is a crucial point which will be discussed in detail. Examples of the analysis of harmonic vibration and transient events from material research and industrial applications are presented. The results show typical features of the system.

  17. Arranging optical fibres for the spatial resolution improvement of topographical images

    NASA Astrophysics Data System (ADS)

    Yamamoto, Tsuyoshi; Maki, Atsushi; Kadoya, Takuma; Tanikawa, Yukari; Yamada, Yukio; Okada, Eiji; Koizumi, Hideaki

    2002-09-01

    Optical topography is a method for visualization of cortical activity. Ways of improving the spatial resolution of the topographical image with three arrangements of optical fibres are discussed. A distribution of sensitivity is obtained from the phantom experiment, and used to reconstruct topographical images of an activation area of the brain with the fibres in each arrangement. The correlations between the activated area and the corresponding topographical images are obtained, and the effective arrangement of the optical fibres for improved resolution is discussed.

  18. Cellular Image Analysis and Imaging by Flow Cytometry

    PubMed Central

    Basiji, David A.; Ortyn, William E.; Liang, Luchuan; Venkatachalam, Vidya; Morrissey, Philip

    2007-01-01

    Imaging flow cytometry combines the statistical power and fluorescence sensitivity of standard flow cytometry with the spatial resolution and quantitative morphology of digital microscopy. The technique is a good fit for clinical applications by providing a convenient means for imaging and analyzing cells directly in bodily fluids. Examples are provided of the discrimination of cancerous from normal mammary epithelial cells and the high throughput quantitation of FISH probes in human peripheral blood mononuclear cells. The FISH application will be further enhanced by the integration of extended depth of field imaging technology with the current optical system. PMID:17658411

  19. Improved discrimination among similar agricultural plots using red-and-green-based pseudo-colour imaging

    NASA Astrophysics Data System (ADS)

    Doi, Ryoichi

    2016-04-01

    The effects of a pseudo-colour imaging method were investigated by discriminating among similar agricultural plots in remote sensing images acquired using the Airborne Visible/Infrared Imaging Spectrometer (Indiana, USA) and the Landsat 7 satellite (Fergana, Uzbekistan), and that provided by GoogleEarth (Toyama, Japan). From each dataset, red (R)-green (G)-R-G-blue yellow (RGrgbyB), and RGrgby-1B pseudo-colour images were prepared. From each, cyan, magenta, yellow, key black, L*, a*, and b* derivative grayscale images were generated. In the Airborne Visible/Infrared Imaging Spectrometer image, pixels were selected for corn no tillage (29 pixels), corn minimum tillage (27), and soybean (34) plots. Likewise, in the Landsat 7 image, pixels representing corn (73 pixels), cotton (110), and wheat (112) plots were selected, and in the GoogleEarth image, those representing soybean (118 pixels) and rice (151) were selected. When the 14 derivative grayscale images were used together with an RGB yellow grayscale image, the overall classification accuracy improved from 74 to 94% (Airborne Visible/Infrared Imaging Spectrometer), 64 to 83% (Landsat), or 77 to 90% (GoogleEarth). As an indicator of discriminatory power, the kappa significance improved 10^18-fold (Airborne Visible/Infrared Imaging Spectrometer) or greater. The derivative grayscale images were found to increase the dimensionality and quantity of data. Herein, the details of the increases in dimensionality and quantity are further analysed and discussed.
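    Deriving the cyan, magenta, yellow and key-black grayscale channels used above follows the standard RGB-to-CMYK conversion. A sketch covering only the CMYK channels (the L*, a*, b* channels require a full CIELAB conversion and are omitted here):

```python
import numpy as np

def cmyk_channels(rgb):
    """Derive C, M, Y, K grayscale channels from an RGB image with values in [0, 1]."""
    k = 1.0 - rgb.max(axis=-1)                   # key black
    denom = np.where(k < 1.0, 1.0 - k, 1.0)      # avoid divide-by-zero on pure black
    c = (1.0 - rgb[..., 0] - k) / denom
    m = (1.0 - rgb[..., 1] - k) / denom
    y = (1.0 - rgb[..., 2] - k) / denom
    return c, m, y, k

# A pure-red pixel maps to C = 0, M = 1, Y = 1, K = 0
c, m, y, k = cmyk_channels(np.array([[[1.0, 0.0, 0.0]]]))
print(c[0, 0], m[0, 0], y[0, 0], k[0, 0])  # 0.0 1.0 1.0 0.0
```

    Each derived channel is a different nonlinear recombination of R, G and B, which is how the derivative grayscale images add usable dimensionality for the classifier.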

  20. Improving signal-to-noise in the direct imaging of exoplanets and circumstellar disks with MLOCI

    NASA Astrophysics Data System (ADS)

    Wahhaj, Zahed; Cieza, Lucas A.; Mawet, Dimitri; Yang, Bin; Canovas, Hector; de Boer, Jozua; Casassus, Simon; Ménard, François; Schreiber, Matthias R.; Liu, Michael C.; Biller, Beth A.; Nielsen, Eric L.; Hayward, Thomas L.

    2015-09-01

    We present a new algorithm designed to improve the signal-to-noise ratio (S/N) of point and extended source detections around bright stars in direct imaging data. One of our innovations is that we insert simulated point sources into the science images, which we then try to recover with maximum S/N. This improves the S/N of real point sources elsewhere in the field. The algorithm, based on the locally optimized combination of images (LOCI) method, is called Matched LOCI or MLOCI. We show with Gemini Planet Imager (GPI) data on HD 135344 B and Near-Infrared Coronagraphic Imager (NICI) data on several stars that the new algorithm can improve the S/N of point source detections by 30-400% over past methods. We also find no increase in false detection rates. No prior knowledge of candidate companion locations is required to use MLOCI. On the other hand, while non-blind applications may yield linear combinations of science images that seem to increase the S/N of true sources by a factor >2, they can also yield false detections at high rates. This is a potential pitfall when trying to confirm marginal detections or to redetect point sources found in previous epochs. These findings are relevant to any method where the coefficients of the linear combination are considered tunable, e.g., LOCI and principal component analysis (PCA). Thus we recommend that false detection rates be analyzed when using these techniques. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (USA), the Science and Technology Facilities Council (UK), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).

  1. An improved brain image classification technique with mining and shape prior segmentation procedure.

    PubMed

    Rajendran, P; Madheswaran, M

    2012-04-01

    An improved brain image classification system, developed using a shape prior segmentation procedure and pruned association rules with the ImageApriori algorithm, is presented in this paper. The CT scan brain images have been classified into three categories, namely normal, benign and malignant, considering the low-level features extracted from the images and high-level knowledge from specialists to enhance the accuracy of the decision process. The experimental results on pre-diagnosed brain images showed 97% sensitivity, 91% specificity and 98.5% accuracy. The proposed algorithm is expected to assist physicians in efficient classification with multiple key features per image.

  2. Image quality and high contrast improvements on VLT/NACO

    NASA Astrophysics Data System (ADS)

    Girard, Julien H. V.; O'Neal, Jared; Mawet, Dimitri; Kasper, Markus; Zins, Gérard; Neichel, Benoît; Kolb, Johann; Christiaens, Valentin; Tourneboeuf, Martin

    2012-07-01

    NACO is the famous and versatile diffraction-limited NIR imager and spectrograph at the VLT, with which ESO celebrated 10 years of Adaptive Optics. Over the past two years a substantial effort has been put into understanding and fixing issues that directly affect the image quality and the high-contrast performance of the instrument. Experiments to compensate the non-common-path aberrations and recover the highest possible Strehl ratios have been carried out successfully, and a plan is hereafter described to perform such measurements regularly. The drift associated with pupil tracking since 2007 was fixed in October 2011. NACO is therefore even more suited for high-contrast imaging and can be used with coronagraphic masks in the image plane. Some contrast measurements are shown and discussed. The work accomplished on NACO will serve as a reference for the next generation of instruments on the VLT, especially the ones working at the diffraction limit and making use of angular differential imaging (i.e. SPHERE, VISIR, and possibly ERIS).

  3. Improved deadzone modeling for bivariate wavelet shrinkage-based image denoising

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2016-05-01

    Modern image processing performed on board low Size, Weight, and Power (SWaP) platforms must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition, and georegistration, places a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise generally spreads out and the wavelet transform compactly captures high information-bearing image characteristics. In this paper, we improve the modeling fidelity of a previously developed, computationally efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, thus making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM), which implements a bivariate wavelet shrinkage denoising algorithm that exploits the interscale dependency between wavelet coefficients. We formulate optimization problems for the parameters controlling deadzone size, which leads to improved denoising performance. Two formulations are provided: one with a simple, closed-form solution which we use for numerical result generation, and a second, integral equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public domain imagery, and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate a denoising performance improvement when using the enhanced modeling over the performance obtained with the baseline SSM.
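    The baseline SSM shrinkage rule that the paper improves upon has a closed form: a wavelet coefficient and its parent at the next coarser scale are jointly thresholded, with deadzone radius √3·σn²/σ. Below is a minimal NumPy sketch of that standard rule only; the paper's optimized deadzone parameters are not reproduced, and `bivariate_shrink` is an illustrative helper name:

```python
import numpy as np

def bivariate_shrink(child, parent, sigma_n, sigma):
    """Sendur-Selesnick bivariate shrinkage: shrink a wavelet
    coefficient using the joint magnitude of the (child, parent) pair.
    Pairs whose magnitude falls inside the deadzone of radius
    sqrt(3) * sigma_n**2 / sigma are set to zero."""
    mag = np.sqrt(child**2 + parent**2)
    deadzone = np.sqrt(3.0) * sigma_n**2 / sigma
    gain = np.maximum(mag - deadzone, 0.0) / np.maximum(mag, 1e-12)
    return child * gain
```

    Coefficients well above the deadzone are attenuated only slightly, while small coefficients (likely noise) are zeroed; the size of this deadzone is exactly the knob the paper's optimization tunes.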

  4. Improving the image and quantitative data of magnetic resonance imaging through hardware and physics techniques

    NASA Astrophysics Data System (ADS)

    Kaggie, Joshua D.

    In Chapter 1, an introduction to the basic principles of MRI is given, including the physical principles, basic pulse sequences, and basic hardware. Following the introduction, five different published and as yet unpublished papers for improving the utility of MRI are presented. Chapter 2 discusses a small rodent imaging system that was developed for a clinical 3 T MRI scanner. The system integrated specialized radiofrequency (RF) coils with an insertable gradient, enabling 100 μm isotropic resolution imaging of the guinea pig cochlea in vivo, doubling the body gradient strength, slew rate, and contrast-to-noise ratio, and resulting in twice the signal-to-noise ratio (SNR) when compared to the smallest conforming birdcage. Chapter 3 discusses a system using BOLD MRI to measure T2* and invasive fiberoptic probes to measure renal oxygenation (pO2). The significance of this experiment is that it demonstrated previously unknown physiological effects on pO2, such as breath-holds that caused an immediate (<1 sec) pO2 decrease (~6 mmHg), and bladder pressure that caused pO2 increases (~6 mmHg). Chapter 4 determined the correlation between indicators of renal health and renal fat content. The R2 correlation between renal fat content and eGFR, serum cystatin C, urine protein, and BMI was less than 0.03, with a sample size of ~100 subjects, suggesting that renal fat content will not be a useful indicator of renal health. Chapter 5 presents a hardware and pulse sequence technique for acquiring multinuclear 1H and 23Na data within the same pulse sequence. Our system demonstrated a very simple, inexpensive solution to SMI and acquired both nuclei on two 23Na channels using external modifications, and is the first demonstration of radially acquired SMI. Chapter 6 discusses a composite sodium and proton breast array that demonstrated a 2-5x improvement in sodium SNR and similar proton SNR when compared to a large coil with a linear sodium and linear proton channel. This coil is unique in that sodium

  5. Improved Phased Array Imaging of a Model Jet

    NASA Technical Reports Server (NTRS)

    Dougherty, Robert P.; Podboy, Gary G.

    2010-01-01

    An advanced phased array system, OptiNav Array 48, and a new deconvolution algorithm, TIDY, have been used to make octave band images of supersonic and subsonic jet noise produced by the NASA Glenn Small Hot Jet Acoustic Rig (SHJAR). The results are much more detailed than previous jet noise images. Shock cell structures and the production of screech in an underexpanded supersonic jet are observed directly. Some trends are similar to observations using spherical and elliptic mirrors that partially informed the two-source model of jet noise, but the radial distribution of high frequency noise near the nozzle appears to differ from expectations of this model. The beamforming approach has been validated by agreement between the integrated image results and the conventional microphone data.

  6. Precision Improvement of Photogrammetry by Digital Image Correlation

    NASA Astrophysics Data System (ADS)

    Shih, Ming-Hsiang; Sung, Wen-Pei; Tung, Shih-Heng; Hsiao, Hanwei

    2016-04-01

    The combination of aerial triangulation technology and unmanned aerial vehicles greatly reduces the cost and application threshold of the digital surface model technique. According to reports in the literature, the measurement errors in the x-y coordinates and in elevation lie between 8 cm~15 cm and 10 cm~20 cm, respectively. This accuracy is already sufficient for geological structure surveys, but it is inadequate for deformation monitoring of slopes and structures. The main factors affecting the accuracy of aerial triangulation are image quality, the measurement accuracy of control points, and image matching accuracy. In terms of image matching, the commonly used techniques are Harris corner detection and the Scale Invariant Feature Transform (SIFT). Their pairing error is on the scale of pixels, usually between 1 and 2 pixels. This study suggests that pairing error is the main factor causing aerial triangulation errors. Therefore, this study proposes the application of the Digital Image Correlation (DIC) method instead of the pairing methods mentioned above. The DIC method can provide a pairing accuracy of better than 0.01 pixel and can therefore greatly enhance the accuracy of aerial triangulation, to sub-centimeter level. In this study, the effects of image pairing error on the measurement error of the 3-dimensional coordinates of ground points are explored by a numerical simulation method. It was confirmed that when the image matching error is reduced to 0.01 pixels, the ground three-dimensional coordinate measurement error can be controlled at the millimeter level. A combination of the DIC technique and traditional aerial triangulation thus has potential applications in the deformation monitoring of slopes and structures, enabling early warning of natural disasters.
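    The sub-pixel matching idea can be illustrated with integer-pixel zero-normalized cross-correlation followed by parabolic peak interpolation, one common DIC variant. This is a simplified sketch under those assumptions, not the authors' implementation; `match_subpixel` and `zncc` are hypothetical helpers:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_subpixel(template, search):
    """Integer-pixel ZNCC search followed by separable 3-point
    parabolic peak interpolation; returns the (dy, dx) position of
    the template inside the search image, in (sub-)pixels."""
    th, tw = template.shape
    sh, sw = search.shape
    scores = np.full((sh - th + 1, sw - tw + 1), -1.0)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = zncc(template, search[i:i + th, j:j + tw])
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    dy, dx = float(i), float(j)
    # fit a parabola through the peak and its neighbours, per axis
    if 0 < i < scores.shape[0] - 1:
        c0, c1, c2 = scores[i - 1, j], scores[i, j], scores[i + 1, j]
        dy += 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)
    if 0 < j < scores.shape[1] - 1:
        c0, c1, c2 = scores[i, j - 1], scores[i, j], scores[i, j + 1]
        dx += 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)
    return dy, dx
```

    Production DIC codes refine further (e.g. iterative subset deformation models) to reach the 0.01-pixel regime quoted above; the parabola step alone illustrates how accuracy below one pixel is obtained.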

  7. Vision-sensing image analysis for GTAW process control

    SciTech Connect

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  8. Improving the geometric fidelity of imaging systems employing sensor arrays

    NASA Technical Reports Server (NTRS)

    Jones, Kenneth L. (Inventor)

    1990-01-01

    A sensor assembly to be carried on an aircraft or spacecraft which will travel along an arbitrary flight path, for providing an image of terrain over which the craft travels, is disclosed. The assembly includes a main linear sensor array and a plurality of auxiliary sensor arrays oriented parallel to, and at respectively different distances from, the main array. By comparing the image signals produced by the main sensor array with those produced by each auxiliary array, information relating to variations in velocity of the craft carrying the assembly can be obtained. The signals from each auxiliary array will provide information relating to a respectively different frequency range.
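    Comparing the main-array signal with a displaced auxiliary-array signal is, in essence, time-delay estimation: the lag of the cross-correlation peak, together with the known array separation, yields the along-track velocity. A hypothetical sketch of that principle (`estimate_velocity` and its parameters are illustrative, and a nonzero lag, i.e. a moving craft, is assumed):

```python
import numpy as np

def estimate_velocity(main_sig, aux_sig, separation_m, sample_dt_s):
    """Estimate along-track velocity from the lag at which the
    auxiliary-array signal best matches the main-array signal.
    The lag is the peak of the full cross-correlation of the
    mean-removed signals."""
    a = np.asarray(main_sig, float) - np.mean(main_sig)
    b = np.asarray(aux_sig, float) - np.mean(aux_sig)
    corr = np.correlate(b, a, mode="full")
    lag = int(np.argmax(corr)) - (len(a) - 1)  # samples by which aux trails main
    return separation_m / (lag * sample_dt_s)
```

    Because each auxiliary array sits at a different distance from the main array, each lag estimate probes velocity variation over a different time scale, which is the patent's point about different frequency ranges.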

  9. Towards a framework for agent-based image analysis of remote-sensing data.

    PubMed

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-04-03

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA).

  10. Towards a framework for agent-based image analysis of remote-sensing data

    PubMed Central

    Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera

    2015-01-01

    Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916

  11. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT)

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-01

    The bone suppression technique, based on advanced image processing, can suppress the conspicuity of bones on chest radiographs, creating soft-tissue images similar to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that of conventional fluoroscopic images. The tracking errors were halved in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential means of measuring the respiratory displacement of the target. This paper was presented at RSNA 2013 and the work was carried out at Kanazawa University, Japan.

  12. Improved Crack Type Classification Neural Network based on Square Sub-images of Pavement Surface

    NASA Astrophysics Data System (ADS)

    Lee, Byoung Jik; Lee, Hosin “David”

    The previous neural network based on proximity values was developed using rectangular pavement images. However, the proximity value derived from a rectangular image is biased towards transverse cracking. By sectioning the rectangular image into a set of square sub-images, the neural network based on the proximity value becomes more robust and consistent in determining a crack type. This paper presents an improved neural network that determines the crack type from a pavement surface image based on square sub-images, and compares it with the neural network trained on rectangular pavement images. The advantage of using square sub-images is demonstrated using sample images of transverse cracking, longitudinal cracking and alligator cracking.
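    The sectioning step described above is straightforward to sketch; the helper below (an illustrative name, not the authors' code) tiles a rectangular image into non-overlapping squares, from which per-tile proximity values could then be computed:

```python
import numpy as np

def square_subimages(img, size):
    """Section a rectangular image into non-overlapping square
    sub-images of side `size`; trailing pixels that do not fill a
    complete square are discarded."""
    h, w = img.shape[:2]
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]
```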

  13. Image analysis of nucleated red blood cells.

    PubMed

    Zajicek, G; Shohat, M; Melnik, Y; Yeger, A

    1983-08-01

    Bone marrow smears stained with Giemsa were scanned with a video camera under computer control. Forty-two cells representing the six differentiation classes of the red bone marrow were sampled. Each cell was digitized into 70 × 70 pixels, each pixel representing a square area of 0.4 μm² in the original image. The pixel gray values ranged between 0 and 255: zero stood for white, 255 represented black, and the numbers in between stood for the various shades of gray. After separation and smoothing, the images were processed with a Sobel operator outlining the points of steepest gray-level change in the cell. These points constitute a closed curve, termed the inner cell boundary, which separates the cell into an inner and an outer region. Two types of features were extracted from each cell: form features, e.g. area and length, and gray-level features. Twenty-two features were tested for their discriminative merit. After 16 were selected, the discriminant analysis program correctly classified all 42 cells into the 6 classes.
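    The Sobel step used to outline the points of steepest gray-level change can be sketched as follows; this is a generic NumPy implementation of the operator, not the original software:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D gray-level image via the Sobel
    operator; only the valid interior is returned, so the output is
    two pixels smaller than the input in each dimension."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += SOBEL_X[i, j] * patch
            gy += SOBEL_Y[i, j] * patch
    return np.hypot(gx, gy)
```

    Thresholding this magnitude map and following the resulting ridge yields the closed boundary curve described in the abstract.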

  14. Simultaneous analysis and quality assurance for diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Asman, Andrew J; Esparza, Michael L; Burns, Scott S; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E; Landman, Bennett A

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. 
Projection of QA metrics to a low
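    The tensor estimation that such QA metrics scrutinize is, at its simplest, a log-linear least-squares fit of ln S = ln S0 − b gᵀDg. A minimal sketch of that standard formulation (not the pipeline's own code; `fit_tensor` is an illustrative name):

```python
import numpy as np

def fit_tensor(signals, bvals, bvecs):
    """Log-linear least-squares fit of the diffusion tensor model
    ln S = ln S0 - b * g^T D g.
    signals : (N,) measured signals (must be > 0)
    bvals   : (N,) b-values
    bvecs   : (N, 3) unit gradient directions
    Returns (S0, D) with D the symmetric 3x3 tensor."""
    g = np.asarray(bvecs, float)
    b = np.asarray(bvals, float)
    # unknowns: [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    A = np.column_stack([
        np.ones_like(b),
        -b * g[:, 0] ** 2, -b * g[:, 1] ** 2, -b * g[:, 2] ** 2,
        -2 * b * g[:, 0] * g[:, 1],
        -2 * b * g[:, 0] * g[:, 2],
        -2 * b * g[:, 1] * g[:, 2],
    ])
    x, *_ = np.linalg.lstsq(A, np.log(np.asarray(signals, float)), rcond=None)
    D = np.array([[x[1], x[4], x[5]],
                  [x[4], x[2], x[6]],
                  [x[5], x[6], x[3]]])
    return float(np.exp(x[0])), D
```

    Because the fit is an estimation process, noise and artifacts in `signals` propagate directly into D, which is why residual- and variance-based QA metrics are informative.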

  15. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. 
Projection of QA metrics to a low

  16. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for areas of the skin of a human foot and face. The full source code of the developed application is provided as an attachment. [Graphical abstract: the main window of the program during dynamic analysis of the foot thermal image.]

  17. Bolus tracking for improved metabolic imaging of hyperpolarised compounds

    NASA Astrophysics Data System (ADS)

    Durst, Markus; Koellisch, Ulrich; Gringeri, Concetta; Janich, Martin A.; Rancan, Giaime; Frank, Annette; Wiesinger, Florian; Menzel, Marion I.; Haase, Axel; Schulte, Rolf F.

    2014-06-01

    Dynamic nuclear polarisation has enabled real-time metabolic imaging of pyruvate and its metabolites. Conventional imaging sequences rely on predefined settings and do not account for intersubject variations in biological parameters such as perfusion. We present a fully automatic real-time bolus tracking sequence for hyperpolarised substrates which starts the imaging acquisition at a defined point on the bolus curve. This reduces artefacts due to signal change and allows for a more efficient use of the hyperpolarised magnetisation. For single-time-point imaging methods, bolus tracking enables a more reliable and consistent quantification of metabolic activity. An RF excitation with a small flip angle is used to obtain slice-selective pyruvate tracking information in rats. Moreover, in combination with a copolarised urea and pyruvate injection, spectrally selective tracking on urea allows localised bolus tracking information to be obtained without depleting the pyruvate signal. Particularly with regard to clinical application, the bolus tracking technique could provide an important step towards a routine assessment protocol which removes operator dependencies and ensures comparable results.
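    The abstract does not spell out the trigger logic, but a bolus-arrival trigger of this kind is commonly implemented as a baseline-plus-k·σ threshold with a confirmation count to reject noise spikes. A hypothetical sketch under that assumption (`bolus_trigger` and all its parameters are illustrative, not the paper's values):

```python
import numpy as np

def bolus_trigger(samples, n_baseline=5, k=3.0, n_confirm=2):
    """Return the sample index at which imaging would start: the first
    of n_confirm consecutive samples exceeding baseline mean + k*std.
    Returns None if the bolus never arrives."""
    base = np.asarray(samples[:n_baseline], float)
    threshold = base.mean() + k * base.std()
    run = 0
    for i in range(n_baseline, len(samples)):
        run = run + 1 if samples[i] > threshold else 0
        if run == n_confirm:
            return i - n_confirm + 1
    return None
```

    Triggering at a defined point on the curve, rather than at a fixed delay after injection, is what makes the acquisition robust to intersubject perfusion differences.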

  18. Intelligent Behavioral Action Aiding for Improved Autonomous Image Navigation

    DTIC Science & Technology

    2012-09-13

    [Extraction residue: only table-of-contents entries survive — 5.3.4 Steerable cameras; 5.3.5 Landmark tracking using 2 pairs of steerable cameras; 5.3.6 Use of side images — plus a truncated text fragment noting reduced uncertainty [11] when a robot's steerable camera pans to identify landmarks.]

  19. Indeterminacy and Image Improvement in Snake Infrared ``Vision''

    NASA Astrophysics Data System (ADS)

    van Hemmen, J. Leo

    2006-03-01

    Many snake species have infrared sense organs located on their heads that can detect warm-blooded prey even in total darkness. The physical mechanism underlying this sense is that of a pinhole camera. The infrared image is projected onto a sensory 'pit membrane' of small size (of order mm^2). To elicit a neuronal response the energy flux per unit time has to exceed a minimum threshold; furthermore, the source of this energy, the prey, is moving at finite speed, so the pinhole substituting for a lens has to be rather large (~1 mm). Accordingly, the image is totally blurred. We have therefore done two things. First, we have determined the precise optical resolution that a snake can achieve for a given input. Second, in view of this known, though still restricted, precision, one may ask whether, and how, a snake can reconstruct the original image. The point is that the information needed to reconstruct the original temperature distribution in space is still available. We present an explicit mathematical model [1] allowing even high-quality reconstruction from the low-quality image on the pit membrane and indicate how a neuronal implementation might be realized. Ref: [1] A.B. Sichert, P. Friedel, and J.L. van Hemmen, TU Munich preprint (2005).

  20. High quality image-pair-based deblurring method using edge mask and improved residual deconvolution

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting

    2017-02-01

    Image deconvolution is a challenging task in the field of image processing. Using an image pair can help to provide a better restored image than deblurring from a single blurred image. In this paper, a high-quality image-pair-based deblurring method is presented using an improved Richardson-Lucy (RL) algorithm and the gain-controlled residual deconvolution technique. The input image pair consists of a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain the preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the recovered residual image to the preliminary deblurring result. An optical experimental vibration platform is set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and has wide application in many image deblurring fields.
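    The baseline RL (Richardson-Lucy) iteration that the improved method builds on can be sketched as below. The paper's edge mask and gain-controlled residual steps are not reproduced here, and `conv2_same` / `richardson_lucy` are illustrative helpers:

```python
import numpy as np

def conv2_same(img, k):
    """2-D cross-correlation with zero padding, 'same' output size."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Baseline RL deconvolution: multiplicative updates that keep the
    estimate non-negative while redistributing the blurred/re-blurred
    ratio back through the point spread function."""
    est = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = conv2_same(est, psf_mirror)  # = convolution with psf
        ratio = blurred / (reblurred + eps)
        est = est * conv2_same(ratio, psf)       # = convolution with mirrored psf
    return est
```

    Plain RL amplifies ringing near strong edges as iterations proceed, which is precisely what the paper's edge mask and gain-controlled residual deconvolution are designed to suppress.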

  1. Correlation between the signal-to-noise ratio improvement factor (KSNR) and clinical image quality for chest imaging with a computed radiography system.

    PubMed

    Moore, C S; Wood, T J; Saunderson, J R; Beavis, A W

    2015-12-07

    This work assessed the appropriateness of the signal-to-noise ratio improvement factor (KSNR) as a metric for the optimisation of computed radiography (CR) of the chest. The results of a previous study, in which four experienced image evaluators graded computer-simulated chest images using a visual grading analysis scoring (VGAS) scheme to quantify the benefit of using an anti-scatter grid, were used for the clinical image quality measurement (number of simulated patients = 80). The KSNR was used to calculate the improvement in physical image quality measured in a physical chest phantom. KSNR correlation with VGAS was assessed as a function of chest region (lung, spine and diaphragm/retrodiaphragm), and as a function of x-ray tube voltage in a given chest region. The correlation of the latter was determined by the Pearson correlation coefficient. The VGAS and KSNR image quality metrics demonstrated no correlation in the lung region but did show correlation in the spine and diaphragm/retrodiaphragmatic regions. As a function of tube voltage, however, the two metrics were negatively correlated in each region: a Pearson correlation coefficient (R) of -0.93 (p = 0.015) was found for lung, a coefficient (R) of -0.95 (p = 0.46) for spine, and a coefficient (R) of -0.85 (p = 0.015) for diaphragm. All demonstrate strong negative correlations, indicating conflicting results, i.e. KSNR increases with tube voltage but VGAS decreases. Medical physicists should use the KSNR metric with caution when assessing any potential improvement in clinical chest image quality when introducing an anti-scatter grid for CR imaging, especially in the lung region. This metric may also be a limited descriptor of clinical chest image quality as a function of tube voltage when a grid is used routinely.
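    The Pearson coefficient used to relate VGAS to KSNR across tube voltages is simple to compute directly (a generic implementation, not the authors' analysis code; significance values like the quoted p-values require an additional t-test not shown here):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))
```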

  2. Correlation between the signal-to-noise ratio improvement factor (KSNR) and clinical image quality for chest imaging with a computed radiography system

    NASA Astrophysics Data System (ADS)

    Moore, C. S.; Wood, T. J.; Saunderson, J. R.; Beavis, A. W.

    2015-12-01

    This work assessed the appropriateness of the signal-to-noise ratio improvement factor (KSNR) as a metric for the optimisation of computed radiography (CR) of the chest. The results of a previous study, in which four experienced image evaluators graded computer-simulated chest images using a visual grading analysis scoring (VGAS) scheme to quantify the benefit of using an anti-scatter grid, were used for the clinical image quality measurement (number of simulated patients = 80). The KSNR was used to calculate the improvement in physical image quality measured in a physical chest phantom. KSNR correlation with VGAS was assessed as a function of chest region (lung, spine and diaphragm/retrodiaphragm), and as a function of x-ray tube voltage in a given chest region. The correlation of the latter was determined by the Pearson correlation coefficient. The VGAS and KSNR image quality metrics demonstrated no correlation in the lung region but did show correlation in the spine and diaphragm/retrodiaphragmatic regions. As a function of tube voltage, however, the two metrics were negatively correlated in each region: a Pearson correlation coefficient (R) of -0.93 (p = 0.015) was found for lung, a coefficient (R) of -0.95 (p = 0.46) for spine, and a coefficient (R) of -0.85 (p = 0.015) for diaphragm. All demonstrate strong negative correlations, indicating conflicting results, i.e. KSNR increases with tube voltage but VGAS decreases. Medical physicists should use the KSNR metric with caution when assessing any potential improvement in clinical chest image quality when introducing an anti-scatter grid for CR imaging, especially in the lung region. This metric may also be a limited descriptor of clinical chest image quality as a function of tube voltage when a grid is used routinely.

  3. Generating object proposals for improved object detection in aerial images

    NASA Astrophysics Data System (ADS)

    Sommer, Lars W.; Schuchert, Tobias; Beyerer, Jürgen

    2016-10-01

    Screening of aerial images covering large areas is important for many applications such as surveillance, tracing or rescue tasks. To reduce the workload of image analysts, an automatic detection of candidate objects is required. In general, object detection is performed by applying classifiers or a cascade of classifiers within a sliding-window algorithm. However, the huge number of windows to classify, especially in the case of multiple object scales, makes these approaches computationally expensive. To overcome this challenge, we reduce the number of candidate windows by generating so-called object proposals. Object proposals are a set of candidate regions in an image that are likely to contain an object. We apply the Selective Search approach that has been broadly used as a proposals method for detectors like R-CNN or Fast R-CNN. Here, a set of small regions is generated by initial segmentation followed by hierarchical grouping of the initial regions to generate proposals at different scales. To reduce the computational costs of the original approach, which consists of 80 combinations of segmentation settings and grouping strategies, we apply only the most appropriate combination. To find it, we analyze the impact of varying segmentation settings, different merging strategies, and various colour spaces by calculating the recall with regard to the number of object proposals and the intersection over union between generated proposals and ground truth annotations. As aerial images differ considerably from the datasets typically used for exploring object proposal methods, in particular in object size and the image fraction occupied by an object, we further adapt the Selective Search algorithm to aerial images by replacing the random order of generated proposals with a weighted order based on object proposal size, and integrate a termination criterion for the merging strategies.
    Finally, the adapted approach is compared to the original Selective Search algorithm.
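The size-weighted re-ordering described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the box format, function names, and the deviation-from-expected-size weighting are assumptions chosen to make the idea concrete.

```python
# Sketch of replacing the random proposal order of Selective Search with a
# weighted order based on proposal size, so proposals near the object sizes
# typical of aerial imagery surface first. Names and the weighting rule are
# illustrative assumptions.

def area(box):
    """Area of a box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def reorder_proposals(boxes, image_area, expected_fraction=0.001):
    """Rank proposals so those closest to an expected object size come first.

    expected_fraction is the assumed typical image fraction occupied by an
    object in aerial imagery (a tunable assumption).
    """
    target = expected_fraction * image_area
    # Smaller deviation from the expected object area -> earlier in the list.
    return sorted(boxes, key=lambda b: abs(area(b) - target))

boxes = [(0, 0, 500, 500), (10, 10, 40, 40), (0, 0, 100, 100)]
ranked = reorder_proposals(boxes, image_area=1000 * 1000)
# The ~30x30 box is nearest the expected object size, so it is ranked first.
```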

  4. Wavelet-based improved Chan-Vese model for image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaoli; Zhou, Pucheng; Xue, Mogen

    2016-10-01

    In this paper, an image segmentation approach based on an improved Chan-Vese (CV) model and the wavelet transform was proposed. Firstly, one-level wavelet decomposition was adopted to obtain the low-frequency approximation image. Then, the improved CV model, which contains a global term, a local term, and a regularization term, was utilized to segment the low-frequency approximation image, so as to obtain the coarse segmentation result. Finally, the coarse segmentation result was interpolated to the fine scale as an initial contour, and the improved CV model was utilized again to obtain the fine-scale segmentation result. Experimental results show that our method can segment low-contrast and/or intensity-inhomogeneous images more effectively than traditional level set methods.
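The coarse-to-fine scheme can be sketched as follows. This is a minimal illustration under stated assumptions: a hand-rolled one-level Haar approximation stands in for the wavelet decomposition, and a simple mean threshold (`coarse_segment`) stands in for the improved CV model, whose evolution is omitted.

```python
import numpy as np

# Coarse-to-fine sketch: one-level Haar low-frequency approximation, coarse
# segmentation, then nearest-neighbour interpolation of the coarse result to
# the fine scale as an initial contour. `coarse_segment` is a placeholder
# for the improved Chan-Vese model.

def haar_approximation(img):
    """One-level 2D Haar low-frequency approximation (LL subband)."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def coarse_segment(img):
    """Stand-in for the improved CV model: threshold at the mean."""
    return (img > img.mean()).astype(float)

def coarse_to_fine(img):
    low = haar_approximation(img)        # coarse-scale approximation image
    coarse = coarse_segment(low)         # coarse segmentation result
    # Interpolate the coarse result to the fine scale as an initial contour.
    return np.kron(coarse, np.ones((2, 2)))

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                      # bright square on dark background
init = coarse_to_fine(img)               # fine-scale initial contour mask
```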

  5. Model-based labeling, analysis, and three-dimensional visualization from two-dimensional medical images

    NASA Astrophysics Data System (ADS)

    Arata, Louis K.; Dhawan, Atam P.; Thomas, Stephen R.

    1991-07-01

    The computerized analysis and interpretation of three-dimensional medical images is of significant interest for diagnosis as well as for studying pathological processes. Knowledge-based image analysis and interpretation of radiological images can provide a tool for identifying and labeling each part of the image. The authors have developed a knowledge-based biomedical image analysis system for interpreting medical images using an anatomical knowledge base of the appropriate organs. In this paper, the structure of the biomedical image analysis system, along with results from the analysis of images of the human chest cavity, are presented. This approach utilizes an image analysis system with the capability of analyzing the data in both bottom-up (or data driven) and top-down (or model driven) modes to improve the recognition process. After an initial identification is achieved, segmented regions are aggregated and features for these aggregates are recomputed and matched to the model. This process continues until a 'best' match is found for the subject model region. Initial results are encouraging; however, much work remains to be done.

  6. Image quality improvement in megavoltage cone beam CT using an imaging beam line and a sintered pixelated array system

    SciTech Connect

    Breitbach, Elizabeth K.; Maltz, Jonathan S.; Gangadharan, Bijumon; Bani-Hashemi, Ali; Anderson, Carryn M.; Bhatia, Sudershan K.; Stiles, Jared; Edwards, Drake S.; Flynn, Ryan T.

    2011-11-15

    Purpose: To quantify the improvement in megavoltage cone beam computed tomography (MVCBCT) image quality enabled by the combination of a 4.2 MV imaging beam line (IBL) with a carbon electron target and a detector system equipped with a novel sintered pixelated array (SPA) of translucent Gd₂O₂S ceramic scintillator. Clinical MVCBCT images are traditionally acquired with the same 6 MV treatment beam line (TBL) that is used for cancer treatment, a standard amorphous Si (a-Si) flat panel imager, and the Kodak Lanex Fast-B (LFB) scintillator. The IBL produces a greater fluence of keV-range photons than the TBL, to which the detector response is more optimal, and the SPA is a more efficient scintillator than the LFB. Methods: A prototype IBL + SPA system was installed on a Siemens Oncor linear accelerator equipped with the MVision™ image guided radiation therapy (IGRT) system. A SPA strip consisting of four neighboring tiles and measuring 40 cm by 10.96 cm in the crossplane and inplane directions, respectively, was installed in the flat panel imager. Head- and pelvis-sized phantom images were acquired at doses ranging from 3 to 60 cGy with three MVCBCT configurations: TBL + LFB, IBL + LFB, and IBL + SPA. Phantom image quality at each dose was quantified using the contrast-to-noise ratio (CNR) and modulation transfer function (MTF) metrics. Head and neck, thoracic, and pelvic (prostate) cancer patients were imaged with the three imaging system configurations at multiple doses ranging from 3 to 15 cGy. The systems were assessed qualitatively from the patient image data. Results: For head and neck and pelvis-sized phantom images, imaging doses of 3 cGy or greater, and relative electron densities of 1.09 and 1.48, the CNR average improvement factors for the imaging system changes of TBL + LFB to IBL + LFB, IBL + LFB to IBL + SPA, and TBL + LFB to IBL + SPA were 1.63 (p < 10⁻⁸), 1.64 (p < 10⁻¹³), and 2.66 (p < 10⁻⁹), respectively.
For all imaging

  7. A framework for joint image-and-shape analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Tannenbaum, Allen; Bouix, Sylvain

    2014-03-01

    Techniques in medical image analysis are often used for comparison or regression on the intensities of images. In general, the domain of the image is a given Cartesian grid. Shape analysis, on the other hand, studies the similarities and differences among spatial objects of arbitrary geometry and topology. Usually, there is no function defined on the domain of shapes. Recently, there has been a growing need for defining and analyzing functions defined on the shape space, and for a coupled analysis of both the shapes and the functions defined on them. Following this direction, in this work we present a coupled analysis for both images and shapes. As a result, statistically significant discrepancies in both the image intensities and the underlying shapes are detected. The method is applied to brain images of schizophrenia patients and heart images of atrial fibrillation patients.

  8. Acne image analysis: lesion localization and classification

    NASA Astrophysics Data System (ADS)

    Abas, Fazly Salleh; Kaffenberger, Benjamin; Bikowski, Joseph; Gurcan, Metin N.

    2016-03-01

    Acne is a common skin condition present predominantly in the adolescent population, but may continue into adulthood. Scarring occurs commonly as a sequel to severe inflammatory acne. The presence of acne and resultant scars are more than cosmetic, with a significant potential to alter quality of life and even job prospects. The psychosocial effects of acne and scars can be disturbing and may be a risk factor for serious psychological concerns. Treatment efficacy is generally determined based on an invalidated gestalt by the physician and patient. However, the validated assessment of acne can be challenging and time consuming. Acne can be classified into several morphologies including closed comedones (whiteheads), open comedones (blackheads), papules, pustules, cysts (nodules) and scars. For a validated assessment, the different morphologies need to be counted independently, a method that is far too time consuming considering the limited time available for a consultation. However, it is practical to record and analyze images since dermatologists can validate the severity of acne within seconds after uploading an image. This paper covers the processes of region-ofinterest determination using entropy-based filtering and thresholding as well acne lesion feature extraction. Feature extraction methods using discrete wavelet frames and gray-level co-occurence matrix were presented and their effectiveness in separating the six major acne lesion classes were discussed. Several classifiers were used to test the extracted features. Correct classification accuracy as high as 85.5% was achieved using the binary classification tree with fourteen principle components used as descriptors. Further studies are underway to further improve the algorithm performance and validate it on a larger database.
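Gray-level co-occurrence matrix (GLCM) features of the kind mentioned above can be sketched from scratch. This is a hedged, minimal version (horizontal pixel offset only, symmetric, normalized) with classic Haralick-style statistics; the paper's actual offsets, quantization, and feature set are not specified here.

```python
import numpy as np

# Minimal GLCM texture features: co-occurrence of horizontally adjacent
# gray levels, normalized to joint probabilities, then contrast, energy,
# and homogeneity computed from the matrix. Parameters are illustrative.

def glcm(img, levels):
    """Symmetric co-occurrence counts for horizontally adjacent pixels."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
        m[b, a] += 1          # symmetrize
    return m / m.sum()        # normalize to joint probabilities

def glcm_features(p):
    """Classic Haralick-style statistics: contrast, energy, homogeneity."""
    i, j = np.indices(p.shape)
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),
        "energy": float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
    }

img = np.array([[0, 0, 1], [0, 1, 1]])   # tiny 2-level test image
feats = glcm_features(glcm(img, levels=2))
```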

  9. An improved RANSAC algorithm for line matching on multispectral images

    NASA Astrophysics Data System (ADS)

    Wei, Lijun; Li, Yong; Yu, Hang; Xu, Liangpeng; Fan, Chunxiao

    2017-02-01

    This paper proposes a method for removing mismatched lines on multispectral images. The inaccurate detection of ending points poses a great challenge for line matching, since corresponding lines may not be extracted in their entirety; because of this, lines are often mismatched when relying on the line description alone. To eliminate the mismatched lines, we employ a modified RANSAC (Random Sample Consensus) consisting of two steps: (1) pick three line matches at random and determine their intersections, which are used to calculate a transformation; (2) obtain the best transformation by sorting the matching scores of the line matches, and declare the inliers to be the correct matches. Experimental results show that the proposed method can effectively remove incorrect matches on multispectral images.
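The RANSAC core of the scheme can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the point correspondences are assumed to come from intersections of sampled line matches, an affine model replaces whatever transformation the authors use, and parameter values are arbitrary.

```python
import numpy as np

# RANSAC sketch: repeatedly fit an affine transform from three point pairs
# (assumed to be intersections of sampled line matches) and keep the
# transform agreeing with the most correspondences; the rest are rejected
# as mismatches.

def fit_affine(src, dst):
    """Least-squares affine transform mapping src (N,2) onto dst (N,2)."""
    A = np.hstack([src, np.ones((len(src), 1))])
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3, 2) coefficients
    return coef

def apply_affine(coef, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ coef

def ransac_affine(src, dst, iters=200, tol=1.0, seed=0):
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        coef = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(apply_affine(coef, src) - dst, axis=1)
        inliers = int((err < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = coef, inliers
    return best, best_inliers

# Synthetic check: a pure translation by (5, -2) plus one gross mismatch.
src = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 5.]])
dst = src + np.array([5., -2.])
dst[-1] = [100., 100.]                   # simulated mismatched correspondence
coef, inliers = ransac_affine(src, dst)  # the mismatch is left as an outlier
```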

  10. Improved zerotree coding algorithm for wavelet image compression

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Li, Yunsong; Wu, Chengke

    2000-12-01

    A listless minimum zerotree coding (LMZC) algorithm based on the fast lifting wavelet transform, with lower memory requirements and higher compression performance, is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands in a wavelet transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest frequency subband. A new listless significance map coding algorithm based on the minimum zerotree, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al., is also proposed. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.

  11. Improved sliced velocity map imaging apparatus optimized for H photofragments

    SciTech Connect

    Ryazanov, Mikhail; Reisler, Hanna

    2013-04-14

    Time-sliced velocity map imaging (SVMI), a high-resolution method for measuring kinetic energy distributions of products in scattering and photodissociation reactions, is challenging to implement for atomic hydrogen products. We describe an ion optics design aimed at achieving SVMI of H fragments in a broad range of kinetic energies (KE), from a fraction of an electronvolt to a few electronvolts. In order to enable consistently thin slicing for any imaged KE range, an additional electrostatic lens is introduced in the drift region for radial magnification control without affecting temporal stretching of the ion cloud. Time slices of ≈5 ns out of a cloud stretched to ≥50 ns are used. An accelerator region with variable dimensions (using multiple electrodes) is employed for better optimization of radial and temporal space focusing characteristics at each magnification level. The implemented system was successfully tested by recording images of H fragments from the photodissociation of HBr, H₂S, and the CH₂OH radical, with kinetic energies ranging from <0.4 eV to >3 eV. It demonstrated KE resolution ≲1%-2%, similar to that obtained in traditional velocity map imaging followed by reconstruction, and to KE resolution achieved previously in SVMI of heavier products. We expect it to perform just as well up to at least 6 eV of kinetic energy. The tests showed that numerical simulations of the electric fields and ion trajectories in the system, used for optimization of the design and operating parameters, provide an accurate and reliable description of all aspects of system performance. This offers the advantage of selecting the best operating conditions in each measurement without the need for additional calibration experiments.

  12. Automated Spot Mammography for Improved Imaging of Dense Breasts

    DTIC Science & Technology

    2004-10-01

    Develop breast phantoms. G) Task 7: Explore possible advantages of using stereo-spot mammo... performed an experiment in which we took full-field and stereo spot collimated images of a custom-made stereoscopic breast phantom (CIRS, Inc... didn't receive a modular breast phantom from the manufacturer that came even close to meeting our design specifications until very late in the project

  13. Retinex Image Processing: Improved Fidelity To Direct Visual Observation

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.

    1996-01-01

    Recorded color images differ from direct human viewing by the lack of dynamic range compression and color constancy. Research is summarized which develops the center/surround retinex concept originated by Edwin Land through a single scale design to a multi-scale design with color restoration (MSRCR). The MSRCR synthesizes dynamic range compression, color constancy, and color rendition and, thereby, approaches fidelity to direct observation.
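The single-scale center/surround retinex on which the MSRCR builds can be sketched as the log of a pixel value minus the log of a local surround average. This is a hedged illustration, not the MSRCR itself: a box surround replaces the Gaussian surround, and the multi-scale combination and color restoration steps are omitted.

```python
import numpy as np

# Single-scale retinex sketch: output = log(center) - log(surround mean),
# which compresses dynamic range by removing slowly varying illumination.
# The surround here is a simple box filter (an assumption for brevity).

def box_surround(img, radius):
    """Local mean via a (2*radius+1)^2 box filter with edge padding."""
    p = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def single_scale_retinex(img, radius=2, eps=1.0):
    img = img.astype(float)
    return np.log(img + eps) - np.log(box_surround(img, radius) + eps)

img = np.full((16, 16), 10.0)
img[8:, :] *= 10.0              # simulated illumination step
out = single_scale_retinex(img)
# Away from the step edge the output is ~0: uniform regions are compressed
# regardless of their absolute brightness.
```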

  14. Bias Field Inconsistency Correction of Motion-Scattered Multislice MRI for Improved 3D Image Reconstruction

    PubMed Central

    Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin

    2012-01-01

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types. PMID:21511561

  15. Bias field inconsistency correction of motion-scattered multislice MRI for improved 3D image reconstruction.

    PubMed

    Kim, Kio; Habas, Piotr A; Rajagopalan, Vidya; Scott, Julia A; Corbett-Detig, James M; Rousseau, Francois; Barkovich, A James; Glenn, Orit A; Studholme, Colin

    2011-09-01

    A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multislice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types.
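The per-slice multiplicative bias idea can be illustrated with a toy sketch. This is not the paper's framework (which estimates bias fields jointly across all motion-corrupted slices); it is a single-slice illustration under stated assumptions: a linear (planar) log-domain bias model and invented function names.

```python
import numpy as np

# Toy bias-inconsistency correction: fit the log-ratio between a slice and a
# reference reconstruction with a plane a*x + b*y + c, treat exp(fit) as the
# slice's smooth multiplicative bias, and divide it out.

def estimate_linear_bias(slice_img, reference, eps=1e-6):
    """Fit log(slice/reference) with a plane; return exp of the fit."""
    h, w = slice_img.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.stack([x.ravel(), y.ravel(), np.ones(h * w)], axis=1)
    logratio = np.log(slice_img + eps) - np.log(reference + eps)
    coef, *_ = np.linalg.lstsq(A, logratio.ravel(), rcond=None)
    return np.exp((A @ coef).reshape(h, w))

def correct_slice(slice_img, reference):
    return slice_img / estimate_linear_bias(slice_img, reference)

reference = np.full((8, 8), 100.0)
gain = np.exp(0.05 * np.arange(8))[None, :] * np.ones((8, 1))  # smooth bias
corrupted = reference * gain
corrected = correct_slice(corrupted, reference)   # recovers the reference
```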

  16. A Method to Improve the Accuracy of Particle Diameter Measurements from Shadowgraph Images

    NASA Astrophysics Data System (ADS)

    Erinin, Martin A.; Wang, Dan; Liu, Xinan; Duncan, James H.

    2015-11-01

    A method to improve the accuracy of the measurement of the diameter of particles using shadowgraph images is discussed. To obtain data for analysis, a transparent glass calibration reticle, marked with black circular dots of known diameters, is imaged with a high-resolution digital camera using backlighting separately from both a collimated laser beam and diffuse white light. The diameter and intensity of each dot is measured by fitting an inverse hyperbolic tangent function to the particle image intensity map. Using these calibration measurements, a relationship between the apparent diameter and intensity of the dot and its actual diameter and position relative to the focal plane of the lens is determined. It is found that the intensity decreases and apparent diameter increases/decreases (for collimated/diffuse light) with increasing distance from the focal plane. Using the relationships between the measured properties of each dot and its actual size and position, an experimental calibration method has been developed to increase the particle-diameter-dependent range of distances from the focal plane for which accurate particle diameter measurements can be made. The support of the National Science Foundation under grant OCE0751853 from the Division of Ocean Sciences is gratefully acknowledged.
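The hyperbolic-tangent edge fit described above can be sketched in one dimension. A minimal illustration under assumptions: the radial intensity profile is modeled as I(r) = b + a·tanh((r0 - r)/w), so arctanh((I - b)/a) is linear in r and a straight-line fit recovers the edge position r0 (hence the diameter 2·r0). Variable names and the plateau-based estimates of a and b are illustrative.

```python
import numpy as np

# Edge-position estimation by linearizing a tanh intensity profile with
# arctanh and fitting a line; only the well-conditioned middle of the edge
# is used for the fit.

def edge_position(r, intensity):
    lo, hi = intensity.min(), intensity.max()
    b = (hi + lo) / 2.0                 # plateau midpoint
    a = (hi - lo) / 2.0                 # half the edge amplitude
    z = np.clip((intensity - b) / a, -0.999, 0.999)
    t = np.arctanh(z)                   # linear in r: t = (r0 - r) / w
    mask = np.abs(z) < 0.95             # keep the middle of the edge
    slope, intercept = np.polyfit(r[mask], t[mask], 1)
    w = -1.0 / slope
    r0 = intercept * w
    return r0, w

r = np.linspace(0.0, 4.0, 200)
profile = 50.0 + 40.0 * np.tanh((1.5 - r) / 0.2)  # synthetic particle edge
r0, w = edge_position(r, profile)
# r0 recovers the true edge radius (1.5), so the diameter estimate is 2*r0.
```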

  17. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  18. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

    Important features in Parkinson's disease (PD) are degeneration and loss of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activity of the dopamine neurons. The activity ratio of background to corpus striatum is used for the diagnosis of PD and Dementia with Lewy Bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of the low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize a quantitative image analysis of SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, an image fusion technique was used to fuse SPECT and MR images through an intervening CT image taken by SPECT/CT. Mutual information (MI) was used for the registration between the CT and MR images. Six SPECT/CT and four MR scans of phantom materials were taken while changing the direction. As a result of the image registrations, 16 of 24 combinations were registered within 1.3 mm. By applying the approach to 32 clinical SPECT/CT and MR cases, all of the cases were registered within 0.86 mm. In conclusion, our registration method has potential for superimposing MR images on SPECT images.
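The mutual information criterion used for the CT-MR registration can be computed from a joint intensity histogram. The sketch below shows only the MI computation, not the search over transforms; the bin count and names are illustrative.

```python
import numpy as np

# Mutual information from a joint intensity histogram: MI is high when one
# image's intensities predict the other's, which is what a registration
# search maximizes over candidate transforms.

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                         # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))             # statistically unrelated image
# MI of an image with itself exceeds MI with an unrelated image, which is
# the signal that drives the alignment search toward the correct pose.
```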

  19. Improved passive millimeter-wave imaging from a helicopter platform

    NASA Astrophysics Data System (ADS)

    Martin, Christopher A.; Kolinko, Vladimir G.

    2004-08-01

    A second-generation passive millimeter-wave imaging system is being prepared for flight testing on a UH-1H "Huey" helicopter platform. Passive millimeter-wave sensors form images through collection of blackbody emissions in the millimeter-wave portion of the electromagnetic spectrum. Radiation at this wavelength is not significantly scattered or absorbed by fog, clouds, smoke, or fine dust, which may blind other electro-optic sensors. Additionally, millimeter-wave imagery depends on a phenomenology based on reflection rather than emission, which produces a high level of contrast for metal targets. The system to be flight tested operates in the W-band region of the spectrum at a 30 Hz frame rate. The field-of-view of the system is 20 x 30 degrees and the system temperature resolution is less than 3 degrees. The system uses a pupil-plane phased-array architecture that allows the large aperture system to retain a compact form factor appropriate for airborne applications. The flight test is to include demonstrations of navigation with the system in a look-forward mode, targeting and reconnaissance with the system in a look down mode at 45 degrees, and landing aid with the system looking straight down. Potential targets include military and non-military vehicles, roads, rivers, and other landmarks, and terrain features. The flight test is scheduled to be completed in April 2004 with images available soon thereafter.

  20. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    PubMed

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  1. Image Retrieval: Theoretical Analysis and Empirical User Studies on Accessing Information in Images.

    ERIC Educational Resources Information Center

    Ornager, Susanne

    1997-01-01

    Discusses indexing and retrieval for effective searches of digitized images. Reports on an empirical study about criteria for analysis and indexing digitized images, and the different types of user queries done in newspaper image archives in Denmark. Concludes that it is necessary that the indexing represent both a factual and an expressional…

  2. Medical Image Analysis by Cognitive Information Systems - a Review.

    PubMed

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

    This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems, and the semantic processes involved are described as they are applied to different types of medical images. Cognitive information systems are defined on the basis of methods for the semantic analysis and interpretation of information - here, medical images - applied to the cognitive meaning contained in the analyzed data sets. Semantic analysis is proposed to analyze the meaning of the data, since meaning is carried by information such as medical images. Medical image analysis is presented and discussed as applied to various types of medical images showing selected human organs with different pathologies; these images were analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis are also defined for decision-support tasks. This is very important, for example, in diagnostic and therapy processes and in the selection of semantic aspects/features from the analyzed data sets; those features allow the creation of a new way of analysis.

  3. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1982-01-01

    Work done on evaluating the geometric and radiometric quality of early LANDSAT-4 sensor data is described. Band-to-band and channel-to-channel registration evaluations were carried out using a line correlator. Visual blink comparisons were run on an image display to observe band-to-band registration over 512 x 512 pixel blocks. The results indicate a 0.5 pixel line misregistration between the 1.55 to 1.75 and 2.08 to 2.35 micrometer bands and the first four bands. A misregistration of four 30 m lines and columns was also observed in the thermal IR band. Radiometric evaluation included mean and variance analysis of individual detectors and principal components analysis. Results indicate that detector bias for all bands is very close to or within tolerance. Bright spots were observed in the thermal IR band on an 18 line by 128 pixel grid. No explanation for this was pursued. The general overall quality of the TM was judged to be very high.

  4. SAR Image Texture Analysis of Oil Spill

    NASA Astrophysics Data System (ADS)

    Ma, Long; Li, Ying; Liu, Yu

    Oil spills seriously affect the marine ecosystem and cause political and scientific concern because of their serious effects on fragile marine and coastal ecosystems. In order to respond to oil spill emergencies, it is necessary to monitor oil spills using remote sensing. Spaceborne SAR is considered a promising method for monitoring oil spills and has attracted attention from many researchers. However, research on SAR image texture analysis of oil spills is rarely reported. On 7 December 2007, a crane-carrying barge hit the Hong Kong-registered tanker "Hebei Spirit", which released an estimated 10,500 metric tons of crude oil into the sea. Texture features of this oil spill were acquired from the GLCM (Grey Level Co-occurrence Matrix) extracted from SAR data. The affected area was extracted successfully after evaluating the capabilities of different texture features to monitor the oil spill. The results revealed that texture is an important feature for oil spill monitoring. Key words: oil spill, texture analysis, SAR

  5. Ripening of salami: assessment of colour and aspect evolution using image analysis and multivariate image analysis.

    PubMed

    Fongaro, Lorenzo; Alamprese, Cristina; Casiraghi, Ernestina

    2015-03-01

    During ripening of salami, colour changes occur due to oxidation phenomena involving myoglobin. Moreover, shrinkage due to dehydration results in aspect modifications, mainly ascribable to fat aggregation. The aim of this work was the application of image analysis (IA) and multivariate image analysis (MIA) techniques to the study of colour and aspect changes occurring in salami during ripening. IA results showed that red, green, blue, and intensity parameters decreased due to the development of a global darker colour, while Heterogeneity increased due to fat aggregation. By applying MIA, different salami slice areas corresponding to fat and three different degrees of oxidised meat were identified and quantified. It was thus possible to study the trend of these different areas as a function of ripening, making objective an evaluation usually performed by subjective visual inspection.

  6. Representative Image Subsets in Soil Analysis Using the Mars Exploration Rover Microscopic Imager

    NASA Astrophysics Data System (ADS)

    Cabrol, N. A.; Herkenhoff, K. E.; Grin, E. A.

    2009-12-01

    Assessing texture and morphology is a critical step in evaluating the plausible origin and evolution of soil particles. Both are essential to the understanding of martian soils beyond Gusev and Meridiani. In addition to supporting rover operations, what is being learned at both landing sites about soil physical characteristics and properties provides essential keys to modeling the nature of martian soils at global scale with more precision from the correlation of ground-based and orbital data. Soil and particle studies will improve trafficability predictions for future missions, whether robotic or human, ultimately increasing safety and mission productivity. Data can also be used to assist in pre-mission hardware testing, and during missions to support engineering activities including rover extrication, which makes the characterization of soils at the particle level and their mixing critical. On Mars, this assessment is performed within the constraints of the rover’s instrumentation. The Microscopic Imager allows the identification of particles ≥ 100 µm across. Individual particles of clay, silt, and very fine sand are not accessible. Texture is, thus, defined here as the relative proportion of particles ≥ 100 µm. Analytical methods are consistent with standard sedimentologic techniques applied to the study of thin sections and digital images of terrestrial soils. Those have known constraints and biases and are well adapted to the limitations of documenting three-dimensional particles on Mars through the two-dimensional FoV of the MI. Biases and errors are linked to instrument resolution and particle size. Precision improves with increasing size and is unlikely to be consistent in the study of composite soil samples. Here, we address how to obtain a statistically sound and accurate representation of individual particles and soil mixings without analyzing entire MI images. The objectives are to (a) understand the role of particle-size in selecting statistically

  7. Automated data selection method to improve robustness of diffuse optical tomography for breast cancer imaging

    PubMed Central

    Vavadi, Hamed; Zhu, Quing

    2016-01-01

    Imaging-guided near infrared diffuse optical tomography (DOT) has demonstrated great potential as an adjunct modality for differentiating malignant from benign breast lesions and for monitoring the treatment response of breast cancers. However, diffused light measurements are sensitive to artifacts caused by outliers and errors in measurements due to probe-tissue coupling, patient and probe motion, and tissue heterogeneity. In general, pre-processing of the measurements by experienced users is needed to manually remove these outliers and thereby reduce imaging artifacts. An automated method of outlier removal, data selection, and filtering for diffuse optical tomography is introduced in this manuscript. This method consists of multiple steps: first, several data sets collected from the same patient at the contralateral normal breast are combined into a single robust reference data set using statistical tests and linear fitting of the measurements; second, the perturbation measurements are improved by filtering outliers out of the lesion-site measurements using model-based analysis. The results of 20 malignant and benign cases show similar performance between manual and automated data processing, and an improvement of about 27% in the malignant-to-benign tissue characterization ratio. PMID:27867711
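The reference-combination step can be illustrated with a toy sketch. This is a hedged stand-in for the paper's statistical tests and linear fitting: per-channel values far from the cross-set median (by a median/MAD rule, an assumption) are flagged as outliers, and the remaining values are averaged into one robust reference set.

```python
import numpy as np

# Toy robust-reference construction: flag per-channel outliers across
# repeated reference measurement sets with a median/MAD test, then average
# only the surviving values. The threshold k and names are illustrative.

def robust_reference(measurements, k=3.0):
    """measurements: (n_sets, n_channels) array of repeated reference data."""
    med = np.median(measurements, axis=0)
    mad = np.median(np.abs(measurements - med), axis=0) + 1e-12
    ok = np.abs(measurements - med) <= k * 1.4826 * mad
    # average only the non-outlying values in each channel
    return (measurements * ok).sum(axis=0) / ok.sum(axis=0)

sets = np.array([[1.0, 2.0, 3.0],
                 [1.1, 2.1, 3.0],
                 [0.9, 1.9, 3.1],
                 [1.0, 9.0, 2.9]])      # one corrupted value (9.0)
ref = robust_reference(sets)            # the 9.0 is excluded from channel 1
```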

  8. Using the erroneous data clustering to improve the feature extraction weights of original image algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Tin-Yu; Chang, Tse; Chu, Teng-Hao

    2017-02-01

    Much data mining adopts the form of an Artificial Neural Network (ANN), but training an ANN involves many issues, such as the number of labeled samples, training time and performance, the number of hidden layers, and the choice of transfer function. If the results do not match expectations, it cannot be known clearly which dimension causes the deviation: the ANN fits the expected output by modifying weights rather than by improving the original feature extraction algorithm for the image. To address these problems, this paper puts forward a method to assist ANN-based image data analysis. Normally, a parameter is set as the value controlling feature vector extraction from an image; we treat this parameter as a weight. The experiment uses the feature points extracted by Speeded Up Robust Features (SURF) as the basis for training; SURF extracts different feature points according to the extracted values. We perform an initial semi-supervised clustering on these values and use Modified Fuzzy K-Nearest Neighbors (MFKNN) for training and classification. Matching an unknown image is not a one-to-one comparison against every stored image but only a comparison against group centroids, the main purpose being efficiency and speed; the retrieved results are then observed and analyzed. The method clusters and classifies image feature points, assigns new values to groups with high error rates to produce new feature points, and feeds them into the input layer of the ANN for training. Finally, a comparative analysis is made with a Back-Propagation Neural Network (BPN) of a Genetic Algorithm-Artificial Neural Network
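
    The centroid-only matching step can be sketched as follows. Synthetic descriptor vectors stand in for SURF feature points (which would normally come from a real extractor), and the two-group layout, random seed, and function names are hypothetical:

```python
import numpy as np

# Toy stand-in for SURF descriptors: each image yields one feature vector.
# Matching compares a query only against cluster centroids, not against
# every stored image, which is the speed-up the abstract describes.
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 0.1, size=(20, 8))   # descriptors clustered near 0
group_b = rng.normal(1.0, 0.1, size=(20, 8))   # descriptors clustered near 1

centroids = np.array([group_a.mean(axis=0), group_b.mean(axis=0)])

def classify(vec, centroids):
    """Assign a query descriptor to the nearest group centroid."""
    d = np.linalg.norm(centroids - vec, axis=1)
    return int(np.argmin(d))

query = rng.normal(1.0, 0.1, size=8)           # drawn from group_b's region
label = classify(query, centroids)             # matched via centroids only
```

With k groups this costs k distance computations per query instead of one per stored image, which is why the abstract's centroid matching is faster than one-to-one comparison.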

  9. Design and validation of Segment - freely available software for cardiovascular image analysis

    PubMed Central

    2010-01-01

    Background Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Results Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http

  10. A Global Approach to Image Texture Analysis

    DTIC Science & Technology

    1990-03-01

    segmented images based on texture by convolution with small masks ranging from 3 x 3 to 7 x 7 pixels. The local approach is not optimal for the sea ice... image, then differences of texture will clearly be reflected in the two-dimensional power spectrum of the image. To look at spectral distribution... resulting from convolutions with Laws' masks are actually the values of image energy falling in a series of spectral bins. Consider the seventh-order

  11. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of it on NASA's Massively Parallel Processor (MPP), are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
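
    A toy 1-D illustration of the order-independent idea, performing the globally most similar merge at each step rather than scanning in raster order; the function, similarity measure, and stopping rule are simplified stand-ins for the actual MPP implementation:

```python
import numpy as np

def best_merge_segmentation(values, n_regions):
    """Merge adjacent regions until n_regions remain, always choosing
    the globally best (most similar) adjacent pair first, so the result
    does not depend on processing order."""
    regions = [[v] for v in values]
    while len(regions) > n_regions:
        means = [np.mean(r) for r in regions]
        diffs = [abs(means[i] - means[i + 1]) for i in range(len(means) - 1)]
        i = int(np.argmin(diffs))            # globally best merge first
        regions[i] = regions[i] + regions.pop(i + 1)
    return regions

segs = best_merge_segmentation([1.0, 1.1, 5.0, 5.2, 9.0], 3)
# the two near-identical pairs merge first, regardless of scan order
```

A raster-order region grower might instead absorb 5.0 into whichever neighbor it visits first; choosing the global minimum difference removes that order dependence, at the cost of a global search per step (which is what the parallel hardware accelerates).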

  12. Burn injury diagnostic imaging device's accuracy improved by outlier detection and removal

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Lu, Yang; Squiers, John J.; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffery E.

    2015-05-01

    Multispectral imaging (MSI) was implemented to develop a burn diagnostic device that will assist burn surgeons in planning and performing burn debridement surgery by classifying burn tissue. In order to build a burn classification model, training data that accurately represents the burn tissue is needed. Acquiring accurate training data is difficult, in part because the labeling of raw MSI data with the appropriate tissue classes is prone to errors. We hypothesized that these difficulties could be surmounted by removing outliers from the training dataset, leading to an improvement in classification accuracy. A swine burn model was developed to build an initial MSI training database and study an algorithm's ability to classify clinically important tissues present in a burn injury. Once the ground-truth database was generated from the swine images, we developed a multi-stage method based on a Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data in wavelength space, and test accuracy improved from 63% to 76%. This simple method of conditioning the training data improved the accuracy of the algorithm to match the current standard of care in burn injury assessment. Given that there are few burn surgeons and burn care facilities in the United States, this technology is expected to improve the standard of burn care for burn patients with less access to specialized facilities.
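
    The outlier-removal step can be sketched as a simplified univariate Z-test over training spectra; the threshold, data shapes, and single-stage form are illustrative assumptions rather than the paper's exact multi-stage procedure:

```python
import numpy as np

def remove_outliers(X, z_thresh=3.0):
    """Drop training samples whose spectrum deviates strongly from the
    class mean at any wavelength band (simplified univariate Z-test)."""
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12
    z = np.abs((X - mu) / sd)                # per-sample, per-band z-score
    keep = (z < z_thresh).all(axis=1)        # reject if any band is extreme
    return X[keep]

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(50, 4))       # 50 training spectra, 4 bands
X[0] = 50.0                                  # a mislabeled/corrupted sample
clean = remove_outliers(X)                   # the corrupted row is dropped
```

Removing such rows shrinks the per-band variance of the training set, which is the mechanism the abstract credits for the accuracy gain.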

  13. Improved and robust detection of cell nuclei from four dimensional fluorescence images.

    PubMed

    Bashar, Md Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J

    2014-01-01

    Segmentation-free direct methods are quite efficient for automated nuclei extraction from high-dimensional images. A few such methods exist, but most do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four-dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and initial detection steps of a typical direct method: we estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at the positions of the initial centroids to estimate all nuclei diameters, and this procedure continues for subsequent images in the sequence. This mechanism ensures proper enhancement through automated estimation of the major parameters, bringing robustness and safeguarding the system against additive noise and the effects of wrong parameters. The method and its single-scale variant are then simplified to further reduce the number of parameters. The proposed method is also extended to nuclei volume segmentation: the same optimization technique is applied to the final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in 4D space. Experimental evaluations with five image sequences (each having 271 sequential 3D images) corresponding to five different mouse embryos show promising performance in terms of nuclear detection, segmentation, and tracking. A detailed analysis of a sub-sequence of 101 3D images from one embryo reveals that the proposed method can improve nuclei detection accuracy by 9% over previous methods
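
    The temporal feedback of measured diameters into filter lengths can be sketched as follows; the function name and the linear scale spacing are illustrative assumptions:

```python
import numpy as np

def scales_from_previous_frame(prev_diameters, n_scales=5):
    """Temporal feedback: the minimum and maximum nuclei diameters
    measured in the previous frame set the filter lengths (scales)
    used to enhance the current frame, so no fixed size parameter
    has to be supplied by the user."""
    lo, hi = float(np.min(prev_diameters)), float(np.max(prev_diameters))
    return np.linspace(lo, hi, n_scales)

# diameters estimated from frame t-1 become the scale range for frame t
scales = scales_from_previous_frame([8.0, 10.0, 12.0], n_scales=3)
```

Because the scale range tracks the data itself, a wrong initial guess is corrected after one frame, which is the robustness-to-parameters property the abstract claims.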

  14. Improved Shear Wave Motion Detection Using Pulse-Inversion Harmonic Imaging With a Phased Array Transducer.

    PubMed

    Pengfei Song; Heng Zhao; Urban, Matthew W; Manduca, Armando; Pislaru, Sorin V; Kinnick, Randall R; Pislaru, Cristina; Greenleaf, James F; Shigao Chen

    2013-12-01

    Ultrasound tissue harmonic imaging is widely used to improve ultrasound B-mode image quality thanks to its effectiveness in suppressing imaging artifacts associated with ultrasound reverberation, phase aberration, and clutter noise. In ultrasound shear wave elastography (SWE), because the shear wave motion signal is extracted from the ultrasound signal, these noise sources can significantly deteriorate the shear wave motion tracking process and consequently result in noisy and biased shear wave motion detection. This situation is exacerbated in in vivo SWE applications such as those in the heart, liver, and kidney. This paper therefore investigated the possibility of implementing harmonic imaging, specifically pulse-inversion harmonic imaging, in shear wave tracking, with the hypothesis that harmonic imaging can improve shear wave motion detection based on the same principles that apply to general harmonic B-mode imaging. We first designed an experiment with a gelatin phantom covered by an excised piece of pork belly and showed that harmonic imaging can significantly improve shear wave motion detection, producing less underestimated shear wave motion and more consistent shear wave speed measurements than fundamental imaging. A transthoracic heart experiment on a freshly sacrificed pig then showed that harmonic imaging could robustly track the shear wave motion and give consistent shear wave speed measurements of the left ventricular myocardium while fundamental imaging could not. Finally, an in vivo transthoracic study of seven healthy volunteers showed that the proposed harmonic imaging tracking sequence could provide consistent estimates of left ventricular myocardial stiffness in end-diastole, with an overall success rate of 80% and a success rate of 93.3% when excluding the subject with a Body Mass Index higher than 25. These promising results indicate that pulse-inversion harmonic imaging can significantly improve shear wave motion tracking and thus potentially
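
    The principle behind pulse inversion can be shown in a few lines of signal math: modeling weakly nonlinear propagation as y = a·x + b·x², summing the echoes of a pulse and its inverted copy cancels the linear (fundamental) term and keeps the quadratic (second-harmonic) term. The coefficients and waveform below are illustrative, not measured values:

```python
import numpy as np

f0 = 2.0e6                                   # fundamental frequency, Hz
t = np.arange(0, 4e-6, 1e-8)                 # 4 microseconds at 100 MHz sampling
x = np.sin(2 * np.pi * f0 * t)               # transmitted pulse

def echo(x, a=1.0, b=0.2):
    """Toy weakly nonlinear propagation: linear term plus quadratic term."""
    return a * x + b * x**2

# Pulse-inversion sum: transmit x and -x, add the two received echoes.
summed = echo(x) + echo(-x)
# summed == 2*b*x**2: the fundamental cancels, leaving a pure
# second-harmonic (plus DC) component for shear wave tracking
```

Since sin² contains a component at 2·f0, the summed signal carries energy only at the second harmonic, which is less corrupted by reverberation and clutter than the fundamental.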

  15. An improved robust blind motion de-blurring algorithm for remote sensing images

    NASA Astrophysics Data System (ADS)

    He, Yulong; Liu, Jin; Liang, Yonghui

    2016-10-01

    Shift-invariant motion blur can be modeled as a convolution of the true latent image with the blur kernel, plus additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that proves well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation: first, we adopt a multi-scale scheme to ensure that the edge map is constructed accurately; second, an effective salient-edge selection method based on Relative Total Variation (RTV) is used to extract salient structure from texture; third, an alternating iterative method is introduced to perform kernel optimization, in which we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on a TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to local image characteristics in order to preserve fine details while eliminating noise and ringing artifacts. Synthetic remote sensing images are used to test the proposed algorithm, and the results demonstrate that it obtains an accurate blur kernel and achieves better de-blurring results.
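
    The degradation model the abstract starts from can be written in one line: the blurred image is the latent image convolved with the blur kernel, plus noise. A 1-D toy version (the kernel and image values are illustrative):

```python
import numpy as np

# Shift-invariant blur model: blurred = latent * kernel + noise.
# 1-D toy version with a 3-tap uniform kernel (horizontal linear motion).
latent = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # a single bright pixel
kernel = np.array([1/3, 1/3, 1/3])             # uniform motion blur kernel
blurred = np.convolve(latent, kernel, mode='same')
# the point is smeared across three pixels; blind de-blurring must
# recover both `latent` and `kernel` from `blurred` alone
```

Note that the kernel sums to one, so blurring preserves total image energy; blind de-blurring is ill-posed precisely because many (latent, kernel) pairs explain the same blurred observation, which is why the priors and salient-edge selection above are needed.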

  16. <