Science.gov

Sample records for improving image analysis

  1. Improving lip wrinkles: lipstick-related image analysis.

    PubMed

    Ryu, Jong-Seong; Park, Sun-Gyoo; Kwak, Taek-Jong; Chang, Min-Youl; Park, Moon-Eok; Choi, Khee-Hwan; Sung, Kyung-Hye; Shin, Hyun-Jong; Lee, Cheon-Koo; Kang, Yun-Seok; Yoon, Moung-Seok; Rang, Moon-Jeong; Kim, Seong-Jin

    2005-08-01

    The appearance of lip wrinkles becomes problematic when lipstick make-up accentuates them, causing uneven color tone, pigment spread and pigment remnants. An objective method for assessing lip wrinkle status is therefore needed so that the potential of wrinkle-improving lip products can be screened. The present study aimed to identify useful parameters from image analysis of lip wrinkles as affected by lipstick application. Digital photographs of the lips of 20 female volunteers were assessed before and after lipstick application. Color tone was measured with hue, saturation and intensity (HSI) parameters, and time-related pigment spread was calculated as the area beyond the vermilion border using image-analysis software (Image-Pro). The efficacy of a wrinkle-improving lipstick containing asiaticoside was evaluated in 50 women by subjective and objective methods, including image analysis, in a double-blind, placebo-controlled fashion. Color tone and pigment spread after lipstick make-up were markedly affected by lip wrinkles, and the standard deviation of the saturation value obtained from the image-analysis software proved to be a good parameter for lip wrinkles. After 8 weeks of using the lipstick containing asiaticoside, changes in visual grading scores and replica analysis indicated a wrinkle-improving effect. As the depth and number of wrinkles were reduced, the lipstick make-up appearance measured by image analysis also improved significantly. The lip wrinkle pattern together with lipstick make-up can thus be evaluated by the image-analysis system in addition to traditional assessment methods, and this evaluation system is expected to allow efficacy testing of wrinkle-reducing lipsticks, which has not been described in previous dermatologic clinical studies.
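    The saturation-based wrinkle parameter described above can be sketched in a few lines. This is a minimal illustration, not the authors' Image-Pro workflow: the HSI conversion formula and the toy "lip" arrays are assumptions.

```python
import numpy as np

def saturation_std(rgb):
    """Standard deviation of the HSI saturation channel.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Saturation follows the HSI definition S = 1 - min(R,G,B)/I,
    where I = (R + G + B) / 3.
    """
    rgb = np.asarray(rgb, dtype=float)
    intensity = rgb.mean(axis=2)
    # Avoid division by zero for pure-black pixels.
    safe_i = np.where(intensity > 0, intensity, 1.0)
    saturation = np.where(intensity > 0, 1.0 - rgb.min(axis=2) / safe_i, 0.0)
    return float(saturation.std())

# A wrinkled lip scatters pigment unevenly, so saturation varies more.
smooth = np.full((8, 8, 3), [0.8, 0.2, 0.3])      # uniformly pigmented patch
wrinkled = smooth.copy()
wrinkled[::2, :, :] = [0.5, 0.4, 0.4]             # alternating dull rows
assert saturation_std(wrinkled) > saturation_std(smooth)
```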

  2. [Decomposition of Interference Hyperspectral Images Using Improved Morphological Component Analysis].

    PubMed

    Wen, Jia; Zhao, Jun-suo; Wang, Cai-ling; Xia, Yu-li

    2016-01-01

    Because of the special imaging principle of interference hyperspectral image data, every frame contains many vertical interference stripes whose positions are fixed and whose pixel values are very high, and horizontal displacements exist in the background between frames. These special characteristics destroy the regular structure of the original interference hyperspectral image data, so directly applying compressive sensing theory or traditional compression algorithms cannot achieve the desired effect. Since the interference stripe signals and the background signals have different characteristics, the orthogonal bases that sparsely represent them also differ. Following this idea, morphological component analysis (MCA) is adopted in this paper to separate the interference stripe signals from the background signals. Because the huge volume of interference hyperspectral imagery leads to slow iterative convergence and low computational efficiency in the traditional MCA algorithm, an improved MCA algorithm is also proposed that exploits the characteristics of the data: the iterative convergence condition is modified so that iteration terminates when the error between the separated image signals and the original image signals is almost unchanged. Furthermore, based on the idea that an orthogonal basis can sparsely represent its corresponding signal but not the other signal, an adaptive threshold-update scheme is proposed to accelerate the traditional MCA algorithm: the projection coefficients of the image signals onto the different orthogonal bases are computed and compared to obtain the minimum and maximum threshold values, and their average is chosen as the optimal threshold for the adaptive update.
The experimental results prove that
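    A toy, one-dimensional sketch of the MCA separation idea described above (fixed high-valued stripes that are sparse in the sample domain, a smooth background that is sparse in an orthogonal DCT basis). The linearly decreasing threshold schedule is a common simple choice, not the paper's adaptive projection-based update.

```python
import numpy as np
from scipy.fft import dct, idct

def mca_separate(y, n_iter=50, lam0=1.0):
    """Split y into a spiky 'stripe' part (sparse in the sample domain)
    and a smooth 'background' part (sparse in the DCT domain) by
    alternating soft-thresholding with a shrinking threshold."""
    stripes = np.zeros_like(y)
    background = np.zeros_like(y)
    soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    for k in range(n_iter):
        lam = lam0 * (1.0 - k / n_iter)        # linearly decreasing threshold
        resid = y - stripes - background
        # Update stripes: threshold in the sample (identity) domain.
        stripes = soft(stripes + resid, lam)
        resid = y - stripes - background
        # Update background: threshold in the DCT domain.
        c = soft(dct(background + resid, norm='ortho'), lam)
        background = idct(c, norm='ortho')
    return stripes, background

n = 256
bg = np.cos(np.linspace(0, 3 * np.pi, n))          # smooth background
sp = np.zeros(n); sp[[40, 90, 200]] = 5.0          # fixed high-valued stripes
stripes, background = mca_separate(bg + sp)
assert stripes[90] > 2.0                           # stripe recovered
assert np.allclose(bg + sp, stripes + background, atol=1.0)
```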

  3. Improved sampling and analysis of images in corneal confocal microscopy.

    PubMed

    Schaldemose, E L; Fontain, F I; Karlsson, P; Nyengaard, J R

    2017-10-01

    Corneal confocal microscopy (CCM) is a noninvasive clinical method for analysing and quantifying corneal nerve fibres in vivo. Although the CCM technique is in constant progress, there are methodological limitations in terms of image sampling and the objectivity of nerve quantification. The aim of this study was to present a randomized sampling method for CCM images and to develop an adjusted, area-dependent image analysis. Furthermore, a manual nerve fibre analysis method was compared to a fully automated method. Twenty-three patients with idiopathic small-fibre neuropathy were investigated using CCM. Corneal nerve fibre length density (CNFL) and corneal nerve fibre branch density (CNBD) were determined both manually and automatically. Differences in CNFL and CNBD between (1) the randomized and the most common sampling method, (2) the adjusted and the unadjusted area and (3) the manual and automated quantification method were investigated. CNFL values were significantly lower with the randomized sampling method than with the most common method (p = 0.01). There was no statistically significant difference in CNBD values between the randomized and the most common sampling method (p = 0.85). CNFL and CNBD values were increased when using the adjusted area compared to the standard area. Additionally, the study found a significant increase in CNFL and CNBD values with the manual method compared to the automatic method (p ≤ 0.001). The study demonstrated a significant difference in CNFL values between the randomized and the common sampling method, indicating the importance of clear guidelines for image sampling. The increase in CNFL and CNBD values when using the adjusted cornea area is not surprising. The observed increases in both CNFL and CNBD values with manual nerve quantification compared to the automatic method are consistent with earlier findings. This study underlines the importance of improving the analysis of the

  4. Multi-spectral image analysis for improved space object characterization

    NASA Astrophysics Data System (ADS)

    Glass, William; Duggin, Michael J.; Motes, Raymond A.; Bush, Keith A.; Klein, Meiling

    2009-08-01

    The Air Force Research Laboratory (AFRL) is studying the application and utility of various ground-based and space-based optical sensors for improving surveillance of space objects in both Low Earth Orbit (LEO) and Geosynchronous Earth Orbit (GEO). This information can be used to improve our catalog of space objects and will be helpful in the resolution of satellite anomalies. At present, ground-based optical and radar sensors provide the bulk of remotely sensed information on satellites and space debris, and will continue to do so into the foreseeable future. However, in recent years, the Space-Based Visible (SBV) sensor was used to demonstrate that a synthesis of space-based visible data with ground-based sensor data could provide enhancements to information obtained from any one source in isolation. The incentives for space-based sensing include improved spatial resolution due to the absence of atmospheric effects and cloud cover and increased flexibility for observations. Though ground-based optical sensors can use adaptive optics to somewhat compensate for atmospheric turbulence, cloud cover and absorption are unavoidable. With recent advances in technology, we are in a far better position to consider what might constitute an ideal system to monitor our surroundings in space. This work has begun at the AFRL using detailed optical sensor simulations and analysis techniques to explore the trade space involved in acquiring and processing data from a variety of hypothetical space-based and ground-based sensor systems. In this paper, we briefly review the phenomenology and trade space aspects of what might be required in order to use multiple band-passes, sensor characteristics, and observation and illumination geometries to increase our awareness of objects in space.

  5. Multi-Spectral Image Analysis for Improved Space Object Characterization

    NASA Astrophysics Data System (ADS)

    Duggin, M.; Riker, J.; Glass, W.; Bush, K.; Briscoe, D.; Klein, M.; Pugh, M.; Engberg, B.

    The Air Force Research Laboratory (AFRL) is studying the application and utility of various ground based and space-based optical sensors for improving surveillance of space objects in both Low Earth Orbit (LEO) and Geosynchronous Earth Orbit (GEO). At present, ground-based optical and radar sensors provide the bulk of remotely sensed information on satellites and space debris, and will continue to do so into the foreseeable future. However, in recent years, the Space Based Visible (SBV) sensor was used to demonstrate that a synthesis of space-based visible data with ground-based sensor data could provide enhancements to information obtained from any one source in isolation. The incentives for space-based sensing include improved spatial resolution due to the absence of atmospheric effects and cloud cover and increased flexibility for observations. Though ground-based optical sensors can use adaptive optics to somewhat compensate for atmospheric turbulence, cloud cover and absorption are unavoidable. With recent advances in technology, we are in a far better position to consider what might constitute an ideal system to monitor our surroundings in space. This work has begun at the AFRL using detailed optical sensor simulations and analysis techniques to explore the trade space involved in acquiring and processing data from a variety of hypothetical space-based and ground-based sensor systems. In this paper, we briefly review the phenomenology and trade space aspects of what might be required in order to use multiple band-passes, sensor characteristics, and observation and illumination geometries to increase our awareness of objects in space.

  6. Improving Resolution and Depth of Astronomical Observations via Modern Mathematical Methods for Image Analysis

    NASA Astrophysics Data System (ADS)

    Castellano, M.; Ottaviani, D.; Fontana, A.; Merlin, E.; Pilo, S.; Falcone, M.

    2015-09-01

    In recent years, modern mathematical methods for image analysis have led to a revolution in many fields, from computer vision to scientific imaging. However, some recently developed image-processing techniques successfully exploited in other sectors have rarely, if ever, been tried on astronomical observations. We present here tests of two classes of variational image enhancement techniques, "structure-texture decomposition" and "super-resolution", showing that they are effective in improving the quality of observations. Structure-texture decomposition makes it possible to recover faint sources previously hidden by background noise, effectively increasing the depth of available observations. Super-resolution yields a higher-resolution, better-sampled image from a set of low-resolution frames, thus mitigating problems in data analysis that arise from differences in resolution and sampling between instruments, as in the case of the EUCLID VIS and NIR imagers.

  7. The influence of bonding agents in improving interactions in composite propellants determined using image analysis.

    PubMed

    Dostanić, J; Husović, T V; Usćumlić, G; Heinemann, R J; Mijin, D

    2008-12-01

    Binder-oxidizer interactions in rocket composite propellants can be improved using adequate bonding agents. In the present work, the effectiveness of different 1,3,5-trisubstituted isocyanurates was determined by stereo and metallographic microscopy and using the software package Image-Pro Plus. The chemical analysis of samples was performed by a scanning electron microscope equipped for energy dispersive spectrometry.

  8. Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method.

    PubMed

    Lu, Zhaolin; Hu, Xiaojuan; Lu, Yao

    2017-01-01

    Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using a mechanical sieving approach. Given the great variation of particle size from micrometers to millimeters, powders larger than 250 μm were photographed with a flatbed scanner without a zoom function, and the rest were photographed using a scanning electron microscope (SEM) with high image resolution. Actual imaging tests confirmed the excellent performance of the backscattered electron (BSE) imaging mode of the SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method; in sample preparation, singulated arrangement and ultrasonic dispersion were therefore used to separate the powders larger and smaller than the nominal size of 250 μm, respectively, into individual particles. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method is suitable for analysing the particle size and shape distributions of ground biomass materials and resolves the size inconsistencies of sieving analysis.
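    The basic measurement step (label connected particles, then derive size and shape descriptors per particle) might be sketched as follows. The equivalent-diameter and bounding-box elongation descriptors are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np
from scipy import ndimage

def particle_morphology(binary, pixel_um=1.0):
    """Label connected particles in a binary image; for each particle report
    the equivalent circular diameter (diameter of a same-area circle) and a
    simple elongation ratio (bounding-box long side / short side)."""
    labels, n = ndimage.label(binary)
    results = []
    for idx, sl in enumerate(ndimage.find_objects(labels), start=1):
        region = labels[sl] == idx
        area = region.sum()
        d_eq = 2.0 * np.sqrt(area / np.pi) * pixel_um   # equivalent diameter
        h, w = region.shape
        elongation = max(h, w) / min(h, w)
        results.append((d_eq, elongation))
    return results

img = np.zeros((20, 20), dtype=bool)
img[2:6, 2:6] = True       # 4x4 square particle
img[10:12, 4:14] = True    # 2x10 elongated particle
morph = particle_morphology(img)
assert len(morph) == 2 and morph[1][1] == 5.0
```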

  9. Particle Morphology Analysis of Biomass Material Based on Improved Image Processing Method

    PubMed Central

    Lu, Zhaolin

    2017-01-01

    Particle morphology, including size and shape, is an important factor that significantly influences the physical and chemical properties of biomass material. Based on image processing technology, a method was developed to process sample images, measure particle dimensions, and analyse the particle size and shape distributions of knife-milled wheat straw, which had been preclassified into five nominal size groups using a mechanical sieving approach. Given the great variation of particle size from micrometers to millimeters, powders larger than 250 μm were photographed with a flatbed scanner without a zoom function, and the rest were photographed using a scanning electron microscope (SEM) with high image resolution. Actual imaging tests confirmed the excellent performance of the backscattered electron (BSE) imaging mode of the SEM. Particle aggregation is an important factor that affects the recognition accuracy of the image processing method; in sample preparation, singulated arrangement and ultrasonic dispersion were therefore used to separate the powders larger and smaller than the nominal size of 250 μm, respectively, into individual particles. In addition, an image segmentation algorithm based on particle geometrical information was proposed to recognise the finer clustered powders. Experimental results demonstrated that the improved image processing method is suitable for analysing the particle size and shape distributions of ground biomass materials and resolves the size inconsistencies of sieving analysis. PMID:28298925

  10. Improved scatter correction with factor analysis for planar and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

    Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, the various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed accurately by including x-ray CT scans, correction for Compton scatter remains an open issue: the inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases image contrast. While a number of scatter-correction methods have been proposed in the past, in this work we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations of planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisitions of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) were performed with a dual-head gamma camera. To use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows serving as input data. FA yields two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). The tomographic data (simulations and measurements) were processed for each angular position, resulting in a photo-peak and a scatter data set, and the reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA results in significant improvements in image accuracy over the DEW method for both planar and tomographic data sets. FA can be used as a user
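    For context, the dual-energy-window (DEW) baseline against which FA is compared estimates photopeak scatter as a fixed fraction of the counts recorded in an adjacent lower window. A minimal sketch; the factor k = 0.5 is the commonly quoted value for 99mTc and is assumed here.

```python
import numpy as np

def dew_correct(photopeak, lower_window, k=0.5):
    """Classical dual-energy-window (DEW) scatter correction:
    scatter inside the photopeak window is approximated as k times the
    counts in an adjacent lower 'scatter' window. Corrected counts are
    clipped at zero so negative counts cannot occur."""
    scatter_est = k * np.asarray(lower_window, dtype=float)
    return np.clip(np.asarray(photopeak, dtype=float) - scatter_est, 0.0, None)

pp = np.array([[120.0, 80.0], [60.0, 100.0]])   # photopeak-window counts
lw = np.array([[40.0, 20.0], [10.0, 30.0]])     # lower-window counts
corrected = dew_correct(pp, lw)
assert np.allclose(corrected, [[100.0, 70.0], [55.0, 85.0]])
```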

  11. Improved disparity map analysis through the fusion of monocular image segmentations

    NASA Technical Reports Server (NTRS)

    Perlant, Frederic P.; Mckeown, David M.

    1991-01-01

    The focus is to examine how estimates of three dimensional scene structure, as encoded in a scene disparity map, can be improved by the analysis of the original monocular imagery. The utilization of surface illumination information is provided by the segmentation of the monocular image into fine surface patches of nearly homogeneous intensity to remove mismatches generated during stereo matching. These patches are used to guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Stereo analysis results are presented on a complex urban scene containing various man-made and natural features. This scene contains a variety of problems including low building height with respect to the stereo baseline, buildings and roads in complex terrain, and highly textured buildings and terrain. The improvements are demonstrated due to monocular fusion with a set of different region-based image segmentations. The generality of this approach to stereo analysis and its utility in the development of general three dimensional scene interpretation systems are also discussed.
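    The patch-guided mismatch removal can be illustrated with a simplified rule: within each monocular segment, disparities far from the segment median are treated as stereo mismatches. The median-replacement rule and the threshold are assumptions for illustration; the paper performs a fuller statistical analysis per patch.

```python
import numpy as np

def clean_disparity(disp, segments, max_dev=2.0):
    """For each monocular segment, treat disparities that deviate from the
    segment median by more than max_dev as stereo mismatches and replace
    them with that median (assumes each segment is one physical surface)."""
    out = disp.astype(float).copy()
    for label in np.unique(segments):
        mask = segments == label
        med = np.median(disp[mask])
        bad = mask & (np.abs(disp - med) > max_dev)
        out[bad] = med
    return out

disp = np.array([[10., 10., 30.], [10., 11., 10.]])   # 30 is a mismatch
segs = np.zeros((2, 3), dtype=int)                     # one homogeneous patch
cleaned = clean_disparity(disp, segs)
assert cleaned[0, 2] == 10.0    # outlier replaced by segment median
assert cleaned[1, 1] == 11.0    # small deviation left untouched
```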

  12. Using image analysis and ArcGIS® to improve automatic grain boundary detection and quantify geological images

    NASA Astrophysics Data System (ADS)

    DeVasto, Michael A.; Czeck, Dyanna M.; Bhattacharyya, Prajukti

    2012-12-01

    Geological images, such as photos and photomicrographs of rocks, are commonly used as supportive evidence of geological processes. A limiting factor in quantifying images is the digitization process; consequently, image analysis has remained largely qualitative. ArcGIS®, the most widely used Geographic Information System (GIS), offers an array of functions, including the ability to build models that digitize images. We expanded upon a previously designed model built with Arc ModelBuilder® to quantify photomicrographs and scanned images of thin sections. To enhance grain boundary detection while limiting computer processing time and hard drive space, we applied a preprocessing image analysis technique so that only a single image is used in the digitizing model. Preprocessing allows the model to digitize grain boundaries accurately with fewer images and less user intervention, using batch processing in image analysis software and ArcCatalog®. We present case studies for five basic textural analyses on images digitized semi-automatically and quantified in ArcMap®: grain size distributions, shape preferred orientations, weak-phase connections (networking), and nearest neighbor statistics are presented in a simplified fashion for further analyses directly obtainable from the automated digitizing method. Finally, we discuss the ramifications of incorporating this method into geological image analyses.

  13. Image Analysis and Modeling

    DTIC Science & Technology

    1976-03-01

    This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency. ... The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is

  14. Improved MALDI imaging MS analysis of phospholipids using graphene oxide as new matrix

    PubMed Central

    Wang, Zhongjie; Cai, Yan; Wang, Yi; Zhou, Xinwen; Zhang, Ying; Lu, Haojie

    2017-01-01

    Matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS) is an increasingly important technique for the detection and spatial localization of phospholipids in tissue. Because phosphatidylcholine (PC) is highly abundant and easy to ionize, the choice of a matrix that can yield signals from other lipids has become the most crucial factor for a successful MALDI-IMS analysis of phospholipids. Herein, graphene oxide (GO) is proposed as a new matrix to selectively enhance the detection of other types of phospholipids that are frequently suppressed by the presence of PC in positive mode. Compared with the commonly used matrix DHB, the GO matrix significantly improved the signal-to-noise ratios of phospholipids as a result of its high desorption/ionization efficiency for nonpolar compounds. GO also afforded homogeneous crystallization with analytes, owing to its monolayer structure and good dispersion, resulting in better shot-to-shot (CV < 13%) and spot-to-spot (CV < 14%) reproducibility. Finally, the GO matrix was successfully applied to the simultaneous imaging of PC, PE, PS and glycosphingolipids in mouse brain, with a total of 65 phospholipids identified. PMID:28294158

  15. The Comet Assay: Automated Imaging Methods for Improved Analysis and Reproducibility

    PubMed Central

    Braafladt, Signe; Reipa, Vytas; Atha, Donald H.

    2016-01-01

    Sources of variability in the comet assay include variations in the protocol used to process the cells, the microscope imaging system and the software used in the computerized analysis of the images. Here we focus on the effect of variations in the microscope imaging system and software analysis using fixed preparations of cells and a single cell processing protocol. To determine the effect of the microscope imaging and analysis on the measured percentage of damaged DNA (% DNA in tail), we used preparations of mammalian cells treated with etoposide or electrochemically induced DNA damage conditions and varied the settings of the automated microscope, camera, and commercial image analysis software. Manual image analysis revealed measurement variations in percent DNA in tail as high as 40% due to microscope focus, camera exposure time and the software image intensity threshold level. Automated image analysis reduced these variations as much as three-fold, but only within a narrow range of focus and exposure settings. The magnitude of variation, observed using both analysis methods, was highly dependent on the overall extent of DNA damage in the particular sample. Mitigating these sources of variability with optimal instrument settings facilitates an accurate evaluation of cell biological variability. PMID:27581626
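    The measured quantity, percent DNA in tail, and its sensitivity to the software intensity threshold can be illustrated with a toy computation. The column-split head/tail model and the threshold rule are simplifying assumptions, not the commercial software's algorithm.

```python
import numpy as np

def percent_dna_in_tail(comet, head_end_col, threshold):
    """Percent DNA in tail for a background-subtracted comet image:
    pixels at or above `threshold` count as DNA; columns to the right of
    `head_end_col` are the tail (comets assumed to migrate rightward)."""
    dna = np.where(comet >= threshold, comet, 0.0)
    total = dna.sum()
    tail = dna[:, head_end_col:].sum()
    return 100.0 * tail / total if total > 0 else 0.0

img = np.zeros((5, 10))
img[:, 0:3] = 100.0     # bright head
img[:, 3:8] = 20.0      # dimmer tail
# With a low threshold the tail is counted: 500 / 2000 = 25%.
assert round(percent_dna_in_tail(img, 3, threshold=10.0), 1) == 25.0
# Raising the threshold above the tail intensity drives the estimate to 0,
# illustrating the threshold sensitivity reported in the abstract.
assert percent_dna_in_tail(img, 3, threshold=50.0) == 0.0
```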

  16. Improved Sectional Image Analysis Technique for Evaluating Fiber Orientations in Fiber-Reinforced Cement-Based Materials

    PubMed Central

    Lee, Bang Yeon; Kang, Su-Tae; Yun, Hae-Bum; Kim, Yun Yong

    2016-01-01

    The distribution of fiber orientation is an important factor in determining the mechanical properties of fiber-reinforced concrete. This study proposes a new image analysis technique for improving the evaluation accuracy of fiber orientation distribution in the sectional image of fiber-reinforced concrete. A series of tests on the accuracy of fiber detection and the estimation performance of fiber orientation was performed on artificial fiber images to assess the validity of the proposed technique. The validation test results showed that the proposed technique estimates the distribution of fiber orientation more accurately than the direct measurement of fiber orientation by image analysis. PMID:28787839

  17. Improved Sectional Image Analysis Technique for Evaluating Fiber Orientations in Fiber-Reinforced Cement-Based Materials.

    PubMed

    Lee, Bang Yeon; Kang, Su-Tae; Yun, Hae-Bum; Kim, Yun Yong

    2016-01-12

    The distribution of fiber orientation is an important factor in determining the mechanical properties of fiber-reinforced concrete. This study proposes a new image analysis technique for improving the evaluation accuracy of fiber orientation distribution in the sectional image of fiber-reinforced concrete. A series of tests on the accuracy of fiber detection and the estimation performance of fiber orientation was performed on artificial fiber images to assess the validity of the proposed technique. The validation test results showed that the proposed technique estimates the distribution of fiber orientation more accurately than the direct measurement of fiber orientation by image analysis.
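    For context, the "direct measurement" baseline that the proposed technique is compared against typically infers fiber inclination from cross-section geometry: a cylindrical fiber of diameter d cut at angle theta to the section normal appears as an ellipse with minor axis d and major axis d/cos(theta). A sketch of that baseline calculation (the function name and inputs are illustrative):

```python
import math

def fiber_angle_deg(major_axis, minor_axis):
    """Inclination (degrees) of a cylindrical fiber inferred from its
    elliptical cross-section: theta = arccos(minor / major)."""
    if not 0 < minor_axis <= major_axis:
        raise ValueError("expect 0 < minor <= major")
    return math.degrees(math.acos(minor_axis / major_axis))

assert fiber_angle_deg(1.0, 1.0) == 0.0              # circular section: fiber normal to cut
assert round(fiber_angle_deg(2.0, 1.0), 1) == 60.0   # cos(theta) = 0.5
```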

  18. Using a Structural Root System Model to Evaluate and Improve the Accuracy of Root Image Analysis Pipelines.

    PubMed

    Lobet, Guillaume; Koevoets, Iko T; Noll, Manuel; Meyer, Patrick E; Tocquin, Pierre; Pagès, Loïc; Périlleux, Claire

    2017-01-01

    Root system analysis is a complex task, often performed with fully automated image analysis pipelines. However, the outcome is rarely verified by ground-truth data, which might lead to underestimated biases. We have used a root model, ArchiSimple, to create a large and diverse library of ground-truth root system images (10,000). For each image, three levels of noise were created. This library was used to evaluate the accuracy and usefulness of several image descriptors classically used in root image analysis softwares. Our analysis highlighted that the accuracy of the different traits is strongly dependent on the quality of the images and the type, size, and complexity of the root systems analyzed. Our study also demonstrated that machine learning algorithms can be trained on a synthetic library to improve the estimation of several root system traits. Overall, our analysis is a call to caution when using automatic root image analysis tools. If a thorough calibration is not performed on the dataset of interest, unexpected errors might arise, especially for large and complex root images. To facilitate such calibration, both the image library and the different codes used in the study have been made available to the community.

  19. Analysis of myocardial perfusion from vasodilator stress computed tomography: does improvement in image quality by iterative reconstruction lead to improved diagnostic accuracy?

    PubMed

    Bhave, Nicole M; Mor-Avi, Victor; Kachenoura, Nadjia; Freed, Benjamin H; Vannier, Michael; Dill, Karin; Lang, Roberto M; Patel, Amit R

    2014-01-01

    Iterative reconstruction (IR) in cardiac CT has been shown to improve confidence of interpretation of noninvasive coronary CT angiography (CTA). We hypothesized that IR would also improve the quality of vasodilator stress coronary CT images acquired with low tube voltage to assess myocardial perfusion and the accuracy of the detection of perfusion abnormalities by using quantitative 3-dimensional (3D) analysis. We studied 39 consecutive patients referred for coronary CTA (256-slice scanner; Philips), who underwent additional imaging (100 kV, prospective gating) with regadenoson (0.4 mg; Astellas). Stress images were reconstructed with different algorithms: filtered back projection (FBP) and IR (iDose; Philips). Image quality was quantified by signal-to-noise and contrast-to-noise ratios in the blood pool and the myocardium. Then, FBP and separately IR images were analyzed with custom 3D analysis software to quantitatively detect perfusion defects. Accuracy of detection was compared with perfusion abnormalities predicted by coronary stenosis >50% on coronary CTA. Five patients with image artifacts were excluded. In the remaining 34 patients, both signal-to-noise and contrast-to-noise ratios increased with IR, indicating improvement in image quality compared with FBP. For 3D perfusion analysis, 10 patients with normal coronary arteries were used as a reference to correct for x-ray attenuation variations in normal myocardium. In the remaining 24 patients, reduced noise levels in the IR images compared with FBP resulted in tighter attenuation distribution and improved detection of perfusion abnormalities. IR significantly improves image quality on regadenoson stress CT images acquired with low tube voltage, leading to improved 3D quantitative evaluation of myocardial perfusion. Copyright © 2014 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
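    The image-quality metrics used above are straightforward to compute; a minimal sketch with made-up ROI numbers (the values are illustrative only):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean / std."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two regions, normalised by the
    standard deviation of a reference (noise) region."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)

blood = np.array([200.0, 210.0, 190.0, 200.0])        # blood-pool ROI (HU)
myocardium = np.array([100.0, 95.0, 105.0, 100.0])    # myocardial ROI (HU)
noisy_blood = np.array([200.0, 250.0, 150.0, 200.0])  # same mean, more noise

assert snr(blood) > snr(noisy_blood)   # noise reduction (e.g. IR) raises SNR
assert round(cnr(blood, myocardium, myocardium), 1) == 28.3
```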

  20. Improved Linear Contrast-Enhanced Ultrasound Imaging via Analysis of First-Order Speckle Statistics.

    PubMed

    Lowerison, Matthew R; Hague, M Nicole; Chambers, Ann F; Lacefield, James C

    2016-09-01

    The linear subtraction methods commonly used for preclinical contrast-enhanced imaging are susceptible to registration errors and motion artifacts that lead to reduced contrast-to-tissue ratios. To address this limitation, a new approach to linear contrast-enhanced ultrasound (CEUS) is proposed based on the analysis of the temporal dynamics of the speckle statistics during wash-in of a bolus injection of microbubbles. In the proposed method, the speckle signal is approximated as a mixture of temporally varying random processes, representing the microbubble signal, superimposed onto spatially heterogeneous tissue backscatter in multiple subvolumes within the region of interest. A wash-in curve is constructed by plotting the effective degrees of freedom (EDoFs) of the histogram of the speckle signal as a function of time. The proposed method is, therefore, named the EDoF method. The EDoF parameter is proportional to the shape parameter of the Nakagami distribution. Images acquired at 18 MHz from a murine mammary fat pad breast cancer xenograft model were processed using gold-standard nonlinear amplitude modulation, conventional linear subtraction, and the proposed statistical method. The EDoF method shows promise for improving the robustness of linear CEUS based on reduced frame-to-frame variability compared with the conventional linear subtraction time-intensity curves. Wash-in curve parameters estimated using the EDoF method also demonstrate higher correlation to nonlinear CEUS than the conventional linear method. The conceptual basis of the statistical method implies that EDoF wash-in curves may carry information about vascular complexity that could provide valuable new imaging biomarkers for cancer research.
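    Since the EDoF parameter is stated to be proportional to the Nakagami shape parameter, a moment-based estimate of that shape parameter conveys the core idea. This is a sketch using the standard inverse-normalised-variance estimator, not the authors' histogram-based EDoF computation.

```python
import numpy as np

def nakagami_shape(envelope):
    """Moment-based estimator of the Nakagami shape parameter m for an
    ultrasound envelope signal:  m = (E[X^2])^2 / Var(X^2).
    Fully developed speckle (Rayleigh statistics) gives m = 1; microbubble
    wash-in perturbs the speckle mixture and shifts the estimate."""
    x2 = np.asarray(envelope, dtype=float) ** 2
    return x2.mean() ** 2 / x2.var()

rng = np.random.default_rng(0)
rayleigh_speckle = rng.rayleigh(scale=1.0, size=200_000)
m = nakagami_shape(rayleigh_speckle)
assert 0.9 < m < 1.1   # consistent with fully developed speckle
```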

  1. Basics of image analysis

    USDA-ARS?s Scientific Manuscript database

    Hyperspectral imaging technology has emerged as a powerful tool for quality and safety inspection of food and agricultural products and in precision agriculture over the past decade. Image analysis is a critical step in implementing hyperspectral imaging technology; it is aimed to improve the qualit...

  2. Image quality analysis and improvement of Ladar reflective tomography for space object recognition

    NASA Astrophysics Data System (ADS)

    Wang, Jin-cheng; Zhou, Shi-wei; Shi, Liang; Hu, Yi-Hua; Wang, Yong

    2016-01-01

    Some problems in the application of Ladar reflective tomography (LRT) for space object recognition are studied in this work. An analytic target model is adopted to investigate the image reconstruction properties under a limited relative angle range, which is useful for verifying the target shape from an incomplete image, analyzing the shadowing effect of the target, and designing satellite payloads that resist recognition via the reflective tomography approach. We also propose an iterative maximum-likelihood method based on Bayesian theory, which effectively compresses the pulse width and greatly improves the image resolution of an incoherent LRT system without loss of signal-to-noise ratio.
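
    One classic iterative maximum-likelihood scheme of this kind is Richardson-Lucy deconvolution. The 1-D pure-Python sketch below shows how such an iteration re-concentrates a pulse blurred by a broad system response; it is an illustrative stand-in, not the authors' Bayesian method:

```python
def conv_same(x, kernel):
    """'Same'-size 1-D convolution with zero padding (odd kernel length)."""
    half = len(kernel) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(x):
                s += k * x[idx]
        out.append(s)
    return out

def richardson_lucy(measured, psf, iterations=50, eps=1e-12):
    """Iterative maximum-likelihood deconvolution (Richardson-Lucy).
    Assumes a symmetric psf, so the adjoint equals the forward operator."""
    estimate = [max(v, eps) for v in measured]   # nonnegative initial guess
    for _ in range(iterations):
        predicted = conv_same(estimate, psf)
        ratio = [m / max(p, eps) for m, p in zip(measured, predicted)]
        correction = conv_same(ratio, psf)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# A single narrow return pulse blurred by a broad system response.
psf = [0.25, 0.5, 0.25]
true = [0.0, 0.0, 0.0, 10.0, 0.0, 0.0, 0.0]
blurred = conv_same(true, psf)          # [0, 0, 2.5, 5.0, 2.5, 0, 0]
sharpened = richardson_lucy(blurred, psf)
# The iteration re-concentrates energy toward the original narrow pulse.
```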

  3. Non-rigid registration and non-local principal component analysis to improve electron microscopy spectrum images.

    PubMed

    Yankovich, Andrew B; Zhang, Chenyu; Oh, Albert; Slater, Thomas J A; Azough, Feridoon; Freer, Robert; Haigh, Sarah J; Willett, Rebecca; Voyles, Paul M

    2016-09-09

    Image registration and non-local Poisson principal component analysis (PCA) denoising improve the quality of characteristic x-ray (EDS) spectrum imaging of Ca-stabilized Nd2/3TiO3 acquired at atomic resolution in a scanning transmission electron microscope. Image registration based on the simultaneously acquired high angle annular dark field image significantly outperforms acquisition with a long pixel dwell time or drift correction using a reference image. Non-local Poisson PCA denoising reduces noise more strongly than conventional weighted PCA while preserving atomic structure more faithfully. The reliability of, and optimal internal parameters for, non-local Poisson PCA denoising of EDS spectrum images are assessed using tests on phantom data.

  5. Improving cervical region of interest by eliminating vaginal walls and cotton-swabs for automated image analysis

    NASA Astrophysics Data System (ADS)

    Venkataraman, Sankar; Li, Wenjing

    2008-03-01

    Image analysis for automated diagnosis of cervical cancer has attained high prominence in the last decade. Automated image analysis at all levels requires a basic segmentation of the region of interest (ROI) within a given image. The precision of the diagnosis is often reflected by the precision in detecting the initial region of interest, especially when features outside the ROI mimic those within it. The work described here discusses algorithms that are used to improve the cervical region of interest as a part of automated cervical image diagnosis. A vital visual aid in diagnosing cervical cancer is the aceto-whitening of the cervix after the application of acetic acid. Color and texture are used to segment acetowhite regions within the cervical ROI. Vaginal walls along with cotton swabs sometimes mimic these essential features, leading to several false positives. The work presented here is focused on detecting in-focus vaginal wall boundaries and then extrapolating them to exclude vaginal walls from the cervical ROI. In addition, a marker-controlled watershed segmentation used to detect and exclude cotton swabs from the cervical ROI is discussed. A dataset comprising 50 high-resolution images of the cervix acquired after 60 seconds of acetic acid application was used to test the algorithms. Out of the 50 images, 27 benefited from a new cervical ROI. Significant improvement in overall diagnosis was observed in these images, as false positives caused by features outside the actual ROI mimicking the acetowhite region were eliminated.

  6. Seismic tomography and MASW as tools for improving seismic imaging: uncertainty analysis.

    NASA Astrophysics Data System (ADS)

    Marciniak, Artur; Majdański, Mariusz

    2017-04-01

    In recent years, near-surface seismic imaging has become a topic of interest for geoengineers and geologists. In combination with other seismic methods such as MASW and travel-time tomography, seismic imaging can provide a more complete model of shallow structures together with an analysis of uncertainty. Often neglected, uncertainty analysis provides useful information for data interpretation and reduces the possibility of mistakes in model-based projects. Moreover, the application of different methods allows the acquired data to be fully exploited for in-depth interpretation, or for solving problems encountered in other surveys. Applying different processing methods to the same raw data allowed the authors to obtain a more accurate final result, with an uncertainty analysis based on a more complete dataset than in the classical survey scheme.

  7. Method for Removing Spectral Contaminants to Improve Analysis of Raman Imaging Data

    PubMed Central

    Zhang, Xun; Chen, Sheng; Ling, Zhe; Zhou, Xia; Ding, Da-Yong; Kim, Yoon Soo; Xu, Feng

    2017-01-01

    Spectral contaminants are inevitable during micro-Raman measurements. A key challenge is how to remove them from the original imaging data, since they can distort further results of data analysis. Here, we propose a method named "automatic pre-processing method for Raman imaging data set (APRI)", which includes the adaptive iteratively reweighted penalized least-squares (airPLS) algorithm and principal component analysis (PCA). It eliminates baseline drifts and cosmic spikes by using the spectral features themselves. The utility of APRI is illustrated by removing the spectral contaminants from a Raman imaging data set of a wood sample. In addition, APRI is computationally efficient, conceptually simple, and can potentially be extended to other spectroscopic methods, such as infrared (IR), nuclear magnetic resonance (NMR), and X-ray diffraction (XRD). With the help of our approach, a typical spectral analysis can be performed by a non-specialist user to obtain useful information from a spectroscopic imaging data set. PMID:28054587

  8. Improved triangular prism methods for fractal analysis of remotely sensed images

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Fung, Tung; Leung, Yee

    2016-05-01

    Feature extraction has been a major area of research in remote sensing, and the fractal feature is a natural characterization of complex objects across scales. Extending the modified triangular prism (MTP) method, we systematically discuss three factors closely related to the estimation of fractal dimensions of remotely sensed images: the (F1) number of steps, (F2) step size, and (F3) estimation accuracy of the facets' areas of the triangular prisms. Differing from existing improved algorithms that consider these factors separately, we take all factors into account simultaneously to construct three new algorithms, namely the modified eight-pixel algorithm, the four-corner MTP, and the moving-average MTP. Numerical experiments based on 4000 generated images show their superior performance over existing algorithms: our algorithms not only overcome the limitation on image size suffered by existing algorithms but also obtain similar average fractal dimensions with a smaller standard deviation, only 50% as large for images with high fractal dimensions. In real-life applications, our algorithms are more likely to obtain fractal dimensions within the theoretical range. Thus, the fractal nature uncovered by our algorithms is more reasonable in quantifying the complexity of remotely sensed images. Despite the similar performance of the three new algorithms, the moving-average MTP can mitigate the sensitivity of the MTP to noise and extreme values. Based on the numerical experiments and a real-life case study, we examine the effect of the three factors, (F1)-(F3), and demonstrate that they can be considered simultaneously to improve the performance of the MTP method.
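
    The basic (unmodified) triangular-prism idea behind these algorithms can be sketched as follows: compute the prism surface area at several step sizes and estimate D = 2 - slope on a log-log plot. This is a plain MTP illustration on toy grids, not the paper's improved algorithms:

```python
import math

def triangle_area(p, q, r):
    """Area of a triangle in 3-D via the cross-product norm."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def prism_area(z, step):
    """Total triangular-prism surface area of grid z at one step size."""
    n = len(z) - 1                    # grid is (n+1) x (n+1); step divides n
    total = 0.0
    for y in range(0, n, step):
        for x in range(0, n, step):
            a = (x, y, z[y][x])
            b = (x + step, y, z[y][x + step])
            c = (x + step, y + step, z[y + step][x + step])
            d = (x, y + step, z[y + step][x])
            e = (x + step / 2.0, y + step / 2.0,
                 (a[2] + b[2] + c[2] + d[2]) / 4.0)   # prism apex
            total += (triangle_area(a, b, e) + triangle_area(b, c, e)
                      + triangle_area(c, d, e) + triangle_area(d, a, e))
    return total

def fractal_dimension(z, steps=(1, 2, 4)):
    """MTP estimate: least-squares slope of log(area) vs log(step); D = 2 - slope."""
    xs = [math.log(s) for s in steps]
    ys = [math.log(prism_area(z, s)) for s in steps]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (yv - my) for x, yv in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 2.0 - slope

flat = [[5.0] * 9 for _ in range(9)]                            # smooth plane
rough = [[float((x + y) % 2) for x in range(9)] for y in range(9)]
print(fractal_dimension(flat))    # -> 2.0 (a plane has dimension 2)
print(fractal_dimension(rough))   # > 2.0 for a rough checkerboard surface
```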

  9. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    PubMed

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured, and a loss of information intrinsic to band selection and the use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application to the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples.
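
    The integral-image representation mentioned above makes any rectangular window sum an O(1) lookup, which is what enables fast selective texture extraction. A minimal sketch:

```python
def integral_image(img):
    """Summed-area table with a zero first row/column for easy indexing."""
    h, w = len(img), len(img[0])
    ii = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0.0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def region_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom][left:right] in O(1) from the summed-area table."""
    return (ii[bottom][right] - ii[top][right]
            - ii[bottom][left] + ii[top][left])

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(region_sum(ii, 0, 0, 3, 3))  # 45, the whole image
print(region_sum(ii, 1, 1, 3, 3))  # 5 + 6 + 8 + 9 = 28
```

    Window means (and, with a second table of squared values, variances) for texture features then cost four lookups per window regardless of window size.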

  10. Improved factor analysis of dynamic PET images to estimate arterial input function and tissue curves

    NASA Astrophysics Data System (ADS)

    Boutchko, Rostyslav; Mitra, Debasis; Pan, Hui; Jagust, William; Gullberg, Grant T.

    2015-03-01

    Factor analysis of dynamic structures (FADS) is a methodology for extracting time-activity curves (TACs) corresponding to different tissue types from noisy dynamic images. The challenges of FADS include long computation time and sensitivity to the initial guess, resulting in convergence to local minima far from the true solution. We propose a method of accelerating and stabilizing FADS application to sequences of dynamic PET images by adding a preliminary cluster analysis of the time-activity curves for individual voxels. We treat the temporal variation of individual voxel concentrations as a set of time-series and use a partial clustering analysis to identify the types of voxel TACs that are most functionally distinct from each other. These TACs provide a good initial guess for the temporal factors for subsequent FADS processing. Applying this approach to a set of single slices of dynamic 11C-PIB images of the brain allows identification of the arterial input function and two different tissue TACs that are likely to correspond to specific and non-specific tracer binding. These results enable us to perform direct classification of tissues based on their pharmacokinetic properties in dynamic PET without relying on a compartment-based kinetic model, identifying a reference region, or using any external method of estimating the arterial input function, as needed in some techniques.
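
    The preliminary cluster analysis of voxel TACs can be illustrated with a small deterministic k-means over time-series, whose centroids would then seed the temporal factors. This is a toy sketch, not the authors' partial clustering:

```python
def dist2(a, b):
    """Squared Euclidean distance between two time-activity curves."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(curves, k, iterations=20):
    """Plain k-means on time-activity curves. The first k curves seed the
    centroids, which keeps this toy example deterministic."""
    centroids = [list(c) for c in curves[:k]]
    labels = [0] * len(curves)
    for _ in range(iterations):
        labels = [min(range(k), key=lambda j: dist2(c, centroids[j]))
                  for c in curves]
        for j in range(k):
            members = [c for c, lab in zip(curves, labels) if lab == j]
            if members:
                centroids[j] = [sum(v) / len(members) for v in zip(*members)]
    return labels, centroids

# Two obvious kinetic patterns: fast wash-out (artery-like) vs slow uptake.
tacs = [
    [10, 6, 3, 1], [9, 5, 3, 1],     # wash-out
    [1, 3, 6, 9], [1, 4, 6, 10],     # uptake
]
labels, centroids = kmeans(tacs, 2)
print(labels)  # the two kinetic families separate into two clusters
```

    The resulting `centroids` play the role of the functionally distinct TAC types that initialize the temporal factors.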

  11. Analysis of explosion in enclosure based on improved method of images

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Guo, J.; Yao, X.; Chen, G.; Zhu, X.

    2017-03-01

    The aim of this paper is to present an improved method to calculate the pressure loading on walls during a confined explosion. When an explosion occurs inside an enclosure, reflected shock waves produce multiple pressure peaks at a given wall location, especially at the corners. The effects of confined blast loading may bring about more serious structural damage due to multiple shock reflections. An approach based on the method of images (MOI), first proposed by Chan to describe the trajectories of shock waves using mirror-reflection theory, is adopted to simplify internal explosion loading calculations. An improved method of images is then proposed that takes into account wall openings and oblique reflections, which cannot be handled by the standard MOI. The approach, validated using experimental data, provides a simplified and quick way to calculate the loading of a confined explosion. The results show that the peak overpressure tends to decline as the measurement point moves away from the center and increases sharply as it approaches the enclosure corners. The specific impulse increases from the center to the corners. The improved method is capable of predicting pressure-time history and impulse with an accuracy comparable to that of three-dimensional AUTODYN code predictions.
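
    The standard MOI replaces wall reflections by mirror-image sources; each image source contributes one reflected arrival. A minimal 2-D sketch for a rectangular enclosure with reflections up to a given order (the paper's improved handling of openings and oblique reflections is not reproduced, and the geometry values are made up):

```python
import math

def image_sources(src, room, order):
    """Mirror-image source positions for a 2-D rectangular enclosure.
    Standard pattern: x_img in {2*n*Lx + x, 2*n*Lx - x}, likewise for y."""
    (sx, sy), (lx, ly) = src, room
    images = []
    for nx in range(-order, order + 1):
        for xi in (2 * nx * lx + sx, 2 * nx * lx - sx):
            for ny in range(-order, order + 1):
                for yi in (2 * ny * ly + sy, 2 * ny * ly - sy):
                    images.append((xi, yi))
    return images

def arrival_times(src, receiver, room, order, c=340.0):
    """Sorted arrival times (s) of the direct pulse and wall reflections,
    assuming perfectly rigid walls and propagation speed c (m/s)."""
    rx, ry = receiver
    return sorted(math.hypot(xi - rx, yi - ry) / c
                  for xi, yi in image_sources(src, room, order))

room = (4.0, 3.0)       # enclosure dimensions (m), illustrative values
src = (1.0, 1.0)        # charge location
receiver = (3.0, 2.0)   # gauge location
times = arrival_times(src, receiver, room, order=1)
direct = math.hypot(3.0 - 1.0, 2.0 - 1.0) / 340.0
# The earliest arrival is the direct path; later ones are wall reflections.
```

    Summing a decaying pulse shape over all arrivals approximates the multi-peaked pressure-time history at the gauge.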

  12. An improved method for liver diseases detection by ultrasound image analysis.

    PubMed

    Owjimehr, Mehri; Danyali, Habibollah; Helfroush, Mohammad Sadegh

    2015-01-01

    Ultrasound imaging is a popular and noninvasive tool frequently used in the diagnosis of liver diseases. A system to characterize normal, fatty, and heterogeneous liver, using textural analysis of liver ultrasound images, is proposed in this paper. The proposed approach is able to select the optimal regions of interest of the liver images. These optimal regions of interest are analyzed by a two-level wavelet packet transform to extract statistical features, namely the median, standard deviation, and interquartile range. Discrimination between heterogeneous, fatty, and normal livers is performed in a hierarchical approach in the classification stage. This stage first classifies focal and diffuse livers and then distinguishes between fatty and normal ones. Support vector machine and k-nearest-neighbor classifiers have been used to classify the images into the three groups, and their performance is compared. The support vector machine classifier outperformed the k-nearest-neighbor classifier, attaining an overall accuracy of 97.9%, with a sensitivity of 100%, 100%, and 95.1% for the heterogeneous, fatty, and normal classes, respectively. The accuracy obtained by the proposed computer-aided diagnostic system is quite promising and suggests that the system can be used in a clinical environment to support radiologists and experts in the interpretation of liver diseases.
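
    The feature-extraction step can be sketched with one level of a Haar transform plus the three statistics named in the abstract. The paper uses a two-level wavelet packet transform on selected ROIs; this is a simplified illustration:

```python
import math
import statistics

def haar_level(signal):
    """One level of the orthonormal Haar wavelet transform."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def subband_features(band):
    """The statistical features used for classification in the abstract."""
    q1, _, q3 = statistics.quantiles(band, n=4)
    return {
        "median": statistics.median(band),
        "std": statistics.pstdev(band),
        "iqr": q3 - q1,
    }

row = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]  # toy intensity profile
approx, detail = haar_level(row)
print(subband_features(detail))  # constant gradient -> zero-spread detail band
```

    In the full system, such feature vectors feed the hierarchical SVM / k-NN classification described above.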

  13. Improved quality control of silicon wafers using novel off-line air pocket image analysis

    NASA Astrophysics Data System (ADS)

    Valley, John F.; Sanna, M. Cristina

    2014-08-01

    Air pockets (APK) occur randomly in Czochralski (Cz) grown silicon (Si) crystals and may become included in wafers after slicing and polishing. Previously the only APK of interest were those that intersected the front surface of the wafer and therefore directly impacted device yield. However mobile and other electronics have placed new demands on wafers to be internally APK-free for reasons of thermal management and packaging yield. We present a novel, recently patented, APK image processing technique and demonstrate the use of that technique, off-line, to improve quality control during wafer manufacturing.

  14. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work consists in estimating a DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  15. Image analysis techniques: Used to quantify and improve the precision of coatings testing results

    SciTech Connect

    Duncan, D.J.; Whetten, A.R.

    1993-12-31

    Coating evaluations often specify tests to measure performance characteristics rather than coating physical properties. These evaluation results are often very subjective. A new tool, Digital Video Image Analysis (DVIA), is successfully being used for two automotive evaluations: cyclic (scab) corrosion and the gravelometer (chip) test. An experimental design was carried out to evaluate variability and interactions among the instrumental factors. This analysis method has proved to be an order of magnitude more sensitive and more reproducible than the current evaluations. Coating characteristics that previously had no means of expression can now be described and measured. For example, DVIA chip evaluations can differentiate how much damage was done to the topcoat, the primer, and even the metal. DVIA, with or without magnification, has the capability to become the quantitative measuring tool for several other coating evaluations, such as T-bends, wedge bends, acid etch analysis, coating defects, and observing cure and defect formation or elimination over time.

  16. Laser speckle contrast analysis (LASCA) for blood flow visualization: improved image processing

    NASA Astrophysics Data System (ADS)

    Briers, J. D.; He, Xiao-Wei

    1998-06-01

    LASCA is a single-exposure, full-field technique for mapping flow velocities. The motion of the particles in a fluid flow causes fluctuations in the speckle pattern produced when laser light is scattered by the particles. The frequency of these intensity fluctuations increases with increasing velocity. These intensity fluctuations blur the speckle pattern and hence reduce its contrast. With a suitable integration time for the exposure, velocity can be mapped as speckle contrast. The equipment required is very simple. A CCD camera and a framegrabber capture an image of the area of interest. The local speckle contrast is computed and used to produce a false-color map of velocities. LASCA can be used to map capillary blood flow. The results are similar to those obtained by the scanning laser Doppler technique, but are obtained without the need to scan. This reduces the time needed for capturing the image from several minutes to a fraction of a second, already a clinical advantage. Until recently, however, processing the captured image did take several minutes. Improvements in the software have now reduced the processing time to one second, thus providing a truly real-time method for obtaining a map of capillary blood flow.
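
    The core computation is the local speckle contrast K = std/mean over a small sliding window. A minimal sketch:

```python
import statistics

def speckle_contrast_map(img, win=3):
    """Local speckle contrast K = std/mean in a win x win sliding window.
    Lower K means stronger blurring of the speckle during the exposure,
    i.e. higher flow velocity."""
    h, w = len(img), len(img[0])
    half = win // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(half, h - half):
        for x in range(half, w - half):
            window = [img[y + dy][x + dx]
                      for dy in range(-half, half + 1)
                      for dx in range(-half, half + 1)]
            out[y][x] = statistics.pstdev(window) / statistics.mean(window)
    return out

# A fully blurred (uniform) patch has zero contrast; a static speckle
# pattern keeps high contrast.
uniform = [[10.0] * 5 for _ in range(5)]
static = [[10.0 if (x + y) % 2 else 2.0 for x in range(5)] for y in range(5)]
k_uniform = speckle_contrast_map(uniform)
k_static = speckle_contrast_map(static)
print(k_uniform[2][2], k_static[2][2])
```

    Mapping K to a false-color scale over the whole frame gives the velocity map described above; computing this with vectorized window sums is what made the processing real-time.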

  17. Jeffries Matusita based mixed-measure for improved spectral matching in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Padma, S.; Sanjeevi, S.

    2014-10-01

    This paper proposes a novel hyperspectral matching technique by integrating the Jeffries-Matusita (JM) measure and the Spectral Angle Mapper (SAM) algorithm. The deterministic Spectral Angle Mapper and the stochastic Jeffries-Matusita measure are orthogonally projected using the sine and tangent functions to increase their spectral discrimination ability. The developed JM-SAM algorithm is implemented to effectively discriminate the landcover classes and cover types in hyperspectral images acquired by the PROBA/CHRIS and EO-1 Hyperion sensors. The reference spectra for the different land-cover classes were derived from each of these images. The performance of the proposed measure is compared with that of the individual SAM and JM approaches. From the values of the relative spectral discriminatory probability (RSDPB) and the relative spectral discriminatory entropy (RSDE), it is inferred that the hybrid JM-SAM approach yields higher spectral discriminability than the SAM and JM measures. Besides, using the improved JM-SAM algorithm for supervised classification of the images results in 92.9% and 91.47% accuracy, compared with 73.13%, 79.41%, and 85.69% for the minimum-distance, SAM, and JM measures. It is also inferred that the increased spectral discriminability of the JM-SAM measure is contributed by the JM distance. Further, the proposed JM-SAM measure is compatible with the varying spectral resolutions of PROBA/CHRIS (62 bands) and Hyperion (242 bands).
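
    The two ingredients of the hybrid measure are straightforward to compute: SAM is the angle between spectra, and JM is derived from the Bhattacharyya distance between class distributions. The sketch below combines them with a tangent projection for univariate Gaussian class statistics; it is a plausible simplified reading, not the paper's exact formulation:

```python
import math

def sam_angle(a, b):
    """Spectral Angle Mapper: angle between two spectra (radians)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def jm_distance(mu1, var1, mu2, var2):
    """Jeffries-Matusita distance between two univariate Gaussian classes,
    JM = 2 * (1 - exp(-B)), with B the Bhattacharyya distance."""
    b = ((mu1 - mu2) ** 2 / (4.0 * (var1 + var2))
         + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))
    return 2.0 * (1.0 - math.exp(-b))

def jm_sam(a, b, mu1, var1, mu2, var2):
    """Hybrid measure: JM scaled by tan(SAM); 0 for identical classes.
    Note the tangent grows unbounded for near-orthogonal spectra."""
    return jm_distance(mu1, var1, mu2, var2) * math.tan(sam_angle(a, b))

ref = [0.1, 0.4, 0.8, 0.6]
same = jm_sam(ref, ref, 0.5, 0.01, 0.5, 0.01)
print(same)  # identical spectrum and class statistics -> 0.0
```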

  18. Improved texture analysis for automatic detection of tuberculosis (TB) on chest radiographs with bone suppression images

    NASA Astrophysics Data System (ADS)

    Maduskar, Pragnya; Hogeweg, Laurens; Philipsen, Rick; Schalekamp, Steven; van Ginneken, Bram

    2013-03-01

    Computer-aided detection (CAD) of tuberculosis (TB) on chest radiographs (CXR) is challenging due to overlapping structures. Suppression of normal structures can reduce overprojection effects and can enhance the appearance of diffuse parenchymal abnormalities. In this work, we compare two CAD systems to detect textural abnormalities in chest radiographs of TB suspects. One CAD system was trained and tested on the original CXR and the other was trained and tested on bone suppression images (BSI). BSI were created using commercially available software (ClearRead 2.4, Riverain Medical). The CAD system is trained with 431 normal and 434 abnormal images with manually outlined abnormal regions. A subtlety rating (1-3) is assigned to each abnormal region, where 3 refers to obvious and 1 refers to subtle abnormalities. Performance is evaluated on normal and abnormal regions from an independent dataset of 900 images. These contain in total 454 normal and 1127 abnormal regions, which are divided into 3 subtlety categories containing 280, 527 and 320 abnormal regions, respectively. For normal regions, original/BSI CAD has an average abnormality score of 0.094 ± 0.027 / 0.085 ± 0.032 (p = 5.6 × 10^-19). For abnormal regions, the subtlety 1, 2, and 3 categories have average abnormality scores for original/BSI of 0.155 ± 0.073 / 0.156 ± 0.089 (p = 0.73), 0.194 ± 0.086 / 0.207 ± 0.101 (p = 5.7 × 10^-7), and 0.225 ± 0.119 / 0.247 ± 0.117 (p = 4.4 × 10^-7), respectively. Thus for normal regions, CAD scores decrease slightly when using BSI instead of the original images, and for abnormal regions, the scores increase slightly. We therefore conclude that the use of bone suppression results in slightly but significantly improved automated detection of textural abnormalities in chest radiographs.

  19. An Improved Method for Measuring Quantitative Resistance to the Wheat Pathogen Zymoseptoria tritici Using High-Throughput Automated Image Analysis.

    PubMed

    Stewart, Ethan L; Hagerty, Christina H; Mikaberidze, Alexey; Mundt, Christopher C; Zhong, Ziming; McDonald, Bruce A

    2016-07-01

    Zymoseptoria tritici causes Septoria tritici blotch (STB) on wheat. An improved method of quantifying STB symptoms was developed based on automated analysis of diseased leaf images made using a flatbed scanner. Naturally infected leaves (n = 949) sampled from fungicide-treated field plots comprising 39 wheat cultivars grown in Switzerland and 9 recombinant inbred lines (RIL) grown in Oregon were included in these analyses. Measures of quantitative resistance were percent leaf area covered by lesions, pycnidia size and gray value, and pycnidia density per leaf and lesion. These measures were obtained automatically with a batch-processing macro utilizing the image-processing software ImageJ. All phenotypes in both locations showed a continuous distribution, as expected for a quantitative trait. The trait distributions at both sites were largely overlapping even though the field and host environments were quite different. Cultivars and RILs could be assigned to two or more statistically different groups for each measured phenotype. Traditional visual assessments of field resistance were highly correlated with quantitative resistance measures based on image analysis for the Oregon RILs. These results show that automated image analysis provides a promising tool for assessing quantitative resistance to Z. tritici under field conditions.
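
    The "percent leaf area covered by lesions" phenotype reduces to thresholding and pixel counting. A minimal sketch with made-up gray values and an assumed threshold (the actual ImageJ macro and its calibration belong to the paper):

```python
def lesion_fraction(gray, threshold=128):
    """Fraction of pixels darker than a gray-value threshold,
    a stand-in for 'percent leaf area covered by lesions'.
    The threshold of 128 is an illustrative assumption."""
    pixels = [v for row in gray for v in row]
    lesioned = sum(1 for v in pixels if v < threshold)
    return lesioned / len(pixels)

# Toy 3 x 4 gray-scale leaf crop: bright healthy tissue, dark lesions.
leaf = [
    [200, 200, 90, 80],
    [200, 60, 70, 200],
    [200, 200, 200, 200],
]
print(round(100 * lesion_fraction(leaf), 1))  # 33.3 (percent of leaf area)
```

    Pycnidia density and size require an additional particle-analysis pass over the thresholded lesion mask, which the batch macro automates.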

  20. Improvement in Ventilation-Perfusion Mismatch after Bronchoscopic Lung Volume Reduction: Quantitative Image Analysis.

    PubMed

    Lee, Sei Won; Lee, Sang Min; Shin, So Youn; Park, Tai Sun; Oh, Sang Young; Kim, Namkug; Hong, Yoonki; Lee, Jae Seung; Oh, Yeon-Mok; Lee, Sang-Do; Seo, Joon Beom

    2017-10-01

    Purpose: To evaluate whether bronchoscopic lung volume reduction (BLVR) increases ventilation and therefore improves ventilation-perfusion (V/Q) mismatch. Materials and Methods: All patients provided written informed consent to be included in this study, which was approved by the Institutional Review Board (2013-0368) of Asan Medical Center. The physiologic changes that occurred after BLVR were measured by using xenon-enhanced ventilation and iodine-enhanced perfusion dual-energy computed tomography (CT). Patients with severe emphysema plus hyperinflation who did not respond to usual treatments were eligible. Pulmonary function tests, the 6-minute walking distance (6MWD) test, quality of life assessment, and dual-energy CT were performed at baseline and 3 months after BLVR. The effect of BLVR was assessed with repeated-measures analysis of variance. Results: Twenty-one patients were enrolled in this study (median age, 68 years; mean forced expiratory volume in 1 second [FEV1], 0.75 L ± 0.29). After BLVR, FEV1 (P < .001) and 6MWD (P = .002) improved significantly. Despite the reduction in lung volume (-0.39 L ± 0.44), both ventilation per voxel (P < .001) and total ventilation (P = .01) improved after BLVR. However, neither perfusion per voxel (P = .16) nor total perfusion (P = .49) changed significantly. Patients with lung volume reduction of 50% or greater had significantly better improvement in FEV1 (P = .02) and ventilation per voxel (P = .03) than patients with lung volume reduction of less than 50%. V/Q mismatch also improved after BLVR (P = .005), mainly owing to the improvement in ventilation. Conclusion: The dual-energy CT analyses showed that BLVR improved ventilation and V/Q mismatch. This increased lung efficiency may be the primary mechanism of improvement after BLVR, despite the reduction in lung volume. © RSNA, 2017. Online supplemental material is available for this article.

  1. Improvement of empirical OPC model robustness using full-chip aerial image analysis

    NASA Astrophysics Data System (ADS)

    Roessler, Thomas; Frankowsky, Beate; Toublan, Olivier

    2003-12-01

    With advanced CMOS technologies, model-based optical proximity correction (OPC) has become the most important aspect of post-tape-out data preparation for critical mask levels. While fabrication processes certainly remain the foundation of a qualified technology, the quality of OPC is increasingly moving into the focus of efforts to further improve yield. For a typical model-based OPC tool, the full OPC model consists of two distinct parts: (1) An aerial image part, based on a few, well-defined optical parameters of the lithography tool to describe the light intensity distribution in air at the wafer level and (2) an empirical part to model all other aspects of the pattern transfer, based on different black box modeling techniques such as kernel convolution or variable threshold modeling. Most importantly, the parameters for the empirical part are usually determined by fitting the model to proximity data measured from test structures. As a consequence, the robustness of the full OPC model for productive usage correlates directly with the extent to which these test structures provide a representative sampling of the circumstances encountered in an actual product layout. In order to determine the quality of this sampling, full-chip aerial image analyses are performed for various mask levels of a product design. A comparison of the characteristics of the light intensity distributions of this design with the corresponding information obtained from the test structures reveals configurations that are not well covered by the latter. This insight allows the definition of suitable additional test structures in order to improve the robustness of subsequent empirical OPC models.

  2. Temporal change analysis for improved tumor detection in dedicated CT breast imaging using affine and free-form deformation

    NASA Astrophysics Data System (ADS)

    Dey, Joyoni; O'Connor, J. Michael; Chen, Yu; Glick, Stephen J.

    2008-03-01

    Preliminary evidence has suggested that computerized tomographic (CT) imaging of the breast using a cone-beam, flat-panel detector system dedicated solely to breast imaging has potential for improving detection and diagnosis of early-stage breast cancer. Hypothetically, a powerful mechanism for assisting in early-stage breast cancer detection from annual screening breast CT studies would be to examine temporal changes in the breast from year to year. We hypothesize that 3D image registration could be used to automatically register breast CT volumes scanned at different times (e.g., yearly screening exams). This would allow radiologists to quickly visualize small changes in the breast that have developed during the period since the last screening CT scan, and to use this information to improve the diagnostic accuracy of early-stage breast cancer detection. To test our hypothesis, fresh mastectomy specimens were imaged with a flat-panel CT system at different time points, after moving the specimen to emulate the repositioning of the breast between yearly screening exams. Synthetic tumors were then digitally inserted into the second CT scan at clinically realistic locations (to emulate tumor growth from year to year). Affine and spline-based 3D image registration algorithms were implemented and applied to the CT reconstructions of the specimens acquired at different times. Subtraction of the registered image volumes was then performed to better analyze temporal change. Results from this study suggest that temporal change analysis in 3D breast CT can potentially be a powerful tool for improving the visualization of small lesion growth.

  3. Contrast-ultrasound dispersion imaging for prostate cancer localization by improved spatiotemporal similarity analysis.

    PubMed

    Kuenen, M P J; Saidov, T A; Wijkstra, H; Mischi, M

    2013-09-01

    Angiogenesis plays a major role in prostate cancer growth. Despite extensive research on blood perfusion imaging aimed at angiogenesis detection, the diagnosis of prostate cancer still requires systematic biopsies. This may be due to the complex relationship between angiogenesis and microvascular perfusion. Analysis of ultrasound-contrast-agent dispersion kinetics, determined by multipath trajectories in the microcirculation, may provide better characterization of the microvascular architecture. We propose a physical rationale for dispersion estimation based on an existing spatiotemporal similarity analysis. After an intravenous ultrasound-contrast-agent bolus injection, dispersion is estimated by coherence analysis among time-intensity curves measured at neighboring pixels. The accuracy of the method is increased by time-domain windowing and anisotropic spatial filtering for speckle regularization. The results in 12 patient data sets indicated superior agreement with histology (receiver operating characteristic curve area = 0.88) compared with those obtained by reported perfusion and dispersion analyses, providing a valuable contribution to prostate cancer localization. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
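
    The core measurement, coherence among time-intensity curves (TICs) of neighboring pixels, can be sketched as a mean neighbor-correlation map on a synthetic bolus video (the grid size, bolus shape, and noise level below are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy contrast video: a TIC at each pixel of an 8x8 grid over 50 frames.
t = np.linspace(0, 10, 50)
bolus = np.exp(-0.5 * ((t - 4.0) / 1.5) ** 2)    # idealized bolus passage
tics = bolus + 0.3 * rng.standard_normal((8, 8, 50))
tics[:4] += 0.7 * bolus                          # upper half: stronger inflow

def coherence_map(v):
    """Mean correlation of each pixel's TIC with its 4-neighbors' TICs."""
    h, w, _ = v.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            cs = []
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    cs.append(np.corrcoef(v[i, j], v[ni, nj])[0, 1])
            out[i, j] = np.mean(cs)
    return out

cmap = coherence_map(tics)
upper, lower = cmap[:4].mean(), cmap[4:].mean()
```

    Regions whose TICs rise and fall together score high coherence; the paper's windowing and anisotropic filtering refinements would act on `tics` before this step.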

  4. Improved spatial regression analysis of diffusion tensor imaging for lesion detection during longitudinal progression of multiple sclerosis in individual subjects.

    PubMed

    Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui

    2016-03-21

    Subject-specific longitudinal DTI study is vital for investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric, permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at the individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD method (iSPREAD) by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filter, which provides edge-preserving image smoothing through a nonlinear scale-space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method are improved substantially by the nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI.
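
    The edge-preserving smoothing step is classical Perona-Malik anisotropic diffusion; a minimal 2-D sketch (the paper uses the 3D analogue on DTI maps, and the image, noise level, and parameters here are invented) shows how it smooths flat regions while leaving a sharp edge intact:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy test image with one sharp vertical edge.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
noisy = img + 0.2 * rng.standard_normal(img.shape)

def perona_malik(u, n_iter=20, kappa=0.3, dt=0.2):
    """Anisotropic diffusion: diffuse where gradients are small,
    suppress diffusion across strong edges."""
    u = u.copy()
    for _ in range(n_iter):
        # forward differences to the four neighbors
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping diffusivity
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

smoothed = perona_malik(noisy)
noise_before = noisy[:, :16].std()
noise_after = smoothed[:, :16].std()
step = smoothed[:, 40:].mean() - smoothed[:, :24].mean()
```

    Noise away from the edge is reduced while the unit step survives, which is precisely the property that lets iSPREAD avoid the boundary blurring that limited the original SPREAD.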

  6. Rapid Prototyping of Hyperspectral Image Analysis Algorithms for Improved Invasive Species Decision Support Tools

    NASA Astrophysics Data System (ADS)

    Bruce, L. M.; Ball, J. E.; Evangilista, P.; Stohlgren, T. J.

    2006-12-01

    Nonnative invasive species adversely impact ecosystems, causing loss of native plant diversity, species extinction, and impairment of wildlife habitats. As a result, over the past decade federal and state agencies and nongovernmental organizations have begun to work more closely together to address the management of invasive species. In 2005, approximately $500 million was budgeted by U.S. federal agencies for the management of invasive species. Despite extensive expenditures, most of the methods used to detect and quantify the distribution of these invaders are ad hoc, at best. Likewise, decisions on the type of management techniques to be used, and evaluations of the success of these methods, are typically non-systematic. More efficient methods to detect or predict the occurrence of these species, as well as the incorporation of this knowledge into decision support systems, are greatly needed. In this project, rapid prototyping capabilities (RPC) are utilized for an invasive species application. More precisely, our recently developed analysis techniques for hyperspectral imagery are being prototyped for inclusion in the national Invasive Species Forecasting System (ISFS). The current ecological forecasting tools in ISFS will be compared to our hyperspectral-based invasives prediction algorithms to determine whether and how the newer algorithms enhance the performance of ISFS. The PIs have researched the use of remotely sensed multispectral and hyperspectral reflectance data for the detection of invasive vegetative species and, as a result, have designed, implemented, and benchmarked various target detection systems that utilize remotely sensed data. These systems have been designed to make decisions based on a variety of remotely sensed data, including high spectral/spatial resolution hyperspectral signatures (1000's of spectral bands, such as those measured using ASD handheld devices), moderate spectral/spatial resolution hyperspectral images (100's of spectral bands, such
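
    The abstract does not name the specific detectors, but a standard hyperspectral target-detection baseline of this kind is the spectral angle mapper; a toy sketch with invented spectra and an assumed threshold:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical reflectance signatures (200 bands): a "target" species
# spectrum and a background spectrum. Shapes are invented.
bands = 200
target = np.abs(np.sin(np.linspace(0, 6, bands))) + 0.2
background = np.abs(np.cos(np.linspace(0, 6, bands))) + 0.2

def spectral_angle(a, b):
    """Angle between two spectra; a small angle means similar material."""
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Simulated pixels: a few target pixels among background, plus noise.
pixels = np.tile(background, (50, 1))
pixels[:5] = target
pixels += 0.02 * rng.standard_normal(pixels.shape)

angles = np.array([spectral_angle(p, target) for p in pixels])
detected = angles < 0.2                    # hypothetical decision threshold
```

    The angle is invariant to overall illumination scaling, which is one reason this family of detectors is popular for field-collected reflectance data.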

  7. An improved parameter estimation scheme for image modification detection based on DCT coefficient analysis.

    PubMed

    Yu, Liyang; Han, Qi; Niu, Xiamu; Yiu, S M; Fang, Junbin; Zhang, Ye

    2016-02-01

    Most of the existing image modification detection methods based on DCT coefficient analysis model the distribution of DCT coefficients as a mixture of a modified and an unchanged component. To separate the two components, two parameters, the primary quantization step, Q1, and the portion of the modified region, α, have to be estimated, and more accurate estimations of α and Q1 lead to better detection and localization results. Existing methods estimate α and Q1 in a completely blind manner, without considering the characteristics of the mixture model and the constraints to which α should conform. In this paper, we propose a more effective scheme for estimating α and Q1, based on the observations that the curves on the surface of the likelihood function corresponding to the mixture model are largely smooth and that α can take values only in a discrete set. We conduct extensive experiments to evaluate the proposed method, and the experimental results confirm its efficacy.
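
    The exploitable structure, a smooth likelihood in α restricted to a discrete set, can be sketched with a simplified two-component mixture (Gaussian components stand in for the paper's DCT coefficient model; all parameters are invented):

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy mixture: a fraction alpha of "modified" coefficients with a wider
# spread, mixed with "unchanged" ones.
true_alpha = 0.3
n = 5000
modified = rng.normal(0, 3.0, int(true_alpha * n))
unchanged = rng.normal(0, 1.0, n - len(modified))
data = np.concatenate([modified, unchanged])

def log_likelihood(alpha, x, s_mod=3.0, s_unch=1.0):
    pdf = lambda v, s: np.exp(-v**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
    return np.log(alpha * pdf(x, s_mod) + (1 - alpha) * pdf(x, s_unch)).sum()

# Search alpha only over a discrete grid, relying on the smoothness of
# the likelihood in alpha rather than fully blind estimation.
grid = np.arange(0.05, 1.0, 0.05)
alpha_hat = grid[np.argmax([log_likelihood(a, data) for a in grid])]
```

    In the actual scheme Q1 parameterizes the modified component and is searched jointly; the sketch only illustrates why constraining α to a discrete set makes the search cheap and stable.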

  8. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

  9. Novel analysis for improved validity in semi-quantitative 2-deoxyglucose autoradiographic imaging.

    PubMed

    Dawson, Neil; Ferrington, Linda; Olverman, Henry J; Kelly, Paul A T

    2008-10-30

    The original [(14)C]-2-deoxyglucose autoradiographic imaging technique allows for the quantitative determination of local cerebral glucose utilisation (LCMRglu) [Sokoloff L, Reivich M, Kennedy C, Desrosiers M, Patlak C, Pettigrew K, et al. The 2-deoxyglucose-C-14 method for measurement of local cerebral glucose utilisation: theory, procedure and normal values in conscious and anesthetized albino rats. J Neurochem 1977;28:897-916]. The range of applications to which the quantitative method can be readily applied is limited, however, by the requirement for the intermittent measurement of arterial radiotracer and glucose concentrations throughout the experiment, via intravascular cannulation. Some studies have applied a modified, semi-quantitative approach to estimate LCMRglu while circumventing the requirement for intravascular cannulation [Kelly S, Bieneman A, Uney J, McCulloch J. Cerebral glucose utilization in transgenic mice over-expressing heat shock protein 70 is altered by dizocilpine. Eur J Neurosci 2002;15(6):945-52; Jordan GR, McCulloch J, Shahid M, Hill DR, Henry B, Horsburgh K. Regionally selective and dose-dependent effects of the ampakines Org 26576 and Org 24448 on local cerebral glucose utilisation in the mouse as assessed by C-14-2-deoxyglucose autoradiography. Neuropharmacology 2005;49(2):254-64]. In this method only a terminal blood sample is collected for the determination of plasma [(14)C] and [glucose], and the rate of LCMRglu in each brain region of interest (RoI) is estimated by comparing the [(14)C] concentration in each region relative to a selected control region, which is proposed to demonstrate metabolic stability between the experimental groups. Here we show that the semi-quantitative method has reduced validity in the measurement of LCMRglu as compared to the quantitative method, and that the validity of this technique is further compromised by the inability of the methods applied within the analysis to appropriately determine metabolic
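
    The semi-quantitative ratio approach, and the caveat the abstract raises about it, reduce to a few lines (region names and concentration values below are invented):

```python
import numpy as np  # imported for consistency with surrounding sketches

# Toy [14C] concentrations in regions of interest for two groups.
# Semi-quantitative approach: express each region relative to a chosen
# "metabolically stable" control region instead of arterial sampling.
control = {"ctrl_region": 100.0, "roi_a": 150.0, "roi_b": 80.0}
treated = {"ctrl_region": 110.0, "roi_a": 198.0, "roi_b": 88.0}

def relative_uptake(animal, ref="ctrl_region"):
    return {r: c / animal[ref] for r, c in animal.items() if r != ref}

rc, rt = relative_uptake(control), relative_uptake(treated)
# The ratio cancels global scaling, but any real metabolic change in the
# control region itself silently biases every other ROI, which is the
# validity problem the study examines.
```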

  10. Computerized multiple image analysis on mammograms: performance improvement of nipple identification for registration of multiple views using texture convergence analyses

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Paramagul, Chintana

    2004-05-01

    Automated registration of multiple mammograms for CAD depends on accurate nipple identification. We developed two new image analysis techniques based on geometric and texture convergence analyses to improve the performance of our previously developed nipple identification method. A gradient-based algorithm is used to automatically track the breast boundary. The nipple search region along the boundary is then defined by geometric convergence analysis of the breast shape. Three nipple candidates are identified by detecting the changes along the gray level profiles inside and outside the boundary and the changes in the boundary direction. A texture orientation-field analysis method is developed to estimate the fourth nipple candidate based on the convergence of the tissue texture pattern towards the nipple. The final nipple location is determined from the four nipple candidates by a confidence analysis. Our training and test data sets consisted of 419 and 368 randomly selected mammograms, respectively. The nipple location identified on each image by an experienced radiologist was used as the ground truth. For 118 of the training and 70 of the test images, the radiologist could not positively identify the nipple, but provided an estimate of its location. These were referred to as invisible nipple images. In the training data set, 89.37% (269/301) of the visible nipples and 81.36% (96/118) of the invisible nipples could be detected within 1 cm of the truth. In the test data set, 92.28% (275/298) of the visible nipples and 67.14% (47/70) of the invisible nipples were identified within 1 cm of the truth. In comparison, our previous nipple identification method without using the two convergence analysis techniques detected 82.39% (248/301), 77.12% (91/118), 89.93% (268/298) and 54.29% (38/70) of the nipples within 1 cm of the truth for the visible and invisible nipples in the training and test sets, respectively. The results indicate that the nipple on mammograms can be

  11. Application of principal component analysis for improvement of X-ray fluorescence images obtained by polycapillary-based micro-XRF technique

    NASA Astrophysics Data System (ADS)

    Aida, S.; Matsuno, T.; Hasegawa, T.; Tsuji, K.

    2017-07-01

    Micro X-ray fluorescence (micro-XRF) analysis is performed repeatedly across a sample as a means of producing elemental maps. In some cases, however, the XRF images of trace elements that are obtained are not clear due to high background intensity. To solve this problem, we applied principal component analysis (PCA) to the XRF spectra, focusing on improving the quality of the XRF images. XRF images of the dried residue of a standard solution on a glass substrate were taken, and the XRF intensities for the dried residue were analyzed before and after PCA. Standard deviations of XRF intensities in the PCA-filtered images were reduced, yielding images with clearer contrast. This improvement of the XRF images was effective in cases where the XRF intensity was weak.
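
    PCA filtering of a spectrum stack amounts to truncating an SVD: keep the leading components that carry the correlated spectral signal, discard the noise subspace. A toy sketch with two invented emission peaks:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy micro-XRF map: at each of 100 pixels, a 64-channel spectrum that is
# a mixture of two component spectra plus noise.
chans = np.arange(64)
comp1 = np.exp(-0.5 * ((chans - 20) / 2.0) ** 2)   # element A peak
comp2 = np.exp(-0.5 * ((chans - 45) / 3.0) ** 2)   # element B peak
abund = rng.random((100, 2))
clean = abund @ np.vstack([comp1, comp2])
spectra = clean + 0.05 * rng.standard_normal(clean.shape)

# PCA filtering: keep the leading k components of the centered stack.
mean = spectra.mean(axis=0)
u, s, vt = np.linalg.svd(spectra - mean, full_matrices=False)
k = 2
filtered = mean + (u[:, :k] * s[:k]) @ vt[:k]

err_raw = np.abs(spectra - clean).mean()
err_pca = np.abs(filtered - clean).mean()
```

    Elemental maps built from `filtered` inherit the reduced per-pixel variance, which is the clearer image contrast the abstract reports; choosing `k` is the usual PCA model-selection question.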

  12. Improvement of CAT scanned images

    NASA Technical Reports Server (NTRS)

    Roberts, E., Jr.

    1980-01-01

    Digital enhancement procedure improves definition of images. Tomogram is generated from large number of X-ray beams. Beams are collimated and small in diameter. Scanning device passes beams sequentially through human subject at many different angles. Battery of transducers opposite subject senses attenuated signals. Signals are transmitted to computer where they are used in construction of image on transverse plane through body.

  14. Improvements in the analysis of diffraction phenomena by means of digital images

    NASA Astrophysics Data System (ADS)

    Ramil, A.; López, A. J.; Vincitorio, F.

    2007-11-01

    We introduce a simple methodology employing digital photography and image processing techniques to do a quantitative study of diffraction. To enhance the range of intensities recorded by a CCD camera without saturation, digital images of the diffraction patterns taken at different exposure times are combined pixel by pixel, and the measured values of the light intensity are fitted to theoretical curves. Diffraction by a single slit, a double slit, and a circular hole were analyzed to obtain quantitative results and demonstrate that the methodology is suitable for student laboratories.
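
    The multi-exposure combination can be sketched with an idealized sensor that scales linearly with exposure time and clips at saturation (the sinc² slit pattern, exposure times, and full-well value are illustrative):

```python
import numpy as np

# Toy diffraction pattern with a large dynamic range (sinc^2 of a slit).
x = np.linspace(-20, 20, 400)
true_i = np.sinc(x) ** 2            # central peak ~1, side lobes << 1

def camera(intensity, t_exp, full_well=1.0):
    """Idealized sensor: response scales with exposure, clips when full."""
    return np.minimum(intensity * t_exp, full_well)

# Short exposure resolves the peak; long exposures resolve faint lobes.
exposures = [1.0, 10.0, 100.0]
shots = [camera(true_i, t) for t in exposures]

# Pixel-by-pixel merge: use the longest unsaturated exposure, then divide
# by its time to recover relative intensity.
merged = np.empty_like(true_i)
for px in range(len(x)):
    for t, shot in zip(reversed(exposures), reversed(shots)):
        if shot[px] < 1.0:          # not saturated at this exposure
            merged[px] = shot[px] / t
            break
    else:
        merged[px] = shots[0][px] / exposures[0]
```

    With a real CCD one would also fit the merged profile against the theoretical curve, as the abstract describes; the merge step alone already extends the usable intensity range by the ratio of longest to shortest exposure.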

  15. Improved defect analysis of Gallium Arsenide solar cells using image enhancement

    NASA Technical Reports Server (NTRS)

    Kilmer, Louis C.; Honsberg, Christiana; Barnett, Allen M.; Phillips, James E.

    1989-01-01

    A new technique has been developed to capture, digitize, and enhance the image of light emission from a forward biased direct bandgap solar cell. Since the forward biased light emission from a direct bandgap solar cell has been shown to display both qualitative and quantitative information about the solar cell's performance and its defects, signal processing techniques can be applied to the light emission images to identify and analyze shunt diodes. Shunt diodes are of particular importance because they have been found to be the type of defect which is likely to cause failure in a GaAs solar cell. The presence of a shunt diode can be detected from the light emission by using a photodetector to measure the quantity of light emitted at various current densities. However, to analyze how the shunt diodes affect the quality of the solar cell the pattern of the light emission must be studied. With the use of image enhancement routines, the light emission can be studied at low light emission levels where shunt diode effects are dominant.

  16. Improved Digital Image Correlation method

    NASA Astrophysics Data System (ADS)

    Mudassar, Asloob Ahmad; Butt, Saira

    2016-12-01

    Digital Image Correlation (DIC) is a powerful technique used to correlate two image segments and determine the similarity between them. A correlation image is formed which exhibits a peak known as the correlation peak. If the two image segments are identical, the peak is an auto-correlation peak; otherwise it is a cross-correlation peak. The location of the peak in a correlation image gives the relative displacement between the two image segments. The use of DIC for in-plane displacement and deformation measurements in Electronic Speckle Photography (ESP) is well known. In ESP, two speckle images are correlated using DIC and the relative displacement is measured. We present a background review of ESP and describe a DIC-based technique for improved relative measurements, which we regard as an improved DIC method. Simulation and experimental results reveal that the proposed improved-DIC method is superior to the conventional DIC method in two respects: resolution and the availability of a reference position in displacement measurements.
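
    The conventional DIC step that the paper builds on, locating the cross-correlation peak to read off a displacement, can be sketched with FFT-based correlation on a synthetic speckle pair (the speckle field and shift are invented):

```python
import numpy as np

rng = np.random.default_rng(6)

# Reference speckle image and a copy shifted by a known displacement.
ref = rng.random((128, 128))
dy, dx = 7, -3
moved = np.roll(ref, (dy, dx), axis=(0, 1))

# Cross-correlation via FFT; the peak location gives the displacement.
f = np.fft.fft2(ref)
g = np.fft.fft2(moved)
xcorr = np.fft.ifft2(np.conj(f) * g).real
peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)

# Map wrapped peak indices back to signed shifts.
shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]
```

    This integer-pixel estimate is where conventional DIC stops; sub-pixel interpolation around the peak is the usual route to the finer resolution the improved method targets.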

  17. Improved structures of maximally decimated directional filter banks for spatial image analysis.

    PubMed

    Park, Sang-Il; Smith, Mark J T; Mersereau, Russell M

    2004-11-01

    This paper introduces an improved structure for directional filter banks (DFBs) that preserves the visual information in the subband domain. The new structure achieves this outcome while preserving both the efficient polyphase implementation and the exact reconstruction property. The paper outlines a step-by-step framework in which to examine the DFB, and within this framework discusses how, through the insertion of post-sampling matrices, visual distortions can be removed. In addition to the efficient tree structure, attention is given to the form and design of efficient linear phase filters. Most notably, linear phase IIR prototype filters are presented, together with the design details. These filters can enable the DFB to have more than a three-fold improvement in complexity reduction over quadrature mirror filters (QMFs).

  18. Using an Image Fusion Methodology to Improve Efficiency and Traceability of Posterior Pole Vessel Analysis by ROPtool.

    PubMed

    Prakalapakorn, Sasapin G; Vickers, Laura A; Estrada, Rolando; Freedman, Sharon F; Tomasi, Carlo; Farsiu, Sina; Wallace, David K

    2017-01-01

    The diagnosis of plus disease in retinopathy of prematurity (ROP) largely determines the need for treatment; however, this diagnosis is subjective. To make the diagnosis of plus disease more objective, semi-automated computer programs (e.g., ROPtool) have been created to quantify vascular dilation and tortuosity. ROPtool can accurately analyze blood vessels only in images with very good quality, but many still images captured by indirect ophthalmoscopy have insufficient image quality for ROPtool analysis. Our aim was to evaluate the ability of an image fusion methodology (robust mosaicing) to increase the efficiency and traceability of posterior pole vessel analysis by ROPtool. We retrospectively reviewed video indirect ophthalmoscopy images acquired during routine ROP examinations and selected the best unenhanced still image from the video for each infant. Robust mosaicing was used to create an enhanced mosaic image from the same video for each eye. We evaluated the time required for ROPtool analysis as well as ROPtool's ability to analyze vessels in enhanced vs. unenhanced images. We included 39 eyes of 39 infants. ROPtool analysis was faster (125 vs. 152 seconds; p=0.02) in enhanced than in unenhanced images. ROPtool was able to trace retinal vessels in more quadrants (143/156, 92% vs. 115/156, 74%; p=0.16) and in more images overall (38/39, 97% vs. 34/39, 87%; p=0.07) for enhanced mosaic than for unenhanced still images. Retinal image enhancement using robust mosaicing advances efforts to automate grading of posterior pole disease in ROP.

  19. Applying Chemical Imaging Analysis to Improve Our Understanding of Cold Cloud Formation

    NASA Astrophysics Data System (ADS)

    Laskin, A.; Knopf, D. A.; Wang, B.; Alpert, P. A.; Roedel, T.; Gilles, M. K.; Moffet, R.; Tivanski, A.

    2012-12-01

    The impact that atmospheric ice nucleation has on the global radiation budget is one of the least understood problems in atmospheric sciences. This is in part due to the incomplete understanding of various ice nucleation pathways that lead to ice crystal formation from pre-existing aerosol particles. Studies investigating the ice nucleation propensity of laboratory generated particles indicate that individual particle types are highly selective in their ice nucleating efficiency. This description of heterogeneous ice nucleation would present a challenge when applying to the atmosphere which contains a complex mixture of particles. Here, we employ a combination of micro-spectroscopic and optical single particle analytical methods to relate particle physical and chemical properties with observed water uptake and ice nucleation. Field-collected particles from urban environments impacted by anthropogenic and marine emissions and aging processes are investigated. Single particle characterization is provided by computer controlled scanning electron microscopy with energy dispersive analysis of X-rays (CCSEM/EDX) and scanning transmission X-ray microscopy with near edge X-ray absorption fine structure spectroscopy (STXM/NEXAFS). A particle-on-substrate approach coupled to a vapor controlled cooling-stage and a microscope system is applied to determine the onsets of water uptake and ice nucleation including immersion freezing and deposition ice nucleation as a function of temperature (T) as low as 200 K and relative humidity (RH) up to water saturation. We observe for urban aerosol particles that for T > 230 K the oxidation level affects initial water uptake and that subsequent immersion freezing depends on particle mixing state, e.g. by the presence of insoluble particles. For T < 230 K the particles initiate deposition ice nucleation well below the homogeneous freezing limit. 
Particles collected throughout one day for similar meteorological conditions show very similar

  20. Improving image segmentation performance and quantitative analysis via a computer-aided grading methodology for optical coherence tomography retinal image analysis

    PubMed Central

    Cabrera Debuc, Delia; Salinas, Harry M.; Ranganathan, Sudarshan; Tátrai, Erika; Gao, Wei; Shen, Meixiao; Wang, Jianhua; Somfai, Gábor M.; Puliafito, Carmen A.

    2010-01-01

    We demonstrate quantitative analysis and error correction of optical coherence tomography (OCT) retinal images by using a custom-built, computer-aided grading methodology. A total of 60 Stratus OCT (Carl Zeiss Meditec, Dublin, California) B-scans collected from ten normal healthy eyes are analyzed by two independent graders. The average retinal thickness per macular region is compared with the automated Stratus OCT results. Intergrader and intragrader reproducibility is calculated by Bland-Altman plots of the mean difference between both gradings and by Pearson correlation coefficients. In addition, the correlation between Stratus OCT and our methodology-derived thickness is also presented. The mean thickness difference between Stratus OCT and our methodology is 6.53 μm and 26.71 μm when using the inner segment/outer segment (IS/OS) junction and outer segment/retinal pigment epithelium (OS/RPE) junction as the outer retinal border, respectively. Overall, the median of the thickness differences as a percentage of the mean thickness is less than 1% and 2% for the intragrader and intergrader reproducibility test, respectively. The measurement accuracy range of the OCT retinal image analysis (OCTRIMA) algorithm is between 0.27 and 1.47 μm and 0.6 and 1.76 μm for the intragrader and intergrader reproducibility tests, respectively. Pearson correlation coefficients demonstrate R² > 0.98 for all Early Treatment Diabetic Retinopathy Study (ETDRS) regions. Our methodology facilitates a more robust and localized quantification of the retinal structure in normal healthy controls and patients with clinically significant intraretinal features. PMID:20799817
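
    The two agreement statistics used here, Bland-Altman bias with limits of agreement and the Pearson coefficient, are easy to compute directly (the thickness values below are simulated, with a built-in ~6.5 µm systematic offset echoing the reported IS/OS difference):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy thickness measurements (µm): automated tool vs. corrected grading.
corrected = 250 + 20 * rng.standard_normal(60)
automated = corrected + 6.5 + 1.0 * rng.standard_normal(60)

# Bland-Altman: mean difference (bias) and 95% limits of agreement.
diff = automated - corrected
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# Pearson correlation between the two gradings.
r = np.corrcoef(automated, corrected)[0, 1]
```

    A high Pearson r with a nonzero bias, as in this sketch, is exactly why Bland-Altman analysis is reported alongside correlation: correlation alone cannot reveal a systematic offset between methods.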

  1. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm.

    PubMed

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-08-05

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to search the rotation angle exhaustively, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to suit such situations. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on methods of searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms.
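
    A minimal PSO over a 1-D angle, with an invented objective whose ripples mimic the "severe local convergence" landscape the abstract describes (the real objective would measure similarity between a trial decryption and plaintext statistics):

```python
import numpy as np

rng = np.random.default_rng(8)

true_angle = 1.234                      # hypothetical secret rotation angle
def objective(theta):
    # many local ripples plus one deep global well around the true key
    return (-np.cos(20 * (theta - true_angle))
            - 5 * np.exp(-((theta - true_angle) ** 2) / 0.1))

# Minimal PSO over theta in [0, 2*pi).
n, iters = 30, 200
pos = np.linspace(0, 2 * np.pi, n, endpoint=False)   # spread-out swarm
vel = np.zeros(n)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 2 * np.pi)
    vals = np.array([objective(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()]
```

    The paper's improvements to standard PSO are not reproduced here; the sketch only shows why a population-based search can skip over the ripples that trap a purely local method.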

  2. Phenotype analysis of early risk factors from electronic medical records improves image-derived diagnostic classifiers for optic nerve pathology

    NASA Astrophysics Data System (ADS)

    Chaganti, Shikha; Nabar, Kunal P.; Nelson, Katrina M.; Mawn, Louise A.; Landman, Bennett A.

    2017-03-01

    We examine imaging and electronic medical records (EMR) of 588 subjects over five major disease groups that affect optic nerve function. An objective evaluation of the role of imaging and EMR data in diagnosis of these conditions would improve understanding of these diseases and help in early intervention. We developed an automated image processing pipeline that identifies the orbital structures within the human eyes from computed tomography (CT) scans, calculates structural size, and performs volume measurements. We customized the EMR-based phenome-wide association study (PheWAS) to derive diagnostic EMR phenotypes that occur at least two years prior to the onset of the conditions of interest from a separate cohort of 28,411 ophthalmology patients. We used random forest classifiers to evaluate the predictive power of image-derived markers, EMR phenotypes, and clinical visual assessments in identifying disease cohorts from a control group of 763 patients without optic nerve disease. Image-derived markers showed more predictive power than clinical visual assessments or EMR phenotypes. However, the addition of EMR phenotypes to the imaging markers improves the classification accuracy against controls: the AUC improved from 0.67 to 0.88 for glaucoma, 0.73 to 0.78 for intrinsic optic nerve disease, 0.72 to 0.76 for optic nerve edema, 0.72 to 0.77 for orbital inflammation, and 0.81 to 0.85 for thyroid eye disease. This study illustrates the importance of diagnostic context for interpretation of image-derived markers and the proposed PheWAS technique provides a flexible approach for learning salient features of patient history and incorporating these data into traditional machine learning analyses.
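
    The headline comparison, AUC with imaging features alone versus imaging plus EMR phenotypes, can be sketched with a rank-based (Mann-Whitney) AUC on simulated classifier scores; the score distributions below are invented so that the combined model separates better, as the study found:

```python
import numpy as np

rng = np.random.default_rng(9)

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    sp = np.asarray(scores_pos)[:, None]
    sn = np.asarray(scores_neg)[None, :]
    return (sp > sn).mean() + 0.5 * (sp == sn).mean()

# Toy classifier scores for disease vs. control cohorts.
disease_img = rng.normal(0.6, 0.2, 200)     # imaging-only model
control_img = rng.normal(0.45, 0.2, 200)
disease_both = rng.normal(0.7, 0.2, 200)    # imaging + EMR phenotypes
control_both = rng.normal(0.4, 0.2, 200)

auc_img = auc(disease_img, control_img)
auc_both = auc(disease_both, control_both)
```

    Computing AUC this way, as the probability that a random diseased subject outscores a random control, makes the reported jumps (e.g., 0.67 to 0.88 for glaucoma) directly interpretable.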

  3. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  4. Signal-to-noise ratio improvement in dynamic contrast-enhanced CT and MR imaging with automated principal component analysis filtering.

    PubMed

    Balvay, Daniel; Kachenoura, Nadjia; Espinoza, Sophie; Thomassin-Naggara, Isabelle; Fournier, Laure S; Clement, Olivier; Cuenod, Charles-André

    2011-02-01

    To develop a new automated filtering technique and to evaluate its ability to compensate for the known low contrast-to-noise ratio (CNR) in dynamic contrast material-enhanced (DCE) magnetic resonance (MR) and computed tomographic (CT) data, without substantial loss of information. Clinical data acquisition for this study was approved by the institutional review board. Principal component analysis (PCA) was combined with the fraction of residual information (FRI) criterion to optimize the balance between noise reduction efficiency and information conservation. The PCA FRI filter was evaluated in 15 DCE MR imaging data sets and 15 DCE CT data sets by two radiologists who performed visual analysis and quantitative assessment of noise reduction after filtering. Visual evaluation revealed a substantial noise reduction while conserving information in 90% of MR imaging cases and 87% of CT cases for image analysis and in 93% of MR imaging cases and 90% of CT cases for signal analysis. Efficient denoising enabled improvement in structure characterization in 60% of MR imaging cases and 77% of CT cases. After filtering, CNR was improved by 2.06 ± 0.89 for MR imaging (P < .01) and by 5.72 ± 4.82 for CT (P < .01). This PCA FRI filter demonstrates noise reduction efficiency and information conservation for both DCE MR data and DCE CT data. FRI analysis enabled automated optimization of the parameters for the PCA filter and provided an optional visual control of residual information losses. The robust and fast PCA FRI filter may improve qualitative or quantitative analysis of DCE imaging in a clinical context. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.10100231/-/DC1. © RSNA, 2010
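    The core PCA filtering step can be sketched on a synthetic dynamic series as follows. This is a sketch, not the paper's implementation: the number of retained components `k` is chosen by hand here, where the paper's FRI criterion would tune it automatically.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 40, 16, 16
t = np.linspace(0, 1, T)
# Synthetic DCE series: a smooth enhancement curve shared by all pixels, plus noise.
clean = np.outer(1 - np.exp(-5 * t), rng.uniform(0.5, 1.5, H * W)).reshape(T, H, W)
noisy = clean + rng.normal(scale=0.3, size=clean.shape)

# Unfold to (pixels, time), centre, and keep the first k principal components.
X = noisy.reshape(T, -1).T            # shape (H*W, T)
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2                                 # free choice here; the paper tunes this with FRI
denoised = (U[:, :k] * s[:k]) @ Vt[:k] + mean
filtered = denoised.T.reshape(T, H, W)

err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((filtered - clean) ** 2)
print(err_before, err_after)
```

    Because the contrast kinetics live in a few components while noise spreads across all of them, truncating the decomposition reduces noise while retaining most of the temporal signal.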

  5. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques

    PubMed Central

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-01-01

    Background: Liver ultrasound images are commonly used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. Methods: In this study, a number of fuzzy-logic-based image contrast enhancement algorithms were applied, using Matlab2013b, to liver ultrasound images in which the view of the kidney is observable, in order to improve image contrast and quality, which have an inherently fuzzy definition. Three algorithms were applied: contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Results: Based on the Mean Squared Error and Peak Signal to Noise Ratio measured on different images, the fuzzy methods provided better results; compared with the histogram equalization method, their implementation improved both the contrast and visual quality of the images and the results of liver segmentation algorithms. Conclusion: Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was the strongest algorithm according to the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and in other image processing and analysis applications. PMID:28077898

  6. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques.

    PubMed

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-12-01

    Liver ultrasound images are commonly used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. In this study, a number of fuzzy-logic-based image contrast enhancement algorithms were applied, using Matlab2013b, to liver ultrasound images in which the view of the kidney is observable, in order to improve image contrast and quality, which have an inherently fuzzy definition. Three algorithms were applied: contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Based on the Mean Squared Error and Peak Signal to Noise Ratio measured on different images, the fuzzy methods provided better results; compared with the histogram equalization method, their implementation improved both the contrast and visual quality of the images and the results of liver segmentation algorithms. Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was the strongest algorithm according to the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and in other image processing and analysis applications.
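    The fuzzy intensification (INT) operator mentioned in this record has a standard textbook form: grey levels are fuzzified to [0, 1], pushed toward the extremes, and mapped back. A minimal sketch follows; this is not the authors' Matlab code, and the toy image is invented.

```python
import numpy as np

def fuzzy_intensify(img, passes=1):
    """Contrast enhancement with the classic fuzzy intensification (INT) operator."""
    g = img.astype(float)
    lo, hi = g.min(), g.max()
    mu = (g - lo) / (hi - lo + 1e-12)  # fuzzify grey levels into [0, 1]
    for _ in range(passes):
        # INT operator: push membership values away from 0.5, toward 0 or 1.
        mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)
    return (lo + mu * (hi - lo)).astype(img.dtype)  # defuzzify back to grey levels

# Toy 8-bit "ultrasound" patch with compressed mid-range contrast.
rng = np.random.default_rng(2)
img = rng.integers(90, 170, size=(32, 32)).astype(np.uint8)
out = fuzzy_intensify(img)
print(img.std(), out.std())
```

    Because the operator pushes membership values toward 0 and 1, the grey-level spread (and hence contrast) increases while the overall intensity range is preserved.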

  7. Improving Technology for Vascular Imaging

    NASA Astrophysics Data System (ADS)

    Rana, Raman

    Neuro-endovascular image-guided interventions (neuro-EIGIs) are minimally invasive procedures in which micro-catheters and endovascular devices are inserted into the vasculature via an incision near the femoral artery and guided under low-dose fluoroscopy to the vasculature of the head and neck. The endovascular devices used for this purpose are very small (stents are on the order of 50 µm to 100 µm), and the success of these EIGIs depends heavily on the accurate placement of these devices. To place these devices accurately inside the patient, the interventionalist must be able to see them clearly; hence, high-resolution capabilities are of immense importance in neuro-EIGIs. The high-resolution detectors, MAF-CCD and MAF-CMOS, at the Toshiba Stroke and Vascular Research Center at the University at Buffalo are capable of presenting improved images for better patient care. The focal spot of an x-ray tube plays an important role in the performance of these high-resolution detectors: its finite size blurs the edges of the image of the object, reducing spatial resolution. Hence, knowledge of the accurate size of the focal spot of the x-ray tube is essential for evaluating total system performance. The importance of magnification and image-detector blur deconvolution in carrying out more accurate measurement of the x-ray focal spot using a pinhole camera was demonstrated. A 30 micron pinhole was used to obtain focal spot images with a flat panel detector (FPD), and different source-to-image distances (SIDs) were used to achieve different magnifications (3.16, 2.66 and 2.16). These focal spot images were deconvolved with a 2-D modulation transfer function (MTF), obtained using the noise response (NR) method, to remove the detector blur present in the images. Using these corrected images, the accurate sizes of all three focal spots were obtained, and it was also established that effect of
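    The detector-blur removal step can be illustrated with a generic Wiener deconvolution on a synthetic example. This is a sketch under invented parameters, not the MTF/noise-response procedure used in the work above; note also that a real pinhole measurement would additionally rescale the measured spot size by the pinhole magnification.

```python
import numpy as np

def wiener_deconvolve(measured, psf, nsr=1e-3):
    """Remove detector blur from an image by Wiener deconvolution.
    `psf` is the detector point-spread function (the inverse FT of its MTF);
    `nsr` is a noise-to-signal regularisation constant."""
    M = np.fft.fft2(measured)
    H = np.fft.fft2(np.fft.ifftshift(psf), s=measured.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(M * W))

# Toy setup: a Gaussian "focal spot" blurred by a Gaussian detector PSF.
x = np.arange(64) - 32
X, Y = np.meshgrid(x, x)
spot = np.exp(-(X**2 + Y**2) / (2 * 4.0**2))
psf = np.exp(-(X**2 + Y**2) / (2 * 2.0**2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(spot) * np.fft.fft2(np.fft.ifftshift(psf))))
recovered = wiener_deconvolve(blurred, psf)

def fwhm(profile):
    """Crude full width at half maximum: count samples above half the peak."""
    return int(np.sum(profile > 0.5 * profile.max()))

print(fwhm(blurred[32]), fwhm(recovered[32]), fwhm(spot[32]))
```

    Deconvolving the detector response narrows the apparent spot back toward its true width, which is why blur correction matters before quoting a focal spot size.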

  8. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    MATHPAC image-analysis library is collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  9. A Bayesian approach to image expansion for improved definition.

    PubMed

    Schultz, R R; Stevenson, R L

    1994-01-01

    Accurate image expansion is important in many areas of image analysis. Common methods of expansion, such as linear and spline techniques, tend to smooth the image data at edge regions. This paper introduces a method for nonlinear image expansion which preserves the discontinuities of the original image, producing an expanded image with improved definition. The maximum a posteriori (MAP) estimation techniques that are proposed for noise-free and noisy images result in the optimization of convex functionals. The expanded images produced from these methods will be shown to be aesthetically and quantitatively superior to images expanded by the standard methods of replication, linear interpolation, and cubic B-spline expansion.
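    A minimal 1-D sketch of the idea, assuming a Huber-style edge-preserving prior (the paper's exact convex functionals are not reproduced here): the expanded signal is chosen to match the observed samples when downsampled, while large jumps are penalised only linearly, so edges stay sharp rather than being smoothed as in linear or spline interpolation.

```python
import numpy as np

def huber_grad(d, t=0.1):
    """Derivative of the Huber penalty: quadratic near 0, linear in the tails,
    so large jumps (edges) are penalised less than by a pure quadratic."""
    return np.where(np.abs(d) <= t, 2 * d, 2 * t * np.sign(d))

def map_expand_1d(y, factor=2, t=0.1, lam=0.1, iters=500, step=0.05):
    """Gradient-descent sketch of MAP expansion: minimise
    ||down(x) - y||^2 + lam * sum_i Huber(x[i+1] - x[i])."""
    x = np.repeat(y, factor).astype(float)        # initialise by replication
    for _ in range(iters):
        # Data term: downsampling is block-averaging over `factor` samples.
        resid = x.reshape(-1, factor).mean(axis=1) - y
        grad_data = np.repeat(resid, factor) / factor
        # Prior term: Huber penalty on first differences.
        g = huber_grad(np.diff(x), t)
        grad_prior = np.zeros_like(x)
        grad_prior[:-1] -= g
        grad_prior[1:] += g
        x -= step * (2 * grad_data + lam * grad_prior)
    return x

# A step edge: replication + this prior keeps the discontinuity sharp.
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
x = map_expand_1d(y)
print(np.round(x, 2))
```

    Replacing the Huber penalty with a pure quadratic recovers a smoothing expander, which is exactly the edge-blurring behavior the paper's nonlinear formulation avoids.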

  10. Endoscopy imaging intelligent contrast improvement.

    PubMed

    Sheraizin, S; Sheraizin, V

    2005-01-01

    In this paper, we present a medical endoscopy video contrast improvement method that provides intelligent, automatic, adaptive contrast control. The method's fundamentals are video data clustering and video data histogram modification. Video data clustering allows effective use of low-noise, two-channel contrast enhancement processing. Histogram analysis determines the video exposure type for both simple and complicated contrast distributions. We determined the gamma value needed for automatic local-area contrast improvement for the following exposure types: dark, normal, light, dark-light, dark-normal, etc. Experimental results from medical endoscopy video processing define the automatic gamma control range as 0.5 to 2.0.
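    Exposure-type-driven gamma control can be sketched as follows. The thresholds and gamma table below are invented for illustration; only the 0.5-2.0 gamma control range comes from the abstract.

```python
import numpy as np

def classify_exposure(img):
    """Crude exposure typing from the mean of the grey-level histogram."""
    m = img.mean() / 255.0
    if m < 0.35:
        return "dark"
    if m > 0.65:
        return "light"
    return "normal"

# Hypothetical gamma table, kept inside the 0.5-2.0 range reported by the paper.
GAMMA = {"dark": 0.6, "normal": 1.0, "light": 1.8}

def auto_gamma(img):
    """Pick a gamma from the exposure type and apply it to the frame."""
    g = GAMMA[classify_exposure(img)]
    norm = img.astype(float) / 255.0
    return (255 * norm**g).astype(np.uint8), g

rng = np.random.default_rng(3)
dark_frame = rng.integers(0, 80, size=(64, 64)).astype(np.uint8)
out, g = auto_gamma(dark_frame)
print(g, dark_frame.mean(), out.mean())
```

    A gamma below 1 brightens the underexposed frame; a full implementation would apply this per local area and with finer exposure classes (dark-light, dark-normal, etc.).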

  11. Forensic video image analysis

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas R.

    1997-02-01

    Forensic video image analysis is a new scientific tool for perpetrator enhancement and identification in poorly recorded crime scene situations. It is an emerging technology for law enforcement, industrial security, and surveillance, addressing problems often found in poor-quality video recordings of such incidents.

  12. Histopathological Image Analysis: A Review

    PubMed Central

    Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent

    2010-01-01

    Over the past decade, dramatic increases in computational power and improvement in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology related problems being pursued in the United States and Europe. PMID:20671804

  13. Principal component analysis with pre-normalization improves the signal-to-noise ratio and image quality in positron emission tomography studies of amyloid deposits in Alzheimer's disease.

    PubMed

    Razifar, Pasha; Engler, Henry; Blomquist, Gunnar; Ringheim, Anna; Estrada, Sergio; Långström, Bengt; Bergström, Mats

    2009-06-07

    This study introduces a new approach for the application of principal component analysis (PCA) with pre-normalization on dynamic positron emission tomography (PET) images. These images are generated using the amyloid imaging agent N-methyl [(11)C]2-(4'-methylaminophenyl)-6-hydroxy-benzothiazole ([(11)C]PIB) in patients with Alzheimer's disease (AD) and healthy volunteers (HVs). The aim was to introduce a method which, by using the whole dataset and without assuming a specific kinetic model, could generate images with improved signal-to-noise and detect, extract and illustrate changes in kinetic behavior between different regions in the brain. Eight AD patients and eight HVs from a previously published study with [(11)C]PIB were used. The approach includes enhancement of brain regions where the kinetics of the radiotracer are different from what is seen in the reference region, pre-normalization for differences in noise levels and removal of negative values. This is followed by slice-wise application of PCA (SW-PCA) on the dynamic PET images. Results obtained using the new approach were compared with results obtained using reference Patlak and summed images. The new approach generated images with good quality in which cortical brain regions in AD patients showed high uptake, compared to cerebellum and white matter. Cortical structures in HVs showed low uptake as expected and in good agreement with data generated using kinetic modeling. The introduced approach generated images with enhanced contrast and improved signal-to-noise ratio (SNR) and discrimination power (DP) compared to summed images and parametric images. This method is expected to be an important clinical tool in the diagnosis and differential diagnosis of dementia.

  14. Principal component analysis with pre-normalization improves the signal-to-noise ratio and image quality in positron emission tomography studies of amyloid deposits in Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Razifar, Pasha; Engler, Henry; Blomquist, Gunnar; Ringheim, Anna; Estrada, Sergio; Långström, Bengt; Bergström, Mats

    2009-06-01

    This study introduces a new approach for the application of principal component analysis (PCA) with pre-normalization on dynamic positron emission tomography (PET) images. These images are generated using the amyloid imaging agent N-methyl [11C]2-(4'-methylaminophenyl)-6-hydroxy-benzothiazole ([11C]PIB) in patients with Alzheimer's disease (AD) and healthy volunteers (HVs). The aim was to introduce a method which, by using the whole dataset and without assuming a specific kinetic model, could generate images with improved signal-to-noise and detect, extract and illustrate changes in kinetic behavior between different regions in the brain. Eight AD patients and eight HVs from a previously published study with [11C]PIB were used. The approach includes enhancement of brain regions where the kinetics of the radiotracer are different from what is seen in the reference region, pre-normalization for differences in noise levels and removal of negative values. This is followed by slice-wise application of PCA (SW-PCA) on the dynamic PET images. Results obtained using the new approach were compared with results obtained using reference Patlak and summed images. The new approach generated images with good quality in which cortical brain regions in AD patients showed high uptake, compared to cerebellum and white matter. Cortical structures in HVs showed low uptake as expected and in good agreement with data generated using kinetic modeling. The introduced approach generated images with enhanced contrast and improved signal-to-noise ratio (SNR) and discrimination power (DP) compared to summed images and parametric images. This method is expected to be an important clinical tool in the diagnosis and differential diagnosis of dementia.

  15. Principles and clinical applications of image analysis.

    PubMed

    Kisner, H J

    1988-12-01

    Image processing has traveled to the lunar surface and back, finding its way into the clinical laboratory. Advances in digital computers have improved the technology of image analysis, resulting in a wide variety of medical applications. Offering improvements in turnaround time, standardized systems, increased precision, and walkaway automation, digital image analysis has likely found a permanent home as a diagnostic aid in the interpretation of microscopic as well as macroscopic laboratory images.

  16. Improved diagnostic performance of exercise thallium-201 single photon emission computed tomography over planar imaging in the diagnosis of coronary artery disease: a receiver operating characteristic analysis

    SciTech Connect

    Fintel, D.J.; Links, J.M.; Brinker, J.A.; Frank, T.L.; Parker, M.; Becker, L.C.

    1989-03-01

    Qualitative interpretation of tomographic and planar scintigrams, a five point rating scale and receiver operating characteristic analysis were utilized to compare single photon emission computed tomography and conventional planar imaging of myocardial thallium-201 uptake in the accuracy of the diagnosis of coronary artery disease and individual vessel involvement. One hundred twelve patients undergoing cardiac catheterization and 23 normal volunteers performed symptom-limited treadmill exercise, followed by stress and redistribution imaging by both tomographic and planar techniques, with the order determined randomly. Paired receiver operating characteristic curves revealed that single photon emission computed tomography was more accurate than planar imaging over the entire range of decision thresholds for the overall detection and exclusion of coronary artery disease and involvement of the left anterior descending and left circumflex coronary arteries. Tomography offered relatively greater advantages in male patients and in patients with milder forms of coronary artery disease, who had no prior myocardial infarction, only single vessel involvement or no lesion greater than or equal to 50 to 69%. Tomography did not appear to provide improved diagnosis in women or in detection of disease in the right coronary artery. Although overall detection of coronary artery disease was not improved in patients with prior myocardial infarction, tomography provided improved identification of normal and abnormal vascular regions. These results indicate that single photon emission computed tomography provides improved diagnostic performance compared with planar imaging in many clinical subgroups.

  17. Multisensor Image Analysis System

    DTIC Science & Technology

    1993-04-15

    AD-A263 679 — Multisensor Image Analysis System, Final Report. Authors: Dr. G. M. Flachs, Dr. Michael Giles, Dr. Jay Jordan, Dr. Eric...

  18. Suggestions for automatic quantitation of endoscopic image analysis to improve detection of small intestinal pathology in celiac disease patients.

    PubMed

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-10-01

    Although many groups have attempted to develop an automated computerized method to detect pathology of the small intestinal mucosa caused by celiac disease, the efforts have thus far failed. This is due in part to the occult presence of the disease. When pathological evidence of celiac disease exists in the small bowel, it is often visually patchy and subtle. Due to the presence of extraneous substances such as air bubbles and opaque fluids, computerized automation methods have been only partially successful in detecting the hallmarks of the disease in the small intestine: villous atrophy, fissuring, and a mottled appearance. By using a variety of computerized techniques and assigning a weight or vote to each technique, it is possible to improve the detection of abnormal regions which are indicative of celiac disease, and of treatment progress in diagnosed patients. Herein a paradigm is suggested for improving the efficacy of automated methods for measuring celiac disease manifestation in the small intestinal mucosa. The suggestions are applicable to both standard and videocapsule endoscopic imaging, since both methods could potentially benefit from computerized quantitation to improve celiac disease diagnosis.
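    The weight-or-vote combination suggested here can be sketched generically. The detector names, scores, and weights below are hypothetical, standing in for whatever techniques and validation-derived weights an implementation would use.

```python
import numpy as np

def weighted_vote(scores, weights, threshold=0.5):
    """Combine per-technique abnormality scores for an image region into one
    decision by a weighted vote."""
    w = np.asarray(weights, dtype=float)
    s = np.asarray(scores, dtype=float)
    combined = float(np.dot(w, s) / w.sum())
    return combined, combined >= threshold

# Hypothetical detectors: texture mottling, fissure edges, villous-atrophy score.
scores = [0.9, 0.4, 0.7]    # each technique's confidence that the region is abnormal
weights = [2.0, 1.0, 1.5]   # per-technique weights, e.g. from validation accuracy
combined, abnormal = weighted_vote(scores, weights)
print(round(combined, 3), abnormal)
```

    Weighting lets a reliable technique dominate without letting any single technique's failure (e.g. on a frame obscured by bubbles) decide the outcome alone.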

  19. Improved radiographic image amplifier panel

    NASA Technical Reports Server (NTRS)

    Brown, R. L., Sr.

    1968-01-01

    Layered image amplifier for radiographic /X ray and gamma ray/ applications, combines very high radiation sensitivity with fast image buildup and erasure capabilities by adding a layer of material that is both photoconductive and light-emitting to basic image amplifier and cascading this assembly with a modified Thorne panel.

  20. Towards an Improved Algorithm for Estimating Freeze-Thaw Dates of a High Latitude Lake Using Texture Analysis of SAR Images

    NASA Astrophysics Data System (ADS)

    Uppuluri, A. V.; Jost, R. J.; Luecke, C.; White, M. A.

    2008-12-01

    Analyzing the freeze-thaw dates of high latitude lakes is an important part of climate change studies. Due to the various advantages provided by the use of SAR images with respect to remote monitoring of small lakes, SAR image analysis is an obvious choice for estimating lake freeze-thaw dates. An important property of SAR images is their texture. The problem of estimating freeze-thaw dates can be restated as a problem of classifying an annual time series of SAR images based on the presence or absence of ice. We analyzed a few texture-based algorithms to improve the estimation of freeze-thaw dates for small lakes using SAR images. We computed the Gray Level Co-occurrence Matrix (GLCM) for each image and extracted ten different texture features from the GLCM. We used these texture features (namely, Energy, Contrast, Correlation, Homogeneity, Entropy, Autocorrelation, Dissimilarity, Cluster Shade, Cluster Prominence and Maximum Probability, as previously used in studies on the texture analysis of SAR sea ice imagery) as input to a group of classification algorithms to find the most accurate classifier and set of texture features that can help to decide the presence or absence of ice on the lake. The accuracy of the estimated freeze-thaw dates depends on the accuracy of the classifier. It is considered highly difficult to differentiate between open water (without wind) and the first day of ice formed on the lake (due to the similar mean backscatter values), causing inaccuracy in calculating the freeze date. Similar inaccuracy in calculating the thaw date arises due to the close backscatter values of water (with wind) and later stages of ice on the lake. Our method is promising but requires further research to improve the accuracy of the classifiers and the selection of input features.
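    Several of the GLCM features named above can be computed directly from the co-occurrence matrix. A small self-contained sketch follows, with invented toy textures standing in for smooth ice and wind-roughened water; it implements only a subset of the ten features.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset, as probabilities."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantise
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)   # count co-occurring level pairs
    return m / m.sum()

def texture_features(p):
    """A few of the GLCM features named in the abstract."""
    i, j = np.indices(p.shape)
    return {
        "energy": float(np.sqrt((p**2).sum())),
        "contrast": float(((i - j) ** 2 * p).sum()),
        "homogeneity": float((p / (1 + (i - j) ** 2)).sum()),
        "entropy": float(-(p[p > 0] * np.log2(p[p > 0])).sum()),
    }

rng = np.random.default_rng(4)
smooth = np.tile(np.linspace(0, 255, 32), (32, 1))          # ice-like, smooth texture
rough = rng.integers(0, 256, size=(32, 32)).astype(float)   # wind-roughened water
f_smooth = texture_features(glcm(smooth))
f_rough = texture_features(glcm(rough))
print(f_smooth["contrast"], f_rough["contrast"])
```

    The smooth texture concentrates mass near the GLCM diagonal (low contrast, high homogeneity), while the rough one spreads it out; a classifier over such feature vectors is what decides ice versus open water per image.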

  1. Multiresponse imaging system design for improved resolution

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.; Rahman, Zia-Ur; Reichenbach, Stephen E.

    1991-01-01

    Multiresponse imaging is a process that acquires A images, each with a different optical response, and reassembles them into a single image with an improved resolution that can approach 1/√A times the photodetector-array sampling lattice. Our goals are to optimize the performance of this process in terms of the resolution and fidelity of the restored image and to assess the amount of information required to do so. The theoretical approach is based on the extension of both image restoration and rate-distortion theories from their traditional realm of signal processing to image processing, which includes image gathering and display.

  3. Quantitative analysis of the improvement in high zoom maritime tracking due to real-time image enhancement

    NASA Astrophysics Data System (ADS)

    Bachoo, Asheer K.; de Villiers, Jason P.; Nicolls, Fred; le Roux, Francois P. J.

    2011-05-01

    This work aims to evaluate the improvement in the performance of tracking small maritime targets due to real-time enhancement of the video streams from high-zoom cameras on a pan-tilt pedestal. Due to atmospheric conditions, these images frequently have poor contrast or poor exposure of the target, which may be far away and thus small in the camera's field of view. A 300 mm focal-length lens and machine vision camera were mounted on a pan-tilt unit and used to observe False Bay near Simon's Town, South Africa. A ground-truth data set was created by performing a least-squares geo-alignment of the camera system and placing a differential global positioning system receiver on a target boat, thus allowing the boat's position in the camera's field of view to be determined. Common tracking techniques including level sets, Kalman filters, and particle filters were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone mapping, interpolated local histogram equalisation, and several sharpening techniques were implemented on the graphics processing unit. This allowed the 1.3-megapixel, 20 frames per second video stream to be processed in real time. A quantified measurement of each tracking algorithm's robustness in the presence of sea glint, low-contrast visibility, and sea clutter such as white caps is performed on the raw recorded video data. These results are then compared to those obtained using data enhanced with the algorithms described.

  4. Quantitative multi-image analysis for biomedical Raman spectroscopic imaging.

    PubMed

    Hedegaard, Martin A B; Bergholt, Mads S; Stevens, Molly M

    2016-05-01

    Imaging by Raman spectroscopy enables unparalleled label-free insights into cell and tissue composition at the molecular level. With established approaches limited to single image analysis, there are currently no general guidelines or consensus on how to quantify biochemical components across multiple Raman images. Here, we describe a broadly applicable methodology for the combination of multiple Raman images into a single image for analysis. This is achieved by removing image specific background interference, unfolding the series of Raman images into a single dataset, and normalisation of each Raman spectrum to render comparable Raman images. Multivariate image analysis is finally applied to derive the contributing 'pure' biochemical spectra for relative quantification. We present our methodology using four independently measured Raman images of control cells and four images of cells treated with strontium ions from substituted bioactive glass. We show that the relative biochemical distribution per area of the cells can be quantified. In addition, using k-means clustering, we are able to discriminate between the two cell types over multiple Raman images. This study shows a streamlined quantitative multi-image analysis tool for improving cell/tissue characterisation and opens new avenues in biomedical Raman spectroscopic imaging. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
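    The unfold-normalise-cluster pipeline described above can be sketched on synthetic spectra. This is a simplified illustration: the baseline step here (subtracting each spectrum's minimum) stands in for the paper's image-specific background removal, and all data are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_ch = 50  # spectral channels per Raman spectrum (toy size)

def make_image(peak_pos, n_pix=100, background_level=1.0):
    """Toy Raman image: each pixel's spectrum = peak + image-specific baseline + noise."""
    x = np.arange(n_ch)
    peak = np.exp(-((x - peak_pos) ** 2) / 10.0)
    baseline = background_level * np.linspace(1, 0.5, n_ch)
    return peak + baseline + rng.normal(scale=0.05, size=(n_pix, n_ch))

# Two "cell types", measured in separate images with different backgrounds.
img_a = make_image(peak_pos=15, background_level=0.5)
img_b = make_image(peak_pos=35, background_level=1.5)

def preprocess(img):
    img = img - img.min(axis=1, keepdims=True)   # crude per-spectrum baseline removal
    return img / img.sum(axis=1, keepdims=True)  # normalise each spectrum to unit area

# Unfold both images into one spectra-by-channel dataset, then cluster.
data = np.vstack([preprocess(img_a), preprocess(img_b)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(labels[:100].mean(), labels[100:].mean())
```

    After background removal and normalisation, spectra from different images become directly comparable, so clustering over the combined dataset separates the two cell types, mirroring the k-means discrimination reported in the paper.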

  5. ImageX: new and improved image explorer for astronomical images and beyond

    NASA Astrophysics Data System (ADS)

    Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.

    2016-08-01

    The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5 capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images, thereby taking up tens of TB of spinning disk space even though a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA. We found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full resolution JPEG image for each raw and reduced ODI FITS image before producing a JPG tileset, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example: on tabular image listings, views allowing quick perusal of a set of thumbnails or other image sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client side Model/View code (instead of depending on backend PHP Model/View/Controller code previously used), OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images, thereby significantly decreasing the Time To First Byte latency by a few orders of magnitude. We plan to extend ImageX for non-FITS images, including electron microscopy and radiology scans.

  6. Tissue Doppler Imaging Combined with Advanced 12-Lead ECG Analysis Might Improve Early Diagnosis of Hypertrophic Cardiomyopathy in Childhood

    NASA Technical Reports Server (NTRS)

    Femlund, E.; Schlegel, T.; Liuba, P.

    2011-01-01

    Optimization of early diagnosis of childhood hypertrophic cardiomyopathy (HCM) is essential in lowering the risk of HCM complications. Standard echocardiography (ECHO) has been shown to be less sensitive in this regard. In this study, we sought to assess whether spatial QRS-T angle deviation, which has been shown to predict HCM in adults with high sensitivity, and myocardial Tissue Doppler Imaging (TDI) could be additional tools in the early diagnosis of HCM in childhood. Methods: Children and adolescents with familial HCM (n=10, median age 16, range 5-27 years), and without obvious hypertrophy but with heredity for HCM (n=12, median age 16, range 4-25 years; HCM or sudden death with autopsy-verified HCM in ≥1 first-degree relative; HCM-risk) were additionally investigated with TDI and advanced 12-lead ECG analysis using Cardiax(Registered trademark) (IMED Co Ltd, Budapest, Hungary and Houston). The spatial QRS-T angle (SA) was derived from the Kors regression-related transformation. Healthy age-matched controls (n=21) were also studied. All participants underwent thorough clinical examination. Results: The spatial QRS-T angle (Figure, Panel A) and septal E/Ea ratio (Figure, Panel B) were most increased in the HCM group as compared to the HCM-risk and control groups (p < 0.05). Of note, these 2 variables showed a trend toward higher levels in the HCM-risk group than in the control group (p=0.05 for E/Ea and 0.06 for QRS/T by ANOVA). In a logistic regression model, increased SA and septal E/Ea ratio significantly predicted both the disease (Chi-square in HCM group: 9 and 5, respectively, p < 0.05 for both) and the risk for HCM (Chi-square in HCM-risk group: 5 and 4, respectively, p < 0.05 for both), with a further increased predictability level when these 2 variables were combined (Chi-square 10 in HCM group and 7 in HCM-risk group, p < 0.01 for both). Conclusions: In this small material, Tissue Doppler Imaging and spatial mean QRS-T angle

  7. Polarization image fusion algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Zhang, Siyuan; Yuan, Yan; Su, Lijuan; Hu, Liang; Liu, Hui

    2013-12-01

    The polarization detection technique provides polarization information about objects that conventional detection techniques are unable to obtain. To fully utilize the obtained polarization information, various polarization image fusion algorithms have been developed. In this research, we propose a polarization image fusion algorithm based on an improved pulse coupled neural network (PCNN). The improved PCNN algorithm uses polarization parameter images to generate the fused polarization image with object details for polarization information analysis, and uses the matching degree M as the fusion rule. The improved PCNN fused image is compared with fused images based on the Laplacian pyramid (LP) algorithm, the wavelet algorithm and the standard PCNN algorithm. Several performance indicators are introduced to evaluate the fused images. The comparison shows that the presented algorithm yields images of much higher quality and preserves more detailed information about the objects.

  8. Errors from Image Analysis

    SciTech Connect

    Wood, William Monford

    2015-02-23

    We present a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and make suggestions for improving the methodology for obtaining quantitative information from radiographed objects.

  9. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  10. Digital image processing in cephalometric analysis.

    PubMed

    Jäger, A; Döler, W; Schormann, T

    1989-01-01

    Digital image processing methods were applied to improve the practicability of cephalometric analysis. The individual X-ray film was digitized by the aid of a high resolution microscope-photometer. Digital processing was done using a VAX 8600 computer system. An improvement of the image quality was achieved by means of various digital enhancement and filtering techniques.

  11. Equivocal cytology in lung cancer diagnosis: improvement of diagnostic accuracy using adjuvant multicolor FISH, DNA-image cytometry, and quantitative promoter hypermethylation analysis.

    PubMed

    Schramm, Martin; Wrobel, Christian; Born, Ingmar; Kazimirek, Marietta; Pomjanski, Natalia; William, Marina; Kappes, Rainer; Gerharz, Claus Dieter; Biesterfeld, Stefan; Böcking, Alfred

    2011-06-25

    Sometimes, cytological lung cancer diagnosis is challenging because equivocal diagnoses are common. To enhance diagnostic accuracy, fluorescent in situ hybridization (FISH), DNA-image cytometry, and quantitative promoter hypermethylation analysis have been proposed as adjuncts. Bronchial washings and/or brushings or transbronchial fine-needle aspiration biopsies were prospectively collected from patients who were clinically suspected of having lung carcinoma. After routine cytological diagnosis, 70 consecutive specimens, each cytologically diagnosed as negative, equivocal, or positive for cancer cells, were investigated with adjuvant methods. Suspicious areas on the smears were restained with the LAVysion multicolor FISH probe set (Abbott Molecular, Des Plaines, Illinois) or according to the Feulgen Staining Method for DNA-image cytometry analysis. DNA was extracted from residual liquid material, and frequencies of aberrant methylation of APC, p16(INK4A) , and RASSF1A gene promoters were determined with quantitative methylation-specific polymerase chain reaction (QMSP) after bisulfite conversion. Clinical and histological follow-up according to a reference standard, defined in advance, were available for 198 of 210 patients. In the whole cohort, cytology, FISH, DNA-image cytometry, and QMSP achieved sensitivities of 83.7%, 78%, 79%, and 49.6%, respectively (specificities of 69.8%, 98.2%, 98.2%, and 98.4%, respectively). Subsequent to cytologically equivocal diagnoses, FISH, DNA-image cytometry, and QMSP definitely identified malignancy in 79%, 83%, and 49%, respectively. With QMSP, 4 of 22 cancer patients with cytologically negative diagnoses were correctly identified. Thus, adjuvant FISH or DNA-image cytometry in cytologically equivocal diagnoses improves diagnostic accuracy at comparable rates. Adjuvant QMSP in cytologically negative cases with persistent suspicion of lung cancer would enhance sensitivity. Copyright © 2011 American Cancer Society.
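
    The accuracy figures quoted above reduce to simple ratios over a confusion matrix. A minimal sketch of that computation, using hypothetical counts chosen only for illustration (not the study's raw data):

    ```python
    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical confusion-matrix counts, chosen only to illustrate the
    # kind of figures quoted above (not the study's raw data):
    sens, spec = sensitivity_specificity(tp=103, fn=20, tn=74, fp=1)
    print(round(sens, 3), round(spec, 3))  # → 0.837 0.987
    ```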

  12. Improved Interactive Medical-Imaging System

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Twombly, Ian A.; Senger, Steven

    2003-01-01

    An improved computational-simulation system for interactive medical imaging has been invented. The system displays high-resolution, three-dimensional-appearing images of anatomical objects based on data acquired by such techniques as computed tomography (CT) and magnetic-resonance imaging (MRI). The system enables users to manipulate the data to obtain a variety of views; for example, to display cross sections in specified planes or to rotate images about specified axes. Relative to prior such systems, this system offers enhanced capabilities for synthesizing images of surgical cuts and for collaboration by users at multiple, remote computing sites.

  13. Improved real-time imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Lambert, James L. (Inventor); Chao, Tien-Hsin (Inventor); Yu, Jeffrey W. (Inventor); Cheng, Li-Jen (Inventor)

    1993-01-01

    An improved AOTF-based imaging spectrometer that offers several advantages over prior art AOTF imaging spectrometers is presented. The ability to electronically set the bandpass wavelength provides observational flexibility. Various improvements in optical architecture provide simplified magnification variability, improved image resolution and light throughput efficiency and reduced sensitivity to ambient light. Two embodiments of the invention are: (1) operation in the visible/near-infrared domain of wavelength range 0.48 to 0.76 microns; and (2) infrared configuration which operates in the wavelength range of 1.2 to 2.5 microns.

  14. Advanced endoscopic imaging to improve adenoma detection

    PubMed Central

    Neumann, Helmut; Nägel, Andreas; Buda, Andrea

    2015-01-01

    Advanced endoscopic imaging is revolutionizing the way we diagnose and treat colorectal lesions. In recent years a variety of modern endoscopic imaging techniques were introduced to improve adenoma detection rates. These include high-definition imaging, dye-less chromoendoscopy techniques and novel, highly flexible endoscopes, some of them equipped with balloons or multiple lenses in order to improve adenoma detection rates. In this review we focus on the newest developments in the field of colonoscopic imaging to improve adenoma detection rates. Described techniques include high-definition imaging, optical chromoendoscopy techniques, virtual chromoendoscopy techniques, the Third Eye Retroscope and other retroviewing devices, the G-EYE endoscope and the Full Spectrum Endoscopy system. PMID:25789092

  15. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  16. Can coffee improve image guidance?

    NASA Astrophysics Data System (ADS)

    Wirz, Raul; Lathrop, Ray A.; Godage, Isuru S.; Burgner-Kahrs, Jessica; Russell, Paul T.; Webster, Robert J.

    2015-03-01

    Anecdotally, surgeons sometimes observe large errors when using image guidance in endonasal surgery. We hypothesize that one contributing factor is the possibility that operating room personnel might accidentally bump the optically tracked rigid body attached to the patient after registration has been performed. In this paper we explore the registration error at the skull base that can be induced by simulated bumping of the rigid body, and find that large errors can occur when simulated bumps are applied to the rigid body. To address this, we propose a new fixation method for the rigid body based on granular jamming (i.e. using particles like ground coffee). Our results show that our granular jamming fixation prototype reduces registration error by 28%-68% (depending on bump direction) in comparison to a standard Brainlab reference headband.
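
    The large registration errors described above follow from simple lever-arm geometry: a small angular bump of the tracked reference body is amplified by the distance to the surgical target. A minimal small-angle sketch with illustrative numbers (not taken from the paper):

    ```python
    import math

    def target_error_mm(lever_arm_mm, bump_deg):
        """Small-angle approximation of the displacement induced at a target
        by an angular bump of the tracked reference body:
        error ~= lever_arm * theta (theta in radians)."""
        return lever_arm_mm * math.radians(bump_deg)

    # Illustrative numbers (not from the paper): a 2-degree bump of a
    # reference body 150 mm from the skull-base target
    print(round(target_error_mm(150.0, 2.0), 1))  # → 5.2
    ```

    Even a barely perceptible angular bump therefore produces a clinically significant error at the skull base, which is why a stiffer fixation such as granular jamming helps.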

  17. Digital Image Analysis of Cereals

    USDA-ARS?s Scientific Manuscript database

    Image analysis is the extraction of meaningful information from images, mainly digital images by means of digital processing techniques. The field was established in the 1950s and coincides with the advent of computer technology, as image analysis is profoundly reliant on computer processing. As t...

  18. Improvement of image quality in holographic microscopy.

    PubMed

    Budhiraja, C J; Som, S C

    1981-05-15

    A novel technique of noise reduction in holographic microscopy has been experimentally studied. It has been shown that significant improvement in the holomicroscopic images of actual low-contrast continuous-tone biological objects can be achieved without a trade-off in image resolution. The technique makes use of holographically produced multidirectional phase gratings used as diffusers and the continuous addition of subchannel holograms. It has been shown that the self-imaging property of this type of diffuser makes these diffusers ideal for microscopic objects. Experimental results have also been presented to demonstrate the real-time image processing capability of this technique.

  19. An improved image reconstruction method for optical intensity correlation Imaging

    NASA Astrophysics Data System (ADS)

    Gao, Xin; Feng, Lingjie; Li, Xiyu

    2016-12-01

    The intensity correlation imaging method is a novel kind of interference imaging and it has favorable prospects in deep space recognition. However, restricted by the low detecting signal-to-noise ratio (SNR), it's usually very difficult to obtain high-quality image of deep space object like high-Earth-orbit (HEO) satellite with existing phase retrieval methods. In this paper, based on the priori intensity statistical distribution model of the object and characteristics of measurement noise distribution, an improved method of Prior Information Optimization (PIO) is proposed to reduce the ambiguous images and accelerate the phase retrieval procedure thus realizing fine image reconstruction. As the simulations and experiments show, compared to previous methods, our method could acquire higher-resolution images with less error in low SNR condition.

  20. Improvement of ultrasound speckle image velocimetry using image enhancement techniques.

    PubMed

    Yeom, Eunseop; Nam, Kweon-Ho; Paeng, Dong-Guk; Lee, Sang Joon

    2014-01-01

    Ultrasound-based techniques have been developed and widely used in noninvasive measurement of blood velocity. Speckle image velocimetry (SIV), which applies a cross-correlation algorithm to consecutive B-mode images of blood flow, has often been employed owing to its better spatial resolution compared with conventional Doppler-based measurement techniques. The SIV technique utilizes speckles backscattered from red blood cell (RBC) aggregates as flow tracers. Hence, the intensity and size of such speckles are highly dependent on hemodynamic conditions. The grayscale intensity of speckle images varies along the radial direction of blood vessels because of the shear-rate dependence of RBC aggregation. This inhomogeneous distribution of echo speckles decreases the signal-to-noise ratio (SNR) of a cross-correlation analysis and produces spurious results. In the present study, image-enhancement techniques such as contrast-limited adaptive histogram equalization (CLAHE), the min/max technique, and the subtraction of background image (SB) method were applied to speckle images to achieve a more accurate SIV measurement. A mechanical sector ultrasound scanner was used to obtain ultrasound speckle images from rat blood under steady and pulsatile flows. The effects of the image-enhancement techniques on SIV analysis were evaluated by comparing image intensities, velocities, and cross-correlation maps. The velocity profiles and wall shear rate (WSR) obtained from RBC suspension images were compared with the analytical solution for validation. In addition, the image-enhancement techniques were applied to in vivo measurement of blood flow in a human vein. The experimental results of both in vitro and in vivo SIV measurements show that the intensity gradient in heterogeneous speckles has a substantial influence on the cross-correlation analysis. The image-enhancement techniques used in this study can minimize errors encountered in ultrasound SIV measurement in which RBCs are used as flow
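
    The cross-correlation core of an SIV interrogation can be sketched in a few lines. The sketch below estimates the integer pixel displacement between two frames of a synthetic speckle pattern via FFT-based cross-correlation; it omits the CLAHE/min-max/SB enhancement steps and the subpixel refinement that a real SIV pipeline would use:

    ```python
    import numpy as np

    def cross_correlation_shift(a, b):
        """Estimate the integer pixel shift of image a relative to image b
        using FFT-based circular cross-correlation (the core of one SIV
        interrogation window)."""
        A = np.fft.fft2(a - a.mean())
        B = np.fft.fft2(b - b.mean())
        corr = np.fft.ifft2(A * np.conj(B)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        ny, nx = corr.shape
        # map wrapped peak positions to signed shifts
        dy = dy - ny if dy > ny // 2 else dy
        dx = dx - nx if dx > nx // 2 else dx
        return int(dy), int(dx)

    # synthetic speckle frame and a copy displaced by (3, -2) pixels
    frame = np.random.default_rng(0).random((64, 64))
    moved = np.roll(frame, (3, -2), axis=(0, 1))
    print(cross_correlation_shift(moved, frame))  # → (3, -2)
    ```

    An inhomogeneous intensity gradient across the window biases exactly this correlation peak, which is why the enhancement techniques above improve the measurement.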

  1. Ultrasonic image analysis and image-guided interventions

    PubMed Central

    Noble, J. Alison; Navab, Nassir; Becher, H.

    2011-01-01

    The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and used in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and better understanding of how to design algorithms which exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology which is one of the most advanced areas of clinical application of US image analysis and describing some probable future trends in this important area of ultrasonic imaging research. PMID:22866237

  2. Quantitative analysis of the improvement in omnidirectional maritime surveillance and tracking due to real-time image enhancement

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason P.; Bachoo, Asheer K.; Nicolls, Fred C.; le Roux, Francois P. J.

    2011-05-01

    Tracking targets in a panoramic image is in many senses the inverse of tracking targets with a narrow field of view camera on a pan-tilt pedestal. For a narrow field of view camera tracking a moving target, the object is constant and the background is changing. A panoramic camera is able to model the entire scene, or background, and those areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the panoramic view than in the narrow field of view. The outputs of an outward-staring array of calibrated machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near Simon's Town, South Africa. A ground truth data-set was created by geo-aligning the camera array and placing a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field of view to be determined. Common tracking techniques including level-sets, Kalman filters and particle filters were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's robustness in the presence of sea-glint, low contrast visibility and sea clutter such as white caps is performed on the raw recorded video data. These results are then compared to those obtained with the enhanced video data.
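
    Of the tracking techniques mentioned, the Kalman filter is the simplest to sketch. Below is a minimal constant-velocity filter over 1-D position measurements; the parameters and the 1-D state are illustrative and are not the paper's implementation:

    ```python
    import numpy as np

    def kalman_cv_track(measurements, dt=1.0, q=1e-3, r=1.0):
        """Minimal 1-D constant-velocity Kalman filter; returns filtered
        positions for a sequence of noisy position measurements."""
        F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
        H = np.array([[1.0, 0.0]])              # we observe position only
        Q = q * np.eye(2)                       # process noise covariance
        R = np.array([[r]])                     # measurement noise covariance
        x = np.array([[measurements[0]], [0.0]])
        P = np.eye(2)
        filtered = []
        for z in measurements:
            x = F @ x                           # predict
            P = F @ P @ F.T + Q
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
            x = x + K @ (np.array([[z]]) - H @ x)          # update
            P = (np.eye(2) - K @ H) @ P
            filtered.append(float(x[0, 0]))
        return filtered

    # a stationary target: the filtered track stays on the measurement
    print(kalman_cv_track([2.0, 2.0, 2.0])[-1])  # → 2.0
    ```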

  3. Interactive Image Analysis System Design,

    DTIC Science & Technology

    1982-12-01

    This report describes a design for an interactive image analysis system (IIAS), which implements terrain data extraction techniques. The design employs commercially available, state-of-the-art minicomputers and image display devices with proven software to achieve a cost-effective, reliable image analysis system. Additionally, the system is fully capable of supporting many generic types of image analysis and data processing, and is modularly...

  4. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    Report documentation excerpt: Parallel Algorithms for Image Analysis, Technical Report TR-1180, by Azriel Rosenfeld (grant AFOSR-77-3271). Keywords: image processing; image analysis; parallel processing; cellular computers.

  5. Multiscale Analysis of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C. A.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also cursed us with a greater volume of more complex data than previous missions. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade. Many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-d wavelet transform and related transforms with EIT, LASCO and TRACE images. This work was supported by NASA contract NAS5-00220.
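
    A single level of the 2-d wavelet transform mentioned above is easy to sketch directly. The example below implements the orthonormal Haar case (the simplest wavelet, chosen purely for illustration) by splitting an image into an approximation band and three detail bands:

    ```python
    import numpy as np

    def haar_dwt2(img):
        """One level of the orthonormal 2-D Haar transform: returns the
        approximation (LL) band and three detail bands."""
        a = img[0::2, 0::2]; b = img[0::2, 1::2]
        c = img[1::2, 0::2]; d = img[1::2, 1::2]
        ll = (a + b + c + d) / 2.0   # approximation (coarse structure)
        lh = (a - b + c - d) / 2.0   # detail: differences across columns
        hl = (a + b - c - d) / 2.0   # detail: differences across rows
        hh = (a - b - c + d) / 2.0   # detail: diagonal differences
        return ll, lh, hl, hh

    # a featureless image puts all of its energy in the approximation band
    ll, lh, hl, hh = haar_dwt2(np.ones((8, 8)))
    print(float(ll[0, 0]), float(abs(lh).max()))  # → 2.0 0.0
    ```

    Because the transform is orthonormal it preserves energy across the bands, which is what makes per-scale statistics of solar features quantitative rather than qualitative.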

  6. Moving Image Analysis System

    NASA Astrophysics Data System (ADS)

    Shifley, Loren A.

    1989-02-01

    The recent introduction of a two dimensional interactive software package provides a new technique for quantitative analysis. Integrated with its corresponding peripherals, the same software offers either film or video data reduction. Digitized data points measured from the images are stored in the computer. With this data, a variety of information can be displayed, printed or plotted in a graphical form. The resultant graphs could determine such factors as: displacement, force, velocity, momentum, angular acceleration, center of gravity, energy, length, angle and time to name a few. Simple, efficient and precise analysis can now be quantified and documented. This paper will describe the detailed capabilities of the software along with a variety of applications where it might be used.

  7. Moving image analysis system

    NASA Astrophysics Data System (ADS)

    Shifley, Loren A.

    1990-08-01

    The recent introduction of a two dimensional interactive software package provides a new technique for quantitative analysis. Integrated with its corresponding peripherals, the same software offers either film or video data reduction. Digitized data points measured from the images are stored in the computer. With this data, a variety of information can be displayed, printed or plotted in a graphical form. The resultant graphs could determine such factors as: displacement, force, velocity, momentum, angular acceleration, center of gravity, energy, length, angle and time to name a few. Simple, efficient and precise analysis can now be quantified and documented. This paper will describe the detailed capabilities of the software along with a variety of applications where it might be used.

  8. Brain Imaging Analysis

    PubMed Central

    BOWMAN, F. DUBOIS

    2014-01-01

    The increasing availability of brain imaging technologies has led to intense neuroscientific inquiry into the human brain. Studies often investigate brain function related to emotion, cognition, language, memory, and numerous other externally induced stimuli as well as resting-state brain function. Studies also use brain imaging in an attempt to determine the functional or structural basis for psychiatric or neurological disorders and, with respect to brain function, to further examine the responses of these disorders to treatment. Neuroimaging is a highly interdisciplinary field, and statistics plays a critical role in establishing rigorous methods to extract information and to quantify evidence for formal inferences. Neuroimaging data present numerous challenges for statistical analysis, including the vast amounts of data collected from each individual and the complex temporal and spatial dependence present. We briefly provide background on various types of neuroimaging data and analysis objectives that are commonly targeted in the field. We present a survey of existing methods targeting these objectives and identify particular areas offering opportunities for future statistical contribution. PMID:25309940

  9. Differentiated analysis of orthodontic tooth movement in rats with an improved rat model and three-dimensional imaging.

    PubMed

    Kirschneck, Christian; Proff, Peter; Fanghaenel, Jochen; Behr, Michael; Wahlmann, Ulrich; Roemer, Piero

    2013-12-01

    Rat models currently available for analysis of orthodontic tooth movement often lack differentiated, reliable and precise measurement systems allowing researchers to separately investigate the individual contribution of tooth tipping, body translation and root torque to overall displacement. Many previously proposed models have serious limitations such as the rather inaccurate analysis of the effects of orthodontic forces on rat incisors. We therefore developed a differentiated measurement system that was used within a rat model with the aim of overcoming the limitations of previous studies. The first left upper molar and the upper incisors of 24 male Wistar rats were subjected to a constant orthodontic force of 0.25 N by means of a NiTi closed coil spring for up to four weeks. The extent of the various types of tooth movement was measured optometrically with a CCD microscope camera and cephalometrically by means of cone beam computed tomography (CBCT). Both types of measurement proved to be reliable for consecutive measurements and the significant tooth movement induced had no harmful effects on the animals. Movement kinetics corresponded to known physiological processes and tipping and body movement equally contributed to the tooth displacement. The upper incisors of the rats were significantly deformed and their natural eruption was effectively halted. The results showed that our proposed measurement systems used within a rat model resolved most of the inadequacies of previous studies. They are reliable, precise and physiological tools for the differentiated analysis of orthodontic tooth movement while simultaneously preserving animal welfare.

  10. An improved imaging method for extended targets

    NASA Astrophysics Data System (ADS)

    Hou, Songming; Zhang, Sui

    2017-03-01

    When we use direct imaging methods for solving inverse scattering problems, we observe artificial lines which make it hard to determine the shape of targets. We propose a signal space test to study the cause of the artificial lines and we use multiple frequency data to reduce the effect from them. Finally, we use the active contour without edges (ACWE) method to further improve the imaging results. The final reconstruction is accurate and robust. The computational cost is lower than iterative imaging methods which need to solve the forward problem in each iteration.

  11. Improving photoacoustic imaging contrast of brachytherapy seeds

    NASA Astrophysics Data System (ADS)

    Pan, Leo; Baghani, Ali; Rohling, Robert; Abolmaesumi, Purang; Salcudean, Septimiu; Tang, Shuo

    2013-03-01

    Prostate brachytherapy is a form of radiotherapy for treating prostate cancer in which the radiation sources are seeds inserted into the prostate. Accurate localization of seeds during prostate brachytherapy is essential to the success of intraoperative treatment planning. The current standard modality used in intraoperative seed localization is transrectal ultrasound. Transrectal ultrasound, however, suffers in image quality due to several factors such as speckle, shadowing, and off-axis seed orientation. Photoacoustic imaging, based on the photoacoustic phenomenon, is an emerging imaging modality. The contrast-generating mechanism in photoacoustic imaging is optical absorption, which is fundamentally different from conventional B-mode ultrasound, which depicts changes in acoustic impedance. A photoacoustic imaging system was developed using a commercial ultrasound system. To improve imaging contrast and depth penetration, an absorption-enhancing coating was applied to the seeds. In comparison to bare seeds, approximately an 18.5 dB increase in signal-to-noise ratio as well as a doubling of imaging depth were achieved. Our results demonstrate that coating the seeds can further improve their discernibility.
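
    The reported 18.5 dB SNR gain can be put in perspective with the standard decibel conversion for amplitude quantities; a small sketch:

    ```python
    import math

    def snr_db(signal_rms, noise_rms):
        """SNR in decibels for amplitude (RMS) quantities: 20*log10(S/N)."""
        return 20.0 * math.log10(signal_rms / noise_rms)

    # the reported 18.5 dB gain corresponds to roughly 8.4x in amplitude
    print(round(10 ** (18.5 / 20.0), 1))  # → 8.4
    ```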

  12. Improving high resolution retinal image quality using speckle illumination HiLo imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Metha, Andrew

    2014-01-01

    Retinal image quality from flood illumination adaptive optics (AO) ophthalmoscopes is adversely affected by out-of-focus light scatter due to the lack of confocality. This effect is more pronounced in small eyes, such as that of rodents, because the requisite high optical power confers a large dioptric thickness to the retina. A recently-developed structured illumination microscopy (SIM) technique called HiLo imaging has been shown to reduce the effect of out-of-focus light scatter in flood illumination microscopes and produce pseudo-confocal images with significantly improved image quality. In this work, we adopted the HiLo technique to a flood AO ophthalmoscope and performed AO imaging in both (physical) model and live rat eyes. The improvement in image quality from HiLo imaging is shown both qualitatively and quantitatively by using spatial spectral analysis. PMID:25136486

  13. DIDA - Dynamic Image Disparity Analysis.

    DTIC Science & Technology

    1982-12-31

    Report documentation excerpt. Keywords: image understanding, dynamic image analysis, disparity analysis, optical flow, real-time processing. Three aspects of dynamic image analysis must be studied: effectiveness, generality, and efficiency. In addition, efforts must be made to understand the ... environment. A better understanding of the need for these limiting constraints is required. Efficiency is obviously important if dynamic image analysis is ...

  14. Improved Guided Image Fusion for Magnetic Resonance and Computed Tomography Imaging

    PubMed Central

    Jameel, Amina

    2014-01-01

    Improved guided image fusion for magnetic resonance and computed tomography imaging is proposed. The existing guided filtering scheme uses a Gaussian filter and two-level weight maps, due to which it has limited performance for noisy images. Different modifications of the filter (based on a linear minimum mean square error estimator) and the weight maps (with different levels) are proposed to overcome these limitations. Simulation results based on visual and quantitative analysis show the significance of the proposed scheme. PMID:24695586

  15. Improving Performance During Image-Guided Procedures

    PubMed Central

    Duncan, James R.; Tabriz, David

    2015-01-01

    Objective Image-guided procedures have become a mainstay of modern health care. This article reviews how human operators process imaging data and use it to plan procedures and make intraprocedural decisions. Methods A series of models from human factors research, communication theory, and organizational learning were applied to the human-machine interface that occupies the center stage during image-guided procedures. Results Together, these models suggest several opportunities for improving performance as follows: 1. Performance will depend not only on the operator’s skill but also on the knowledge embedded in the imaging technology, available tools, and existing protocols. 2. Voluntary movements consist of planning and execution phases. Performance subscores should be developed that assess quality and efficiency during each phase. For procedures involving ionizing radiation (fluoroscopy and computed tomography), radiation metrics can be used to assess performance. 3. At a basic level, these procedures consist of advancing a tool to a specific location within a patient and using the tool. Paradigms from mapping and navigation should be applied to image-guided procedures. 4. Recording the content of the imaging system allows one to reconstruct the stimulus/response cycles that occur during image-guided procedures. Conclusions When compared with traditional “open” procedures, the technology used during image-guided procedures places an imaging system and long thin tools between the operator and the patient. Taking a step back and reexamining how information flows through an imaging system and how actions are conveyed through human-machine interfaces suggest that much can be learned from studying system failures. In the same way that flight data recorders revolutionized accident investigations in aviation, much could be learned from recording video data during image-guided procedures. PMID:24921628

  16. Single Image Dehazing for Visibility Improvement

    NASA Astrophysics Data System (ADS)

    Zhai, Y.; Ji, D.

    2015-08-01

    Images captured in foggy weather often suffer from poor visibility, which degrades outdoor computer vision systems such as video surveillance, intelligent transportation assistance, and remote sensing cameras. In this paper, we propose a new transmission estimation method to improve the visibility of a single input image (with fog or haze), as well as the image's details. Our approach stems from two important statistical observations about haze-free images and the haze itself: first, the well-known dark channel prior, a statistic of haze-free outdoor images, can be used to estimate the thickness of the haze; and second, a gradient prior on transmission maps, which builds on the dark channel prior. By integrating these two priors, estimation of the unknown scene transmission map is cast as a TV-regularized optimization problem. The experimental results show that the proposed approach effectively improves the visibility of fog-degraded images while preserving their details.
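
    The dark-channel step of such methods can be sketched in a few lines of NumPy. This is a minimal illustration of the standard dark channel prior transmission estimate, not the TV-regularized estimator proposed in the paper; function and parameter names are ours.

```python
import numpy as np

def dark_channel(img, patch=15):
    # Per-pixel minimum over the color channels, then a local minimum
    # filter over a patch x patch neighborhood (edge-padded).
    dc = img.min(axis=2)
    pad = patch // 2
    p = np.pad(dc, pad, mode="edge")
    out = np.empty_like(dc)
    for i in range(dc.shape[0]):
        for j in range(dc.shape[1]):
            out[i, j] = p[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, atmos, omega=0.95, patch=15):
    # Dark channel prior: t(x) = 1 - omega * dark(I / A); a brighter
    # dark channel (thicker haze) yields lower estimated transmission.
    return 1.0 - omega * dark_channel(img / atmos, patch)
```

    In the full method, this rough transmission map would then be refined (here, by the TV-regularized optimization) before recovering the scene radiance.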

  17. Improved Determination of the Myelin Water Fraction in Human Brain using Magnetic Resonance Imaging through Bayesian Analysis of mcDESPOT

    PubMed Central

    Bouhrara, Mustapha; Spencer, Richard G.

    2015-01-01

    Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in human brain. However, even for the simplest two-pool signal model consisting of myelin-associated and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNR), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination using conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high-dimensional nature of the mcDESPOT signal model, and thereby the high-dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of the MWF, the Bayesian analyses introduced here use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude, and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude.
Through extensive Monte Carlo numerical simulations and analysis of in-vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrated the markedly improved accuracy and precision in the estimation of MWF using these Bayesian

  18. Improved determination of the myelin water fraction in human brain using magnetic resonance imaging through Bayesian analysis of mcDESPOT.

    PubMed

    Bouhrara, Mustapha; Spencer, Richard G

    2016-02-15

    Myelin water fraction (MWF) mapping with magnetic resonance imaging has led to the ability to directly observe myelination and demyelination in both the developing brain and in disease. Multicomponent driven equilibrium single pulse observation of T1 and T2 (mcDESPOT) has been proposed as a rapid approach for multicomponent relaxometry and has been applied to map MWF in the human brain. However, even for the simplest two-pool signal model consisting of myelin-associated and non-myelin-associated water, the dimensionality of the parameter space for obtaining MWF estimates remains high. This renders parameter estimation difficult, especially at low-to-moderate signal-to-noise ratios (SNRs), due to the presence of local minima and the flatness of the fit residual energy surface used for parameter determination using conventional nonlinear least squares (NLLS)-based algorithms. In this study, we introduce three Bayesian approaches for analysis of the mcDESPOT signal model to determine MWF. Given the high-dimensional nature of the mcDESPOT signal model, and, therefore the high-dimensional marginalizations over nuisance parameters needed to derive the posterior probability distribution of the MWF, the Bayesian analyses introduced here use different approaches to reduce the dimensionality of the parameter space. The first approach uses normalization by average signal amplitude, and assumes that noise can be accurately estimated from signal-free regions of the image. The second approach likewise uses average amplitude normalization, but incorporates a full treatment of noise as an unknown variable through marginalization. The third approach does not use amplitude normalization and incorporates marginalization over both noise and signal amplitude. 
Through extensive Monte Carlo numerical simulations and analysis of in vivo human brain datasets exhibiting a range of SNR and spatial resolution, we demonstrated markedly improved accuracy and precision in the estimation of MWF
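
    The marginalization strategy of the second approach can be illustrated with a deliberately simplified toy problem: estimating a single parameter while summing the gridded posterior over an unknown noise level. This is a 1-D Gaussian stand-in with flat priors, not the mcDESPOT signal model; all names are ours.

```python
import numpy as np

# Toy analogue: theta is the parameter of interest, sigma the nuisance
# noise level to be marginalized out of the posterior.
rng = np.random.default_rng(0)
data = 2.0 + 0.5 * rng.standard_normal(200)   # true theta = 2, sigma = 0.5

thetas = np.linspace(0.0, 4.0, 201)
sigmas = np.linspace(0.1, 2.0, 100)
T, S = np.meshgrid(thetas, sigmas, indexing="ij")

# Gaussian log-likelihood evaluated on the (theta, sigma) grid
resid2 = ((data[None, None, :] - T[..., None]) ** 2).sum(axis=-1)
loglike = -data.size * np.log(S) - resid2 / (2.0 * S ** 2)

# Marginal posterior for theta: sum the (unnormalized) posterior over sigma
post = np.exp(loglike - loglike.max())
marginal = post.sum(axis=1)
marginal /= marginal.sum()
theta_hat = thetas[np.argmax(marginal)]
```

    The point of the construction is that theta_hat is obtained without ever committing to a single noise estimate, at the cost of the extra grid dimension — the same trade-off the paper manages in many more dimensions.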

  19. Helping Improve a Child's Self-Image.

    ERIC Educational Resources Information Center

    Divney, Esther P.

    This document reviews a doctoral dissertation on the self-concepts of Negro children for suggestions teachers can use in the classroom to improve the self-images of their own students. Many psychologists and educators view the attitudes and conceptions that the child has about himself as the central factor in his personality, and studies have…

  20. Reflections on ultrasound image analysis.

    PubMed

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably over the past twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research, due to the real-time acquisition capability of ultrasound, and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps changed more fundamentally. From roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and thus were better suited to the earlier eras of medical image analysis dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and by growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change. It highlights challenges ahead and potential opportunities in ultrasound image analysis that may have high impact on healthcare delivery worldwide, but that may also, perhaps, take the subject further away from CT and MR image analysis research over time. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
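
    One common off-line route to the sub-sample resolution described above is correlation-based registration with parabolic peak interpolation. The 1-D sketch below is illustrative only, not the authors' algorithm; names are ours.

```python
import numpy as np

def subpixel_shift(a, b):
    """Estimate the shift of signal b relative to a from the cross-
    correlation peak, refined by a parabolic fit through the peak and
    its two neighbors (a classic sub-sample interpolation trick)."""
    corr = np.correlate(b - b.mean(), a - a.mean(), mode="full")
    k = int(np.argmax(corr))
    delta = 0.0
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            # Vertex of the parabola through the three samples
            delta = 0.5 * (y0 - y2) / denom
    return (k - (len(a) - 1)) + delta
```

    Applied to image rows or pupil profiles, this kind of refinement is how off-line analysis can beat the roughly 10-arc-minute resolution of integer-pixel tracking.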

  3. Spreadsheet-Like Image Analysis

    DTIC Science & Technology

    1992-08-01

    Technical Report ARPAD-TR-92002, "Spreadsheet-Like Image Analysis," Paul Willson, August 1992, U.S. Army. Subject terms: image analysis, nondestructive inspection, spreadsheet, Macintosh software, neural network, signal processing.

  4. Knowledge-Based Image Analysis.

    DTIC Science & Technology

    1981-04-01

    Report ETL-0258, "Knowledge-Based Image Analysis," George C. Stockman, Barbara A. Lambird, David Lavine, Laveen N. Kanal. Keywords: extraction, verification, region classification, pattern recognition, image analysis.

  5. Scale-Specific Multifractal Medical Image Analysis

    PubMed Central

    Braverman, Boris

    2013-01-01

    Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value. PMID:24023588
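
    The integral-image speedup described above can be sketched as follows: a summed-area table makes each box-occupancy query O(1), after which the box-counting dimension is the negative slope of a log-log fit. This is a simplified binary-mask version under our own naming, not the authors' code.

```python
import numpy as np

def box_count(mask, size):
    """Count size x size boxes containing any foreground pixel, using an
    integral image (summed-area table) so each box query costs O(1)."""
    ii = np.pad(mask.astype(np.int64).cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = mask.shape
    occupied = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            i2, j2 = min(i + size, h), min(j + size, w)
            total = ii[i2, j2] - ii[i, j2] - ii[i2, j] + ii[i, j]
            occupied += total > 0
    return occupied

def box_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    # Fit log N(s) against log s; the box-counting dimension is -slope.
    counts = [box_count(mask, s) for s in sizes]
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return -slope
```

    Restricting the fit to a chosen subset of sizes gives the scale-dependent dimension the paper advocates, rather than a single global number.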

  6. Imaging Arrays With Improved Transmit Power Capability

    PubMed Central

    Zipparo, Michael J.; Bing, Kristin F.; Nightingale, Kathy R.

    2010-01-01

    Bonded multilayer ceramics and composites incorporating low-loss piezoceramics have been applied to arrays for ultrasound imaging to improve acoustic transmit power levels and to reduce internal heating. Commercially available hard PZT from multiple vendors has been characterized for microstructure, ability to be processed, and electroacoustic properties. Multilayers using the best materials demonstrate the tradeoffs compared with the softer PZT5-H typically used for imaging arrays. Three-layer PZT4 composites exhibit an effective dielectric constant that is three times that of single layer PZT5H, a 50% higher mechanical Q, a 30% lower acoustic impedance, and only a 10% lower coupling coefficient. Application of low-loss multilayers to linear phased and large curved arrays results in equivalent or better element performance. A 3-layer PZT4 composite array achieved the same transmit intensity at 40% lower transmit voltage and with a 35% lower face temperature increase than the PZT-5 control. Although B-mode images show similar quality, acoustic radiation force impulse (ARFI) images show increased displacement for a given drive voltage. An increased failure rate for the multilayers following extended operation indicates that further development of the bond process will be necessary. In conclusion, bonded multilayer ceramics and composites allow additional design freedom to optimize arrays and improve the overall performance for increased acoustic output while maintaining image quality. PMID:20875996

  7. Method of improving a digital image

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur (Inventor); Jobson, Daniel J. (Inventor); Woodell, Glenn A. (Inventor)

    1999-01-01

    A method of improving a digital image is provided. The image is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value R_i(x,y) for each position in each i-th spectral band in accordance with R_i(x,y) = sum_n W_n [log I_i(x,y) - log(F_n(x,y) * I_i(x,y))], i = 1, ..., S, where S is the number of unique spectral bands included in said digital data, W_n is a weighting factor and * denotes the convolution operator. Each surround function F_n(x,y) is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band is filtered with a common function and then presented to a display device. For color images, a novel color restoration step is added to give the image true-to-life color that closely matches human observation.
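
    A minimal single-channel sketch of this multi-scale form, assuming Gaussian surround functions F_n and equal weights W_n = 1/N (the patented method additionally includes the common filtering and color restoration steps, which are not reproduced here):

```python
import numpy as np

def _blur(channel, sigma):
    # Separable Gaussian surround F_n, applied with edge padding.
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    def conv(line):
        return np.convolve(np.pad(line, r, mode="edge"), k, mode="valid")
    out = np.apply_along_axis(conv, 0, channel)
    return np.apply_along_axis(conv, 1, out)

def multiscale_retinex(channel, sigmas=(15.0, 80.0, 250.0)):
    """R(x,y) = sum_n W_n [log I(x,y) - log (F_n * I)(x,y)], with equal
    weights W_n = 1/N; log1p avoids log(0) on dark pixels."""
    w = 1.0 / len(sigmas)
    return sum(w * (np.log1p(channel) - np.log1p(_blur(channel, s)))
               for s in sigmas)
```

    The small surround handles dynamic range compression and the large surrounds tonal rendition, which is why each F_n is scaled differently.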

  8. Improved image guidance of coronary stent deployment

    NASA Astrophysics Data System (ADS)

    Close, Robert A.; Abbey, Craig K.; Whiting, James S.

    2000-04-01

    Accurate placement and expansion of coronary stents are hindered by the fact that most stents are only slightly radiopaque, and hence difficult to see in typical coronary x-ray images. We propose a new technique for improved image guidance of multiple coronary stent deployment using layer decomposition of cine x-ray images of stented coronary arteries. Layer decomposition models the cone-beam x-ray projections through the chest as a set of superposed layers moving with translation, rotation, and scaling. Radiopaque markers affixed to the guidewire or delivery balloon provide a trackable feature so that the correct vessel motion can be measured for layer decomposition. In addition to the time-averaged layer image, we also derive a background-subtracted image sequence which removes moving background structures. Layer decomposition of contrast-free vessels can be used to guide placement of multiple stents and to assess uniformity of stent expansion. Layer decomposition of contrast-filled vessels can be used to measure residual stenosis to determine the adequacy of stent expansion. We demonstrate that layer decomposition of a clinical cine x-ray image sequence greatly improves the visibility of a previously deployed stent. We show that layer decomposition of contrast-filled vessels removes background structures and reduces noise.

  9. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    PubMed Central

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-01-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images, but also by detection sensitivity. As the probe size is reduced to below 1 µm, for example, a low signal in each pixel limits lateral resolution due to counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure. PMID:24912432
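
    A common high-pass-modulation form of pan-sharpening conveys the idea: the low-resolution chemical map is modulated by the ratio of the high-resolution SEM image to its local mean, injecting spatial detail while preserving local SIMS intensity. This is a generic stand-in, not necessarily the algorithm used in this work; names are ours.

```python
import numpy as np

def box_mean(img, size=5):
    # Local mean via an integral image (edge-padded box filter).
    pad = size // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    ii = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (ii[i + size, j + size] - ii[i, j + size]
                         - ii[i + size, j] + ii[i, j]) / size ** 2
    return out

def pan_sharpen(chem, sem, size=5, eps=1e-6):
    """Modulate the chemical (SIMS) map by the SEM image's local
    contrast: where the SEM is brighter than its neighborhood, the
    fused pixel is boosted, and vice versa."""
    return chem * (sem + eps) / (box_mean(sem, size) + eps)
```

    Note the key property: where the SEM image is locally flat, the fused result equals the original chemical map, so chemical specificity is preserved.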

  10. Improving Secondary Ion Mass Spectrometry Image Quality with Image Fusion

    NASA Astrophysics Data System (ADS)

    Tarolli, Jay G.; Jackson, Lauren M.; Winograd, Nicholas

    2014-12-01

    The spatial resolution of chemical images acquired with cluster secondary ion mass spectrometry (SIMS) is limited not only by the size of the probe utilized to create the images but also by detection sensitivity. As the probe size is reduced to below 1 μm, for example, a low signal in each pixel limits lateral resolution because of counting statistics considerations. Although it can be useful to implement numerical methods to mitigate this problem, here we investigate the use of image fusion to combine information from scanning electron microscope (SEM) data with chemically resolved SIMS images. The advantage of this approach is that the higher intensity and, hence, spatial resolution of the electron images can help to improve the quality of the SIMS images without sacrificing chemical specificity. Using a pan-sharpening algorithm, the method is illustrated using synthetic data, experimental data acquired from a metallic grid sample, and experimental data acquired from a lawn of algae cells. The results show that up to an order of magnitude increase in spatial resolution is possible to achieve. A cross-correlation metric is utilized for evaluating the reliability of the procedure.

  11. Improved terahertz imaging with a sparse synthetic aperture array

    NASA Astrophysics Data System (ADS)

    Zhang, Zhuopeng; Buma, Takashi

    2010-02-01

    Sparse arrays are highly attractive for implementing two-dimensional arrays, but come at the cost of degraded image quality. We demonstrate significantly improved performance by exploiting the coherent ultrawideband nature of single-cycle THz pulses. We apply two weighting factors to each time-delayed signal before the final summation that forms the reconstructed image. The first factor employs cross-correlation analysis to measure the degree of walk-off between time-delayed signals of neighboring elements. The second factor measures the spatial coherence of the time-delayed signals. Synthetic aperture imaging experiments are performed with a THz time-domain system employing a mechanically scanned single transceiver element. Cross-sectional imaging of wire targets is performed with a one-dimensional sparse array with an inter-element spacing of 1.36 mm (over four λ at 1 THz). The proposed image reconstruction technique improves image contrast by 15 dB, which is impressive considering the relatively few elements in the array. En-face imaging of a razor blade is also demonstrated with a 56 x 56 element two-dimensional array, showing reduced image artifacts with adaptive reconstruction. These encouraging results suggest that the proposed image reconstruction technique can be highly beneficial to the development of large area two-dimensional THz arrays.
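
    Spatial-coherence weighting of this kind is often implemented as a coherence factor applied to the delay-and-sum output. The sketch below uses that standard form as an assumption; it is not necessarily the authors' exact weighting.

```python
import numpy as np

def coherence_factor(delayed):
    """Spatial coherence of the time-delayed element signals:
    CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2), ranging from 0 (fully
    incoherent across elements) to 1 (fully coherent)."""
    delayed = np.asarray(delayed, dtype=float)
    num = np.abs(delayed.sum(axis=0)) ** 2
    den = delayed.shape[0] * (np.abs(delayed) ** 2).sum(axis=0)
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

def weighted_beamform(delayed):
    # Delay-and-sum output down-weighted where the channels disagree,
    # suppressing grating-lobe artifacts from the sparse aperture.
    return coherence_factor(delayed) * delayed.sum(axis=0)
```

    In a sparse array, grating-lobe energy arrives with poor inter-element coherence, so the weight suppresses exactly the artifacts that sparsity introduces.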

  12. Evidential Reasoning in Expert Systems for Image Analysis.

    DTIC Science & Technology

    1985-02-01

    techniques to image analysis (IA). There is growing evidence that these techniques offer significant improvements in image analysis , particularly in the...2) to provide a common framework for analysis, (3) to structure the ER process for major expert-system tasks in image analysis , and (4) to identify...approaches to three important tasks for expert systems in the domain of image analysis . This segment concluded with an assessment of the strengths

  13. Digital Image Analysis for DETCHIP® Code Determination

    PubMed Central

    Lyon, Marcus; Wilson, Mark V.; Rouhier, Kerry A.; Symonsbergen, David J.; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E.

    2013-01-01

    DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940

  14. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  15. An improved SIFT algorithm based on KFDA in image registration

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Yang, Lijuan; Huo, Jinfeng

    2016-03-01

    As a stable feature matching algorithm, SIFT has been widely used in many fields. To further improve the robustness of the SIFT algorithm, an improved SIFT algorithm based on Kernel Fisher Discriminant Analysis (KFDA-SIFT) is presented for image registration. The algorithm applies KFDA to the SIFT descriptors to obtain a discriminative feature-projection matrix, uses the new descriptors to conduct feature matching, and finally uses RANSAC to purify the matches. The experiments show that the presented algorithm is robust to image changes in scale, illumination, perspective, expression and tiny pose, with higher matching accuracy.
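
    The RANSAC purification step can be sketched for the simplest motion model, a pure translation between matched keypoints. A registration pipeline like the one described would typically use a richer model (similarity or homography); the names and tolerances below are illustrative only.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Purify putative matches under a pure-translation model: hypothesize
    a translation from one random correspondence, count inliers within
    tol pixels, keep the best hypothesis, then refine on its inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        k = rng.integers(len(src))
        t = dst[k] - src[k]                       # one-sample hypothesis
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)      # refine on the inlier set
    return t, best
```

    The surviving inlier correspondences are then what the registration is computed from, which is how RANSAC removes the residual mismatches that descriptor matching lets through.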

  16. Transfer learning improves supervised image segmentation across imaging protocols.

    PubMed

    van Opbroek, Annegreet; Ikram, M Arfan; Vernooij, Meike W; de Bruijne, Marleen

    2015-05-01

    The variation between images obtained with different scanners or different imaging protocols presents a major challenge in automatic segmentation of biomedical images. This variation especially hampers the application of otherwise successful supervised-learning techniques which, in order to perform well, often require a large amount of labeled training data that is exactly representative of the target data. We therefore propose to use transfer learning for image segmentation. Transfer-learning techniques can cope with differences in distributions between training and target data, and therefore may improve performance over supervised learning for segmentation across scanners and scan protocols. We present four transfer classifiers that can train a classification scheme with only a small amount of representative training data, in addition to a larger amount of other training data with slightly different characteristics. The performance of the four transfer classifiers was compared to that of standard supervised classification on two magnetic resonance imaging brain-segmentation tasks with multi-site data: white matter, gray matter, and cerebrospinal fluid segmentation; and white-matter-/MS-lesion segmentation. The experiments showed that when there is only a small amount of representative training data available, transfer learning can greatly outperform common supervised-learning approaches, minimizing classification errors by up to 60%.

  17. Oncological image analysis: medical and molecular image analysis

    NASA Astrophysics Data System (ADS)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  18. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a Poisson mixture denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
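
    For context, the variance-stabilization domain examined by the quantile analysis commonly refers to the Anscombe transform for Poisson data. The sketch below shows that classic baseline, not the proposed Poisson-mixture model.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform for Poisson counts: transformed
    values have approximately unit variance (for moderate-to-large
    means), so a Gaussian denoiser can be applied before inverting."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    # Direct algebraic inverse (slightly biased; unbiased inverses exist).
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0
```

    The paper's point is that real sensor noise has heavier tails than this Poisson assumption implies, which is what motivates the mixture model.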

  19. An improved differential box-counting method of image segmentation

    NASA Astrophysics Data System (ADS)

    Li, Cancan; Cheng, Longfei; He, Tao; Chen, Lang; Yu, Fei; Yang, Liangen

    2016-01-01

    Fractal dimension is an important quantitative characteristic of an image, which can be widely used in image analysis. The differential box-counting method, one of many ways to calculate a fractal dimension, has been frequently used because of its simple calculation. In the differential box-counting method, the window size M is limited to integer powers of 2, which leads to inaccurate fractal dimension estimates. To address this, an improved algorithm is discussed in this paper in which the window size M can also take values that are not integer powers of 2, making the calculated fractal dimension error smaller. To verify the superiority of the improved algorithm, the fractal dimension values are used as parameters for image segmentation combined with the Otsu algorithm. Both the traditional and improved differential box-counting methods are used to estimate fractal dimensions and perform threshold segmentation of a thread image. The experimental results show that the segmentation details obtained with the improved differential box-counting method are more obvious than those obtained with the traditional method, with fewer impurities, a clearer target outline and a better segmentation effect.
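
    The baseline differential box-counting computation (square image, grid of block sizes) can be sketched as follows; the paper's improvement relaxes the window-size constraint, which is not reproduced here, and all names are ours.

```python
import numpy as np

def dbc_dimension(img, sizes=(2, 4, 8, 16), gray_levels=256):
    """Differential box-counting for a grayscale M x M image: for each
    s x s block, count the number of gray-level boxes of height
    h = s * G / M spanned between the block's minimum and maximum
    intensities, then fit log N against log(1/r) with r = s / M."""
    M = img.shape[0]
    counts = []
    for s in sizes:
        h = s * gray_levels / M
        n = 0
        for i in range(0, M, s):
            for j in range(0, M, s):
                block = img[i:i + s, j:j + s]
                n += int(block.max() // h) - int(block.min() // h) + 1
        counts.append(n)
    r = np.asarray(sizes, dtype=float) / M
    slope = np.polyfit(np.log(1.0 / r), np.log(counts), 1)[0]
    return slope
```

    Computing this dimension over local windows, then thresholding the resulting feature map with Otsu's method, is the segmentation recipe the abstract describes.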

  20. Improvement in dynamic magnetic resonance imaging thermometry

    NASA Astrophysics Data System (ADS)

    Guo, Jun-Yu

    This dissertation is focused on improving MRI Thermometry (MRIT) techniques. The application of the spin-lattice relaxation constant is investigated, in which T1 is used as an indicator to measure the temperature of flowing fluids such as blood. Problems associated with this technique are evaluated, and a new method to improve the consistency and repeatability of T1 measurements is presented. The new method combines curve fitting with a measure of the curve null point to acquire more accurate and consistent T1 values. A novel method called K-space Inherited Parallel Acquisition (KIPA) is developed to achieve faster dynamic temperature measurements. Localized reconstruction coefficients are used to achieve higher reduction factors, and lower noise and artifact levels compared to those of GeneRalized Autocalibrating Partially Parallel Acquisition (GRAPPA) reconstruction. Artifacts in KIPA images are significantly reduced, and SNR is largely improved in comparison with that in GRAPPA images. The Root-Mean-Square (RMS) error of temperature for GRAPPA is 2 to 5 times larger than that for KIPA. Finally, the accuracy and the effects of motion on three parallel imaging methods, SENSE (SENSitivity Encoding), VSENSE (Variable-density SENSE) and KIPA, are compared. According to the investigation, KIPA is the most accurate and robust method among all three methods for studies with or without motion. The ratio of the normalized RMS (NRMS) error for SENSE to that for KIPA is within the range from 1 to 3.7. The ratio of the NRMS error for VSENSE to that for KIPA is about 1 to 2. These factors change with the reduction factor, motion and subject. In summary, the new strategy and method for the fast noninvasive measurement of T1 of flowing blood are proposed to improve stability and precision. The novel parallel reconstruction algorithm, KIPA, is developed to improve the temporal and spatial resolution for the PRF method. The motion effects on the KIPA method are also

  1. Machine learning applications in cell image analysis.

    PubMed

    Kan, Andrey

    2017-04-04

Machine learning (ML) refers to a set of automatic pattern recognition methods that have been successfully applied across various problem domains, including biomedical image analysis. This review focuses on ML applications for image analysis in light microscopy experiments, with the typical tasks of segmenting and tracking individual cells and modelling reconstructed lineage trees. After describing a typical image analysis pipeline and highlighting challenges of automatic analysis (for example, variability in cell morphology, tracking in the presence of clutter), this review gives a brief historical outlook of ML, followed by basic concepts and definitions required for understanding examples. This article then presents several example applications at various image processing stages, including the use of supervised learning methods for improving cell segmentation, and the application of active learning for tracking. The review concludes with remarks on parameter setting and future directions. Immunology and Cell Biology advance online publication, 4 April 2017; doi: 10.1038/icb.2017.16.
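As a rough illustration of the supervised-segmentation idea mentioned in the review, a pixel-level classifier can be trained on labelled pixels and applied to the rest of the image. The toy image, feature set and classifier below are assumptions for illustration, not the review's own pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy image: one bright square standing in for a cell on a dark background.
rng = np.random.default_rng(1)
img = rng.normal(0.1, 0.05, (64, 64))
img[20:30, 20:30] += 0.8
truth = np.zeros((64, 64), dtype=int)
truth[20:30, 20:30] = 1

# Per-pixel features: raw intensity plus a crude 3x3 local-mean response.
padded = np.pad(img, 1, mode="edge")
local_mean = sum(padded[i:i + 64, j:j + 64]
                 for i in range(3) for j in range(3)) / 9.0
features = np.stack([img.ravel(), local_mean.ravel()], axis=1)

# Train on half the labelled pixels, predict all of them:
# the essence of supervised learning for cell segmentation.
clf = RandomForestClassifier(n_estimators=20, random_state=0)
clf.fit(features[::2], truth.ravel()[::2])
pred = clf.predict(features)
accuracy = (pred == truth.ravel()).mean()
```

Real pipelines use richer features (texture, edges, multiple scales), but the train-on-labels, predict-per-pixel structure is the same.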

  2. Improved Scanners for Microscopic Hyperspectral Imaging

    NASA Technical Reports Server (NTRS)

    Mao, Chengye

    2009-01-01

    Improved scanners to be incorporated into hyperspectral microscope-based imaging systems have been invented. Heretofore, in microscopic imaging, including spectral imaging, it has been customary to either move the specimen relative to the optical assembly that includes the microscope or else move the entire assembly relative to the specimen. It becomes extremely difficult to control such scanning when submicron translation increments are required, because the high magnification of the microscope enlarges all movements in the specimen image on the focal plane. To overcome this difficulty, in a system based on this invention, no attempt would be made to move either the specimen or the optical assembly. Instead, an objective lens would be moved within the assembly so as to cause translation of the image at the focal plane: the effect would be equivalent to scanning in the focal plane. The upper part of the figure depicts a generic proposed microscope-based hyperspectral imaging system incorporating the invention. The optical assembly of this system would include an objective lens (normally, a microscope objective lens) and a charge-coupled-device (CCD) camera. The objective lens would be mounted on a servomotor-driven translation stage, which would be capable of moving the lens in precisely controlled increments, relative to the camera, parallel to the focal-plane scan axis. The output of the CCD camera would be digitized and fed to a frame grabber in a computer. The computer would store the frame-grabber output for subsequent viewing and/or processing of images. The computer would contain a position-control interface board, through which it would control the servomotor. There are several versions of the invention. An essential feature common to all versions is that the stationary optical subassembly containing the camera would also contain a spatial window, at the focal plane of the objective lens, that would pass only a selected portion of the image. 
In one version

  3. Paraxial ghost image analysis

    NASA Astrophysics Data System (ADS)

    Abd El-Maksoud, Rania H.; Sasian, José M.

    2009-08-01

    This paper develops a methodology to model ghost images that are formed by two reflections between the surfaces of a multi-element lens system in the paraxial regime. An algorithm is presented to generate the ghost layouts from the nominal layout. For each possible ghost layout, paraxial ray tracing is performed to determine the ghost Gaussian cardinal points, the size of the ghost image at the nominal image plane, the location and diameter of the ghost entrance and exit pupils, and the location and diameter for the ghost entrance and exit windows. The paraxial ghost irradiance point spread function is obtained by adding up the irradiance contributions for all ghosts. Ghost simulation results for a simple lens system are provided. This approach provides a quick way to analyze ghost images in the paraxial regime.
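The combinatorics behind the ghost layouts is simple: each unordered pair of lens surfaces can produce one two-reflection ghost, so an n-surface system has n(n-1)/2 candidate layouts. A minimal sketch of the enumeration step (illustrative, not the paper's algorithm):

```python
from itertools import combinations

def ghost_pairs(n_surfaces):
    """Enumerate surface pairs that generate two-reflection ghost layouts.

    Light travelling toward the image first reflects off a later surface j,
    then off an earlier surface i, before continuing on, so each unordered
    pair (i, j) with i < j yields one candidate ghost layout to ray-trace.
    """
    return list(combinations(range(1, n_surfaces + 1), 2))

# A singlet with 2 air-glass surfaces gives 1 ghost layout,
# a 3-surface system gives 3, and a 10-surface system gives 45.
layouts = ghost_pairs(10)
```

Each pair would then be paraxially ray-traced to find the ghost cardinal points, pupils and windows, as the abstract describes.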

  4. Radiologist and automated image analysis

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.

    1999-07-01

Significant advances are being made in the area of automated medical image analysis. Part of the progress is due to general advances in the types of algorithms used to process images and perform various detection and recognition tasks. A more important driver of this growth, however, may be quite different. The use of computer workstations, digital image acquisition technologies and CRT monitors for primary diagnostic reading of medical images is becoming more prevalent in radiology departments around the world. With the advance in computer-based displays, however, has come the realization that displaying images on a CRT monitor is not the same as displaying film on a viewbox. There are perceptual, cognitive and ergonomic issues that must be considered if radiologists are to accept this change in technology and display. The bottom line is that radiologists' performance must be evaluated with these new technologies and image analysis techniques in order to verify that diagnostic performance is at least as good as with film-based displays. The goal of this paper is to address some of the perceptual, cognitive and ergonomic issues associated with reading radiographic images from digital displays.

  5. Improved imaging algorithm for bridge crack detection

    NASA Astrophysics Data System (ADS)

    Lu, Jingxiao; Song, Pingli; Han, Kaihong

    2012-04-01

This paper presents an improved imaging algorithm for bridge crack detection. By optimizing the eight-direction Sobel edge detection operator, the positioning of edge points becomes more accurate than without the optimization, and false edge information is effectively reduced, which facilitates follow-up processing. In calculating crack geometry, the length of a single crack is measured by extracting its skeleton. To calculate crack area, we construct an area template by a logical bitwise AND operation on the crack image. Experimental results show that the errors between this crack detection method and actual manual measurement are within an acceptable range and meet the needs of engineering applications. The algorithm is fast and effective for automated crack measurement, and it can provide more valid data for proper planning and performance of bridge maintenance and rehabilitation processes.
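For reference, the classic two-kernel 3x3 Sobel operator that the paper's eight-direction variant extends can be sketched in plain NumPy (this is the standard operator, not the paper's optimized version):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude from the two classic 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

# A dark vertical "crack" on a bright background produces strong
# edge responses along both of its borders.
img = np.full((20, 20), 200.0)
img[:, 9:11] = 40.0
edges = sobel_magnitude(img) > 100
```

The eight-direction extension adds diagonal and intermediate-orientation kernels so that edge points of cracks running in any direction are localized more accurately.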

  6. Multispectral analysis of multimodal images.

    PubMed

    Kvinnsland, Yngve; Brekke, Njål; Taxt, Torfinn M; Grüner, Renate

    2009-01-01

An increasing number of multimodal images represent a valuable increase in available image information, but at the same time they complicate the extraction of diagnostic information across the images. Multispectral analysis (MSA) has the potential to simplify this problem substantially, as an unlimited number of images can be combined and tissue properties across the images can be extracted automatically. We have developed a software solution for MSA containing two algorithms for unsupervised classification, an EM algorithm finding multinormal class descriptions and the k-means clustering algorithm, and two for supervised classification, a Bayesian classifier using multinormal class descriptions and a kNN algorithm. The software has an efficient user interface for the creation and manipulation of class descriptions, and it has proper tools for displaying the results. The software has been tested on different sets of images. One application is to segment cross-sectional images of brain tissue (T1- and T2-weighted MR images) into its main normal tissues and brain tumors. Another interesting set of images are the perfusion maps and diffusion maps derived from raw MR images. The software returns segmentations that appear sensible. The MSA software appears to be a valuable tool for image analysis when multimodal images are at hand. It readily gives a segmentation of image volumes that visually seems to be sensible. However, to really learn how to use MSA, it will be necessary to gain more insight into what tissues the different segments contain, and upcoming work will therefore be focused on examining the tissues through, for example, histological sections.
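The core multispectral idea, stacking co-registered modalities into one feature vector per voxel and clustering, can be sketched with the k-means option mentioned in the abstract (synthetic data and scikit-learn stand in for the authors' software):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two co-registered "modalities" (think T1- and T2-weighted) of one slice.
rng = np.random.default_rng(2)
shape = (40, 40)
t1w = rng.normal(0.2, 0.05, shape)
t2w = rng.normal(0.2, 0.05, shape)
t1w[10:30, 10:30] += 0.6   # a tissue region that differs in both channels
t2w[10:30, 10:30] -= 0.1

# One feature vector per voxel, built across modalities, then clustered:
# the essence of unsupervised multispectral classification.
features = np.stack([t1w.ravel(), t2w.ravel()], axis=1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
segmentation = labels.reshape(shape)
```

The EM variant replaces k-means with multinormal class descriptions; the supervised Bayesian and kNN options additionally use labelled training voxels.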

  7. Automatic identification of ROI in figure images toward improving hybrid (text and image) biomedical document retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Rahman, Md Mahmudur; Govindaraju, Venu; Thoma, George R.

    2011-01-01

Biomedical images are often referenced for clinical decision support (CDS), educational purposes, and research. They appear in specialized databases or in biomedical publications and are not meaningfully retrievable using primarily text-based retrieval systems. The task of automatically finding the images in an article that are most useful for the purpose of determining relevance to a clinical situation is quite challenging. One approach is to automatically annotate images extracted from scientific publications with respect to their usefulness for CDS. As an important step toward achieving this goal, we proposed figure image analysis for localizing pointers (arrows, symbols) to extract regions of interest (ROI) that can then be used to obtain meaningful local image content. Content-based image retrieval (CBIR) techniques can then associate local image ROIs with biomedical concepts identified in figure captions for improved hybrid (text and image) retrieval of biomedical articles. In this work we present methods that make our previous Markov random field (MRF)-based approach for pointer recognition and ROI extraction more robust. These include the use of Active Shape Models (ASM) to overcome problems in recognizing distorted pointer shapes and a region segmentation method for ROI extraction. We measure the performance of our methods on two criteria: (i) effectiveness in recognizing pointers in images, and (ii) improved document retrieval through use of extracted ROIs. Evaluation on three test sets shows 87% accuracy on the first criterion. Further, the quality of document retrieval using local visual features and text is shown to be better than using visual features alone.

  8. Improved wheal detection from skin prick test images

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan

    2014-03-01

The skin prick test is a commonly used method for diagnosis of allergic diseases (e.g., pollen allergy, food allergy, etc.) in allergy clinics. The results of this test are erythema and wheals provoked on the skin where the test is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. Finally, we perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments performed on images acquired from 36 different patients show the efficiency of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
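The described pipeline, dropping luminance via an RGB-to-YCbCr transform and enhancing contrast with PCA on the Cb/Cr channels, can be sketched on a synthetic patch. BT.601 conversion coefficients are assumed, and a simple threshold stands in for the paper's morphological step:

```python
import numpy as np

def rgb_to_cbcr(rgb):
    """RGB -> (Cb, Cr) per ITU-R BT.601, discarding the luminance channel."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

# Toy skin patch: a shading gradient (uneven illumination) plus a
# slightly redder circular "wheal".
h, w = 60, 60
yy, xx = np.mgrid[0:h, 0:w]
rgb = np.zeros((h, w, 3))
shade = 40 * (xx / w)              # luminance variation across the region
rgb[..., 0] = 180 + shade
rgb[..., 1] = 140 + shade
rgb[..., 2] = 130 + shade
wheal = (yy - 30) ** 2 + (xx - 30) ** 2 < 100
rgb[..., 0][wheal] += 30           # wheal: extra red component

cb, cr = rgb_to_cbcr(rgb)          # the shading cancels out of Cb and Cr

# Contrast enhancement: project (Cb, Cr) onto their first principal component.
x = np.stack([cb.ravel(), cr.ravel()], axis=1)
x = x - x.mean(axis=0)
_, _, vt = np.linalg.svd(x, full_matrices=False)
contrast = (x @ vt[0]).reshape(h, w)

# Thresholding stands in for the morphological detection step.
detected = np.abs(contrast) > np.abs(contrast).max() / 2
```

Because the Cb and Cr coefficient rows each sum to zero, any illumination change that is equal across R, G and B cancels exactly, which is why discarding Y removes the luminance variation.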

  9. Improved Imaging Resolution in Desorption Electrospray Ionization Mass Spectrometry

    SciTech Connect

    Kertesz, Vilmos; Van Berkel, Gary J

    2008-01-01

The imaging resolution of desorption electrospray ionization mass spectrometry (DESI-MS) was investigated using printed patterns on paper and thin-layer chromatography (TLC) plate surfaces. Resolution approaching 40 μm was achieved with a typical DESI-MS setup, which is approximately 5 times better than the best resolution reported previously. This improvement was accomplished with careful control of operational parameters (particularly spray tip-to-surface distance, solvent flow rate, and spacing of lane scans). In addition, an appropriately strong analyte/surface interaction and a uniform surface texture on a size scale no larger than the desired imaging resolution were required to achieve this resolution. Overall, conditions providing the smallest possible effective desorption/ionization area in the DESI impact plume region and minimizing analyte redistribution on the surface during analysis led to the improved DESI-MS imaging resolution.

  10. Image quality improvement of polygon computer generated holography.

    PubMed

    Pang, Xiao-Ning; Chen, Ding-Chen; Ding, Yi-Cong; Chen, Yi-Gui; Jiang, Shao-Ji; Dong, Jian-Wen

    2015-07-27

The quality of the holographic reconstruction image is seriously degraded by undesirable messy fringes in polygon-based computer generated holography. Here, several methods are proposed to improve the image quality, including a modified encoding method based on spatial-domain Fraunhofer diffraction and a specific LED light source. A fast Fourier transform is applied to the basic element of the polygon, and fringe-invisible reconstruction is achieved after introducing an initial random phase. Furthermore, we find that an image with satisfactory fidelity and sharp edges can be reconstructed by either an LED with a moderate coherence level or a modulator with small pixel pitch. Satisfactory image quality without obvious speckle noise is observed under the illumination of a bandpass-filter-aided LED. The experimental results are in good agreement with the correlation analysis of the acceptable viewing angle and the coherence length of the light source.

  11. Narrow-band imaging does not improve detection of colorectal polyps when compared to conventional colonoscopy: a randomized controlled trial and meta-analysis of published studies

    PubMed Central

    2011-01-01

Background Colonoscopy may frequently miss polyps and cancers. A number of techniques have emerged to improve visualization and to reduce the adenoma miss rate. Methods We conducted a randomized controlled trial (RCT) in two clinics of the Gastrointestinal Department of the Sanitas University Foundation in Bogota, Colombia. Eligible adult patients presenting for screening or diagnostic elective colonoscopy were randomly allocated to undergo conventional colonoscopy or narrow-band imaging (NBI) during instrument withdrawal by three experienced endoscopists. For the systematic review, studies were identified from the Cochrane Library, PUBMED and LILACS and assessed using the Cochrane risk of bias tool. Results We enrolled a total of 482 patients (62.5% female), with a mean age of 58.33 years (SD 12.91); 241 into the intervention (NBI) colonoscopy group and 241 into the conventional colonoscopy group. Most patients presented for diagnostic colonoscopy (75.3%). The overall rate of polyp detection was significantly higher in the conventional group than in the NBI group (RR 0.75, 95% CI 0.60 to 0.96). However, no significant differences were found in the mean number of polyps (MD -0.1, 95% CI -0.25 to 0.05) or the mean number of adenomas (MD 0.04, 95% CI -0.09 to 0.17). Meta-analysis of studies (regardless of indication) did not find any significant differences in the mean number of polyps (5 RCTs, 2479 participants; WMD -0.07, 95% CI -0.21 to 0.07; I2 68%), the mean number of adenomas (8 RCTs, 3517 participants; WMD -0.08, 95% CI -0.17 to 0.01; I2 62%) or the rate of patients with at least one adenoma (8 RCTs, 3512 participants; RR 0.96, 95% CI 0.88 to 1.04; I2 0%). Conclusion NBI does not improve detection of colorectal polyps when compared to conventional colonoscopy (Australian New Zealand Clinical Trials Registry ACTRN12610000456055). PMID:21943365

  12. Improved Reconstruction for MR Spectroscopic Imaging

    PubMed Central

    Maudsley, Andrew A.

    2009-01-01

    Sensitivity limitations of in vivo magnetic resonance spectroscopic imaging (MRSI) require that the extent of spatial-frequency (k-space) sampling be limited, thereby reducing spatial resolution and increasing the effects of Gibbs ringing that is associated with the use of Fourier transform reconstruction. Additional problems occur in the spectral dimension, where quantitation of individual spectral components is made more difficult by the typically low signal-to-noise ratios, variable lineshapes, and baseline distortions, particularly in areas of significant magnetic field inhomogeneity. Given the potential of in vivo MRSI measurements for a number of clinical and biomedical research applications, there is considerable interest in improving the quality of the metabolite image reconstructions. In this report, a reconstruction method is described that makes use of parametric modeling and MRI-derived tissue distribution functions to enhance the MRSI spatial reconstruction. Additional preprocessing steps are also proposed to avoid difficulties associated with image regions containing spectra of inadequate quality, which are commonly present in the in vivo MRSI data. PMID:17518063

  13. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image," a caching method that permits fast summation of values within rectangular regions of an image and thereby facilitates a wide range of fast image-processing functions. This toolkit is applicable to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. Commercially, FIIAT can support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
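The integral-image structure the toolkit exploits can be sketched in a few lines (a Python illustration of the idea; FIIAT itself is written in C). Once the table is built, any rectangular sum reduces to four lookups:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row and column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
center = box_sum(ii, 1, 1, 3, 3)   # equals img[1:3, 1:3].sum()
```

Because every box filter, Haar-like feature or local statistic becomes a handful of additions, the same table serves many descriptors at once, which is what makes the approach attractive on computationally constrained flight hardware.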

  14. Using Image Analysis to Build Reading Comprehension

    ERIC Educational Resources Information Center

    Brown, Sarah Drake; Swope, John

    2010-01-01

    Content area reading remains a primary concern of history educators. In order to better prepare students for encounters with text, the authors propose the use of two image analysis strategies tied with a historical theme to heighten student interest in historical content and provide a basis for improved reading comprehension.

  15. Improved PCA + LDA Applies to Gastric Cancer Image Classification Process

    NASA Astrophysics Data System (ADS)

    Gan, Lan; Lv, Wenya; Zhang, Xu; Meng, Xiuming

Principal component analysis (PCA) and linear discriminant analysis (LDA) are two of the most widely used pattern recognition methods in the field of feature extraction, and PCA + LDA is often used in image recognition. Here, we apply PCA + LDA to gastric cancer image feature classification. Although the traditional PCA + LDA dimension reduction method performs well on dimensionality reduction and clustering of the training samples, its performance on the test samples is very poor; that is, traditional PCA + LDA has a generalization problem on test samples. To solve this problem, this paper proposes an improved PCA + LDA method that mainly modifies the LDA transform; it improves on traditional PCA + LDA, increases the generalization performance of LDA on test samples and increases the classification accuracy on test samples. Experiments show that the method achieves good clustering.
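The baseline PCA + LDA pipeline that the paper improves on can be sketched with scikit-learn. Synthetic features stand in for gastric-image feature vectors, and this shows the standard pipeline, not the paper's modified LDA transform:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for image feature vectors: 200 samples,
# 50 features, 2 classes.
X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# PCA reduces dimensionality first; LDA then finds the discriminative
# projection in the reduced space.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
model.fit(X_tr, y_tr)
test_acc = model.score(X_te, y_te)   # generalization is judged on held-out data
```

Evaluating on a held-out test set, as above, is exactly where the generalization problem the paper targets shows up: training accuracy can be high while test accuracy drops.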

  16. Deep Learning in Medical Image Analysis

    PubMed Central

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2016-01-01

Computer-assisted analysis for better interpretation of images has been a longstanding issue in the medical imaging field. On the image-understanding front, recent advances in machine learning, especially deep learning, have made a big leap in helping identify, classify, and quantify patterns in medical images. Specifically, exploiting hierarchical feature representations learned solely from data, instead of handcrafted features designed mostly on the basis of domain-specific knowledge, lies at the core of these advances. In this way, deep learning is rapidly proving to be the state-of-the-art foundation, achieving enhanced performance in various medical applications. In this article, we introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by raising research issues and suggesting future directions for further improvements. PMID:28301734

  17. Single-image molecular analysis for accelerated fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Wang, Yan Mei

    2011-03-01

    We have developed a new single-molecule fluorescence imaging analysis method, SIMA, to improve the temporal resolution of single-molecule localization and tracking studies to millisecond timescales without compromising the nanometer range spatial resolution [1,2]. In this method, the width of the fluorescence intensity profile of a static or mobile molecule, imaged using submillisecond to milliseconds exposure time, is used for localization and dynamics analysis. We apply this method to three single-molecule studies: (1) subdiffraction molecular separation measurements, (2) axial localization precision measurements, and (3) protein diffusion coefficient measurements in free solution. Applications of SIMA in flagella IFT particle analysis, localizations of UgtP (a cell division regulator protein) in live cells, and diffusion coefficient measurement of LacI in vitro and in vivo will be discussed.

  18. Image analysis for DNA sequencing

    NASA Astrophysics Data System (ADS)

    Palaniappan, Kannappan; Huang, Thomas S.

    1991-07-01

    There is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA such as the Human and Mouse Genome projects. In one class of gel-based sequencing protocols autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automation of the manual autoradiograph reading step through image analysis in order to reduce the length of time required to obtain sequence data and reduce transcription errors. Various adaptive image enhancement, segmentation and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image such as noise, background signal, or presence of edges. Once the two-dimensional data is converted to a set of aligned one-dimensional profiles waveform analysis is used to determine the location of each band which represents one nucleotide in the sequence. Different classification strategies including a rule-based approach are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.
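The band-location step described above, finding peaks in an aligned one-dimensional lane profile, can be sketched as follows. The profile is synthetic, and `scipy.signal.find_peaks` stands in for the paper's waveform analysis:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic lane profile: four Gaussian "bands" on a noisy background,
# standing in for one aligned autoradiograph lane.
rng = np.random.default_rng(3)
x = np.arange(500)
band_centers = [80, 170, 290, 400]
profile = sum(np.exp(-0.5 * ((x - c) / 6.0) ** 2) for c in band_centers)
profile = profile + rng.normal(0, 0.02, x.size)

# Each detected peak marks one band, i.e. one nucleotide position
# in the lane; height and minimum-distance constraints reject noise.
peaks, _ = find_peaks(profile, height=0.5, distance=20)
```

Real autoradiographs add background trends, variable band spacing and overlapping bands, which is why the paper combines waveform analysis with rule-based classification and falls back to the 2D image data when profiles are ambiguous.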

  19. Multidimensional Image Analysis for High Precision Radiation Therapy.

    PubMed

    Arimura, Hidetaka; Soufi, Mazen; Haekal, Mohammad

    2017-01-01

High precision radiation therapy (HPRT) has been improved by utilizing conventional image engineering technologies. However, different frameworks are necessary for further improvement of HPRT. This review paper attempts to define the multidimensional image and multidimensional image analysis, which may be feasible approaches for increasing the accuracy of HPRT. A number of studies in the radiation therapy field are introduced to illustrate multidimensional image analysis. Multidimensional image analysis could greatly assist clinical staff in radiation therapy planning, treatment, and prediction of treatment outcomes.

  20. Methodological improvements in voxel-based analysis of diffusion tensor images: applications to study the impact of apolipoprotein E on white matter integrity.

    PubMed

    Newlander, Shawn M; Chu, Alan; Sinha, Usha S; Lu, Po H; Bartzokis, George

    2014-02-01

    To identify regional differences in apparent diffusion coefficient (ADC) and fractional anisotropy (FA) using customized preprocessing before voxel-based analysis (VBA) in 14 normal subjects with the specific genes that decrease (apolipoprotein [APO] E ε2) and that increase (APOE ε4) the risk of Alzheimer's disease. Diffusion tensor images (DTI) acquired at 1.5 Tesla were denoised with a total variation tensor regularization algorithm before affine and nonlinear registration to generate a common reference frame for the image volumes of all subjects. Anisotropic and isotropic smoothing with varying kernel sizes was applied to the aligned data before VBA to determine regional differences between cohorts segregated by allele status. VBA on the denoised tensor data identified regions of reduced FA in APOE ε4 compared with the APOE ε2 healthy older carriers. The most consistent results were obtained using the denoised tensor and anisotropic smoothing before statistical testing. In contrast, isotropic smoothing identified regional differences for small filter sizes alone, emphasizing that this method introduces bias in FA values for higher kernel sizes. Voxel-based DTI analysis can be performed on low signal to noise ratio images to detect subtle regional differences in cohorts using the proposed preprocessing techniques. Copyright © 2013 Wiley Periodicals, Inc.

  1. METHODOLOGICAL IMPROVEMENTS IN VOXEL BASED ANALYSIS OF DIFFUSION TENSOR IMAGES: APPLICATIONS TO STUDY THE IMPACT OF APOE ON WHITE MATTER INTEGRITY

    PubMed Central

    Newlander, Shawn M.; Chu, Alan; Sinha, Usha S.; Lu, Po H; Bartzokis, George

    2013-01-01

    Purpose To identify regional differences in apparent diffusion coefficient (ADC) and fractional anisotropy (FA) using customized preprocessing prior to voxel-based analysis (VBA) in 14 normal subjects with the specific genes that decrease (APOE ε2) and that increase (APOE ε4) the risk of Alzheimer’s Disease. Materials and Methods Diffusion Tensor images (DTI) acquired at 1.5 T were denoised with a total variation tensor regularization algorithm prior to affine and nonlinear registration to generate a common reference frame for the image volumes of all subjects. Anisotropic and isotropic smoothing with varying kernel sizes was applied to the aligned data prior to VBA to determine regional differences between cohorts segregated by allele status. Results VBA on the denoised tensor data identified regions of reduced FA in APOE ε4 compared to the APOE ε2 healthy older carriers. The most consistent results were obtained using the denoised tensor and anisotropic smoothing prior to statistical testing. In contrast, isotropic smoothing identified regional differences for small filter sizes alone, emphasizing that this method introduces bias in FA values for higher kernel sizes. Conclusion Voxel-based DTI analysis can be performed on low SNR images to detect subtle regional differences in cohorts using the proposed pre-processing techniques. PMID:23589355

  2. Optoacoustic imaging system with improved collection efficiency

    NASA Astrophysics Data System (ADS)

    Tsyboulski, Dmitri; Conjusteau, André; Ermilov, Sergey A.; Brecht, Hans-Peter F.; Liopo, Anton; Su, Richard; Nadvoretsky, Vyacheslav; Oraevsky, Alexander A.

    2011-03-01

We introduce a novel experimental design for non-invasive scanning optoacoustic microscopy that utilizes a parabolic surface for ultrasound focusing. We demonstrate that off-axis parabolic mirrors made of sufficiently high acoustic impedance materials work as ideal reflectors over a wide range of apertures and provide lossless conversion of a spherical acoustic wavefront into a plane wave. We further test the performance of a custom optoacoustic imaging setup which was developed and built based on these principles. The achieved resolution limit of 0.3 mm, with an NA of 0.5 and a transducer bandwidth of 5 MHz, matches the resolution limit defined by diffraction. Although further improvements of the current experimental setup are required to achieve resolution similar to that of leading microscopy systems, this proof-of-concept work demonstrates the viability of the proposed design for optoacoustic microscopy applications.
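The quoted 0.3 mm figure is consistent with the acoustic diffraction limit, as a quick check shows. The ~1500 m/s speed of sound in tissue/water is an assumed value, not stated in the abstract, and the Rayleigh form of the limit is one common convention:

```python
# Acoustic diffraction-limited lateral resolution, Rayleigh criterion:
#   d = 0.61 * wavelength / NA,  with wavelength = c / f
c = 1500.0   # m/s, assumed speed of sound in tissue/water
f = 5.0e6    # Hz, transducer bandwidth from the abstract
na = 0.5     # numerical aperture from the abstract

wavelength = c / f               # 0.3 mm acoustic wavelength
d_mm = 0.61 * wavelength / na * 1000.0   # ~0.37 mm, same order as reported
```

The reported 0.3 mm limit sits right at this scale, supporting the claim that the setup is diffraction-limited rather than limited by the parabolic reflector.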

  3. Anmap: Image and data analysis

    NASA Astrophysics Data System (ADS)

    Alexander, Paul; Waldram, Elizabeth; Titterington, David; Rees, Nick

    2014-11-01

    Anmap analyses and processes images and spectral data. Originally written for use in radio astronomy, much of its functionality is applicable to other disciplines; additional algorithms and analysis procedures allow direct use in, for example, NMR imaging and spectroscopy. Anmap emphasizes the analysis of data to extract quantitative results for comparison with theoretical models and/or other experimental data. To achieve this, Anmap provides a wide range of tools for analysis, fitting and modelling (including standard image and data processing algorithms). It also provides a powerful environment for users to develop their own analysis/processing tools either by combining existing algorithms and facilities with the very powerful command (scripting) language or by writing new routines in FORTRAN that integrate seamlessly with the rest of Anmap.

  4. Image Analysis and Modeling

    DTIC Science & Technology

    1975-08-01

bands which maximizes the mutual information between the remaining bands. In order to compute the mutual information between spectral bands, the...University Sponsored by Defense Advanced Research Projects Agency ARPA Order No. 2893 Approved for public release; distribution unlimited...Improved versions of BLOB, and in the meantime looking into techniques which would give results that are independent of the order of processing. One

  5. Multispectral Imaging Broadens Cellular Analysis

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  6. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
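The abstract refers to the Power Mac G5's vector-processing hardware (AltiVec). As a hedged analogy rather than the authors' code, numpy's vectorized array operations illustrate the same idea of replacing a per-pixel loop with a single vectorized pass:

```python
import numpy as np

def threshold_loop(img, t):
    """Scalar per-pixel thresholding: the 'conventional method'."""
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = 1.0 if img[i, j] > t else 0.0
    return out

def threshold_vec(img, t):
    """Vectorized thresholding: one comparison over the whole array."""
    return (img > t).astype(img.dtype)

img = np.random.rand(64, 64)
same = np.array_equal(threshold_loop(img, 0.5), threshold_vec(img, 0.5))
```

Both produce identical masks; the vectorized form runs in a single pass of compiled code, which is the kind of speedup the abstract describes.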

  7. Multivariate image analysis in biomedicine.

    PubMed

    Nattkemper, Tim W

    2004-10-01

In recent years, multivariate imaging techniques have been developed and applied in biomedical research to an increasing degree. In research projects and clinical studies alike, m-dimensional multivariate images (MVI) are recorded and stored in databases for subsequent analysis. The complexity of the m-dimensional data and the growing number of high-throughput applications call for new strategies for applying image processing and data mining to support direct interactive analysis by human experts. This article provides an overview of proposed approaches for MVI analysis in biomedicine. After summarizing the biomedical MVI techniques, a two-level framework for MVI analysis is illustrated. Following this framework, state-of-the-art solutions from the fields of image processing and data mining are reviewed and discussed. Motivations for MVI data mining in biology and medicine are characterized, followed by an overview of graphical and auditory approaches for interactive data exploration. The paper concludes by summarizing open problems in MVI analysis and remarking upon the future development of biomedical MVI analysis.

  8. Restriction Spectrum Imaging Improves Risk Stratification in Patients with Glioblastoma.

    PubMed

    Krishnan, A P; Karunamuni, R; Leyden, K M; Seibert, T M; Delfanti, R L; Kuperman, J M; Bartsch, H; Elbe, P; Srikant, A; Dale, A M; Kesari, S; Piccioni, D E; Hattangadi-Gluth, J A; Farid, N; McDonald, C R; White, N S

    2017-05-01

    ADC as a marker of tumor cellularity has been promising for evaluating the response to therapy in patients with glioblastoma but does not successfully stratify patients according to outcomes, especially in the upfront setting. Here we investigate whether restriction spectrum imaging, an advanced diffusion imaging model, performed after an operation but before radiation therapy, could improve risk stratification in patients with newly diagnosed glioblastoma relative to ADC. Pre-radiation therapy diffusion-weighted and structural imaging of 40 patients with glioblastoma were examined retrospectively. Restriction spectrum imaging and ADC-based hypercellularity volume fraction (restriction spectrum imaging-FLAIR volume fraction, restriction spectrum imaging-contrast-enhanced volume fraction, ADC-FLAIR volume fraction, ADC-contrast-enhanced volume fraction) and intensities (restriction spectrum imaging-FLAIR 90th percentile, restriction spectrum imaging-contrast-enhanced 90th percentile, ADC-FLAIR 10th percentile, ADC-contrast-enhanced 10th percentile) within the contrast-enhanced and FLAIR hyperintensity VOIs were calculated. The association of diffusion imaging metrics, contrast-enhanced volume, and FLAIR hyperintensity volume with progression-free survival and overall survival was evaluated by using Cox proportional hazards models. Among the diffusion metrics, restriction spectrum imaging-FLAIR volume fraction was the strongest prognostic metric of progression-free survival (P = .036) and overall survival (P = .007) in a multivariate Cox proportional hazards analysis, with higher values indicating earlier progression and shorter survival. Restriction spectrum imaging-FLAIR 90th percentile was also associated with overall survival (P = .043), with higher intensities, indicating shorter survival. None of the ADC metrics were associated with progression-free survival/overall survival. 
Contrast-enhanced volume exhibited a trend toward significance for overall survival (P

  9. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram, and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed. Program summary: Title of program: HAWGC. Catalogue identifier: ADXG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Mobile Intel Pentium III, AMD Duron. Installations: no installation necessary; executable file together with necessary files for LabVIEW Run-time engine. Operating systems or monitors under which the program has been tested: Windows ME/2000/XP. Programming language used: LabVIEW 7.0. Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for
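The histogram parameters listed above are standard descriptive moments. HAWGC itself is LabVIEW code, so the following numpy sketch is only an illustration of the same quantities:

```python
import numpy as np

def histogram_stats(gray):
    """Descriptive statistics of an integer-valued greyscale image."""
    x = np.asarray(gray, dtype=float).ravel()
    mean, std = x.mean(), x.std()
    centred = x - mean
    return {
        "mean": mean,
        "std": std,
        "variance": x.var(),
        "min": x.min(),
        "max": x.max(),
        "mode": int(np.bincount(x.astype(int)).argmax()),
        "skewness": np.mean(centred ** 3) / std ** 3,
        "kurtosis": np.mean(centred ** 4) / std ** 4 - 3.0,  # excess kurtosis
        "median": float(np.median(x)),
    }

stats = histogram_stats(np.array([[0, 1, 1], [2, 1, 3]]))
```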

  10. Dynamic and still microcirculatory image analysis for quantitative microcirculation research

    NASA Astrophysics Data System (ADS)

    Ying, Xiaoyou; Xiu, Rui-juan

    1994-05-01

Based on analyses of various types of digital microcirculatory image (DMCI), we summarized the image features of DMCI, the digitizing demands for digital microcirculatory imaging, and the basic characteristics of DMCI processing. A dynamic and still imaging separation processing (DSISP) mode was designed for developing a DMCI workstation and the DMCI processing. The original images in this study were clinical microcirculatory images from human finger nail-bed and conjunctiva microvasculature, and intravital microvascular network images from animal tissues or organs. A series of dynamic and still microcirculatory image analysis functions were developed in this study. The experimental results indicate that most of the established analog video image analysis methods for microcirculatory measurement can be realized in a more flexible way based on DMCI. More information can be rapidly extracted from the quality-improved DMCI by employing intelligent digital image analysis methods. The DSISP mode is very suitable for building a DMCI workstation.

  11. Analyzing and Improving Image Quality in Reflective Ghost Imaging

    DTIC Science & Technology

    2011-02-01

imaging." Phys. Rev. A 79, 023833 (2009). [7] R. E. Meyers, K. S. Deacon, and Y. Shih, "Ghost-imaging experiment by measuring reflected photons," Phys. Rev. A 77, 041801 (2008). [8] R. E. Meyers and K. S. Deacon, "Quantum ghost imaging experiments at ARL," Proc. SPIE 7815, 781501 (2010). [9] J. H

  12. Figure content analysis for improved biomedical article retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Apostolova, Emilia; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2009-01-01

Biomedical images are invaluable in medical education and in establishing clinical diagnoses. Clinical decision support (CDS) can be improved by combining biomedical text with automatically annotated images extracted from relevant biomedical publications. In a previous study on the feasibility of automatically classifying images for usefulness in finding clinical evidence, we reported 76.6% accuracy using supervised machine learning that combined figure captions and image content. Image content extraction is traditionally applied to entire images or to pre-determined image regions. Figure images in articles vary greatly, which limits the benefit of whole-image extraction beyond gross categorization for CDS. However, text annotations and pointers on the images indicate regions of interest (ROI) that are then referenced in the caption or in the discussion in the article text. We previously reported 72.02% accuracy in localizing text and symbols, but failed to take advantage of the referenced image locality. In this work we combine article text analysis and figure image analysis to localize pointers (arrows, symbols) and extract the ROIs they point to, which can then be used to measure meaningful image content and associate it with the identified biomedical concepts for improved (text and image) content-based retrieval of biomedical articles. Biomedical concepts are identified using the National Library of Medicine's Unified Medical Language System (UMLS) Metathesaurus. Our methods achieve an average precision and recall of 92.3% and 75.3%, respectively, in identifying pointing symbols in images from a randomly selected image subset made available through the ImageCLEF 2008 campaign.
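The reported precision and recall figures use the standard definitions. For reference, here is a small sketch of how they are computed from detection counts; the counts below are hypothetical, chosen only for illustration, and are not the paper's data:

```python
def precision_recall(tp, fp, fn):
    """Precision: fraction of detected pointers that are correct.
    Recall: fraction of true pointers that were detected."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts for illustration only:
p, r = precision_recall(tp=120, fp=10, fn=40)
```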

  13. Deep Learning in Medical Image Analysis.

    PubMed

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-03-09

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement. Expected final online publication date for the Annual Review of Biomedical Engineering Volume 19 is June 4, 2017. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.

  14. A Unified Mathematical Approach to Image Analysis.

    DTIC Science & Technology

    1987-08-31

describes four instances of the paradigm in detail. Directions for ongoing and future research are also indicated. Keywords: Image processing; Algorithms; Segmentation; Boundary detection; Tomography; Global image analysis.

  15. Contrast-enhanced magnetic resonance angiography in carotid artery disease: does automated image registration improve image quality?

    PubMed

    Menke, Jan; Larsen, Jörg

    2009-05-01

Contrast-enhanced magnetic resonance angiography (MRA) is a noninvasive imaging alternative to digital subtraction angiography (DSA) for patients with carotid artery disease. In DSA, image quality can be improved by shifting the mask image if the patient has moved during angiography. This study investigated whether such image registration may also help to improve the image quality of carotid MRA. Data from 370 carotid MRA examinations of patients likely to have carotid artery disease were prospectively collected. The standard nonregistered MRAs were compared to automatically registered MRAs (linear, affine and warp registration) using three image quality parameters: the vessel detection probability (VDP) in maximum intensity projection (MIP) images, the contrast-to-noise ratio (CNR) in MIP images, and the contrast-to-noise ratio in three-dimensional image volumes. A body shift of less than 1 mm occurred in 96.2% of cases. Analysis of variance revealed no significant influence of image registration and body shift on image quality (p > 0.05). In conclusion, standard contrast-enhanced carotid MRA usually requires no image registration to improve image quality and is generally robust against any naturally occurring body shift.
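The abstract does not give its exact CNR formula; a common definition, sketched here as an assumption, divides the vessel-background mean difference by the background noise:

```python
import numpy as np

def cnr(vessel_roi, background_roi):
    """Contrast-to-noise ratio between a vessel ROI and a background ROI
    (one common definition; the paper's exact formula is not stated)."""
    v = np.asarray(vessel_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return (v.mean() - b.mean()) / b.std()

value = cnr([10.0, 12.0, 14.0], [2.0, 4.0])  # (12 - 3) / 1 = 9.0
```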

  16. Job Analysis for Continuous Improvement.

    ERIC Educational Resources Information Center

    Weiser, William E.; Herrmann, Barbara Ann

    This booklet describes Job Analysis for Continuous Improvement (JACI), a five-step process incorporating strategic planning, job task identification, needs assessment, solutions, and evaluation. The JACI was developed as a result of a collaboration between the Minnesota Technical College System and 3M's Corporate Plant Engineering Services…

  17. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    SciTech Connect

    STOYANOVA,R.S.; OCHS,M.F.; BROWN,T.R.; ROONEY,W.D.; LI,X.; LEE,J.H.; SPRINGER,C.S.

    1999-05-22

Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques. In these techniques each pixel is independently fit to an exponential recovery, and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They interpret the three images as spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content.
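The PCA step described above treats the image series as a matrix of pixel recovery curves. A minimal numpy sketch of that decomposition (illustrative only, not the authors' implementation):

```python
import numpy as np

def pca_image_series(stack):
    """PCA of a (T, H, W) image series: centre each pixel's recovery
    curve over time, then decompose with an SVD. Returns the
    principal-component ('eigen') images, the per-time-point weights,
    and the mean image."""
    T, H, W = stack.shape
    X = stack.reshape(T, H * W).astype(float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return Vt.reshape(-1, H, W), U * S, mean.reshape(H, W)

# Synthetic rank-1 series: one spatial pattern scaled over 'recovery times'.
pattern = np.arange(6.0).reshape(2, 3)
stack = np.array([1.0, 2.0, 4.0])[:, None, None] * pattern
pcs, weights, mean_img = pca_image_series(stack)

# Weights times eigenimages (plus the mean) reconstruct the series exactly.
recon = (weights @ pcs.reshape(3, -1)).reshape(3, 2, 3) + mean_img
```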

  18. Multifluorescence 2D gel imaging and image analysis.

    PubMed

    Vormbrock, Ingo; Hartwig, Sonja; Lehr, Stefan

    2012-01-01

    Although image acquisition and analysis are crucial steps within the multifluorescence two-dimensional gel electrophoresis workflow, some basics are frequently not carried out with the necessary diligence. This chapter should help to prevent easily avoidable failures during imaging and image preparation for comparative protein analysis.

  19. Improving resolution of optical coherence tomography for imaging of microstructures

    NASA Astrophysics Data System (ADS)

    Shen, Kai; Lu, Hui; Wang, James H.; Wang, Michael R.

    2015-03-01

Multi-frame superresolution technique has been used to improve the lateral resolution of spectral domain optical coherence tomography (SD-OCT) for imaging of 3D microstructures. By adjusting the voltages applied to the x and y galvanometer scanners in the measurement arm, small lateral imaging positional shifts have been introduced among different C-scans. Utilizing the extracted x-y plane en face image frames from these specially offset C-scan image sets at the same axial position, we have reconstructed the lateral high resolution image by the efficient multi-frame superresolution technique. To further improve the image quality, we applied the latest K-SVD and bilateral total variation denoising algorithms to the raw SD-OCT lateral images before and along with the superresolution processing, respectively. The performance of the SD-OCT of improved lateral resolution is demonstrated by 3D imaging a microstructure fabricated by photolithography and a double-layer microfluidic device.

  20. UV imaging in pharmaceutical analysis.

    PubMed

    Østergaard, Jesper

    2017-08-01

    UV imaging provides spatially and temporally resolved absorbance measurements, which are highly useful in pharmaceutical analysis. Commercial UV imaging instrumentation was originally developed as a detector for separation sciences, but the main use is in the area of in vitro dissolution and release testing studies. The review covers the basic principles of the technology and summarizes the main applications in relation to intrinsic dissolution rate determination, excipient compatibility studies and in vitro release characterization of drug substances and vehicles intended for parenteral administration. UV imaging has potential for providing new insights to drug dissolution and release processes in formulation development by real-time monitoring of swelling, precipitation, diffusion and partitioning phenomena. Limitations of current instrumentation are discussed and a perspective to new developments and opportunities given as new instrumentation is emerging. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Improvement of passive THz camera images

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

Terahertz technology is one of the emerging technologies with the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited region of the electromagnetic spectrum, owing to the difficulty of generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials; however, automated processing of THz images can be challenging. The THz frequency band is especially suited for penetrating clothing, and because this radiation has no harmful ionizing effects it is safe for human beings. Strong technological development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a challenging and urgent topic, and digital THz image processing is a promising and cost-effective approach for demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects (guns, knives and bombs) hidden under popular types of clothing.

  2. Method for improving visualization of infrared images

    NASA Astrophysics Data System (ADS)

    Cimbalista, Mario

    2014-05-01

Thermography differs in an extremely important way from other visual image-converting electronic systems, such as X-rays or ultrasound: the infrared camera operator usually spends hour after hour looking only at infrared images, sometimes several intermittent hours a day if not six or more continuous hours. This operational characteristic has a very important impact on yield, precision, errors and misinterpretation of infrared image content. Despite great hardware development over the last fifty years, quality infrared thermography still lacks a solution to these problems. The physiology of the human eye has not evolved to see infrared radiation, nor does the brain have the capability to understand and decode infrared information. Chemical processes inside the human eye and the distribution of functional cells, as well as the cognitive-perceptual impact of images, play a crucial role in the perception, detection and other steps of dealing with infrared images. The system presented here, called ThermoScala and patented in the USA, solves this problem using a coding process applicable to an original infrared image, generated from any value matrix from any kind of infrared camera, to make it much more suitable for human use, causing a substantial difference in the way the retina and the brain process the resulting images. The result is a much less exhausting way to see, identify and interpret infrared images generated by any infrared camera that uses this conversion process.

  3. Improving Underwater Imaging with Ocean Optics Research

    DTIC Science & Technology

    2008-01-01

imaging model was successfully validated by comparing the visibility of the Secchi disk under the current model to the visibility predicted by...simulations. [Sponsored by NRL and ONR] References: 1. W. Hou, Z. Lee, and A. Weidemann, "Why Does the Secchi Disk Disappear? An Imaging Perspective," Opt

  4. An Improved Piecewise Linear Chaotic Map Based Image Encryption Algorithm

    PubMed Central

    Hu, Yuping; Wang, Zhijian

    2014-01-01

An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Owing to the chaotic system's sensitivity to initial key values and system parameters, and to its ergodicity, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are processed not in index order but alternately from the beginning and the end. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attack. PMID:24592159

  5. An improved piecewise linear chaotic map based image encryption algorithm.

    PubMed

    Hu, Yuping; Zhu, Congxu; Wang, Zhijian

    2014-01-01

An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Owing to the chaotic system's sensitivity to initial key values and system parameters, and to its ergodicity, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are processed not in index order but alternately from the beginning and the end. Cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attack.
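The abstract does not give the exact form of the modified map (MPWLCM). As a sketch, the classical piecewise linear chaotic map below shows how such a map turns a key (initial value plus control parameter) into a sensitive pseudorandom sequence of the kind used to drive permutation and diffusion:

```python
def pwlcm(x, p):
    """One iteration of the classical piecewise linear chaotic map on (0, 1)
    with control parameter 0 < p < 0.5. (Textbook PWLCM, not the paper's
    modified variant, whose exact form the abstract does not state.)"""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)  # map is symmetric about x = 0.5

def keystream(x0, p, n):
    """Iterate the map n times from key (x0, p) to get a driving sequence."""
    seq, x = [], x0
    for _ in range(n):
        x = pwlcm(x, p)
        seq.append(x)
    return seq

s1 = keystream(0.300000000, 0.25, 64)
s2 = keystream(0.300000001, 0.25, 64)  # tiny change in the key
```

A tiny change in the initial key value produces a rapidly diverging sequence, which is the sensitivity property the encryption scheme relies on.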

  6. A computational image analysis glossary for biologists.

    PubMed

    Roeder, Adrienne H K; Cunha, Alexandre; Burl, Michael C; Meyerowitz, Elliot M

    2012-09-01

    Recent advances in biological imaging have resulted in an explosion in the quality and quantity of images obtained in a digital format. Developmental biologists are increasingly acquiring beautiful and complex images, thus creating vast image datasets. In the past, patterns in image data have been detected by the human eye. Larger datasets, however, necessitate high-throughput objective analysis tools to computationally extract quantitative information from the images. These tools have been developed in collaborations between biologists, computer scientists, mathematicians and physicists. In this Primer we present a glossary of image analysis terms to aid biologists and briefly discuss the importance of robust image analysis in developmental studies.

  7. Research on super-resolution image reconstruction based on an improved POCS algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Haiming; Miao, Hong; Yang, Chong; Xiong, Cheng

    2015-07-01

Super-resolution image reconstruction (SRIR) can improve the resolution of blurred images, addressing insufficient spatial resolution, excessive noise, and low image quality. Firstly, we introduce the image degradation model to show that, in essence, the super-resolution reconstruction process is an ill-posed inverse problem in mathematics. Secondly, we analyse the causes of blurring in the optical imaging process (light diffraction and small-angle scattering being the main ones) and propose a point spread function estimation method and an improved projection onto convex sets (POCS) algorithm; by analysing the changes between the time-domain and frequency-domain behaviour of the algorithm during reconstruction, we demonstrate its effectiveness and show that the improved POCS algorithm, based on prior knowledge, can restore and approach the high-frequency content of the original image scene. Finally, we apply the algorithm to reconstruct synchrotron radiation computed tomography (SRCT) images, and then use these images to reconstruct three-dimensional slice images. Comparing the original method and the super-resolution algorithm, it is clear that the improved POCS algorithm can suppress noise and enhance image resolution, indicating that the algorithm is effective. This study of super-resolution image reconstruction by the improved POCS algorithm proves it to be an effective method with broad application prospects, for example in medical CT image processing and in SRCT analysis of microstructure evolution during ceramic sintering.
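At the heart of any POCS reconstruction is projecting the current high-resolution estimate onto the convex set of signals consistent with an observation. A minimal 1D sketch with a box-average downsampling model (an assumption for illustration; the paper estimates the point spread function from the data):

```python
import numpy as np

def project_onto_observation(hr, lr, scale):
    """Project an HR signal onto the set {x : box-downsample(x) == lr}:
    shift each length-`scale` block so its mean matches the LR sample."""
    hr = np.asarray(hr, dtype=float).copy()
    for k, obs in enumerate(np.asarray(lr, dtype=float)):
        block = hr[k * scale:(k + 1) * scale]
        block += obs - block.mean()  # in-place correction of this block
    return hr

lr = np.array([1.0, 3.0])
hr0 = np.zeros(4)  # crude initial HR estimate
hr1 = project_onto_observation(hr0, lr, scale=2)
```

After the projection, downsampling `hr1` reproduces `lr` exactly; a full POCS loop cycles such projections over all shifted low-resolution observations (and any prior-knowledge constraint sets) until the estimate stabilizes.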

  8. Image analysis in medical imaging: recent advances in selected examples

    PubMed Central

    Dougherty, G

    2010-01-01

    Medical imaging has developed into one of the most important fields within scientific imaging due to the rapid and continuing progress in computerised medical image visualisation and advances in analysis methods and computer-aided diagnosis. Several research applications are selected to illustrate the advances in image analysis algorithms and visualisation. Recent results, including previously unpublished data, are presented to illustrate the challenges and ongoing developments. PMID:21611048

  9. Uncooled thermal imaging and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Shiyun; Chang, Benkang; Yu, Chunyu; Zhang, Junju; Sun, Lianjun

    2006-09-01

A thermal imager converts temperature differences into differences in electrical signal level, and so can be applied in medicine, for example to estimate blood flow speed and vessel location [1] or to assess pain [2]. As uncooled focal plane array (UFPA) technology matures, some simple medical functions can be performed with an uncooled thermal imager, for example rapid warning of fever, as with SARS. This requires stable imaging performance and sufficiently high spatial and temperature resolution. Among all performance parameters, noise equivalent temperature difference (NETD) is often used as the criterion of overall performance. The 320 x 240 α-Si micro-bolometer UFPA is at present widely applied for its stable performance and sensitive response. In this paper, the NETD of UFPAs and the relation between NETD and temperature are researched; several vital parameters that can affect NETD are listed and a universal formula is presented. Finally, images from this kind of thermal imager are analyzed for the purpose of detecting persons with fever. An applied thermal image intensification method is introduced.

  10. Imaging analysis of LDEF craters

    NASA Technical Reports Server (NTRS)

    Radicatidibrozolo, F.; Harris, D. W.; Chakel, J. A.; Fleming, R. H.; Bunch, T. E.

    1991-01-01

    Two small craters in Al from the Long Duration Exposure Facility (LDEF) experiment tray A11E00F (no. 74, 119 micron diameter and no. 31, 158 micron diameter) were analyzed using Auger electron spectroscopy (AES), time-of-flight secondary ion mass spectroscopy (TOF-SIMS), low voltage scanning electron microscopy (LVSEM), and SEM energy dispersive spectroscopy (EDS). High resolution images and sensitive elemental and molecular analysis were obtained with this combined approach. The result of these analyses are presented.

  11. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter

    PubMed Central

    Zhu, Ganzheng; Li, Siqi; Gong, Shang; Yang, Benqiang; Zhang, Libo

    2016-01-01

Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and the guided image filter (GIF). Firstly, preprocessing including stain normalization and wavelet denoising is performed on the Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light on the high-frequency part of the image and to correct its intensity inhomogeneity and detail discontinuity. Next, the HDR pathological image is generated by a least-squares method from the low dynamic range (LDR) image and the H and E channel images. Finally, the fine enhanced image is acquired after a detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of our proposed method compared with related work. PMID:28116303

  12. HDR Pathological Image Enhancement Based on Improved Bias Field Correction and Guided Image Filter.

    PubMed

    Sun, Qingjiao; Jiang, Huiyan; Zhu, Ganzheng; Li, Siqi; Gong, Shang; Yang, Benqiang; Zhang, Libo

    2016-01-01

Pathological image enhancement is a significant topic in the field of pathological image processing. This paper proposes a high dynamic range (HDR) pathological image enhancement method based on improved bias field correction and the guided image filter (GIF). Firstly, preprocessing including stain normalization and wavelet denoising is performed on the Haematoxylin and Eosin (H and E) stained pathological image. Then, an improved bias field correction model is developed to enhance the influence of light on the high-frequency part of the image and to correct its intensity inhomogeneity and detail discontinuity. Next, the HDR pathological image is generated by a least-squares method from the low dynamic range (LDR) image and the H and E channel images. Finally, the fine enhanced image is acquired after a detail enhancement process. Experiments with 140 pathological images demonstrate the performance advantages of our proposed method compared with related work.
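The guided image filter used above has a compact closed form (He et al.'s grayscale formulation). A small numpy sketch of that standard filter, using a deliberately naive box filter for clarity rather than speed; this illustrates the GIF building block, not the paper's full pipeline:

```python
import numpy as np

def box(x, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded (naive version)."""
    x = np.asarray(x, dtype=float)
    xp = np.pad(x, r, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-3):
    """Edge-preserving smoothing of input p guided by image I."""
    mI, mp = box(I, r), box(p, r)
    cov = box(I * p, r) - mI * mp  # local covariance of guide and input
    var = box(I * I, r) - mI * mI  # local variance of the guide
    a = cov / (var + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)  # q = mean(a)*I + mean(b)

# A constant input passes through unchanged, whatever the guide:
I = np.random.rand(8, 8)
q = guided_filter(I, np.full((8, 8), 5.0))
```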

  13. Fast image analysis in polarization SHG microscopy.

    PubMed

    Amat-Roldan, Ivan; Psilodimitrakopoulos, Sotiris; Loza-Alvarez, Pablo; Artigas, David

    2010-08-02

    Pixel resolution polarization-sensitive second harmonic generation (PSHG) imaging has recently been shown to be a promising imaging modality, largely enhancing the capabilities of conventional intensity-based SHG microscopy. PSHG is able to obtain structural information from the elementary SHG-active structures, which play an important role in many biological processes. Although the technique is of major interest, acquiring such information requires long offline processing, even with current computers. In this paper, we present an approach based on Fourier analysis of the anisotropy signature that allows processing PSHG images in less than a second on standard single-core computers. This represents a temporal improvement of several orders of magnitude compared to conventional fitting algorithms. It opens up the possibility of fast PSHG information with the subsequent benefit of potential use in medical applications.
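
    The speed-up comes from replacing per-pixel curve fitting with a single Fourier transform along the polarization axis. A minimal numpy sketch of that idea (the array layout, normalization, and synthetic test signal are illustrative assumptions, not the authors' code):

```python
import numpy as np

def pshg_fourier_coeffs(stack):
    """Per-pixel Fourier analysis of a PSHG polarization stack.

    stack: (N, H, W) intensities recorded at N evenly spaced input
    polarization angles spanning 180 degrees. Returns the DC term and
    the complex 2-theta and 4-theta Fourier coefficients per pixel.
    """
    n = stack.shape[0]
    f = np.fft.fft(stack, axis=0) / n   # one FFT replaces per-pixel fitting
    return f[0].real, f[1], f[2]        # a0, 2-theta, 4-theta components

# synthetic single-pixel check: I(a) = 1 + 0.5*cos(2a)
angles = np.linspace(0, np.pi, 16, endpoint=False)
stack = (1 + 0.5 * np.cos(2 * angles))[:, None, None]
a0, c2, c4 = pshg_fourier_coeffs(stack)
# the 2-theta modulation amplitude is recovered as 2*|c2|
```

    The whole image stack is processed in one vectorized FFT call, which is where the orders-of-magnitude advantage over iterative fitting comes from.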

  14. Image Inpainting Methods Evaluation and Improvement

    PubMed Central

    Vreja, Raluca

    2014-01-01

    With the growth of digital image processing and film archiving, the need for assisted or unsupervised restoration has required the development of a series of methods and techniques. Among them, image inpainting is perhaps the most impressive and useful. Beyond methods based on partial differential equations or texture synthesis, many hybrid techniques have been proposed recently. The need for an analytical comparison, besides the visual one, motivated the studies presented in this paper. Starting with an overview of the domain, an evaluation of five methods was performed using a common benchmark and measuring the PSNR. Conclusions regarding the performance of the investigated algorithms are presented, categorizing them according to the restored image structure. Based on these experiments, we propose an adaptation of Oliveira's and Hadhoud's algorithms, which perform well on images with natural defects. PMID:25136700
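
    The PSNR used as the benchmark metric above is straightforward to compute; a minimal sketch, assuming 8-bit images:

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original image and
    its inpainted restoration (higher is better)."""
    diff = original.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 255.0)           # worst-case "restoration" of a black image
```
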

  15. Improving image segmentation by learning region affinities

    SciTech Connect

    Prasad, Lakshman; Yang, Xingwei; Latecki, Longin J

    2010-11-03

    We utilize the context information of other regions in hierarchical image segmentation to learn new region affinities. It is well known that a single choice of quantization of an image space is highly unlikely to be a common optimal quantization level for all categories; each level of quantization has its own benefits. Therefore, we utilize the hierarchical information among different quantizations as well as the spatial proximity of their regions. The proposed affinity learning takes into account higher-order relations among image regions, both local and long-range, making it robust to instabilities and errors in the original pairwise region affinities. Once the learnt affinities are obtained, we use a standard image segmentation algorithm to get the final segmentation. Moreover, the learnt affinities can be naturally utilized in interactive segmentation. Experimental results on the Berkeley Segmentation Dataset and the MSRC Object Recognition Dataset are comparable with, and in some aspects better than, state-of-the-art methods.

  16. Image inpainting methods evaluation and improvement.

    PubMed

    Vreja, Raluca; Brad, Remus

    2014-01-01

    With the growth of digital image processing and film archiving, the need for assisted or unsupervised restoration has required the development of a series of methods and techniques. Among them, image inpainting is perhaps the most impressive and useful. Beyond methods based on partial differential equations or texture synthesis, many hybrid techniques have been proposed recently. The need for an analytical comparison, besides the visual one, motivated the studies presented in this paper. Starting with an overview of the domain, an evaluation of five methods was performed using a common benchmark and measuring the PSNR. Conclusions regarding the performance of the investigated algorithms are presented, categorizing them according to the restored image structure. Based on these experiments, we propose an adaptation of Oliveira's and Hadhoud's algorithms, which perform well on images with natural defects.

  17. Institutional Image: How to Define, Improve, Market It.

    ERIC Educational Resources Information Center

    Topor, Robert S.

    Advice for colleges on how to identify, develop, and communicate a positive image for the institution is offered in this handbook. The use of market research techniques to measure image is discussed along with advice on how to improve an image so that it contributes to a unified marketing plan. The first objective is to create and communicate some…

  18. Functional magnetic resonance imaging of awake monkeys: some approaches for improving imaging quality

    PubMed Central

    Chen, Gang; Wang, Feng; Dillenburger, Barbara C.; Friedman, Robert M.; Chen, Li M.; Gore, John C.; Avison, Malcolm J.; Roe, Anna W.

    2011-01-01

    Functional magnetic resonance imaging (fMRI), at high magnetic field strength can suffer from serious degradation of image quality because of motion and physiological noise, as well as spatial distortions and signal losses due to susceptibility effects. Overcoming such limitations is essential for sensitive detection and reliable interpretation of fMRI data. These issues are particularly problematic in studies of awake animals. As part of our initial efforts to study functional brain activations in awake, behaving monkeys using fMRI at 4.7T, we have developed acquisition and analysis procedures to improve image quality with encouraging results. We evaluated the influence of two main variables on image quality. First, we show how important the level of behavioral training is for obtaining good data stability and high temporal signal-to-noise ratios. In initial sessions, our typical scan session lasted 1.5 hours, partitioned into short (<10 minutes) runs. During reward periods and breaks between runs, the monkey exhibited movements resulting in considerable image misregistrations. After a few months of extensive behavioral training, we were able to increase the length of individual runs and the total length of each session. The monkey learned to wait until the end of a block for fluid reward, resulting in longer periods of continuous acquisition. Each additional 60 training sessions extended the duration of each session by 60 minutes, culminating, after about 140 training sessions, in sessions that last about four hours. As a result, the average translational movement decreased from over 500 μm to less than 80 μm, a displacement close to that observed in anesthetized monkeys scanned in a 7 T horizontal scanner. Another major source of distortion at high fields arises from susceptibility variations. To reduce such artifacts, we used segmented gradient-echo echo-planar imaging (EPI) sequences. Increasing the number of segments significantly decreased susceptibility

  19. Planning applications in image analysis

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  20. Improved Atmospheric Surface Pressure Analysis

    NASA Astrophysics Data System (ADS)

    Van den Dool, H. M.

    2011-12-01

    It has been popular to remove upfront the global atmosphere's surface pressure, as known from a weather center's analysis, from mass signals measured by satellites such as GRACE, so as to use the residual satellite data to focus exclusively on the lesser-known variable mass signals from the ocean, continental hydrology, and snow and ice. The approach is obviously limited by the precision with which meteorological centers know surface pressure. This precision is improving! The purpose of this presentation is to discuss the improvements made in recent Reanalyses (1979-present) of surface pressure in comparison to older analyses of the same. These improvements also include great technical advances, such as higher spatial and temporal resolution, that allow for a superior description of atmospheric tides. Of course the mass redistribution and mass balance of the atmosphere is interesting in its own right, and some of those results will be presented as well.

  1. An improved Bayesian matting method based on image statistic characteristics

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Luo, Siwei; Wu, Lina

    2015-03-01

    Image matting is an important task in image and video editing and has been studied for more than 30 years. In this paper we propose an improved interactive matting method. Starting from a coarse user-guided trimap, we first perform a color estimation based on texture and color information and use the result to refine the original trimap. Then, with the new trimap, we apply a soft matting process, which is improved Bayesian matting with smoothness constraints. Experimental results on natural images show that this method is useful, especially for images that have similar texture features in the background or for which it is hard to give a precise trimap.

  2. Grid computing in image analysis.

    PubMed

    Kayser, Klaus; Görtler, Jürgen; Borkenfeld, Stephan; Kayser, Gian

    2011-01-01

    Diagnostic surgical pathology or tissue-based diagnosis still remains the most reliable and specific diagnostic medical procedure. The development of whole slide scanners permits the creation of virtual slides and enables work on so-called virtual microscopes. In addition to interactive work on virtual slides, approaches have been reported that introduce automated virtual microscopy, which is composed of several tools focusing on quite different tasks. These include evaluation of image quality and image standardization, analysis of potentially useful thresholds for object detection and identification (segmentation), dynamic segmentation procedures, adjustable magnification to optimize feature extraction, and texture analysis including image transformation and evaluation of elementary primitives. Grid technology seems to possess all the features needed to efficiently target and control the specific tasks of image information and detection in order to obtain a detailed and accurate diagnosis. Grid technology is based upon so-called nodes that are linked together and share certain communication rules using open standards. Their number and functionality can vary according to the needs of a specific user at a given point in time. When implementing automated virtual microscopy with Grid technology, all five Grid functions have to be taken into account, namely 1) computation services, 2) data services, 3) application services, 4) information services, and 5) knowledge services. Although all mandatory tools of automated virtual microscopy can be implemented in a closed or standardized open system, Grid technology offers a new dimension to acquire, detect, classify, and distribute medical image information, and to assure quality in tissue-based diagnosis.

  3. A performance analysis system for MEMS using automated imaging methods

    SciTech Connect

    LaVigne, G.F.; Miller, S.L.

    1998-08-01

    The ability to make in-situ performance measurements of MEMS operating at high speeds has been demonstrated using a new image analysis system. Significant improvements in performance and reliability have directly resulted from the use of this system.

  4. Determining optimal medical image compression: psychometric and image distortion analysis

    PubMed Central

    2012-01-01

    Background Storage issues and bandwidth over networks have led to a need to optimally compress medical imaging files while leaving clinical image quality uncompromised. Methods To determine the range of clinically acceptable medical image compression across multiple modalities (CT, MR, and XR), we performed psychometric analysis of image distortion thresholds using physician readers and also performed subtraction analysis of medical image distortion by varying degrees of compression. Results When physician readers were asked to determine the threshold of compression beyond which images were clinically compromised, the mean image distortion threshold was a JPEG Q value of 23.1 ± 7.0. In Receiver-Operator Characteristics (ROC) plot analysis, compressed images could not be reliably distinguished from original images at any compression level between Q = 50 and Q = 95. Below this range, some readers were able to discriminate the compressed and original images, but high sensitivity and specificity for this discrimination was only encountered at the lowest JPEG Q value tested (Q = 5). Analysis of directly measured magnitude of image distortion from subtracted image pairs showed that the relationship between JPEG Q value and degree of image distortion underwent an upward inflection in the region of the two thresholds determined psychometrically (approximately Q = 25 to Q = 50), with 75 % of the image distortion occurring between Q = 50 and Q = 1. Conclusion It is possible to apply lossy JPEG compression to medical images without compromise of clinical image quality. Modest degrees of compression, with a JPEG Q value of 50 or higher (corresponding approximately to a compression ratio of 15:1 or less), can be applied to medical images while leaving the images indistinguishable from the original. PMID:22849336

  5. Determining optimal medical image compression: psychometric and image distortion analysis.

    PubMed

    Flint, Alexander C

    2012-07-31

    Storage issues and bandwidth over networks have led to a need to optimally compress medical imaging files while leaving clinical image quality uncompromised. To determine the range of clinically acceptable medical image compression across multiple modalities (CT, MR, and XR), we performed psychometric analysis of image distortion thresholds using physician readers and also performed subtraction analysis of medical image distortion by varying degrees of compression. When physician readers were asked to determine the threshold of compression beyond which images were clinically compromised, the mean image distortion threshold was a JPEG Q value of 23.1 ± 7.0. In Receiver-Operator Characteristics (ROC) plot analysis, compressed images could not be reliably distinguished from original images at any compression level between Q = 50 and Q = 95. Below this range, some readers were able to discriminate the compressed and original images, but high sensitivity and specificity for this discrimination was only encountered at the lowest JPEG Q value tested (Q = 5). Analysis of directly measured magnitude of image distortion from subtracted image pairs showed that the relationship between JPEG Q value and degree of image distortion underwent an upward inflection in the region of the two thresholds determined psychometrically (approximately Q = 25 to Q = 50), with 75 % of the image distortion occurring between Q = 50 and Q = 1. It is possible to apply lossy JPEG compression to medical images without compromise of clinical image quality. Modest degrees of compression, with a JPEG Q value of 50 or higher (corresponding approximately to a compression ratio of 15:1 or less), can be applied to medical images while leaving the images indistinguishable from the original.
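
    The subtraction analysis described above can be sketched with Pillow: encode an image at several JPEG Q values, decode, and measure the pixel-wise distortion against the original. The synthetic gradient image and the mean-absolute-difference summary are illustrative assumptions, not the study's data or exact metric:

```python
import io
import numpy as np
from PIL import Image

def distortion_vs_quality(img, qualities=(5, 25, 50, 75, 95)):
    """Subtraction-analysis sketch: JPEG-encode an image at several
    quality (Q) values, decode, subtract from the original, and report
    the mean absolute pixel distortion at each Q."""
    original = np.asarray(img, dtype=np.float64)
    results = {}
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)   # in-memory round trip
        buf.seek(0)
        decoded = np.asarray(Image.open(buf), dtype=np.float64)
        results[q] = float(np.mean(np.abs(decoded - original)))
    return results

# smooth synthetic gradient: distortion should grow as Q decreases
arr = np.tile(np.arange(64, dtype=np.uint8) * 4, (64, 1))
d = distortion_vs_quality(Image.fromarray(arr, mode="L"))
```

    Plotting the resulting distortion against Q on real medical images is what reveals the inflection region the authors report.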

  6. Frequency Diversity for Improving Synthetic Aperture Radar Imaging

    DTIC Science & Technology

    2009-03-01

    Dissertation by Jawad Farooq, Major, USAF (AFIT/DEE/ENG/09-04). Synthetic aperture radar (SAR) is a critical battlefield enabler as it provides imagery during day or night and in all-weather conditions. SAR image resolution

  7. Fusing MRI and Mechanical Imaging for Improved Prostate Cancer Diagnosis

    DTIC Science & Technology

    2016-10-01

    Fusing MRI and Mechanical Imaging for Improved Prostate Cancer Diagnosis. Principal Investigator: Dr... Award Number W81XWH-15-1-0613. ...find out if radiomic features extracted from CT images can identify patients with high and low TILs in non-small cell lung cancer (NSCLC).

  8. Reconstruction algorithm for improved ultrasound image quality.

    PubMed

    Madore, Bruno; Meral, F Can

    2012-02-01

    A new algorithm is proposed for reconstructing raw RF data into ultrasound images. Previous delay-and-sum beamforming reconstruction algorithms are essentially one-dimensional, because a sum is performed across all receiving elements. In contrast, the present approach is two-dimensional, potentially allowing any time point from any receiving element to contribute to any pixel location. Computer-intensive matrix inversions are performed once, in advance, to create a reconstruction matrix that can be reused indefinitely for a given probe and imaging geometry. Individual images are generated through a single matrix multiplication with the raw RF data, without any need for separate envelope detection or gridding steps. Raw RF data sets were acquired using a commercially available digital ultrasound engine for three imaging geometries: a 64-element array with a rectangular field-of-view (FOV), the same probe with a sector-shaped FOV, and a 128-element array with rectangular FOV. The acquired data were reconstructed using our proposed method and a delay-and-sum beamforming algorithm for comparison purposes. Point spread function (PSF) measurements from metal wires in a water bath showed that the proposed method was able to reduce the size of the PSF and its spatial integral by about 20 to 38%. Images from a commercially available quality-assurance phantom had greater spatial resolution and contrast when reconstructed with the proposed approach.
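
    The core idea, a one-time compute-intensive pseudoinversion followed by a single matrix multiplication per frame, can be illustrated with a toy linear forward model. The random matrix below stands in for the probe/geometry physics and is not the authors' system matrix:

```python
import numpy as np

def build_recon_matrix(forward_model):
    """One-time step per probe and imaging geometry: pseudo-invert the
    model that maps pixel values to raw RF samples."""
    return np.linalg.pinv(forward_model)

def reconstruct(recon_matrix, rf):
    """Per-frame step: a single matrix multiplication turns raw RF data
    into an image vector, with no separate envelope or gridding steps."""
    return recon_matrix @ rf

rng = np.random.default_rng(0)
model = rng.normal(size=(40, 10))   # 40 RF samples, 10 pixels (overdetermined)
truth = rng.normal(size=10)
rf = model @ truth                  # simulated noiseless acquisition
image = reconstruct(build_recon_matrix(model), rf)
```

    The expensive `pinv` is amortized over every frame acquired with the same geometry, which is why the method is practical despite the matrix inversion cost.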

  9. An improved adaptive IHS method for image fusion

    NASA Astrophysics Data System (ADS)

    Wang, Ting

    2015-12-01

    An improved adaptive intensity-hue-saturation (IHS) method is proposed in this paper for image fusion, based on the adaptive IHS (AIHS) method and its improved variant (IAIHS). In the improved method, the weighting matrix, which decides how much spatial detail from the panchromatic (Pan) image should be injected into the multispectral (MS) image, is defined on the basis of the linear relationship between the edges of the Pan and MS images. At the same time, a modulation parameter t is used to balance the spatial resolution and spectral resolution of the fused image. Experiments showed that the improved method can improve spectral quality and maintain spatial resolution compared with the AIHS and IAIHS methods.
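
    A generic IHS detail-injection baseline with the same modulation parameter t can be sketched as follows (this is not the paper's edge-based weighting matrix, just the underlying scheme it refines):

```python
import numpy as np

def ihs_fuse(ms, pan, t=1.0):
    """Minimal IHS-style pansharpening: the intensity component I is the
    band mean, and the Pan-minus-I spatial detail is injected into every
    band, scaled by the modulation parameter t."""
    intensity = ms.mean(axis=2)           # I component of the MS image
    detail = pan - intensity              # spatial detail to inject
    return ms + t * detail[:, :, None]

ms = np.zeros((4, 4, 3))
ms[..., 0], ms[..., 1], ms[..., 2] = 0.2, 0.4, 0.6   # flat MS bands
pan = np.full((4, 4), 0.9)                           # brighter Pan image
fused = ihs_fuse(ms, pan, t=1.0)
```

    With t = 1 the fused intensity matches the Pan image exactly; smaller t trades spatial sharpness for spectral fidelity, which is the balance the paper tunes.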

  10. Improving night sky star image processing algorithm for star sensors.

    PubMed

    Arbabmir, Mohammad Vali; Mohammadi, Seyyed Mohammad; Salahshour, Sadegh; Somayehee, Farshad

    2014-04-01

    In this paper, the night sky star image processing algorithm, consisting of image preprocessing, star pattern recognition, and centroiding steps, is improved. It is shown that the proposed noise reduction approach can preserve more necessary information than other frequently used approaches. It is also shown that the proposed thresholding method, unlike commonly used techniques, can properly perform image binarization, especially in images with uneven illumination. Moreover, the higher performance rate and lower average centroiding estimation error of nearly 0.045 for 400 simulated images compared to other algorithms show the high capability of the proposed night sky star image processing algorithm.
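
    The centroiding step such algorithms share can be sketched as an intensity-weighted mean over a thresholded window (the plain global threshold here is a simplification, not the paper's uneven-illumination-aware binarization):

```python
import numpy as np

def centroid(window, threshold=0.0):
    """Sub-pixel star centroid: threshold the window, then take the
    intensity-weighted mean position of the remaining signal."""
    w = np.where(window > threshold, window - threshold, 0.0)
    ys, xs = np.indices(w.shape)
    total = w.sum()
    return (ys * w).sum() / total, (xs * w).sum() / total

# symmetric synthetic star centered exactly at (row, col) = (2, 3)
ys, xs = np.indices((5, 7))
star = np.exp(-((ys - 2.0) ** 2 + (xs - 3.0) ** 2))
cy, cx = centroid(star, threshold=0.05)
```
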

  11. Improving image reconstruction of bioluminescence imaging using a priori information from ultrasound imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jayet, Baptiste; Ahmad, Junaid; Taylor, Shelley L.; Hill, Philip J.; Dehghani, Hamid; Morgan, Stephen P.

    2017-03-01

    Bioluminescence imaging (BLI) is a commonly used imaging modality in biology to study cancer in vivo in small animals. Images are generated using a camera to map the optical fluence emerging from the studied animal, then a numerical reconstruction algorithm is used to locate the sources and estimate their sizes. However, due to the strong light scattering properties of biological tissues, the resolution is very limited (around a few millimetres). Therefore obtaining accurate information about the pathology is complicated. We propose a combined ultrasound/optics approach to improve accuracy of these techniques. In addition to the BLI data, an ultrasound probe driven by a scanner is used for two main objectives. First, to obtain a pure acoustic image, which provides structural information of the sample. And second, to alter the light emission by the bioluminescent sources embedded inside the sample, which is monitored using a high speed optical detector (e.g. photomultiplier tube). We will show that this last measurement, used in conjunction with the ultrasound data, can provide accurate localisation of the bioluminescent sources. This can be used as a priori information by the numerical reconstruction algorithm, greatly increasing the accuracy of the BLI image reconstruction as compared to the image generated using only BLI data.

  12. Automated image analysis of uterine cervical images

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical Cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated by 40 human subjects' data and demonstrates high correlation with experts' annotations.

  13. Improved microgrid arrangement for integrated imaging polarimeters.

    PubMed

    LeMaster, Daniel A; Hirakawa, Keigo

    2014-04-01

    For almost 20 years, microgrid polarimetric imaging systems have been built using a 2×2 repeating pattern of polarization analyzers. In this Letter, we show that superior spatial resolution is achieved over this 2×2 case when the analyzers are arranged in a 2×4 repeating pattern. This unconventional result, in which a more distributed sampling pattern results in finer spatial resolution, is also achieved without affecting the conditioning of the polarimetric data-reduction matrix. Proof is provided theoretically and through Stokes image reconstruction of synthesized data.
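
    How a polarimetric data-reduction matrix recovers the linear Stokes parameters from one repeating analyzer group can be sketched as follows (idealized analyzers; the spatial-sampling question the Letter actually addresses is not modeled):

```python
import numpy as np

def analyzer_matrix(angles_deg):
    """Each row maps a linear Stokes vector (S0, S1, S2) to the intensity
    behind an ideal linear polarizer at the given angle; stacking one row
    per microgrid analyzer gives the data-reduction problem."""
    th = np.deg2rad(np.asarray(angles_deg, dtype=np.float64))
    return 0.5 * np.stack(
        [np.ones_like(th), np.cos(2 * th), np.sin(2 * th)], axis=1)

A = analyzer_matrix([0, 45, 90, 135])   # one repeating analyzer group
W = np.linalg.pinv(A)                   # polarimetric data-reduction matrix
s_true = np.array([1.0, 0.3, -0.2])
s_est = W @ (A @ s_true)                # noiseless round trip
```

    The conditioning of `A` (and hence of `W`) is what the Letter shows is unaffected by moving from a 2×2 to a 2×4 analyzer arrangement.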

  14. Advanced machine vision inspection of x-ray images for improved productivity

    SciTech Connect

    Novini, A.

    1995-12-31

    The imaging medium has been, for the most part, x-ray sensitive photographic film or film coupled to scintillation screens. Electronic imaging through television technology has provided a cost-effective alternative in many applications for the past three to four decades. Typically, film provides higher resolution, higher interscene dynamic range, and the mechanical flexibility to image complex-shaped objects. However, this is offset by the labor-intensive task of x-ray photography and processing, which eliminates the ability to image in real time. The electronic means are typically achieved through an x-ray source, an x-ray to visible light converter, and a television (TV) camera. The images can be displayed on a TV monitor for operator viewing or go to a computer system for capture, image processing, and automatic analysis. Although the present state of the art for electronic x-ray imaging does not quite produce the quality possible through film, it provides a level of flexibility and overall improved productivity not achievable with film. Further, electronic imaging is improving in image quality as time goes on, and it is expected to match or surpass film in the next decade. For many industrial applications, the present state of the art in electronic imaging provides more than adequate results. This type of imaging also allows for automatic analysis when practical. The first step in the automatic analysis chain is to obtain the best possible image. This requires an in-depth understanding of x-ray imaging physics and techniques by the system designer. As previously mentioned, these images may go through some enhancement steps to further improve the detectability of defects. Almost always, an image spatial resolution of 512 × 512 or greater is used, with gray-level information content of 8 bits (256 gray levels) or more, to preserve image quality. There are many methods available for analyzing the images.

  15. A linear mixture analysis-based compression for hyperspectral image analysis

    SciTech Connect

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. On some occasions, it even improves analysis performance. Airborne visible infrared imaging spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.
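
    The fully constrained (nonnegative, sum-to-one) abundance estimation underlying this approach can be sketched with a standard NNLS trick: a heavily weighted augmented row enforces sum-to-one. This is a common FCLS formulation, not necessarily the authors' exact solver:

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(endmembers, pixel, delta=1e3):
    """Fully constrained least-squares unmixing sketch: abundances are
    kept nonnegative by NNLS, and the sum-to-one constraint is enforced
    by a heavily weighted extra row in the system."""
    bands, n = endmembers.shape
    E = np.vstack([endmembers, delta * np.ones((1, n))])
    y = np.append(pixel, delta)
    abundances, _residual = nnls(E, y)
    return abundances

E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])                # two endmembers over three bands
pixel = 0.25 * E[:, 0] + 0.75 * E[:, 1]   # known 25/75 mixture
a = fcls_abundances(E, pixel)
```

    The abundance fractional images the paper encodes are exactly these per-pixel abundance vectors, computed for every pixel of the scene.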

  16. Improved deep two-photon calcium imaging in vivo.

    PubMed

    Birkner, Antje; Tischbirek, Carsten H; Konnerth, Arthur

    2016-12-21

    Two-photon laser scanning calcium imaging has emerged as a useful method for the exploration of neural function and structure at the cellular and subcellular level in vivo. The applications range from imaging of subcellular compartments such as dendrites, spines, and axonal boutons up to the functional analysis of large neuronal or glial populations. However, the depth penetration is often limited to a few hundred micrometers, corresponding, for example, to the upper cortical layers of the mouse brain. Light scattering and aberrations originating from refractive index inhomogeneities of the tissue are the reasons for these limitations. The depth penetration of two-photon imaging can be enhanced through various approaches, such as the implementation of adaptive optics, the use of three-photon excitation, and/or labeling cells with red-shifted genetically encoded fluorescent sensors. However, most of the approaches used so far require the implementation of new instrumentation and/or time-consuming staining protocols. Here we present a simple approach that can be readily implemented in combination with standard two-photon microscopes. The method involves an optimized protocol for depth-restricted labeling with the red-shifted fluorescent calcium indicator Cal-590 and benefits from the use of ultra-short laser pulses. The approach allows in vivo functional imaging of neuronal populations with single-cell resolution in all six layers of the mouse cortex. We demonstrate that stable recordings in deep cortical layers are not restricted to anesthetized animals but are well feasible in awake, behaving mice. We anticipate that the improved depth penetration will be beneficial for two-photon functional imaging in larger species, such as non-human primates.

  17. Improved image quality for x-ray CT imaging of gel dosimeters

    SciTech Connect

    Kakakhel, M. B.; Kairn, T.; Kenny, J.; Trapp, J. V.

    2011-09-15

    Purpose: This study provides a simple method for improving the precision of x-ray computed tomography (CT) scans of irradiated polymer gel dosimeters. The noise affecting CT scans of irradiated gels has been an impediment to the use of clinical CT scanners for gel dosimetry studies. Methods: In this study, it is shown that multiple scans of a single PAGAT gel dosimeter can be used to extrapolate a "zero-scan" image which displays a similar level of precision to an image obtained by averaging multiple CT images, without the compromised dose measurement resulting from the exposure of the gel to radiation from the CT scanner. Results: When extrapolating the zero-scan image, it is shown that exponential and simple linear fits to the relationship between Hounsfield unit and scan number, for each pixel in the image, provide an accurate indication of gel density. Conclusions: It is expected that this work will be utilized in the analysis of three-dimensional gel volumes irradiated using complex radiotherapy treatments.
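
    The linear variant of the per-pixel fit can be sketched as follows (synthetic data; the constant HU drift per scan is an illustrative assumption):

```python
import numpy as np

def zero_scan_image(scans):
    """Fit Hounsfield unit vs. scan number per pixel with a straight
    line and extrapolate back to scan zero.

    scans: (N, H, W) stack of repeated CT scans of the same gel.
    Returns the intercept image: the gel's estimated state before any
    CT-scanner dose was delivered.
    """
    n, h, w = scans.shape
    x = np.arange(1, n + 1, dtype=np.float64)
    y = scans.reshape(n, -1).astype(np.float64)
    xm, ym = x.mean(), y.mean(axis=0)
    # vectorized least-squares slope and intercept for every pixel
    slope = ((x - xm)[:, None] * (y - ym)).sum(axis=0) / ((x - xm) ** 2).sum()
    intercept = ym - slope * xm          # value extrapolated to scan 0
    return intercept.reshape(h, w)

# synthetic stack: true gel reads 100 HU, each scan adds 2 HU of drift
stack = np.stack([np.full((2, 2), 100.0 + 2.0 * k) for k in range(1, 6)])
z = zero_scan_image(stack)
```
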

  18. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework.

    PubMed

    Guo, Qing; Dong, Fangmin; Sun, Shuifa; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    A contourlet domain image denoising framework based on a novel Improved Rotating Kernel Transformation (IRKT) is proposed, in which the differences between subbands in the contourlet domain are taken into account. First, the IRKT is proposed to calculate the direction statistic of the image; its validity is verified by comparing the extracted edge information with a state-of-the-art edge detection algorithm. Second, the direction statistic, which represents the difference between subbands, is introduced as weights into threshold-function-based contourlet domain denoising approaches to form the novel framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. Denoising results on conventional test images and on Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images.

  19. Improved bat algorithm applied to multilevel image thresholding.

    PubMed

    Alihodzic, Adis; Tuba, Milan

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that serves as a basis for image segmentation and further higher-level processing. However, the computational time required for an exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for such intractable problems. In this paper, we adapted one of the latest swarm intelligence algorithms, the bat algorithm, to the multilevel image thresholding problem. Testing on standard benchmark images shows that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with elements from the differential evolution and artificial bee colony algorithms. The proposed improved bat algorithm outperformed five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed.
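    The bat-algorithm specifics are not reproduced here, but the objective such metaheuristics optimize, and the exhaustive search they replace, can be sketched with Otsu's between-class variance. The function names are illustrative; the brute-force search below is the exponential-time baseline the abstract refers to.

```python
import numpy as np
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu objective for multilevel thresholding: sum over classes of
    w_k * (mu_k - mu_total)^2, computed from a grey-level histogram."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()
    bounds = [0] + list(thresholds) + [len(hist)]
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()               # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def exhaustive_thresholds(hist, k):
    """Brute-force search over all k-threshold sets: the exponential-time
    baseline that swarm metaheuristics are designed to replace."""
    return max(combinations(range(1, len(hist)), k),
               key=lambda t: between_class_variance(hist, t))
```

    For a clearly bimodal histogram, any threshold separating the two modes maximizes the objective, which is what a metaheuristic would find far faster for many thresholds.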

  20. Improved Rotating Kernel Transformation Based Contourlet Domain Image Denoising Framework

    PubMed Central

    Guo, Qing; Dong, Fangmin; Ren, Xuhong; Feng, Shiyu; Gao, Bruce Zhi

    2016-01-01

    A contourlet-domain image denoising framework based on a novel Improved Rotating Kernel Transformation (IRKT) is proposed, in which the differences between subbands in the contourlet domain are taken into account. First, the IRKT is proposed to calculate the direction statistics of the image; its validity is verified by comparing the extracted edge information with a state-of-the-art edge-detection algorithm. Second, the direction statistics, which represent the differences between subbands, are introduced as weights into threshold-function-based contourlet-domain denoising approaches to obtain the new framework. The proposed framework is used to improve the contourlet soft-thresholding (CTSoft) and contourlet bivariate-thresholding (CTB) algorithms. Denoising results on conventional test images and Optical Coherence Tomography (OCT) medical images show that the proposed methods improve the existing contourlet-based thresholding denoising algorithms, especially for the medical images. PMID:27148597

  1. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that serves as a basis for image segmentation and further higher-level processing. However, the computational time required for an exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for such intractable problems. In this paper, we adapted one of the latest swarm intelligence algorithms, the bat algorithm, to the multilevel image thresholding problem. Testing on standard benchmark images shows that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with elements from the differential evolution and artificial bee colony algorithms. The proposed improved bat algorithm outperformed five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733

  2. IMPROVEMENTS IN CODED APERTURE THERMAL NEUTRON IMAGING.

    SciTech Connect

    VANIER,P.E.

    2003-08-03

    A new thermal neutron imaging system has been constructed, based on a 20-cm x 17-cm He-3 position-sensitive detector with spatial resolution better than 1 mm. New compact custom-designed position-decoding electronics are employed, as well as high-precision cadmium masks with Modified Uniformly Redundant Array patterns. Fast Fourier Transform algorithms are incorporated into the deconvolution software to provide rapid conversion of shadowgrams into real images. The system demonstrates the principles for locating sources of thermal neutrons by a stand-off technique, as well as for visualizing the shapes of nearby sources. The data acquisition time could potentially be reduced by two orders of magnitude by building larger detectors.

  3. Improved Image-Guided Laparoscopic Prostatectomy

    DTIC Science & Technology

    2011-08-01

    Ultrasound elastography has been applied to the pancreas, lymph nodes, and thyroid [1,2,3,4,5] and is thus a very promising image-guidance method for robot-assisted procedures. Among the cited work is real-time ultrasound elastography in the differential diagnosis of benign and malignant thyroid nodules. Distribution statement: approved for public release; distribution unlimited.

  4. Improved Calibration Shows Images True Colors

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.

  5. Imaging system design for improved information capacity

    NASA Technical Reports Server (NTRS)

    Fales, C. L.; Huck, F. O.; Samms, R. W.

    1984-01-01

    Shannon's theory of information for communication channels is used to assess the performance of line-scan and sensor-array imaging systems and to optimize the design trade-offs involving sensitivity, spatial response, and sampling intervals. Formulations and computational evaluations account for spatial responses typical of line-scan and sensor-array mechanisms, lens diffraction and transmittance shading, defocus blur, and square and hexagonal sampling lattices.

  7. Digital subtraction angiography: principles and pitfalls of image improvement techniques.

    PubMed

    Levin, D C; Schapiro, R M; Boxt, L M; Dunham, L; Harrington, D P; Ergun, D L

    1984-09-01

    The technology of imaging methods in digital subtraction angiography (DSA) is discussed in detail. Areas covered include function of the video camera in both interlaced and sequential scan modes, digitization by the analog-to-digital converter, logarithmic signal processing, dose rates, and acquisition of images using frame integration and pulsed-sequential techniques. Also discussed are various methods of improving image content and quality by both hardware and software modifications. These include the development of larger image intensifiers, larger matrices, video camera improvements, reregistration, hybrid subtraction, matched filtering, recursive filtering, DSA tomography, and edge enhancement.

  8. Multispectral Image Analysis of Hurricane Gilbert

    DTIC Science & Technology

    1989-05-19

    Multispectral Image Analysis of Hurricane Gilbert (unclassified). Personal author: Kleespies, Thomas J. (GL/LYS). The recoverable abstract fragments describe displaying one spectral component of the image in the red channel, and similarly for the green and blue channels, and relating the multispectral imagery to cloud-top height.

  9. Automated Microarray Image Analysis Toolbox for MATLAB

    SciTech Connect

    White, Amanda M.; Daly, Don S.; Willse, Alan R.; Protic, Miroslava; Chandler, Darrell P.

    2005-09-01

    The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize the analysis of sets of microarray images. The tool provides several methods for identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of the software allows researchers to understand the algorithms used to produce intensity estimates and to modify them easily if desired.

  10. Vibration signature analysis of AFM images

    SciTech Connect

    Joshi, G.A.; Fu, J.; Pandit, S.M.

    1995-12-31

    Vibration signature analysis has been commonly used for machine condition monitoring and the control of errors. However, it has rarely been employed for the analysis of precision instruments such as an atomic force microscope (AFM). In this work, an AFM was used to collect vibration data from a sample positioning stage under different suspension and support conditions. Certain structural characteristics of the sample positioning stage show up in the vibration signature analysis of the surface height images measured with the AFM. It is important to understand these vibration characteristics in order to reduce vibrational uncertainty, improve the damping and structural design, and eliminate imaging imperfections. The choice of method applied for vibration analysis may affect the results. Two methods, data dependent systems (DDS) analysis and Welch's periodogram averaging method, were investigated for this problem. Both techniques provide smooth spectrum plots from the data. Welch's periodogram provides coarse resolution, as limited by the number of samples, and requires a window to be chosen subjectively by the user. DDS analysis provides sharper spectral peaks at much higher resolution and a much lower noise floor, and also provides a decomposition of the signal variance by frequency. The technique is based on an objective model-adequacy criterion.
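    Welch's periodogram averaging, one of the two spectral methods compared above, can be sketched as averaging windowed FFT periodograms of data segments. This is a minimal sketch (Hann window, fixed segmenting); the model-based DDS method is not reproduced here.

```python
import numpy as np

def welch_psd(x, nperseg, noverlap=0):
    """Welch's method: average windowed periodograms of (possibly
    overlapping) segments. Smoother than one long periodogram, at the
    cost of coarser frequency resolution."""
    step = nperseg - noverlap
    win = np.hanning(nperseg)
    segments = [x[i:i + nperseg]
                for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(s * win)) ** 2 for s in segments],
                  axis=0) / (win ** 2).sum()
    return psd  # one value per frequency bin 0..nperseg//2
```

    A pure tone at a known bin frequency shows up as a peak at that bin, illustrating how stage resonances would appear in the averaged spectrum.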

  11. Statistical analysis of biophoton image

    NASA Astrophysics Data System (ADS)

    Wang, Susheng

    1998-08-01

    A photon-count imaging system has been developed to obtain ultra-weak bioluminescence images. Photon images of plants, animals, and a human hand have been detected. A biophoton image differs from a conventional image. In this paper, three characteristics of biophoton images are analyzed. On the basis of these characteristics, the detection probability and detection limit of the photon-count imaging system, and the detection limit of biophoton imaging, are discussed. This research provides a scientific basis for experiment design and photon-image processing.

  12. CS-based fast ultrasound imaging with improved FISTA algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Jie; He, Yugao; Shi, Guangming; Han, Tingyu

    2015-08-01

    In an ultrasound imaging system, wave emission and data acquisition are time-consuming; this can be addressed by adopting a plane wave as the transmitted signal and using compressed sensing (CS) theory for data acquisition and image reconstruction. To overcome the very high computational complexity caused by introducing CS into ultrasound imaging, in this paper we propose an improvement of the fast iterative shrinkage-thresholding algorithm (FISTA) to achieve fast reconstruction of the ultrasound image, in which the step-size parameter is modified at each iteration. Further, a GPU strategy is designed for the proposed algorithm to guarantee a real-time implementation of imaging. Simulation results show that the GPU-based image reconstruction algorithm achieves fast ultrasound imaging without damaging image quality.
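    The baseline FISTA iteration that the paper modifies (its per-iteration step-size adaptation and GPU mapping are not reproduced) can be sketched for the usual l1-regularised least-squares problem:

```python
import numpy as np

def fista(A, y, lam, L, n_iter=100):
    """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 with FISTA.
    L is a Lipschitz constant of the gradient (e.g. the largest
    eigenvalue of A^T A); 1/L plays the role of the step size that
    the paper's variant adapts at each iteration."""
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        u = z - grad / L
        # proximal step: soft thresholding
        x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

    With an identity measurement matrix the solution reduces to soft thresholding of the data, which makes the iteration easy to sanity-check.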

  13. Common feature discriminant analysis for matching infrared face images to optical face images.

    PubMed

    Li, Zhifeng; Gong, Dihong; Qiao, Yu; Tao, Dacheng

    2014-06-01

    In biometrics research and industry, it is critical yet a challenge to match infrared face images to optical face images. The major difficulty lies in the fact that a great discrepancy exists between the infrared face image and corresponding optical face image because they are captured by different devices (optical imaging device and infrared imaging device). This paper presents a new approach called common feature discriminant analysis to reduce this great discrepancy and improve optical-infrared face recognition performance. In this approach, a new learning-based face descriptor is first proposed to extract the common features from heterogeneous face images (infrared face images and optical face images), and an effective matching method is then applied to the resulting features to obtain the final decision. Extensive experiments are conducted on two large and challenging optical-infrared face data sets to show the superiority of our approach over the state-of-the-art.

  14. RAMAN spectroscopy imaging improves the diagnosis of papillary thyroid carcinoma

    PubMed Central

    Rau, Julietta V.; Graziani, Valerio; Fosca, Marco; Taffon, Chiara; Rocchia, Massimiliano; Crucitti, Pierfilippo; Pozzilli, Paolo; Onetti Muda, Andrea; Caricato, Marco; Crescenzi, Anna

    2016-01-01

    Recent investigations strongly suggest that Raman spectroscopy (RS) can be used as a clinical tool in cancer diagnosis to improve diagnostic accuracy. In this study, we evaluated the efficiency of Raman imaging microscopy to discriminate between healthy and neoplastic thyroid tissue, by analyzing main variants of Papillary Thyroid Carcinoma (PTC), the most common type of thyroid cancer. We performed Raman imaging of large tissue areas (from 100 × 100 μm2 up to 1 × 1 mm2), collecting 38 maps containing about 9000 Raman spectra. Multivariate statistical methods, including Linear Discriminant Analysis (LDA), were applied to translate Raman spectra differences between healthy and PTC tissues into diagnostically useful information for a reliable tissue classification. Our study is the first demonstration of specific biochemical features of the PTC profile, characterized by significant presence of carotenoids with respect to the healthy tissue. Moreover, this is the first evidence of Raman spectra differentiation between classical and follicular variant of PTC, discriminated by LDA with high efficiency. The combined histological and Raman microscopy analyses allow clear-cut integration of morphological and biochemical observations, with dramatic improvement of efficiency and reliability in the differential diagnosis of neoplastic thyroid nodules, paving the way to integrative findings for tumorigenesis and novel therapeutic strategies. PMID:27725756

  15. RAMAN spectroscopy imaging improves the diagnosis of papillary thyroid carcinoma

    NASA Astrophysics Data System (ADS)

    Rau, Julietta V.; Graziani, Valerio; Fosca, Marco; Taffon, Chiara; Rocchia, Massimiliano; Crucitti, Pierfilippo; Pozzilli, Paolo; Onetti Muda, Andrea; Caricato, Marco; Crescenzi, Anna

    2016-10-01

    Recent investigations strongly suggest that Raman spectroscopy (RS) can be used as a clinical tool in cancer diagnosis to improve diagnostic accuracy. In this study, we evaluated the efficiency of Raman imaging microscopy to discriminate between healthy and neoplastic thyroid tissue, by analyzing main variants of Papillary Thyroid Carcinoma (PTC), the most common type of thyroid cancer. We performed Raman imaging of large tissue areas (from 100 × 100 μm2 up to 1 × 1 mm2), collecting 38 maps containing about 9000 Raman spectra. Multivariate statistical methods, including Linear Discriminant Analysis (LDA), were applied to translate Raman spectra differences between healthy and PTC tissues into diagnostically useful information for a reliable tissue classification. Our study is the first demonstration of specific biochemical features of the PTC profile, characterized by significant presence of carotenoids with respect to the healthy tissue. Moreover, this is the first evidence of Raman spectra differentiation between classical and follicular variant of PTC, discriminated by LDA with high efficiency. The combined histological and Raman microscopy analyses allow clear-cut integration of morphological and biochemical observations, with dramatic improvement of efficiency and reliability in the differential diagnosis of neoplastic thyroid nodules, paving the way to integrative findings for tumorigenesis and novel therapeutic strategies.

  17. Contrast and harmonic imaging improves accuracy and efficiency of novice readers for dobutamine stress echocardiography

    NASA Technical Reports Server (NTRS)

    Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; Klein, Allan L.; Thomas, James D.

    2002-01-01

    BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation, and therefore, improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance accuracy and efficiency of the inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHZ) and harmonic (1.7 and 3.5 MHZ) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously in bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader to the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader. 

  18. An improvement to the diffraction-enhanced imaging method that permits imaging of dynamic systems

    NASA Astrophysics Data System (ADS)

    Siu, K. K. W.; Kitchen, M. J.; Pavlov, K. M.; Gillam, J. E.; Lewis, R. A.; Uesugi, K.; Yagi, N.

    2005-08-01

    We present an improvement to the diffraction-enhanced imaging (DEI) method that permits imaging of moving samples or other dynamic systems in real time. The method relies on the use of a thin Bragg analyzer crystal and simultaneous acquisition of the multiple images necessary for the DEI reconstruction of the apparent absorption and refraction images. These images are conventionally acquired at multiple points on the reflectivity curve of an analyzer crystal which presents technical challenges and precludes imaging of moving subjects. We have demonstrated the potential of the technique by taking DEI "movies" of an artificially moving mouse leg joint, acquired at the Biomedical Imaging Centre at SPring-8, Japan.

  19. Using photoconductivity to improve image-tube gating speeds

    NASA Astrophysics Data System (ADS)

    Gobby, P. L.; Yates, G. J.; Jaramillo, S. A.; Noel, B. W.; Aeby, I.

    1983-08-01

    A technique that uses the photoconductivity of semiconductor and insulator photocathodes to improve image-tube gating speeds is presented. A simple model applicable to the technique and a preliminary experiment are described.

  20. Improvement of fringe quality at LDA measuring volume using compact two hololens imaging system

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhijit; Nirala, A. K.

    2015-03-01

    Design, analysis, and construction of an LDA optical setup using a conventional as well as a compact two-hololens imaging system have been performed. Fringes formed at the measurement volume by both imaging systems have been recorded. Experimental analysis of these fringes shows that the fringes obtained with the compact two-hololens imaging system are improved both qualitatively and quantitatively compared with those obtained with the conventional imaging system. Hence, the compact two-hololens imaging system is the better choice for constructing an LDA optical setup.

  1. Improved Dot Diffusion For Image Halftoning

    DTIC Science & Technology

    1999-01-01

    The dot diffusion method for digital halftoning has the advantage of parallelism, unlike the error diffusion method. The method was recently improved by optimization of the so-called class matrix, so that the resulting halftones are comparable to error-diffused halftones. In this paper we first review the dot diffusion method. Previously, 82 class matrices were used for the dot diffusion method; a problem with this size of class matrix is

  2. Independent component analysis based filtering for penumbral imaging

    SciTech Connect

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-10-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, used as a preprocessing step for the reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison with that obtained without the noise-removing filter.
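    The shrinkage applied to the ICA-domain coefficients is standard soft thresholding, which can be sketched as:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrinkage used on transform-domain coefficients: values with
    magnitude below t are zeroed, larger ones are shrunk toward zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

    Small coefficients, which are dominated by noise, are suppressed, while large signal-bearing coefficients survive slightly reduced.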

  3. An improved Chebyshev distance metric for clustering medical images

    NASA Astrophysics Data System (ADS)

    Mousa, Aseel; Yusof, Yuhanis

    2015-12-01

    A metric, or distance function, is a function that defines a distance between the elements of a set. In clustering, measuring the similarity between objects has become an important issue. In practice, various similarity measures are used, including the Euclidean, Manhattan, and Minkowski distances. In this paper, an improved Chebyshev similarity measure is introduced to replace existing metrics (such as the Euclidean and standard Chebyshev distances) in clustering analysis. The proposed measure is then applied to the analysis of blood cancer images. Results demonstrate that the proposed measure produces the smallest objective function value and converges in the fewest iterations. Hence, it can be concluded that the proposed distance metric contributes to producing better clusters.
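    The paper's improved variant is not specified in the abstract, but the standard Chebyshev (L-infinity) distance it starts from is simply the largest coordinate difference:

```python
import numpy as np

def chebyshev(a, b):
    """Standard Chebyshev (L-infinity) distance between feature
    vectors: the maximum absolute coordinate difference."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    return float(np.max(np.abs(a - b)))
```

    In a clustering loop this replaces the Euclidean norm when assigning each pixel's feature vector to its nearest cluster centre.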

  4. High resolution PET breast imager with improved detection efficiency

    DOEpatents

    Majewski, Stanislaw

    2010-06-08

    A highly efficient PET breast imager for detecting lesions in the entire breast, including those located close to the patient's chest wall. The breast imager includes a ring of imaging modules surrounding the imaged breast. Each imaging module includes a slant imaging light guide inserted between a gamma radiation sensor and a photodetector. The slant light guide permits the gamma radiation sensors to be placed in close proximity to the skin of the chest wall, thereby extending the sensitive region of the imager to the base of the breast. Several types of photodetectors are proposed for use in the detector modules, with compact silicon photomultipliers as the preferred choice due to their high compactness. The geometry of the detector heads and the arrangement of the detector ring significantly reduce dead regions, thereby improving detection efficiency for lesions located close to the chest wall.

  5. Improved color interpolation method based on Bayer image

    NASA Astrophysics Data System (ADS)

    Wang, Jin

    2012-10-01

    Image sensors are important components of lunar exploration devices. Considering volume and cost, image sensors at present generally adopt a single CCD or CMOS sensor, whose surface is covered with a color filter array (CFA), usually a Bayer CFA. In a Bayer CFA, each pixel captures only one of the three primary colors, so color interpolation is necessary to obtain a full-color image. An improved Bayer image interpolation method is presented that is novel, practical, and easy to implement. Experimental results demonstrating the effect of the interpolation are shown. Compared with classic methods, this method finds image edges more accurately, reduces the sawtooth phenomenon in edge areas, and keeps the image smooth elsewhere. The method has been applied successfully in a certain exploration imaging system.
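    The paper's edge-aware interpolation is not reproduced here, but the baseline it improves on, bilinear demosaicing of the green channel, can be sketched as follows (assumes an RGGB mosaic layout; missing green samples are the mean of their valid 4-neighbours):

```python
import numpy as np

def bilinear_green(bayer):
    """Bilinear interpolation of the green channel of an RGGB Bayer
    mosaic: each missing green sample is the average of its available
    green 4-neighbours. Edge-aware methods refine exactly this step."""
    h, w = bayer.shape
    g = np.zeros((h, w))
    mask = np.zeros((h, w), bool)
    mask[0::2, 1::2] = True          # green sites on red rows (RGGB)
    mask[1::2, 0::2] = True          # green sites on blue rows
    g[mask] = bayer[mask]
    pad = np.pad(g, 1)
    counts = np.pad(mask.astype(float), 1)
    # sum of the four neighbours and how many of them are green sites
    neigh = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
    nmask = (counts[:-2, 1:-1] + counts[2:, 1:-1]
             + counts[1:-1, :-2] + counts[1:-1, 2:])
    out = g.copy()
    missing = ~mask
    out[missing] = neigh[missing] / np.maximum(nmask[missing], 1)
    return out
```

    On a flat region the interpolation is exact; it is along edges that plain averaging produces the sawtooth artifacts the paper's method reduces.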

  6. An improved FCM medical image segmentation algorithm based on MMTD.

    PubMed

    Zhou, Ningning; Yang, Tingting; Zhang, Shaobai

    2014-01-01

    Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation, but it is highly vulnerable to noise because it does not consider spatial information. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes some spatial features into account, is proposed. The experimental results show that the proposed algorithm is more noise-resistant than standard FCM, with more certainty and less fuzziness, making it practical and effective for medical image segmentation.
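    The standard FCM iteration that the paper modifies (the MMTD-based membership is not reproduced here) alternates between updating fuzzy memberships and cluster centres:

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy c-means on feature vectors X of shape (n, d).
    Returns cluster centres (c, d) and membership matrix U (c, n).
    m > 1 is the fuzzifier; the paper replaces the Euclidean-based
    membership below with an MMTD-based one."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centres = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centres[:, None, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))          # inverse-distance memberships
        U /= U.sum(axis=0)
    return centres, U
```

    On two well-separated 1-D groups the centres converge near the group means, with nearly crisp memberships.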

  7. Photoacoustic imaging with rotational compounding for improved signal detection

    NASA Astrophysics Data System (ADS)

    Forbrich, A.; Heinmiller, A.; Jose, J.; Needles, A.; Hirson, D.

    2015-03-01

    Photoacoustic microscopy with linear array transducers enables fast two-dimensional, cross-sectional photoacoustic imaging. Unfortunately, most ultrasound transducers are sensitive only to a very narrow angular acceptance range and preferentially detect signals along the main axis of the transducer. This often prevents photoacoustic microscopy from detecting blood vessels, which can extend in any direction. Rotational compounded photoacoustic imaging is introduced to overcome the angular dependency of detecting acoustic signals with linear array transducers. An integrated system is designed to control the image acquisition using a linear array transducer, a motorized rotational stage, and a motorized lateral stage. Images acquired at multiple angular positions are combined to form a rotational compounded image. We found that the signal-to-noise ratio improved, while sidelobe and reverberation artifacts were substantially reduced. Furthermore, rotational compounded images of excised kidneys and hindlimb tumors of mice showed more structural information than any single image collected.

  8. Restriction spectrum imaging improves MRI-based prostate cancer detection

    PubMed Central

    McCammack, Kevin C.; Schenker-Ahmed, Natalie M.; White, Nathan S.; Best, Shaun R.; Marks, Robert M.; Heimbigner, Jared; Kane, Christopher J.; Parsons, J. Kellogg; Kuperman, Joshua M.; Bartsch, Hauke; Desikan, Rahul S.; Rakow-Penner, Rebecca A.; Liss, Michael A.; Margolis, Daniel J. A.; Raman, Steven S.; Shabaik, Ahmed; Dale, Anders M.; Karow, David S.

    2017-01-01

    Purpose To compare the diagnostic performance of restriction spectrum imaging (RSI), with that of conventional multi-parametric (MP) magnetic resonance imaging (MRI) for prostate cancer (PCa) detection in a blinded reader-based format. Methods Three readers independently evaluated 100 patients (67 with proven PCa) who underwent MP-MRI and RSI within 6 months of systematic biopsy (N = 67; 23 with targeting performed) or prostatectomy (N = 33). Imaging was performed at 3 Tesla using a phased-array coil. Readers used a five-point scale estimating the likelihood of PCa present in each prostate sextant. Evaluation was performed in two separate sessions, first using conventional MP-MRI alone then immediately with MP-MRI and RSI in the same session. Four weeks later, another scoring session used RSI and T2-weighted imaging (T2WI) without conventional diffusion-weighted or dynamic contrast-enhanced imaging. Reader interpretations were then compared to prostatectomy data or biopsy results. Receiver operating characteristic curves were performed, with area under the curve (AUC) used to compare across groups. Results MP-MRI with RSI achieved higher AUCs compared to MP-MRI alone for identifying high-grade (Gleason score greater than or equal to 4 + 3=7) PCa (0.78 vs. 0.70 at the sextant level; P < 0.001 and 0.85 vs. 0.79 at the hemigland level; P = 0.04). RSI and T2WI alone achieved AUCs similar to MP-MRI for high-grade PCa (0.71 vs. 0.70 at the sextant level). With hemigland analysis, high-grade disease results were similar when comparing RSI + T2WI with MP-MRI, although with greater AUCs compared to the sextant analysis (0.80 vs. 0.79). Conclusion Including RSI with MP-MRI improves PCa detection compared to MP-MRI alone, and RSI with T2WI achieves similar PCa detection as MP-MRI. PMID:26910114

  9. Image analysis of Renaissance copperplate prints

    NASA Astrophysics Data System (ADS)

    Hedges, S. Blair

    2008-02-01

    From the fifteenth to the nineteenth centuries, prints were a common form of visual communication, analogous to photographs. Copperplate prints have many finely engraved black lines which were used to create the illusion of continuous tone. Line densities generally are 100-2000 lines per square centimeter and a print can contain more than a million total engraved lines 20-300 micrometers in width. Because hundreds to thousands of prints were made from a single copperplate over decades, variation among prints can have historical value. The largest variation is plate-related, which is the thinning of lines over successive editions as a result of plate polishing to remove time-accumulated corrosion. Thinning can be quantified with image analysis and used to date undated prints and books containing prints. Print-related variation, such as over-inking of the print, is a smaller but significant source. Image-related variation can introduce bias if images were differentially illuminated or not in focus, but improved imaging technology can limit this variation. The Print Index, the percentage of an area composed of lines, is proposed as a primary measure of variation. Statistical methods also are proposed for comparing and identifying prints in the context of a print database.
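
    The proposed Print Index, the percentage of an area composed of lines, can be sketched as the dark-pixel fraction of a thresholded grayscale region. This is a minimal illustration; the threshold value and any illumination correction are assumptions, not the paper's exact procedure:

```python
def print_index(pixels, threshold=128):
    """Percentage of pixels in a region darker than `threshold`,
    i.e. the fraction of the area covered by engraved lines.
    `pixels` is a 2-D list of 8-bit grayscale values."""
    total = 0
    dark = 0
    for row in pixels:
        for value in row:
            total += 1
            if value < threshold:
                dark += 1
    return 100.0 * dark / total

# A toy 4x4 region: 4 of 16 pixels are "line" (dark) pixels.
region = [
    [20, 200, 200, 200],
    [200, 20, 200, 200],
    [200, 200, 20, 200],
    [200, 200, 200, 20],
]
print(print_index(region))  # 25.0
```

    On this measure, plate-related line thinning over successive editions would show up as a decreasing Print Index for the same engraved area.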

  10. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data-compression approach, particularly for images, is to perform a wavelet transform on the input data and then losslessly compress a quantized version of the wavelet-transformed data. Under this approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros followed by a nonzero value; the nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
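
    The two families of prefix-free codes mentioned above can be sketched as minimal encoders (the adaptive parameter selection is omitted):

```python
def golomb_encode(n, m):
    """Golomb code for a nonnegative integer n with parameter m:
    unary-coded quotient, then truncated-binary remainder."""
    q, r = divmod(n, m)
    bits = "1" * q + "0"              # unary quotient terminated by 0
    b = m.bit_length() - 1            # floor(log2(m))
    if (1 << b) == m:                 # m is a power of two (Rice code)
        if b:
            bits += format(r, f"0{b}b")
    else:
        cutoff = (1 << (b + 1)) - m   # truncated binary coding
        if r < cutoff:
            bits += format(r, f"0{b}b")
        else:
            bits += format(r + cutoff, f"0{b + 1}b")
    return bits

def exp_golomb_encode(n, k=0):
    """Order-k exponential-Golomb code for a nonnegative integer n."""
    v = n + (1 << k)
    return "0" * (v.bit_length() - 1 - k) + format(v, "b")

print(golomb_encode(9, 4))   # "11001": quotient 2 -> "110", remainder 1 -> "01"
print(exp_golomb_encode(3))  # "00100"
```

    As described in the abstract, the run lengths would be fed through exp_golomb_encode and the nonzero values through golomb_encode, with each code's integer parameter updated from statistics of previously encoded values.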

  11. FFDM image quality assessment using computerized image texture analysis

    NASA Astrophysics Data System (ADS)

    Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina

    2010-04-01

    Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm2 retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and offset correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View(TM) postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R2=0.92, p<=0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R2=0.95, p<=0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
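
    The SNR that the texture features are regressed against can be sketched as the ROI mean divided by its standard deviation, a common definition; the study's exact estimator is not specified in the abstract:

```python
def roi_snr(roi):
    """Signal-to-noise ratio of a region of interest:
    mean pixel value divided by its standard deviation.
    `roi` is a 2-D list of pixel values from a raw (dose-linear) image."""
    n = sum(len(row) for row in roi)
    mean = sum(v for row in roi for v in row) / n
    var = sum((v - mean) ** 2 for row in roi for v in row) / n
    return mean / var ** 0.5

# Toy 2x2 ROI: mean 100, population std sqrt(2.5).
roi = [[98, 102], [101, 99]]
print(round(roi_snr(roi), 1))  # 63.2
```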

  12. An Expert Image Analysis System For Chromosome Analysis Application

    NASA Astrophysics Data System (ADS)

    Wu, Q.; Suetens, P.; Oosterlinck, A.; Van den Berghe, H.

    1987-01-01

    This paper reports a recent study on applying a knowledge-based system approach as a new attempt to solve the problem of chromosome classification. Based on this study, a theoretical framework for an expert image analysis system is proposed. In this scheme, chromosome classification is carried out under a hypothesize-and-verify paradigm by integrating a rule-based component, in which the expertise of chromosome karyotyping is formulated, with an existing image analysis system that uses conventional pattern recognition techniques. Results from the existing system can be used to generate hypotheses, and with the rule-based verification and modification procedures, improvement of the classification performance can be expected.

  13. Improved Image Reconstruction for Partial Fourier Gradient-Echo Echo-Planar Imaging (EPI)

    PubMed Central

    Chen, Nan-kuei; Oshio, Koichi; Panych, Lawrence P.

    2009-01-01

    The partial Fourier gradient-echo echo planar imaging (EPI) technique makes it possible to acquire high-resolution functional MRI (fMRI) data at an optimal echo time. This technique is especially important for fMRI studies at high magnetic fields, where the optimal echo time is short and may not be achievable with a full Fourier acquisition scheme. In addition, it has been shown that partial Fourier EPI provides better anatomic resolvability than full Fourier EPI. However, partial Fourier gradient-echo EPI may be degraded by artifacts that are not usually seen in other types of imaging. These artifacts, unique to partial Fourier gradient-echo EPI, have to our knowledge not yet been systematically evaluated. Here we use the k-space energy spectrum analysis method to understand and characterize two types of partial Fourier EPI artifacts. Our studies show that Type 1 artifacts, originating from k-space energy loss, cannot be corrected with postprocessing alone, while Type 2 artifacts can be eliminated with an improved reconstruction method. We propose a novel algorithm that combines images obtained from two or more reconstruction schemes, guided by k-space energy spectrum analysis, to generate partial Fourier EPI with greatly reduced Type 2 artifacts. Quality control procedures for avoiding Type 1 artifacts in partial Fourier EPI are also discussed. PMID:18383294

  14. Image analysis: a consumer's guide.

    PubMed

    Meyer, F

    1983-01-01

    Recent years have seen an explosion of image analysis systems, and it is hard for the pathologist or cytologist to make the right choice of equipment. All machines are stupid; the only valuable thing is the human work put into them. Benefit, therefore, from the work other people have done for you: choose a method that is widely used on many systems and that has proved fertile in many domains, not only for today's specific application. Mathematical morphology, together with the linear convolutions present on all machines, is a strong candidate for such a method. The paper illustrates a working day of an ideal system: research- and diagnosis-directed work during working hours, and automatic screening of cervical (or other) smears during the night.

  15. Spreadsheet-like image analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Paul

    1992-08-01

    This report describes the design of a new software system being built by the Army to support and augment automated nondestructive inspection (NDI) on-line equipment implemented by the Army for detection of defective manufactured items. The new system recalls and post-processes (off-line) the NDI data sets archived by the on-line equipment for the purposes of verifying the correctness of the inspection analysis paradigms, developing better analysis paradigms, and gathering statistics on the defects of the items inspected. The design of the system is similar to that of a spreadsheet, i.e., an array of cells, each of which may be programmed to contain a function whose arguments are data from other cells and whose resultant is the output of that cell's function. Unlike a spreadsheet, the arguments and the resultants of a cell may be matrices, such as a two-dimensional matrix of picture elements (pixels). Functions include matrix mathematics, neural networks, and image processing, as well as those ordinarily found in spreadsheets. The system employs all of the common environmental supports of the Macintosh computer, which is the hardware platform. The system allows the resultant of a cell to be displayed in any of multiple formats, such as a matrix of numbers, text, an image, or a chart. Each cell is a window onto its resultant. As in a spreadsheet, if the input value of any cell is changed, its effect is cascaded into the resultants of all cells whose functions use that value directly or indirectly. The system encourages the user to play what-if games, as ordinary spreadsheets do.
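
    The cascading-recalculation behavior described above can be sketched with a minimal cell-and-dependency model (hypothetical names; the Army system's actual design is only summarized in the abstract):

```python
class Cell:
    """A spreadsheet-like cell: holds a function over other cells and
    cascades recomputation to its dependents when an input changes."""
    def __init__(self, value=None):
        self.value = value
        self.func = None
        self.inputs = []
        self.dependents = []

    def set_formula(self, func, *inputs):
        self.func = func
        self.inputs = list(inputs)
        for cell in inputs:
            cell.dependents.append(self)
        self.recompute()

    def set_value(self, value):
        self.value = value
        for dep in self.dependents:
            dep.recompute()

    def recompute(self):
        self.value = self.func(*(c.value for c in self.inputs))
        for dep in self.dependents:   # cascade, as in a spreadsheet
            dep.recompute()

# Cells may hold matrices (e.g. images); scalars keep the sketch short.
a, b = Cell(2), Cell(3)
c = Cell(); c.set_formula(lambda x, y: x + y, a, b)
d = Cell(); d.set_formula(lambda x: x * 10, c)
a.set_value(7)   # change cascades: c becomes 10, d becomes 100
print(d.value)   # 100
```

    In the actual system a cell's function could be a convolution or a neural network, and its resultant a pixel matrix displayed in the cell's window.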

  16. An improved coding technique for image encryption and key management

    NASA Astrophysics Data System (ADS)

    Wu, Xu; Ma, Jie; Hu, Jiasheng

    2005-02-01

    An improved chaotic algorithm for image encryption, based on a conventional chaotic encryption algorithm, is proposed. Two keys are used in our technique. One is the private key, which is fixed and protected within the system. The other is the assistant key, which is public and transferred together with the encrypted image. For each original image, a different assistant key should be chosen, so that a different encryption key is obtained. The updated encryption algorithm not only resists a known-plaintext attack, but also offers an effective solution for key management. Analyses and computer simulations show that security is greatly improved, and that the scheme can easily be realized in hardware.
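
    A generic chaotic stream cipher of this kind can be sketched with a logistic-map keystream XORed onto the image bytes. The key-mixing step and all names below are assumptions for illustration; the paper's actual scheme is not specified in the abstract:

```python
def logistic_keystream(x0, r, n):
    """Generate n key bytes by iterating the logistic map x -> r*x*(1-x)."""
    x, stream = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return stream

def chaotic_xor(data, private_key, assistant_key):
    """XOR the image bytes with a chaotic keystream seeded from the
    fixed private key and the per-image assistant key (hypothetical
    key mixing). XOR is involutive, so the same call decrypts."""
    x0 = (private_key + assistant_key) % 1.0   # seed in (0, 1)
    keystream = logistic_keystream(x0, 3.99, len(data))
    return bytes(b ^ k for b, k in zip(data, keystream))

image = bytes([10, 200, 37, 42])
enc = chaotic_xor(image, 0.3141592, 0.271828)
dec = chaotic_xor(enc, 0.3141592, 0.271828)
print(dec == image)  # True
```

    Publishing a fresh assistant key with each image, while keeping the private key fixed, is what gives each image a distinct effective keystream.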

  17. Multimodal medical image fusion using improved multi-channel PCNN.

    PubMed

    Zhao, Yaqian; Zhao, Qinping; Hao, Aimin

    2014-01-01

    Multimodal medical image fusion is a method of integrating information from multiple image formats, with the aim of providing useful and accurate information for doctors. The multi-channel pulse coupled neural network (m-PCNN) is a recently proposed fusion model which, compared with previous methods, can effectively handle various types of medical images. However, it has two drawbacks: a lack of control over the feeding function, and a low level of automation. The improved multi-channel PCNN proposed in this paper adjusts the impact of the feeding function through the linking strength and adaptively computes the weighting coefficients for each pixel. Experimental results demonstrate the effectiveness of the improved m-PCNN fusion model.

  18. Clock Scan Protocol for Image Analysis: ImageJ Plugins.

    PubMed

    Dobretsov, Maxim; Petkau, Georg; Hayar, Abdallah; Petkau, Eugen

    2017-06-19

    The clock scan protocol for image analysis is an efficient tool to quantify the average pixel intensity within, at the border of, and outside (background) a closed or segmented convex-shaped region of interest, leading to the generation of an averaged integral radial pixel-intensity profile. The protocol was originally developed in 2006 as a Visual Basic 6 script, but in that form it had limited distribution. To address this problem, and to join similar recent efforts by others, we converted the original clock scan protocol code into two Java-based plugins compatible with NIH-sponsored, freely available image analysis programs such as ImageJ and Fiji ImageJ. Furthermore, these plugins add several new functions that further expand the capabilities of the original protocol, such as analysis of multiple regions of interest and of image stacks. The latter feature is especially useful in applications in which it is important to determine changes related to time and location. Thus, clock scan analysis of stacks of biological images may potentially be applied to the spreading of Na+ or Ca2+ within a single cell, as well as to the analysis of spreading activity (e.g., Ca2+ waves) in populations of synaptically connected or gap-junction-coupled cells. Here, we describe these new clock scan plugins and show examples of their applications in image analysis.
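
    The averaged radial profile at the heart of the protocol can be sketched for a circular ROI: sample the image along many "clock hand" rays from the center and average the intensity at each normalized radius (a simplified illustration; the plugins also handle arbitrary convex and segmented ROIs):

```python
import math

def clock_scan(image, cx, cy, radius, n_angles=360, n_steps=100):
    """Average radial pixel-intensity profile of a circular ROI:
    sample along n_angles rays from the center (cx, cy) and average
    the intensities at each of n_steps radial positions."""
    profile = [0.0] * n_steps
    for a in range(n_angles):
        theta = 2.0 * math.pi * a / n_angles
        for s in range(n_steps):
            r = radius * s / (n_steps - 1)
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            profile[s] += image[y][x]
    return [v / n_angles for v in profile]

# Toy image: a bright "cell" of radius 3 on a dark background.
img = [[100 if (x - 8) ** 2 + (y - 8) ** 2 <= 9 else 10
        for x in range(17)] for y in range(17)]
prof = clock_scan(img, 8, 8, 6, n_angles=36, n_steps=7)
print(prof[0], prof[-1])  # 100.0 10.0
```

    The profile drops from the interior intensity to the background level across the ROI border, which is exactly the within/border/outside information the protocol quantifies.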

  19. Improvements for Image Compression Using Adaptive Principal Component Extraction (APEX)

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1997-01-01

    The issues of image compression and pattern classification have been a primary focus of researchers in a variety of fields, including signal and image processing, pattern recognition, and data classification. These issues depend on finding an efficient representation of the source data. In this paper we collate our earlier results, in which we introduced the application of the Hilbert scan to a principal component analysis (PCA) algorithm with the Adaptive Principal Component Extraction (APEX) neural network model. We apply these techniques to medical imaging, particularly image representation and compression. We apply the Hilbert scan to the APEX algorithm to improve results.

  20. An improved NAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian

    2016-10-01

    Space optical images are inevitably degraded by atmospheric turbulence, errors in the optical system, and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. First, the image noise is weakened by a Contourlet denoising algorithm. Second, a reliable estimate of the object support region is used to accelerate the convergence of the algorithm; we introduce an optimal threshold segmentation technique to improve the object support region. Finally, an object construction limit and a logarithm function are added to enhance the stability of the algorithm. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images, and that its convergence is faster than that of the original NAS-RIF algorithm.

  1. Retinex Preprocessing for Improved Multi-Spectral Image Classification

    NASA Technical Reports Server (NTRS)

    Thompson, B.; Rahman, Z.; Park, S.

    2000-01-01

    The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides more consistent within-class variation than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, classification of the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between the different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. The principal advantage the retinex offers is that, for different lighting conditions, classifications derived from the retinex-preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original
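
    The lighting-independence property of the retinex can be sketched in one dimension with a single-scale retinex: the log of the input minus the log of a Gaussian-smoothed illumination estimate. This is a simplified stand-in for the multi-scale retinex the paper uses; the kernel size and offset constant are assumptions:

```python
import math

def single_scale_retinex(signal, sigma=2.0):
    """Single-scale retinex on a 1-D signal: log of the input minus
    log of a Gaussian-smoothed (illumination) estimate, giving
    dynamic-range compression and reduced lighting dependence."""
    n = len(signal)
    half = int(3 * sigma)
    kernel = [math.exp(-(i ** 2) / (2 * sigma ** 2))
              for i in range(-half, half + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    out = []
    for i in range(n):
        smoothed = sum(kernel[j + half] * signal[min(max(i + j, 0), n - 1)]
                       for j in range(-half, half + 1))
        out.append(math.log(signal[i] + 1.0) - math.log(smoothed + 1.0))
    return out

# The same edge under dim vs. five-times-brighter illumination
# yields nearly identical retinex outputs.
dim    = [10.0] * 8 + [40.0] * 8
bright = [v * 5 for v in dim]
r_dim, r_bright = single_scale_retinex(dim), single_scale_retinex(bright)
print(abs(r_dim[12] - r_bright[12]) < 0.05)  # True
```

    This invariance to a multiplicative illumination change is why class spectral signatures computed after retinex preprocessing vary far less with lighting conditions.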

  2. Dual-wavelength retinal images denoising algorithm for improving the accuracy of oxygen saturation calculation

    NASA Astrophysics Data System (ADS)

    Xian, Yong-Li; Dai, Yun; Gao, Chun-Ming; Du, Rui

    2017-01-01

    Noninvasive measurement of hemoglobin oxygen saturation (SO2) in retinal vessels is based on spectrophotometry and the spectral absorption characteristics of tissue. Retinal images at 570 and 600 nm are simultaneously captured by dual-wavelength retinal oximetry based on a fundus camera. SO2 is then measured after vessel segmentation, image registration, and calculation of the optical density ratio of the two images. However, image noise can dramatically affect subsequent image processing and the accuracy of the SO2 calculation, a problem that remains to be addressed. The purpose of this study was to improve image quality and SO2 calculation accuracy through noise analysis and a denoising algorithm for the dual-wavelength images. First, noise parameters were estimated with a mixed Poisson-Gaussian (MPG) noise model. Second, an MPG denoising algorithm, which we call variance stabilizing transform (VST) + dual-domain image denoising (DDID), was proposed based on the VST and an improved dual-domain filter. The results show that VST + DDID effectively removes MPG noise while preserving image edge details, and that it outperforms VST + block-matching and three-dimensional filtering (BM3D), especially in preserving low-contrast details. Simulation and analysis further indicate that MPG noise in the retinal images can lead to erroneously low SO2 measurements, and that the denoised images provide more accurate grayscale values for retinal oximetry.
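
    One standard VST for mixed Poisson-Gaussian noise is the generalized Anscombe transform, which maps each observation to a value whose noise is approximately Gaussian with unit variance so that a Gaussian denoiser (DDID or BM3D) can be applied. Whether this is the exact VST used in the paper is not stated in the abstract, so the sketch below is illustrative:

```python
import math

def generalized_anscombe(z, gain=1.0, sigma=0.0, mu=0.0):
    """Generalized Anscombe variance-stabilizing transform: maps a
    mixed Poisson-Gaussian observation z (Poisson gain `gain`,
    additive Gaussian noise of std `sigma` and mean `mu`) to a value
    with approximately unit-variance Gaussian noise."""
    arg = gain * z + 0.375 * gain ** 2 + sigma ** 2 - gain * mu
    return (2.0 / gain) * math.sqrt(max(arg, 0.0))

# Forward transform of a few pixel values; the denoiser and the
# (biased-corrected) inverse transform would follow in a full pipeline.
pixels = [4.0, 16.0, 64.0]
print([round(generalized_anscombe(z, gain=1.0, sigma=1.0), 2) for z in pixels])
```

    After denoising in the stabilized domain, an exact unbiased inverse of the transform is normally applied to return to pixel intensities.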

  3. An improved NAS-RIF algorithm for blind image restoration

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Jiang, Yanbin; Lou, Shuntian

    2007-01-01

    Image restoration is widely applied in many areas, but when operating on images with different scales for the representation of pixel intensity levels, or with low SNR, the traditional restoration algorithm lacks validity and induces noise amplification, ringing artifacts, and poor convergence. In this paper, an improved NAS-RIF algorithm is proposed to overcome these shortcomings. The improved algorithm introduces a new cost function that adds a space-adaptive regularization term and a non-unity gain for the adaptive filter. In determining the support region, a pre-segmentation is used to form the region closely around the object in the image. Compared with the traditional algorithm, simulations show that the improved algorithm exhibits better convergence and noise resistance, and provides a better estimate of the original image.

  4. Naval Signal and Image Analysis Conference Report

    DTIC Science & Technology

    1998-02-26

    Arlington Hilton Hotel in Arlington, Virginia. The meeting was by invitation only and consisted of investigators in the ONR Signal and Image Analysis Program ... in signal and image analysis. The conference provided an opportunity for technical interaction between academic researchers and Naval scientists and ... plan future directions for the ONR Signal and Image Analysis Program as well as informal recommendations to the Program Officer.

  5. Image analysis applications for grain science

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Steele, James L.

    1991-02-01

    Morphometrical features of single grain kernels or particles were used to discriminate between two visibly similar wheat varieties, foreign material in wheat, hard-soft and spring-winter wheat classes, and whole versus broken corn kernels. Milled fractions of hard and soft wheat were evaluated using textural image analysis. Color image analysis of sound and mold-damaged corn kernels yielded high recognition rates. The studies collectively demonstrate the potential for automated classification and assessment of grain quality using image analysis.

  6. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion following intradermal injection of a microdose of an NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze an NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into one system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.

  7. Restoration Of MEX SRC Images For Improved Topography: A New Image Product

    NASA Astrophysics Data System (ADS)

    Duxbury, T. C.

    2012-12-01

    Surface topography is an important constraint when investigating the evolution of solar system bodies. Topography is typically obtained from stereo photogrammetric or photometric (shape from shading) analyses of overlapping / stereo images and from laser / radar altimetry data. The ESA Mars Express Mission [1] carries a Super Resolution Channel (SRC) as part of the High Resolution Stereo Camera (HRSC) [2]. The SRC can build up overlapping / stereo coverage of Mars, Phobos and Deimos by viewing the surfaces from different orbits. The derivation of high precision topography data from the SRC raw images is degraded because the camera is out of focus. The point spread function (PSF) is multi-peaked, covering tens of pixels. After registering and co-adding hundreds of star images, an accurate SRC PSF was reconstructed and is being used to restore the SRC images to near blur free quality. The restored images offer a factor of about 3 in improved geometric accuracy as well as identifying the smallest of features to significantly improve the stereo photogrammetric accuracy in producing digital elevation models. The difference between blurred and restored images provides a new derived image product that can provide improved feature recognition to increase spatial resolution and topographic accuracy of derived elevation models. Acknowledgements: This research was funded by the NASA Mars Express Participating Scientist Program. [1] Chicarro, et al., ESA SP 1291(2009) [2] Neukum, et al., ESA SP 1291 (2009). A raw SRC image (h4235.003) of a Martian crater within Gale crater (the MSL landing site) is shown in the upper left and the restored image is shown in the lower left. A raw image (h0715.004) of Phobos is shown in the upper right and the difference between the raw and restored images, a new derived image data product, is shown in the lower right. The lower images, resulting from an image restoration process, significantly improve feature recognition for improved derived
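
    Restoring a blurred image given a measured PSF, as described above, can be sketched with Richardson-Lucy deconvolution. This is a common choice for PSF-based restoration, used here purely as an illustration; the abstract does not state which algorithm was actually applied to the SRC images. A 1-D signal stands in for the image:

```python
def convolve(signal, psf):
    """Linear convolution, 'same' output size, zero-padded edges."""
    half = len(psf) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, p in enumerate(psf):
            k = i + j - half
            if 0 <= k < len(signal):
                acc += signal[k] * p
        out.append(acc)
    return out

def richardson_lucy(blurred, psf, iterations=100):
    """Richardson-Lucy deconvolution (1-D sketch): iteratively
    re-estimates the latent signal given a known PSF."""
    psf_mirror = psf[::-1]
    estimate = [1.0] * len(blurred)
    for _ in range(iterations):
        reblurred = convolve(estimate, psf)
        ratio = [b / r if r > 1e-12 else 0.0
                 for b, r in zip(blurred, reblurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

truth = [0, 0, 0, 10, 0, 0, 0]        # a point source
psf = [0.25, 0.5, 0.25]               # toy out-of-focus blur kernel
blurred = convolve(truth, psf)        # [0, 0, 2.5, 5.0, 2.5, 0, 0]
restored = richardson_lucy(blurred, psf)
print(max(restored) > 9)  # True: the peak is sharpened back toward 10
```

    A real SRC restoration would use the reconstructed 2-D multi-peaked PSF in place of the toy kernel, with the same iterate-reblur-correct structure.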

  8. Improvement in image resolution based on dispersive representation of data

    NASA Astrophysics Data System (ADS)

    Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.

    2016-10-01

    A method for improving the resolution of images, based on the selection and optimization of significant local features and the sparse representation of processed-image blocks (using optimized low- and high-resolution dictionaries), has been substantiated for the first time. The method, which makes it possible to significantly improve the resolution of images of various natures, is given a physical interpretation. A block diagram of the processing system corresponding to the new approach to image reconstruction has been developed. A simulation of the new reconstruction method and of known algorithms on images of different physical natures showed the advantage of the new scheme in terms of universally recognized criteria (peak signal-to-noise ratio, mean absolute error, and structural similarity index measure) and in visual comparison of the processed images.

  9. Improving Self-Image of Students. ACSA School Management Digest, Series 1, Number 14. ERIC/CEM Research Analysis Series, Number 41.

    ERIC Educational Resources Information Center

    Mazzarella, Jo Ann

    Research over the last ten years provides overwhelming evidence that the most successful students have strong positive self-concepts. This booklet reviews literature on self-concept and describes many programs designed to improve student self-esteem. The paper begins by noting that although no one understands the order of the cause and effect…

  10. Correlative feature analysis of FFDM images

    NASA Astrophysics Data System (ADS)

    Yuan, Yading; Giger, Maryellen L.; Li, Hui; Sennett, Charlene

    2008-03-01

    Identifying the corresponding image pair of a lesion is an essential step in combining information from different views of the lesion to improve the diagnostic ability of both radiologists and CAD systems. Because of the non-rigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this study, we present a computerized framework that differentiates corresponding images from different views of a lesion from non-corresponding ones. A dual-stage segmentation method, which employs an initial radial gradient index (RGI)-based segmentation and an active contour model, was first applied to extract mass lesions from the surrounding tissues. Then various lesion features were automatically extracted from each of the two views of each lesion to quantify the characteristics of margin, shape, size, texture and context of the lesion, as well as its distance to the nipple. We employed a two-step method to select an effective subset of features, and combined it with a BANN to obtain a discriminant score, which yielded an estimate of the probability that the two images are of the same physical lesion. ROC analysis was used to evaluate the performance of the individual features and the selected feature subset in the task of distinguishing between corresponding and non-corresponding pairs. Using an FFDM database with 124 corresponding image pairs and 35 non-corresponding pairs, the distance feature yielded an AUC (area under the ROC curve) of 0.8 with leave-one-out evaluation by lesion, and the feature subset, which includes the distance feature, lesion size and lesion contrast, yielded an AUC of 0.86. The improvement from using multiple features was statistically significant compared to single-feature performance (p < 0.001).

  11. Microscopy image segmentation tool: robust image data analysis.

    PubMed

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  12. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  13. Free-breathing motion-corrected late-gadolinium-enhancement imaging improves image quality in children.

    PubMed

    Olivieri, Laura; Cross, Russell; O'Brien, Kendall J; Xue, Hui; Kellman, Peter; Hansen, Michael S

    2016-06-01

    The value of late-gadolinium-enhancement (LGE) imaging in the diagnosis and management of pediatric and congenital heart disease is clear; however current acquisition techniques are susceptible to error and artifacts when performed in children because of children's higher heart rates, higher prevalence of sinus arrhythmia, and inability to breath-hold. Commonly used techniques in pediatric LGE imaging include breath-held segmented FLASH (segFLASH) and steady-state free precession-based (segSSFP) imaging. More recently, single-shot SSFP techniques with respiratory motion-corrected averaging have emerged. This study tested and compared single-shot free-breathing LGE techniques with standard segmented breath-held techniques in children undergoing LGE imaging. Thirty-two consecutive children underwent clinically indicated late-enhancement imaging using intravenous gadobutrol 0.15 mmol/kg. Breath-held segSSFP, breath-held segFLASH, and free-breathing single-shot SSFP LGE sequences were performed in consecutive series in each child. Two blinded reviewers evaluated the quality of the images and rated them on a scale of 1-5 (1 = poor, 5 = superior) based on blood pool-myocardial definition, presence of cardiac motion, presence of respiratory motion artifacts, and image acquisition artifact. We used analysis of variance (ANOVA) to compare groups. Patients ranged in age from 9 months to 18 years, with a mean +/- standard deviation (SD) of 13.3 +/- 4.8 years. R-R interval at the time of acquisition ranged 366-1,265 milliseconds (ms) (47-164 beats per minute [bpm]), mean +/- SD of 843+/-231 ms (72+/-21 bpm). Mean +/- SD quality ratings for long-axis imaging for segFLASH, segSSFP and single-shot SSFP were 3.1+/-0.9, 3.4+/-0.9 and 4.0+/-0.9, respectively (P < 0.01 by ANOVA). Mean +/- SD quality ratings for short-axis imaging for segFLASH, segSSFP and single-shot SSFP were 3.4+/-1, 3.8+/-0.9 and 4.3+/-0.7, respectively (P < 0.01 by ANOVA). Single-shot late

  14. MR brain image analysis in dementia: From quantitative imaging biomarkers to ageing brain models and imaging genetics.

    PubMed

    Niessen, Wiro J

    2016-10-01

    MR brain image analysis has been a highly active research area within medical image analysis over the past two decades. This article discusses how the field developed from the construction of tools for automatic quantification of brain morphology, function, connectivity and pathology, to creating models of the ageing brain in normal ageing and disease, and tools for integrated analysis of imaging and genetic data. The current and future role of the field in improving our understanding of the development of neurodegenerative disease is discussed, along with its potential for aiding early and differential diagnosis and prognosis of different types of dementia. For the latter, the use of reference imaging data and reference models derived from large clinical and population imaging studies, and the application of machine learning techniques to these reference data, are expected to play a key role. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
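    The matching criterion described above can be sketched in a few lines. This is a toy illustration under assumed inputs (random edge maps, a made-up search radius), not the patented implementation, and it omits the final statistical test over near-optimal registration points:

```python
import numpy as np

def register_edges(edges1: np.ndarray, edges2: np.ndarray, search: int = 5):
    """Find the (dy, dx) translation of edges1 that maximizes the fraction
    of edge pixels in edges2 that coincide with edges in shifted edges1."""
    best, best_shift = -1.0, (0, 0)
    n2 = edges2.sum()
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(edges1, dy, axis=0), dx, axis=1)
            pct = np.logical_and(edges2, shifted).sum() / n2
            if pct > best:
                best, best_shift = pct, (dy, dx)
    return best_shift, best

# toy example: the second edge map is the first shifted by (dy, dx) = (2, -1)
rng = np.random.default_rng(0)
edges_a = rng.random((64, 64)) > 0.9
edges_b = np.roll(np.roll(edges_a, 2, axis=0), -1, axis=1)
shift, score = register_edges(edges_a, edges_b)  # -> ((2, -1), 1.0)
```

A full implementation would then collect all shifts whose match percentage is not significantly worse than `score` under the chosen statistical criterion.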

  16. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  17. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  18. Image processing software for imaging spectrometry data analysis

    NASA Astrophysics Data System (ADS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-02-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  19. Information granules in image histogram analysis.

    PubMed

    Wieclawek, Wojciech

    2017-05-10

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular computing idea is introduced. Then, the implementation of this concept in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially medical images acquired by Computed Tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensities and is controlled by two parameters. Performance is tested on anonymized clinical CT series. Copyright © 2017 Elsevier Ltd. All rights reserved.
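    The abstract does not spell out the granule weighting or its two parameters, but its key departure from plain histogram equalization, operating only on a selected intensity range, can be sketched. Here `lo` and `hi` are hypothetical stand-ins for the two controlling parameters:

```python
import numpy as np

def windowed_equalize(img: np.ndarray, lo: int, hi: int) -> np.ndarray:
    """Equalize the histogram only for pixels with lo <= value <= hi,
    remapping them within [lo, hi]; pixels outside are left unchanged."""
    out = img.copy()
    mask = (img >= lo) & (img <= hi)
    vals = img[mask]
    if vals.size == 0:
        return out
    hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
    cdf = hist.cumsum() / hist.sum()              # normalized cumulative histogram
    idx = vals.astype(int) - lo
    out[mask] = (lo + cdf[idx] * (hi - lo)).astype(img.dtype)
    return out

# intensity ramp: only the 100..200 window is remapped
img = np.arange(256, dtype=np.uint8).reshape(16, 16)
out = windowed_equalize(img, 100, 200)
```

For CT this lets the enhancement target a diagnostically relevant intensity window (e.g. soft tissue) while leaving air and bone values untouched.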

  20. Improving face image extraction by using deep learning technique

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-iSM multimodal (text+image) search engine. Many images are visible light photographs, some of which contain faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, significantly improved the detection precision. (For example, the classification accuracy for identifying whether the face regions output by the Viola-Jones detector are true positives is about 96% on a test set.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.
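    The gain from cascading a high-recall detector with a stricter verifier can be illustrated with a small simulation. The scores below are synthetic stand-ins (the actual pipeline used an OpenCV/Matlab Viola-Jones cascade followed by a trained deep network), but the precision arithmetic is the same:

```python
import numpy as np

def precision(preds: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of predicted positives that are true positives."""
    return np.sum(preds & labels) / max(preds.sum(), 1)

rng = np.random.default_rng(1)
labels = rng.random(1000) < 0.3                  # which candidates are real faces
# stage 1: fast but noisy detector score; a low threshold keeps recall high
stage1 = (labels + rng.normal(0.0, 0.8, 1000)) > 0.2
# stage 2: a sharper verifier re-scores only the stage-1 survivors
stage2 = stage1 & ((labels + rng.normal(0.0, 0.2, 1000)) > 0.5)
p1, p2 = precision(stage1, labels), precision(stage2, labels)
```

Because stage 2 only needs to separate true faces from stage-1 false alarms, it can be trained on the detector's own outputs instead of a hand-delineated face dataset.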

  1. Improvements in interpretation of posterior capsular opacification (PCO) images

    NASA Astrophysics Data System (ADS)

    Paplinski, Andrew P.; Boyce, James F.; Barman, Sarah A.

    2000-06-01

    We present further improvements to the methods of interpretation of Posterior Capsular Opacification (PCO) images. These retro-illumination images of the back surface of the implanted lens are used to monitor the state of the patient's vision after a cataract operation. A common post-surgical complication is opacification of the posterior eye capsule caused by the growth of epithelial cells across the back surface of the capsule. Interpretation of the PCO images is based on their segmentation into transparent image areas and opaque areas, which are affected by the growth of epithelial cells and can be characterized by an increase in the local image variance. This assumption is valid in the majority of cases. However, for some of the materials used for the implanted lenses, the epithelial cells occasionally grow in a pattern characterized by low variance; in such cases, segmentation gives a relatively large error. We describe the application of an anisotropic diffusion equation in a non-linear pre-processing of PCO images. The algorithm preserves the high-variance areas of PCO images while low-pass filtering small low-variance features. It maintains the mean value of the variance, guarantees the existence of a stable solution, and improves segmentation of the PCO images.
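    The abstract does not give the exact diffusion equation used; the classic Perona-Malik scheme below is a representative sketch of this kind of edge-preserving smoothing, with illustrative parameter values:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=15.0, gamma=0.2):
    """Perona-Malik diffusion: smooths low-gradient regions while
    preserving strong edges (high local variance)."""
    u = img.astype(float)
    for _ in range(n_iter):
        # neighbour differences (wrap-around borders are fine for a sketch)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conduction coefficients: near zero across strong edges
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

# noisy step image: the flat halves are smoothed, the edge is kept
rng = np.random.default_rng(0)
img = np.zeros((32, 32)); img[:, 16:] = 100.0
noisy = img + rng.normal(0, 5.0, img.shape)
smoothed = anisotropic_diffusion(noisy)
```

The conduction coefficient `exp(-(grad/kappa)**2)` is what suppresses diffusion across boundaries, so small low-variance features are filtered while opaque/transparent boundaries survive.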

  2. Digital Image Analysis for DETECHIP(®) Code Determination.

    PubMed

    Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby

    2012-08-01

    DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods.
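    The per-spot RGB readout described above can be mimicked in a few lines of NumPy; the array layout and spot coordinates here are illustrative, not taken from the paper:

```python
import numpy as np

def mean_rgb(img: np.ndarray, box) -> np.ndarray:
    """Average R, G, B over a rectangular spot (y0, y1, x0, x1), mimicking
    the per-region RGB measurement obtained from ImageJ, GIMP, or Photoshop."""
    y0, y1, x0, x1 = box
    return img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)

# toy scanned array: one uniform colored spot on a white background
img = np.full((50, 50, 3), 255, dtype=np.uint8)
img[10:20, 10:20] = (200, 30, 40)
spot = mean_rgb(img, (10, 20, 10, 20))            # -> [200., 30., 40.]
shift = spot - mean_rgb(img, (30, 40, 30, 40))    # color change vs. a blank region
```

Averaging over the whole spot, rather than reading single pixels, is what makes scanner-based codes reproducible across acquisitions.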

  3. Sparse Superpixel Unmixing for Hyperspectral Image Analysis

    NASA Technical Reports Server (NTRS)

    Castano, Rebecca; Thompson, David R.; Gilmore, Martha

    2010-01-01

    Software was developed that automatically detects minerals that are present in each pixel of a hyperspectral image. An algorithm based on sparse spectral unmixing with Bayesian Positive Source Separation is used to produce mineral abundance maps from hyperspectral images. A superpixel segmentation strategy enables efficient unmixing in an interactive session. The algorithm computes statistically likely combinations of constituents based on a set of possible constituent minerals whose abundances are uncertain. A library of source spectra from laboratory experiments or previous remote observations is used. A superpixel segmentation strategy improves analysis time by orders of magnitude, permitting incorporation into an interactive user session (see figure). Mineralogical search strategies can be categorized as supervised or unsupervised. Supervised methods use a detection function, developed on previous data by hand or statistical techniques, to identify one or more specific target signals. Purely unsupervised results are not always physically meaningful, and may ignore subtle or localized mineralogy since they aim to minimize reconstruction error over the entire image. This algorithm offers advantages of both methods, providing meaningful physical interpretations and sensitivity to subtle or unexpected minerals.
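    Sparse unmixing against a spectral library can be sketched with nonnegative least squares, a simpler stand-in for the Bayesian Positive Source Separation used in the paper; the library and mixture below are synthetic:

```python
import numpy as np
from scipy.optimize import nnls

# toy spectral library: 4 endmember spectra over 20 bands (columns of A)
rng = np.random.default_rng(0)
A = rng.random((20, 4))
truth = np.array([0.6, 0.0, 0.4, 0.0])     # sparse ground-truth abundances
pixel = A @ truth                          # observed (super)pixel spectrum
abund, resid = nnls(A, pixel)              # nonnegativity promotes sparsity
```

Running one such solve per superpixel instead of per pixel is what yields the orders-of-magnitude speedup that makes interactive sessions feasible.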

  4. An improved K-means clustering algorithm in agricultural image segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Huifeng; Peng, Hui; Liu, Shanmei

    Image segmentation is the first important step in image analysis and image processing. In this paper, based on the characteristics of color crop images, we first transform the image color space from RGB to HSI, then select proper initial cluster centers and the cluster number using a mean-variance approach and rough set theory, and finally perform the clustering calculation so as to rapidly and automatically segment the color components and accurately extract target objects from the background. This provides a reliable basis for the identification, analysis, and follow-up processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm reduces the amount of computation and enhances the precision and accuracy of clustering.
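    The clustering stage can be sketched with a plain k-means over pixels in an HSI-like space; the initial centers passed in below stand in for the ones the paper derives from its mean-variance/rough-set analysis, and the two pixel populations are synthetic:

```python
import numpy as np

def kmeans(X, centers, n_iter=20):
    """Plain k-means with caller-supplied initial cluster centres."""
    c = centers.astype(float).copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
        lab = d.argmin(axis=1)                    # assign to nearest centre
        for k in range(len(c)):
            if np.any(lab == k):
                c[k] = X[lab == k].mean(axis=0)   # recompute centres
    return lab, c

# toy crop/background pixels in an (H, S, I)-like space
rng = np.random.default_rng(2)
crop = rng.normal((0.30, 0.8, 0.5), 0.03, (200, 3))   # green foliage
soil = rng.normal((0.08, 0.4, 0.4), 0.03, (200, 3))   # brown background
X = np.vstack([crop, soil])
labels, centres = kmeans(X, centers=np.array([[0.3, 0.8, 0.5], [0.1, 0.4, 0.4]]))
```

Good initial centers are the point of the paper's improvement: with them, k-means converges in few iterations and avoids poor local minima.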

  5. Providing information to improve body image and care-seeking behavior of women and girls living with female genital mutilation: A systematic review and meta-analysis.

    PubMed

    Esu, Ekpereonne; Okoye, Ifeyinwa; Arikpo, Iwara; Ejemot-Nwadiaro, Regina; Meremikwu, Martin M

    2017-02-01

    Female genital mutilation (FGM) has become recognized worldwide as an extreme form of violation of the human rights of girls and women. Strategies have been employed to curb the practice. To conduct a systematic review of randomized and nonrandomized studies of the effects of providing educational interventions on the body image and care-seeking behavior of girls and women living with FGM with the view to ending the practice. CENTRAL, MEDLINE, and other databases were searched up to August 10, 2015 without any language restrictions. Studies that provided education to women and/or girls living with any type of FGM or residing in countries where FGM is predominantly practiced were included. Two authors independently screened and collected data. We summarized dichotomous outcomes using odds ratios and evidence was assessed using the GRADE system (Grading of Recommendations Assessment, Development, and Evaluation). Educational interventions resulted in fewer women recommending FGM for their daughters and also reduced the incidence of FGM cases among daughters of women who received the educational interventions. These findings need to be validated with large randomized trials. 42015024637. © 2017 International Federation of Gynecology and Obstetrics. The World Health Organization retains copyright and all other rights in the manuscript of this article as submitted for publication.

  6. An improved dehazing algorithm of aerial high-definition image

    NASA Astrophysics Data System (ADS)

    Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying

    2016-01-01

    For unmanned aerial vehicle (UAV) images, the sensor cannot acquire high-quality images in fog and haze weather. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crude estimated transmission map and expands them. According to the expanded edges, the algorithm then sets a threshold to divide the crude estimated transmission map into different areas and applies a different guided filter to each area to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filter, while its average computation time is around 40% of that algorithm's, and the detection ability of UAV images in fog and haze weather is improved effectively.
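    The dark-channel-prior step that produces the crude estimated transmission map can be sketched as follows. Patch size, `omega`, and the demo values are illustrative, and the paper's edge-guided filter refinement is omitted:

```python
import numpy as np

def dark_channel(img, patch=7):
    """Per-pixel minimum over RGB, followed by a min-filter over a patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for y in range(mins.shape[0]):
        for x in range(mins.shape[1]):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def transmission(img, A, omega=0.95, patch=7):
    """Crude transmission estimate: t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / A, patch)

A = np.array([1.0, 1.0, 1.0])                       # estimated atmospheric light
clear = np.zeros((12, 12, 3)); clear[..., 0] = 0.8  # saturated red: dark channel ~ 0
hazy = np.full((12, 12, 3), 0.95)                   # near-white: dense haze
t_clear = transmission(clear, A)                    # ~ 1.0 everywhere
t_hazy = transmission(hazy, A)                      # ~ 0.10 (heavy haze)
```

The prior works because haze-free patches almost always contain some channel near zero, so a bright dark channel signals haze and hence low transmission.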

  7. Human eye visual hyperacuity: Controlled diffraction for image resolution improvement

    NASA Astrophysics Data System (ADS)

    Lagunas, A.; Domínguez, O.; Martinez-Conde, S.; Macknik, S. L.; Del-Río, C.

    2017-09-01

    The Human Visual System appears to use a low number of sensors for image capture; furthermore, considering the physical dimensions of cones (the photoreceptors responsible for sharp central vision), these sensors are of relatively small size and area. Nonetheless, the human eye is capable of resolving fine details thanks to visual hyperacuity, and it presents an impressive sensitivity and dynamic range when set against conventional digital cameras of similar characteristics. This article is based on the hypothesis that the human eye may be benefiting from diffraction to improve both image resolution and the acquisition process. The developed method involves the introduction of a controlled diffraction pattern at an initial stage, which enables the use of a limited number of sensors for capturing the image and makes possible subsequent post-processing to improve the final image resolution.

  8. Image-plane processing for improved computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    The combination of optical design with image-plane processing, as in the mechanism of human vision, was examined as a way to improve the performance of sensor-array imaging systems for edge detection and location. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission. It also permits control of the imaging system's spatial response in order to trade off edge enhancement against sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal intensity transitions from which the locations of edges are determined for raw primal sketches. Shading the lens transmittance to increase depth of field and using a hexagonal rather than square sensor-array lattice to decrease sensitivity to edge orientation improve edge information by about 10%.
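    The effect of such two-dimensional bandpass filtering can be sketched digitally with a difference-of-Gaussians filter, a common stand-in for an optical bandpass; the sigmas and test image are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bandpass(img, sigma_lo=1.0, sigma_hi=3.0):
    """Difference-of-Gaussians: a 2-D bandpass that enhances intensity
    transitions (edges) while suppressing both slow shading and pixel noise."""
    return gaussian_filter(img, sigma_lo) - gaussian_filter(img, sigma_hi)

# vertical step edge: the bandpass response peaks at the transition
img = np.zeros((32, 32)); img[:, 16:] = 1.0
resp = np.abs(dog_bandpass(img))
```

Transmitting only the strong bandpass responses is what lets such a system keep the edge information while discarding most of the raw signal.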

  9. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
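    The shrink-the-ML-update idea can be illustrated with the simplest member of this family: a Gaussian noise model with an ordinary Laplacian prior, whose MAP update is soft-thresholding. The paper's muon data model and its inverse quadratic/cubic shrinkage rules for generalized priors are more involved; all numbers below are synthetic:

```python
import numpy as np

def map_shrink(ml_update, lam, sigma2):
    """MAP update for a Gaussian likelihood and Laplacian prior:
    soft-threshold the unregularized maximum-likelihood update."""
    t = lam * sigma2
    return np.sign(ml_update) * np.maximum(np.abs(ml_update) - t, 0.0)

# mostly-zero scattering densities plus a few high-Z 'targets'
rng = np.random.default_rng(3)
truth = np.zeros(100); truth[[10, 50]] = 5.0
noisy = truth + rng.normal(0, 0.4, 100)      # stand-in for a noisy ML image
denoised = map_shrink(noisy, lam=4.0, sigma2=0.25)
```

The prior suppresses the noise that otherwise masks high-Z targets, which is exactly the detectability gain the ROC studies quantify.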

  10. Image-plane processing for improved computer vision

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    The combination of optical design with image-plane processing, as in the mechanism of human vision, was examined as a way to improve the performance of sensor-array imaging systems for edge detection and location. Two-dimensional bandpass filtering during image formation optimizes edge enhancement and minimizes data transmission. It also permits control of the imaging system's spatial response in order to trade off edge enhancement against sensitivity at low light levels. It is shown that most of the information, up to about 94%, is contained in the signal intensity transitions from which the locations of edges are determined for raw primal sketches. Shading the lens transmittance to increase depth of field and using a hexagonal rather than square sensor-array lattice to decrease sensitivity to edge orientation improve edge information by about 10%.

  11. Digital image enhancement improves diagnosis of nondisplaced proximal femur fractures.

    PubMed

    Botser, Itamar Busheri; Herman, Amir; Nathaniel, Ram; Rappaport, Dan; Chechik, Aharon

    2009-01-01

    Today most emergency room radiographs are computerized, making digital image enhancement a natural advancement to improve fracture diagnosis. We compared the diagnosis of nondisplaced proximal femur fractures using four different image enhancement methods and standard DICOM (Digital Imaging and Communications in Medicine) images after window-leveling optimization. Twenty-nine orthopaedic residents and specialists reviewed 28 pelvic images consisting of 25 occult proximal femur fractures and three images with no fracture, using four different image filters and the original DICOM image. For intertrochanteric fractures, the Retinex filter outperformed the other filters and the original image, with a correct fracture-type diagnosis rate of 50.6%. The Retinex filter also performed well for diagnosis of other fracture types and had an interobserver agreement index of 53.5%, higher than the other filters. Sensitivity of fracture diagnosis increased to 85.2% when the Retinex filter was combined with the standard DICOM image. Correct fracture-type diagnoses per minute for the Retinex filter were 1.43, outperforming the other filters. The Retinex filter may become a valuable tool in clinical settings for diagnosing fractures. Level I, diagnostic study. See the Guidelines for Authors for a complete description of levels of evidence.

  12. Digital Image Enhancement Improves Diagnosis of Nondisplaced Proximal Femur Fractures

    PubMed Central

    Herman, Amir; Nathaniel, Ram; Rappaport, Dan; Chechik, Aharon

    2008-01-01

    Today most emergency room radiographs are computerized, making digital image enhancement a natural advancement to improve fracture diagnosis. We compared the diagnosis of nondisplaced proximal femur fractures using four different image enhancement methods and standard DICOM (Digital Imaging and Communications in Medicine) images after window-leveling optimization. Twenty-nine orthopaedic residents and specialists reviewed 28 pelvic images consisting of 25 occult proximal femur fractures and three images with no fracture, using four different image filters and the original DICOM image. For intertrochanteric fractures, the Retinex filter outperformed the other filters and the original image, with a correct fracture-type diagnosis rate of 50.6%. The Retinex filter also performed well for diagnosis of other fracture types and had an interobserver agreement index of 53.5%, higher than the other filters. Sensitivity of fracture diagnosis increased to 85.2% when the Retinex filter was combined with the standard DICOM image. Correct fracture-type diagnoses per minute for the Retinex filter were 1.43, outperforming the other filters. The Retinex filter may become a valuable tool in clinical settings for diagnosing fractures. Level of Evidence: Level I, diagnostic study. See the Guidelines for Authors for a complete description of levels of evidence. PMID:18791776

  13. Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures.

    PubMed

    Mastanduno, Michael A; Gambhir, Sanjiv S

    2016-10-01

    Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth- and spectrally-derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy of less than 5% error in large 3-cm-diameter 2D geometries with multiple targets, and within 22% error in the largest quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method was able to improve considerably on filtered back-projection in terms of image quality and absolute and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained especially from deep-tissue photoacoustic imaging studies.

  14. Improved target detection by IR dual-band image fusion

    NASA Astrophysics Data System (ADS)

    Adomeit, U.; Ebert, R.

    2009-09-01

    Dual-band thermal imagers acquire information simultaneously in both the 8-12 μm (long-wave infrared, LWIR) and the 3-5 μm (mid-wave infrared, MWIR) spectral ranges. Compared to single-band thermal imagers they are expected to have several advantages in military applications. These advantages include the opportunity to use the best band for given atmospheric conditions (e.g. cold climate: LWIR; hot and humid climate: MWIR), the potential to better detect camouflaged targets, and improved discrimination between targets and decoys. Most of these advantages have not yet been verified and/or quantified. It is expected that image fusion allows better exploitation of the information content available with dual-band imagers, especially with respect to the detection of targets. We have developed a method for dual-band image fusion based on the apparent temperature differences in the two bands. This method showed promising results in laboratory tests. In order to evaluate its performance under operational conditions we conducted a field trial in an area with high thermal clutter. In such areas, targets are hard to detect in single-band images because they vanish in the clutter structure. The image data collected in this field trial were used for a perception experiment, which showed an enhanced target detection range and a reduced false-alarm rate for the fused images compared to the single-band images.

  15. Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures

    PubMed Central

    Mastanduno, Michael A.; Gambhir, Sanjiv S.

    2016-01-01

    Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth- and spectrally-derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy of less than 5% error in large 3-cm-diameter 2D geometries with multiple targets, and within 22% error in the largest quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method was able to improve considerably on filtered back-projection in terms of image quality and absolute and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained especially from deep-tissue photoacoustic imaging studies. PMID:27867695

  16. Automated Digital Image Analysis of Islet Cell Mass Using Nikon's Inverted Eclipse Ti Microscope and Software to Improve Engraftment may Help to Advance the Therapeutic Efficacy and Accessibility of Islet Transplantation across Centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

    Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88), which increased to r(2) = 0.93 when islet surface area was compared with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method than with the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 μm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this
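    The IEQ normalization mentioned above expresses each islet's volume relative to a standard islet of 150 µm diameter; under the usual spherical assumption this reduces to a cube of the diameter ratio. This is a simplified sketch (practical counting typically bins diameters into ranges with tabulated conversion factors):

```python
def islet_equivalents(diameters_um):
    """IEQ: total islet volume expressed in units of a 150-um-diameter
    sphere, i.e. sum((d / 150)**3) for islets modeled as spheres."""
    return sum((d / 150.0) ** 3 for d in diameters_um)

# one standard islet plus one of twice the diameter (8x the volume)
ieq = islet_equivalents([150.0, 300.0])   # -> 9.0
```

Automating the diameter measurement is what removes the inter-observer variability in the volume estimate that the manual dithizone count suffers from.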

  17. Automated digital image analysis of islet cell mass using Nikon's inverted eclipse Ti microscope and software to improve engraftment may help to advance the therapeutic efficacy and accessibility of islet transplantation across centers.

    PubMed

    Gmyr, Valery; Bonner, Caroline; Lukowiak, Bruno; Pawlowski, Valerie; Dellaleau, Nathalie; Belaich, Sandrine; Aluka, Isanga; Moermann, Ericka; Thevenet, Julien; Ezzouaoui, Rimed; Queniat, Gurvan; Pattou, Francois; Kerr-Conte, Julie

    2015-01-01

    Reliable assessment of islet viability, mass, and purity must be met prior to transplanting an islet preparation into patients with type 1 diabetes. The standard method for quantifying human islet preparations is by direct microscopic analysis of dithizone-stained islet samples, but this technique may be susceptible to inter-/intraobserver variability, which may induce false positive/negative islet counts. Here we describe a simple, reliable, automated digital image analysis (ADIA) technique for accurately quantifying islets into total islet number, islet equivalent number (IEQ), and islet purity before islet transplantation. Islets were isolated and purified from n = 42 human pancreata according to the automated method of Ricordi et al. For each preparation, three islet samples were stained with dithizone and expressed as IEQ number. Islets were analyzed manually by microscopy or automatically quantified using Nikon's inverted Eclipse Ti microscope with built-in NIS-Elements Advanced Research (AR) software. The ADIA method significantly enhanced the number of islet preparations eligible for engraftment compared to the standard manual method (p < 0.001). Comparisons of individual methods showed good correlations between mean values of IEQ number (r(2) = 0.91) and total islet number (r(2) = 0.88), which increased to r(2) = 0.93 when islet surface area was compared with IEQ number. The ADIA method showed very high intraobserver reproducibility compared to the standard manual method (p < 0.001). However, islet purity was routinely estimated as significantly higher with the manual method than with the ADIA method (p < 0.001). The ADIA method also detected small islets between 10 and 50 µm in size. Automated digital image analysis utilizing the Nikon Instruments software is an unbiased, simple, and reliable teaching tool to comprehensively assess the individual size of each islet cell preparation prior to transplantation. Implementation of this

  18. Encrypting 2D/3D image using improved lensless integral imaging in Fresnel domain

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Wei; Wang, Qiong-Hua; Kim, Seok-Tae; Lee, In-Kwon

    2016-12-01

    We propose a new image encryption technique, for the first time to our knowledge, combining the Fresnel transform with the improved lensless integral imaging technique. In this work, before encryption, the input image is first recorded into an elemental image array (EIA) using the improved lensless integral imaging technique. The recorded EIA is encrypted into random noise by use of two phase masks located in the Fresnel domain. The positions of the phase masks, the operating wavelength, and the integral imaging system parameters serve as encryption keys that ensure security. Compared with previous works, the main novelty of the proposed method resides in the fact that the elemental images possess a distributed memory characteristic, which greatly improves the robustness of the image encryption algorithm. Meanwhile, the proposed pixel averaging algorithm effectively addresses the overlapping problem in the computational integral imaging reconstruction process. Numerical simulations are presented to demonstrate the feasibility and effectiveness of the proposed method. The results also indicate high robustness against data loss attacks.
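    The double-phase-mask encryption step can be sketched in numpy as classic double-random-phase encoding in the Fresnel domain. The grid size, wavelength, and propagation distances below are illustrative, and the paraxial transfer-function propagator is an assumption about the implementation; the toy binary image stands in for the recorded EIA:

    ```python
    import numpy as np

    # Minimal sketch of double-random-phase encoding in the Fresnel domain,
    # i.e. the encryption applied to the elemental image array (EIA) above.
    # Wavelength, distances, and grid size are illustrative assumptions.

    def fresnel(u, wl, z, dx):
        """Fresnel propagation over distance z (paraxial transfer function)."""
        n = u.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        H = np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2))
        return np.fft.ifft2(np.fft.fft2(u) * H)

    def encrypt(img, m1, m2, wl, z1, z2, dx):
        u = fresnel(img * m1, wl, z1, dx)   # first mask, propagate z1
        return fresnel(u * m2, wl, z2, dx)  # second mask, propagate z2

    def decrypt(c, m1, m2, wl, z1, z2, dx):
        u = fresnel(c, wl, -z2, dx) * np.conj(m2)  # back-propagate, undo mask 2
        return fresnel(u, wl, -z1, dx) * np.conj(m1)

    rng = np.random.default_rng(0)
    n, wl, z1, z2, dx = 64, 633e-9, 0.05, 0.03, 10e-6
    img = np.zeros((n, n)); img[24:40, 24:40] = 1.0  # toy "EIA"
    m1 = np.exp(2j * np.pi * rng.random((n, n)))     # random phase keys
    m2 = np.exp(2j * np.pi * rng.random((n, n)))

    cipher = encrypt(img, m1, m2, wl, z1, z2, dx)
    recovered = np.abs(decrypt(cipher, m1, m2, wl, z1, z2, dx))
    print(np.allclose(recovered, img, atol=1e-6))  # exact keys recover the image
    ```

    Without the correct phase keys, distances, or wavelength, the back-propagation step returns noise, which is what makes those parameters usable as encryption keys.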

  19. A Mathematical Framework for Image Analysis

    DTIC Science & Technology

    1991-08-01

    The results reported here were derived from the research project 'A Mathematical Framework for Image Analysis' supported by the Office of Naval Research, contract N00014-88-K-0289 to Brown University. A common theme for the work reported is the use of probabilistic methods for problems in image analysis and image reconstruction. Five areas of research are described: rigid body recognition using a decision tree/combinatorial approach; nonrigid

  20. A synthetic luciferin improves bioluminescence imaging in live mice

    PubMed Central

    Evans, Melanie S.; Chaurette, Joanna P.; Adams, Spencer T.; Reddy, Gadarla R.; Paley, Miranda A.; Aronin, Neil; Prescher, Jennifer A.; Miller, Stephen C.

    2014-01-01

    Firefly luciferase is the most widely used optical reporter for noninvasive bioluminescence imaging (BLI) in rodents. BLI relies on the ability of the injected luciferase substrate D-luciferin to access luciferase-expressing cells and tissues within the animal. Here we show that injection of mice with a synthetic luciferin, CycLuc1, improves BLI from existing luciferase reporters and enables imaging in the brain that could not be achieved with D-luciferin. PMID:24509630

  1. A synthetic luciferin improves bioluminescence imaging in live mice.

    PubMed

    Evans, Melanie S; Chaurette, Joanna P; Adams, Spencer T; Reddy, Gadarla R; Paley, Miranda A; Aronin, Neil; Prescher, Jennifer A; Miller, Stephen C

    2014-04-01

    Firefly luciferase is the most widely used optical reporter for noninvasive bioluminescence imaging (BLI) in rodents. BLI relies on the ability of the injected luciferase substrate D-luciferin to access luciferase-expressing cells and tissues within the animal. Here we show that injection of mice with a synthetic luciferin, CycLuc1, improves BLI with existing luciferase reporters and enables imaging in the brain that could not be achieved with D-luciferin.

  2. An improved balloon snake for HIFU image-guided system.

    PubMed

    Li, Zhong-Bing; Xu, Xian-Ze; Le, Yi; Xu, Feng-Qiu

    2014-07-01

    Target segmentation in ultrasound images is a key step in defining the intra-operative plan for high-intensity focused ultrasound therapy. This paper presents an improved balloon snake for segmentation. A sign function, designed from the edge map and the moving snake, is added to set the direction of the balloon force at each point of the moving snake. Segmentation results are demonstrated on ultrasound images, showing the effectiveness and convenience of the method in applications.

  3. Individualized image display improves performance in laparoscopic surgery.

    PubMed

    Thakkar, Rajan K; Steigman, Shaun A; Aidlen, Jeremy T; Luks, François I

    2012-12-01

    Laparoscopic surgery has made great advances over the years, but it is still dependent on a single viewpoint. This single-lens system impedes multitasking and may provide suboptimal views of the operative field. We have previously developed a prototype of an interactive laparoscopic image display to enable individualized manipulation of the displayed image by each member of the operating team. The current study examines whether the concept of individualized image display improves performance during laparoscopic surgery. Individualized display of the endoscopic image was implemented in vitro using two cameras, independently manipulated by each operator, in a Fundamentals of Laparoscopic Surgery (Society of American Gastrointestinal and Endoscopic Surgeons) endotrainer model. The standardized bead transfer and endoloop tasks were adapted to a two-operator exercise. Each team of two was paired by experience level (novice or expert) and was timed twice: once while using a single camera (control) and once using two cameras (individualized image). In total, 20 medical students, residents, and attending surgeons were paired in various combinations. Bead transfer times for the individualized image experiment were significantly shorter in the expert group (61.8 ± 14.8% of control, P=.002). Endoloop task performance time was significantly decreased in both novices (80.3 ± 44.4%, P=.04) and experts (69.5 ± 12.9%, P=.001) using the two-camera set-up. Many advances in laparoscopic image display have led to incremental improvements in performance. They have been most beneficial to novices, as experts have learned to overcome the shortcomings of laparoscopy. Using a validated tool of laparoscopic training, we have shown that efficiency is improved with the use of an individualized image display and that this effect is more pronounced in experts. The concept of individual image manipulation and display will be further developed into a hands-free, intuitive system and must be

  4. "Multimodal Contrast" from the Multivariate Analysis of Hyperspectral CARS Images

    NASA Astrophysics Data System (ADS)

    Tabarangao, Joel T.

    The typical contrast mechanism employed in multimodal CARS microscopy involves the use of other nonlinear imaging modalities such as two-photon excitation fluorescence (TPEF) microscopy and second harmonic generation (SHG) microscopy to produce a molecule-specific pseudocolor image. In this work, I explore the use of unsupervised multivariate statistical analysis tools such as Principal Component Analysis (PCA) and Vertex Component Analysis (VCA) to provide better contrast using the hyperspectral CARS data alone. Using simulated CARS images, I investigate how the quadratic dependence of the CARS signal on concentration affects pixel clustering and classification, and I find that a normalization step is necessary to improve pixel color assignment. Using an atherosclerotic rabbit aorta test image, I show that the VCA algorithm provides pseudocolor contrast that is comparable to multimodal imaging, thus showing that much of the information gleaned from a multimodal approach can be sufficiently extracted from the CARS hyperspectral stack itself.
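    The normalize-then-PCA step described above can be sketched with numpy alone. The two toy "resonances", the concentration map, and the unit-norm normalization are assumptions for illustration; the thesis's exact normalization may differ:

    ```python
    import numpy as np

    # Hedged sketch: PCA on a simulated hyperspectral stack (H x W x bands),
    # with per-pixel spectrum normalization so that the quadratic concentration
    # dependence of the CARS signal does not dominate the clustering.
    # The two "chemical species" spectra below are pure illustrations.

    rng = np.random.default_rng(1)
    h, w, bands = 8, 8, 50
    x = np.linspace(0, 1, bands)
    spec_a = np.exp(-((x - 0.3) / 0.05) ** 2)   # toy resonance A
    spec_b = np.exp(-((x - 0.7) / 0.05) ** 2)   # toy resonance B
    conc = rng.random((h, w))                   # concentration map
    cube = (conc[..., None] ** 2) * spec_a + ((1 - conc)[..., None] ** 2) * spec_b
    cube += 0.01 * rng.standard_normal(cube.shape)

    pixels = cube.reshape(-1, bands)
    pixels = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)  # normalize
    pixels = pixels - pixels.mean(axis=0)       # center for PCA
    u, s, vt = np.linalg.svd(pixels, full_matrices=False)
    scores = pixels @ vt[:2].T                  # first two PC score images
    print(scores.reshape(h, w, 2).shape)        # -> (8, 8, 2)
    ```

    Without the normalization line, the first principal component mostly tracks overall signal intensity (concentration squared) rather than spectral shape, which is the pixel-assignment problem the thesis identifies.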

  5. The New and Improved Imager on GOES-R

    NASA Astrophysics Data System (ADS)

    Gunshor, M.; Schmit, T. J.; Otkin, J.; Bah, K.

    2012-12-01

    The next generation geostationary satellite series will offer a continuation of current products and services and enable improved and new capabilities. This will not only provide continuity of services, but also allow many new uses of the imager information. The Advanced Baseline Imager (ABI) on the Geostationary Operational Environmental Satellites (GOES)-R series will address a wide range of phenomena. As with the current GOES Imager, the ABI will be used for a wide range of operational applications. The ABI will improve upon the current GOES Imager with more spectral bands, faster imaging, finer spatial resolution, better navigation, and more accurate calibration. The ABI expands from five spectral bands on the current GOES imagers to a total of 16 spectral bands in the visible, near-infrared, and infrared spectral regions. The coverage rate will also increase, with full disk scans at least every 15 minutes and continental US (CONUS) scans every 5 minutes. ABI spatial resolution will be nominally 2 km for the infrared bands and 0.5 km for the 0.64 µm visible band. The improvements, products, and challenges that have arisen in planning and preparing to use this new satellite system will be highlighted.

  6. Image Reconstruction Using Analysis Model Prior

    PubMed Central

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator to the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum number of measurements which are lower than those in cases without the analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171
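    The cosparsity idea behind the abstract can be illustrated with the 2D finite difference operator it names: applied to a piecewise-constant image, the analysis vector is mostly zero (a large cosupport). The toy image is an assumption for illustration:

    ```python
    import numpy as np

    # Illustration of the cosparse analysis model: under the 2D
    # finite-difference operator, a piecewise-constant image yields an
    # analysis vector that is mostly zero (large cosupport).

    def finite_difference(img):
        """Stack horizontal and vertical first differences (the analysis operator)."""
        dx = np.diff(img, axis=1).ravel()
        dy = np.diff(img, axis=0).ravel()
        return np.concatenate([dx, dy])

    img = np.zeros((16, 16))
    img[4:12, 4:12] = 3.0                # piecewise-constant block
    a = finite_difference(img)
    cosupport = int(np.sum(a == 0))      # rows of the operator hitting zero
    print(len(a), cosupport)             # -> 480 448
    ```

    It is this large set of zero rows (the cosupport), rather than a sparse synthesis coefficient vector, that the ICD-style methods detect and exploit to lower the number of measurements needed.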

  7. Spectral identity mapping for enhanced chemical image analysis

    NASA Astrophysics Data System (ADS)

    Turner, John F., II

    2005-03-01

    Advances in spectral imaging instrumentation during the last two decades have led to higher image fidelity, tighter spatial resolution, narrower spectral resolution, and improved signal-to-noise ratios. An important sub-classification of spectral imaging is chemical imaging, in which the sought-after information from the sample is its chemical composition. Consequently, chemical imaging can be thought of as a two-step process: spectral image acquisition and the subsequent processing of the spectral image data to generate chemically relevant image contrast. While chemical imaging systems that provide turnkey data acquisition are increasingly widespread, better strategies to analyze the vast datasets they produce are needed. The generation of chemically relevant image contrast from spectral image data requires multivariate processing algorithms that can categorize spectra according to shape. Conventional chemometric techniques like inverse least squares, classical least squares, multiple linear regression, principal component regression, and multivariate curve resolution are effective for predicting the chemical composition of samples having known constituents, but are less effective when a priori information about the sample is unavailable. To address these problems, we have developed a fully automated non-parametric technique called spectral identity mapping (SIMS) that reduces the dependence of spectral image analysis on training datasets. The qualitative SIMS method provides enhanced spectral shape specificity and improved chemical image contrast. We present SIMS results of infrared spectral image data acquired from polymer-coated paper substrates used in the manufacture of pressure-sensitive adhesive tapes. In addition, we compare the SIMS results to results from spectral angle mapping (SAM) and cosine correlation analysis (CCA), two closely related techniques.
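    Spectral angle mapping (SAM), one of the comparison methods named above, scores each pixel by the angle between its spectrum and a reference spectrum, which makes it insensitive to overall intensity scaling. A minimal numpy sketch with toy three-band spectra:

    ```python
    import numpy as np

    # Sketch of spectral angle mapping (SAM): the angle between each pixel
    # spectrum and a reference spectrum, insensitive to intensity scaling.
    # The three-band toy spectra are assumptions for illustration.

    def spectral_angle(pixels, ref):
        """pixels: (N, bands); ref: (bands,). Returns angles in radians."""
        num = pixels @ ref
        den = np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref)
        return np.arccos(np.clip(num / den, -1.0, 1.0))

    ref = np.array([1.0, 2.0, 3.0])
    pix = np.array([[2.0, 4.0, 6.0],    # same shape, scaled -> angle 0
                    [3.0, 2.0, 1.0]])   # different shape -> angle > 0
    angles = spectral_angle(pix, ref)
    print(np.round(angles, 4))
    ```

    Cosine correlation analysis (CCA) is closely related: it reports the cosine of this angle rather than the angle itself.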

  8. Applications of process improvement techniques to improve workflow in abdominal imaging.

    PubMed

    Tamm, Eric Peter

    2016-03-01

    Major changes in the management and funding of healthcare are underway that will markedly change the way radiology studies will be reimbursed. The result will be the need to deliver radiology services in a highly efficient manner while maintaining quality. The science of process improvement provides a practical approach to improve the processes utilized in radiology. This article will address in a step-by-step manner how to implement process improvement techniques to improve workflow in abdominal imaging.

  9. A dual-view digital tomosynthesis imaging technique for improved chest imaging

    SciTech Connect

    Zhong, Yuncheng; Lai, Chao-Jen; Wang, Tianpeng; Shaw, Chris C.

    2015-09-15

    Purpose: Digital tomosynthesis (DTS) has been shown to be useful for reducing the overlapping of abnormalities with anatomical structures at various depth levels along the posterior–anterior (PA) direction in chest radiography. However, DTS provides crude three-dimensional (3D) images that have poor resolution in the lateral view and can only be displayed with reasonable quality in the PA view. Furthermore, the spillover of high-contrast objects from off-fulcrum planes generates artifacts that may impede the diagnostic use of the DTS images. In this paper, the authors describe and demonstrate the use of a dual-view DTS technique to improve the accuracy of the reconstructed volume image data for more accurate rendition of the anatomy and slice images with improved resolution and reduced artifacts, thus allowing the 3D image data to be viewed in views other than the PA view. Methods: With the dual-view DTS technique, limited angle scans are performed and projection images are acquired in two orthogonal views: PA and lateral. The dual-view projection data are used together to reconstruct 3D images using the maximum likelihood expectation maximization iterative algorithm. In this study, projection images were simulated or experimentally acquired over 360° using the scanning geometry for cone beam computed tomography (CBCT). While all projections were used to reconstruct CBCT images, selected projections were extracted and used to reconstruct single- and dual-view DTS images for comparison with the CBCT images. For realistic demonstration and comparison, a digital chest phantom derived from clinical CT images was used for the simulation study. An anthropomorphic chest phantom was imaged for the experimental study. The resultant dual-view DTS images were visually compared with the single-view DTS images and CBCT images for the presence of image artifacts and accuracy of CT numbers and anatomy and quantitatively compared with root-mean-square-deviation (RMSD) values
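    The maximum likelihood expectation maximization (MLEM) iteration named above can be sketched with a tiny dense system matrix standing in for the real cone-beam geometry; all values below are illustrative assumptions, not the study's configuration:

    ```python
    import numpy as np

    # Hedged sketch of the multiplicative MLEM update used to reconstruct the
    # dual-view projections. A small random system matrix stands in for the
    # real cone-beam geometry; the data are noiseless and synthetic.

    rng = np.random.default_rng(2)
    A = rng.random((40, 16)) + 0.1         # system matrix: 40 rays x 16 voxels
    x_true = rng.random(16) + 0.1          # ground-truth "image"
    y = A @ x_true                         # noiseless projection data

    x = np.ones(16)                        # nonnegative initial estimate
    r0 = np.linalg.norm(A @ x - y)         # initial projection residual
    sens = A.sum(axis=0)                   # sensitivity image A^T 1
    for _ in range(200):
        x *= (A.T @ (y / (A @ x))) / sens  # multiplicative MLEM update

    print(round(np.linalg.norm(A @ x - y) / r0, 4))  # residual ratio shrinks
    ```

    The multiplicative form keeps the estimate nonnegative at every iteration, which is one reason MLEM is favored for limited-angle data such as the dual-view DTS projections.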

  10. Spatial image modulation to improve performance of computed tomography imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Wilson, Daniel W. (Inventor); Johnson, William R. (Inventor)

    2010-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having patterns for imposing spatial structure are provided. The pattern may be imposed either directly on the object scene being imaged or at the field stop aperture. The use of the pattern improves the accuracy of the captured spatial and spectral information.

  11. Quantifying Image Quality Improvement Using Elevated Acoustic Output in B-Mode Harmonic Imaging.

    PubMed

    Deng, Yufeng; Palmeri, Mark L; Rouze, Ned C; Trahey, Gregg E; Haystead, Clare M; Nightingale, Kathryn R

    2017-10-01

    Tissue harmonic imaging has been widely used in abdominal imaging because of its significant reduction in acoustic noise compared with fundamental imaging. However, tissue harmonic imaging can be limited by both signal-to-noise ratio and penetration depth during clinical imaging, resulting in decreased diagnostic utility. A logical approach would be to increase the source pressure, but the in situ pressures used in diagnostic ultrasound are subject to a de facto upper limit based on the U.S. Food and Drug Administration guideline for the mechanical index (<1.9). A recent American Institute of Ultrasound in Medicine report concluded that an effective mechanical index ≤4.0 could be warranted without concern for increased risk of cavitation in non-fetal tissues without gas bodies, but would only be justified if there were a concurrent improvement in image quality and diagnostic utility. This work evaluates image quality differences between normal and elevated acoustic output hepatic harmonic imaging using a transmit frequency of 1.8 MHz. The results indicate that harmonic imaging using elevated acoustic output leads to modest improvements (3%-7%) in contrast-to-noise ratio of hypo-echoic hepatic vessels and increases in imaging penetration depth on the order of 4 mm per mechanical index increase of 0.1 for a given focal depth. Difficult-to-image patients who suffer from poor ultrasound image quality exhibited larger improvements than easy-to-image study participants. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
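    The contrast-to-noise ratio (CNR) metric used to compare normal and elevated-output images can be computed from a lesion ROI and a background ROI. The definition below (mean difference over background standard deviation) is one common variant, and the toy speckle image is an assumption, not patient data:

    ```python
    import numpy as np

    # Sketch of a contrast-to-noise ratio (CNR) measurement between a
    # hypoechoic "vessel" ROI and the surrounding background; one common
    # definition among several. Toy data, not a clinical image.

    def cnr(lesion, background):
        """CNR = |mean difference| / background standard deviation."""
        return abs(lesion.mean() - background.mean()) / background.std()

    rng = np.random.default_rng(3)
    background = 50 + 5 * rng.standard_normal((32, 32))  # speckle-like noise
    lesion = 30 + 5 * rng.standard_normal((16, 16))      # hypoechoic vessel
    c = cnr(lesion, background)
    print(round(c, 2))
    ```

    A 3%-7% CNR improvement, as reported above, corresponds to a small shift in this ratio; the per-0.1-MI penetration gain is measured separately as the depth at which the harmonic signal falls below the noise floor.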

  12. Improvements to Color HRSC+OMEGA Image Mosaics of Mars

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Audouard, J.; Dumke, A.; Dunker, T.; Gross, C.; Kneissl, T.; Michael, G.; Ody, A.; Poulet, F.; Schreiner, B.; van Gasselt, S.; Walter, S. H. G.; Wendt, L.; Zuschneid, W.

    2015-10-01

    The High Resolution Stereo Camera (HRSC) on the Mars Express (MEx) orbiter has acquired 3640 images (with 'preliminary level 4' processing as described in [1]) of the Martian surface since arriving in orbit in 2003, covering over 90% of the planet [2]. At resolutions that can reach 10 meters/pixel, these MEx/HRSC images [3-4] are constructed in a pushbroom manner from 9 different CCD line sensors, including a panchromatic nadir-looking (Pan) channel, 4 color channels (R, G, B, IR), and 4 other panchromatic channels for stereo imaging or photometric imaging. In [5], we discussed our first approach towards mosaicking hundreds of the MEx/HRSC RGB or Pan images together. The images were acquired under different atmospheric conditions over the entire mission and under different observation/illumination geometries. Therefore, the main challenge that we have addressed is the color (or gray-scale) matching of these images, which have varying colors (or gray scales) due to the different observing conditions. Using this first approach, our best results for a semiglobal mosaic consist of adding a high-pass-filtered version of the HRSC mosaic to a low-pass-filtered version of the MEx/OMEGA [6] global mosaic. Herein, we will present our latest results using a new, improved, second approach for mosaicking MEx/HRSC images [7], but focusing on the RGB Color processing when using this new second approach. Currently, when the new second approach is applied to Pan images, we match local spatial averages of the Pan images to the local spatial averages of a mosaic made from the images acquired by the Mars Global Surveyor TES bolometer. Since these MGS/TES images have already been atmospherically-corrected, this matching allows us to bootstrap the process of mosaicking the HRSC images without actually atmospherically correcting the HRSC images. In this work, we will adapt this technique of MEx/HRSC Pan images being matched with the MGS/TES mosaic, so that instead, MEx/HRSC RGB images
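    The high-pass/low-pass blending described above (HRSC detail added to a calibrated base map) amounts to blend = lowpass(base) + (detail − lowpass(detail)). The box filter, kernel size, and toy arrays below are assumptions for illustration:

    ```python
    import numpy as np

    # Sketch of the high-/low-pass blending: fine spatial detail from an
    # HRSC-like mosaic combined with the large-scale brightness of an
    # atmosphere-corrected base map (the OMEGA/TES role). Box filter and
    # toy arrays are assumptions.

    def lowpass(img, k=5):
        """Simple k x k box filter using edge padding."""
        pad = k // 2
        p = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    rng = np.random.default_rng(4)
    hrsc = rng.random((32, 32))        # detailed but uncalibrated mosaic
    base = 0.5 * np.ones((32, 32))     # smooth, calibrated base map
    blend = lowpass(base) + (hrsc - lowpass(hrsc))
    print(blend.shape)
    ```

    Because only the low spatial frequencies come from the calibrated base map, the varying brightness of individual HRSC orbits is suppressed without atmospherically correcting each image, which is the bootstrapping idea the abstract describes.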

  13. A watershed approach for improving medical image segmentation.

    PubMed

    Zanaty, E A; Afifi, Ashraf

    2013-01-01

    In this paper, a novel watershed approach based on seed region growing and image entropy is presented which can improve medical image segmentation. The proposed algorithm incorporates the prior information of seed region growing and image entropy in its calculation. The algorithm starts by partitioning the image into several intensity levels using a watershed multi-degree immersion process. The intensity levels are the input to a computationally efficient seed region segmentation process which produces the initial partitioning of the image regions. These regions are fed to an entropy procedure that carries out a suitable merging to produce the final segmentation. The latter process uses a region-based similarity representation of the image regions to decide whether regions can be merged. Each region is isolated from its level and the residual pixels are passed up to the next level, and so on; we refer to this as the multi-level process, and to the watershed as the multi-level watershed. The proposed algorithm is applied to a challenging application: grey matter-white matter segmentation in magnetic resonance images (MRIs). The established methods and the proposed approach are tested on this application in a variety of configurations: simulated immersion, multi-degree, multi-level seed region growing, and multi-level seed region growing with entropy. It is shown that the proposed method achieves more accurate results and reduces oversegmentation in medical images.

  14. Image analysis for denoising full-field frequency-domain fluorescence lifetime images.

    PubMed

    Spring, B Q; Clegg, R M

    2009-08-01

    Video-rate fluorescence lifetime-resolved imaging microscopy (FLIM) is a quantitative imaging technique for measuring dynamic processes in biological specimens. FLIM offers valuable information in addition to simple fluorescence intensity imaging; for instance, the fluorescence lifetime is sensitive to the microenvironment of the fluorophore allowing reliable differentiation between concentration differences and dynamic quenching. Homodyne FLIM is a full-field frequency-domain technique for imaging fluorescence lifetimes at every pixel of a fluorescence image simultaneously. If a single modulation frequency is used, video-rate image acquisition is possible. Homodyne FLIM uses a gain-modulated image intensified charge-coupled device (ICCD) detector, which unfortunately is a major contribution to the noise of the measurement. Here we introduce image analysis for denoising homodyne FLIM data. The denoising routine is fast, improves the extraction of the fluorescence lifetime value(s) and increases the sensitivity and fluorescence lifetime resolving power of the FLIM instrument. The spatial resolution (especially the high spatial frequencies not related to noise) of the FLIM image is preserved, because the denoising routine does not blur or smooth the image. By eliminating the random noise known to be specific to photon noise and from the intensifier amplification, the fidelity of the spatial resolution is improved. The polar plot projection, a rapid FLIM analysis method, is used to demonstrate the effectiveness of the denoising routine with exemplary data from both physical and complex biological samples. We also suggest broader impacts of the image analysis for other fluorescence microscopy techniques (e.g. super-resolution imaging).
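    The polar plot projection mentioned above maps each pixel's modulation m and phase φ to coordinates (g, s) = (m cos φ, m sin φ); single-exponential lifetimes then fall on a universal semicircle. The 40 MHz modulation frequency and 2.5 ns lifetime below are illustrative assumptions:

    ```python
    import numpy as np

    # Hedged sketch of the polar (phasor) plot projection used for homodyne
    # FLIM analysis. Frequency and lifetime values are illustrative.

    def polar_coords(m, phi):
        return m * np.cos(phi), m * np.sin(phi)

    f = 40e6                                      # modulation frequency (assumed)
    omega = 2 * np.pi * f
    tau = 2.5e-9                                  # a single-exponential lifetime
    phi = np.arctan(omega * tau)                  # ideal phase for that lifetime
    m = 1.0 / np.sqrt(1.0 + (omega * tau) ** 2)   # ideal modulation
    g, s = polar_coords(m, phi)

    # Single-exponential decays satisfy (g - 1/2)^2 + s^2 = (1/2)^2.
    print(round((g - 0.5) ** 2 + s ** 2, 6))      # -> 0.25
    ```

    Pixels that fall inside the semicircle indicate multi-exponential decays, which is why the projection is a fast visual diagnostic for evaluating denoising: noise scatters pixels around their true phasor position without biasing it.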

  15. Temporal resolution improvement using PICCS in MDCT cardiac imaging

    PubMed Central

    Chen, Guang-Hong; Tang, Jie; Hsieh, Jiang

    2009-01-01

    The current paradigm for temporal resolution improvement is to add more source-detector units and/or increase the gantry rotation speed. The purpose of this article is to present an innovative alternative method to potentially improve temporal resolution by approximately a factor of 2 for all MDCT scanners without requiring hardware modification. The central enabling technology is a recently developed image reconstruction method: prior image constrained compressed sensing (PICCS). Using this method, cardiac CT images can be accurately reconstructed using the projection data acquired in an angular range of about 120°, which is roughly 50% of the standard short-scan angular range (~240° for an MDCT scanner). As a result, the temporal resolution of MDCT cardiac imaging can be universally improved by approximately a factor of 2. In order to validate the proposed method, two in vivo animal experiments were conducted using a state-of-the-art 64-slice CT scanner (GE Healthcare, Waukesha, WI) at different gantry rotation times and different heart rates. One animal was scanned at a heart rate of 83 beats per minute (bpm) using a 400 ms gantry rotation time, and the second animal was scanned at 94 bpm using a 350 ms gantry rotation time. Cardiac coronary CT imaging was successfully performed at high heart rates using a single-source MDCT scanner and projection data from a single heart beat with gantry rotation times of 400 and 350 ms. Using the proposed PICCS method, the temporal resolution of cardiac CT imaging can be effectively improved by approximately a factor of 2 without modifying any scanner hardware. This potentially provides a new method for single-source MDCT scanners to achieve reliable coronary CT imaging for patients at higher heart rates than the current limit of 70 bpm without using the well-known multisegment FBP reconstruction algorithm. This method also enables dual-source MDCT scanners to achieve higher temporal resolution
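    The abstract does not reproduce the PICCS objective itself; as commonly written following Chen, Tang, and Leng's original formulation, the reconstruction minimizes a convex combination of the sparsity of the image and of its difference from a prior image, subject to data consistency (the notation here is generic, not taken from this article):

    ```latex
    \min_{x}\;\; \alpha \,\lVert \Psi_{1}\,(x - x_{\mathrm{p}}) \rVert_{1}
            \;+\; (1-\alpha)\,\lVert \Psi_{2}\, x \rVert_{1}
    \quad \text{subject to} \quad A x = y ,
    ```

    where $x_{\mathrm{p}}$ is the prior image (here, reconstructed from the full set of ungated projections), $A$ is the system matrix for the short ~120° arc, $y$ the measured projections, $\Psi_{1}, \Psi_{2}$ are sparsifying transforms, and $\alpha$ balances fidelity to the prior image against the sparsity of the image itself.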

  16. Improving Automated Annotation of Benthic Survey Images Using Wide-band Fluorescence

    PubMed Central

    Beijbom, Oscar; Treibitz, Tali; Kline, David I.; Eyal, Gal; Khen, Adi; Neal, Benjamin; Loya, Yossi; Mitchell, B. Greg; Kriegman, David

    2016-01-01

    Large-scale imaging techniques are used increasingly for ecological surveys. However, manual analysis can be prohibitively expensive, creating a bottleneck between collected images and desired data products. This bottleneck is particularly severe for benthic surveys, where millions of images are obtained each year. Recent automated annotation methods may provide a solution, but reflectance images do not always contain sufficient information for adequate classification accuracy. In this work, the FluorIS, a low-cost modified consumer camera, was used to capture wide-band, wide-field-of-view fluorescence images during a field deployment in Eilat, Israel. The fluorescence images were registered with standard reflectance images, and an automated annotation method based on convolutional neural networks was developed. Our results demonstrate a 22% reduction in classification error rate when using both image types compared to using only reflectance images. The improvements were particularly large for the coral reef genera Platygyra, Acropora and Millepora, where classification recall improved by 38%, 33%, and 41%, respectively. We conclude that convolutional neural networks can be used to combine reflectance and fluorescence imagery in order to significantly improve automated annotation accuracy and reduce the manual annotation bottleneck. PMID:27021133

  17. Merging Panchromatic and Multispectral Images for Enhanced Image Analysis

    DTIC Science & Technology

    1990-08-01

    Merging Panchromatic and Multispectral Images for Enhanced Image Analysis, Curtis K. Munechika, Wallace Memorial Library, Rochester Institute of Technology. [Garbled excerpt of a confusion-matrix table for tree/grass classes (Table D1b); recoverable information: overall classification accuracy 87.5%.]

  18. Image Analysis in Plant Sciences: Publish Then Perish.

    PubMed

    Lobet, Guillaume

    2017-07-01

    Image analysis has become a powerful technique for most plant scientists. In recent years dozens of image analysis tools have been published in plant science journals. These tools cover the full spectrum of plant scales, from single cells to organs and canopies. However, the field of plant image analysis remains in its infancy. It still has to overcome important challenges, such as the lack of robust validation practices or the absence of long-term support. In this Opinion article, I: (i) present the current state of the field, based on data from the plant-image-analysis.org database; (ii) identify the challenges faced by its community; and (iii) propose workable ways of improvement. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Image analysis by integration of disparate information

    NASA Technical Reports Server (NTRS)

    Lemoigne, Jacqueline

    1993-01-01

    Image analysis often starts with some preliminary segmentation which provides a representation of the scene needed for further interpretation. Segmentation can be performed in several ways, categorized as pixel-based, edge-based, and region-based. Each of these approaches is affected differently by various factors, and the final result may be improved by integrating several or all of these methods, thus taking advantage of their complementary nature. In this paper, we propose an approach that integrates pixel-based and edge-based results by utilizing an iterative relaxation technique. This approach has been implemented on a massively parallel computer and tested on remotely sensed imagery from the Landsat Thematic Mapper (TM) sensor.

  20. Advanced imaging improves prediction of hemorrhage after stroke thrombolysis.

    PubMed

    Campbell, Bruce C V; Christensen, Søren; Parsons, Mark W; Churilov, Leonid; Desmond, Patricia M; Barber, P Alan; Butcher, Kenneth S; Levi, Christopher R; De Silva, Deidre A; Lansberg, Maarten G; Mlynash, Michael; Olivot, Jean-Marc; Straka, Matus; Bammer, Roland; Albers, Gregory W; Donnan, Geoffrey A; Davis, Stephen M

    2013-04-01

    Very low cerebral blood volume (VLCBV), diffusion, and hypoperfusion lesion volumes have been proposed as predictors of hemorrhagic transformation following stroke thrombolysis. We aimed to compare these parameters, validate VLCBV in an independent cohort using DEFUSE study data, and investigate the interaction of VLCBV with regional reperfusion. The EPITHET and DEFUSE studies obtained diffusion and perfusion magnetic resonance imaging (MRI) in patients 3 to 6 hours from onset of ischemic stroke. EPITHET randomized patients to tissue plasminogen activator (tPA) or placebo, and all DEFUSE patients received tPA. VLCBV was defined as cerebral blood volume < 2.5th percentile of brain contralateral to the infarct. Parenchymal hematoma (PH) was defined using European Cooperative Acute Stroke Study criteria. Reperfusion was assessed using subacute perfusion MRI coregistered to baseline imaging. In DEFUSE, 69 patients were analyzed, including 9 who developed PH. The >2 ml VLCBV threshold defined in EPITHET predicted PH with 100% sensitivity, 72% specificity, 35% positive predictive value, and 100% negative predictive value. Pooling EPITHET and DEFUSE (163 patients, including 23 with PH), regression models using VLCBV (p<0.001) and tPA (p=0.02) predicted PH independent of clinical factors better than models using diffusion or time to maximum > 8 seconds lesion volumes. Excluding VLCBV in regions without reperfusion improved specificity from 61 to 78% in the pooled analysis. VLCBV predicts PH after stroke thrombolysis and appears to be a more powerful predictor than baseline diffusion or hypoperfusion lesion volumes. Reperfusion of regions of VLCBV is strongly associated with post-thrombolysis PH. VLCBV may be clinically useful to identify patients at significant risk of hemorrhage following reperfusion. Copyright © 2012 American Neurological Association.

  1. Description, Recognition and Analysis of Biological Images

    SciTech Connect

    Yu Donggang; Jin, Jesse S.; Luo Suhuai; Pham, Tuan D.; Lai Wei

    2010-01-25

    Description, recognition and analysis of biological images play an important role in describing and understanding the related biological information. The color images are separated by color reduction. A new and efficient linearization algorithm is introduced based on criteria of the difference chain code. A series of critical points is obtained from the linearized lines. The curvature angle, linearity, maximum linearity, convexity, concavity and bend angle of the linearized lines are calculated from the starting line to the end line along all smoothed contours. This method can be used for shape description and recognition. The analysis, decision and classification of the biological images are based on the description of morphological structures, color information and prior knowledge, which are associated with each other. The efficiency of the algorithms is demonstrated in two applications. One application is the description, recognition and analysis of color flower images. The other concerns the dynamic description, recognition and analysis of cell-cycle images.
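    The Freeman chain code and its first difference, the basis of the linearization criteria above, can be sketched in pure Python; the unit-square contour is an illustrative stand-in for a smoothed biological contour:

    ```python
    # Sketch of Freeman chain coding and the difference chain code used as a
    # linearization criterion. The square contour is a toy example.

    DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
            (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

    def chain_code(points):
        """8-connected Freeman chain code of a closed contour of (x, y) points."""
        code = []
        for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
            code.append(DIRS[(x1 - x0, y1 - y0)])
        return code

    def difference_code(code):
        """First difference modulo 8: zero on straight runs, nonzero at bends."""
        return [(b - a) % 8 for a, b in zip(code, code[1:] + code[:1])]

    square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit square, counter-clockwise
    cc = chain_code(square)
    print(cc)                   # -> [0, 2, 4, 6]
    print(difference_code(cc))  # -> [2, 2, 2, 2]  nonzero = corners/bends
    ```

    Runs of zeros in the difference code mark straight segments, so thresholding it yields the critical points from which linearized lines, and then the curvature, convexity, and bend-angle features, are computed.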

  2. Program for Analysis and Enhancement of Images

    NASA Technical Reports Server (NTRS)

    Lu, Yun-Chi

    1987-01-01

    Land Analysis System (LAS) is collection of image-analysis computer programs designed to manipulate and analyze multispectral image data. Provides user with functions for ingesting various sensor data, applying radiometric and geometric corrections, image registration, training-site selection, supervised and unsupervised classification, Fourier-domain filtering, and image enhancement. Sufficiently modular, and includes extensive library of subroutines, to permit inclusion of new algorithmic programs. Commercial package International Mathematical & Statistical Library (IMSL) required for full implementation of LAS. Written in VAX FORTRAN 77, C, and Macro assembler for DEC VAX operating under VMS 4.0.

  3. Applications of independent component analysis in SAR images

    NASA Astrophysics Data System (ADS)

    Huang, Shiqi; Cai, Xinhua; Hui, Weihua; Xu, Ping

    2009-07-01

    The detection of faint, small and hidden targets in synthetic aperture radar (SAR) images is still an open issue for automatic target recognition (ATR) systems. How to effectively separate these targets from the complex background is the aim of this paper. Independent component analysis (ICA) can enhance SAR image targets and improve the signal-to-clutter ratio (SCR), which benefits the detection and recognition of faint targets. Therefore, this paper proposes a new SAR image target detection algorithm based on ICA. In the experiments, the fast ICA (FICA) algorithm is utilized. Finally, real SAR image data are used to test the method. The experimental results verify that the algorithm is feasible, that it can improve the SCR of SAR images, and that it increases the detection rate for faint small targets.
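
    For readers unfamiliar with the technique, here is a minimal numpy-only sketch of one-unit FastICA (tanh nonlinearity, deflation); the square-wave "target" and uniform "clutter" signals are invented stand-ins, not the paper's SAR data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
t = np.linspace(0, 8, n)
s1 = np.sign(np.sin(3 * t))           # square-wave "target" signal
s2 = rng.uniform(-1, 1, n)            # uniform "clutter"
A = np.array([[1.0, 0.6], [0.5, 1.0]])
X = A @ np.vstack([s1, s2])           # observed mixtures

# Whitening: decorrelate the mixtures and rescale to unit variance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = (E / np.sqrt(d)) @ E.T @ X

# One-unit fixed-point iteration with tanh nonlinearity, deflation for 2 units.
W = np.zeros((2, 2))
for k in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = w @ Z
        g, gp = np.tanh(wx), 1 - np.tanh(wx) ** 2
        w_new = (Z * g).mean(axis=1) - gp.mean() * w
        w_new -= W[:k].T @ (W[:k] @ w_new)    # stay orthogonal to earlier units
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-10
        w = w_new
        if done:
            break
    W[k] = w
S_est = W @ Z                          # estimated independent components
```

    One of the rows of `S_est` should closely match the square-wave source (up to sign and scale), which is the separation property the paper exploits to raise the SCR.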

  4. An improved level set method for vertebra CT image segmentation

    PubMed Central

    2013-01-01

    Background Clinical diagnosis and therapy for lumbar disc herniation require accurate vertebra segmentation. The complex anatomical structure and the degenerative deformations of the vertebrae make segmentation challenging. Methods An improved level set method, namely the edge- and region-based level set method (ERBLS), is proposed for vertebra CT image segmentation. By considering the gradient information and local region characteristics of images, the proposed model can efficiently segment images with intensity inhomogeneity and blurry or discontinuous boundaries. To reduce the dependency on manual initialization found in many active contour models and to enable automatic segmentation, a simple initialization method for the level set function is built, which utilizes the Otsu threshold. In addition, the need for the costly re-initialization procedure is completely eliminated. Results Experimental results on both synthetic and real images demonstrate that the proposed ERBLS model is very robust and efficient. Compared with the well-known local binary fitting (LBF) model, our method is much more computationally efficient and much less sensitive to the initial contour. The proposed method has also been applied to 56 patient data sets and produced very promising results. Conclusions An improved level set method suitable for vertebra CT image segmentation is proposed. It has the flexibility to segment vertebra CT images with blurry or discontinuous edges and internal inhomogeneity, and it requires no re-initialization. PMID:23714300
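
    The Otsu-threshold initialization mentioned above can be sketched in a few lines of numpy; this is a standard histogram-based formulation applied to a synthetic bimodal image, not the authors' code.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    p = hist.astype(float) / hist.sum()
    mids = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                      # class-0 probability up to each bin
    mu = np.cumsum(p * mids)               # cumulative class mean (unnormalized)
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0     # ignore empty classes at the extremes
    return mids[np.argmax(sigma_b)]

# Synthetic bimodal test image: dark background, bright vertebra-like blob.
rng = np.random.default_rng(1)
img = rng.normal(50, 10, (64, 64))
img[16:48, 16:48] = rng.normal(200, 10, (32, 32))
t = otsu_threshold(img)                    # lands between the two modes
```

    The resulting binary mask `img > t` would serve as the initial level set region, removing the need for manual contour placement.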

  5. SAR image segmentation using MSER and improved spectral clustering

    NASA Astrophysics Data System (ADS)

    Gui, Yang; Zhang, Xiaohu; Shang, Yang

    2012-12-01

    A novel approach is presented for synthetic aperture radar (SAR) image segmentation. By incorporating the advantages of the maximally stable extremal regions (MSER) algorithm and the spectral clustering (SC) method, the proposed approach provides effective and robust segmentation. First, the input image is transformed from a pixel-based to a region-based model by using the MSER algorithm; after the MSER procedure the image is composed of disjoint regions. Then the regions are treated as nodes in the image plane, and a graph structure is applied to represent them. Finally, the improved SC is used to perform globally optimal clustering, from which the image segmentation result is generated. To avoid incorrect partitioning when each region is considered as one graph node, we assign different numbers of nodes to represent the regions according to the area ratios among them. In addition, K-harmonic means is applied instead of K-means in the improved SC procedure in order to raise its stability and performance. Experimental results show that the proposed approach is effective for SAR image segmentation and has the advantage of fast computation.
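
    A hedged sketch of the spectral clustering core follows (normalized Laplacian eigenvectors followed by plain k-means rather than the paper's K-harmonic means); 2-D point blobs stand in for the MSER region features.

```python
import numpy as np

def spectral_clustering(points, k, sigma=1.0):
    """Normalized-cut style clustering: Gaussian affinity graph, eigenvectors
    of the symmetric normalized Laplacian, then k-means on the embedding."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0)
    s = 1 / np.sqrt(W.sum(1))
    L_sym = np.eye(len(points)) - s[:, None] * W * s[None, :]
    _, vecs = np.linalg.eigh(L_sym)
    U = vecs[:, :k]                                   # k smallest eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)  # row-normalize embedding
    rng = np.random.default_rng(0)
    centers = U[rng.choice(len(U), k, replace=False)]
    for _ in range(50):                               # plain Lloyd's k-means
        labels = np.argmin(((U[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([U[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

# Two well-separated blobs stand in for two groups of SAR regions.
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
labels = spectral_clustering(pts, 2)
```

    Swapping the final k-means for K-harmonic means, as the paper does, changes only the last loop.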

  6. Resolution enhancement for ISAR imaging via improved statistical compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Wang, Hongxian; Qiao, Zhi-jun

    2016-12-01

    The developing theory of compressed sensing (CS) reveals that optimal reconstruction of an unknown signal can be achieved from very limited observations by exploiting signal sparsity. For inverse synthetic aperture radar (ISAR), the image of a target of interest is generally composed of a limited number of strong scattering centers and thus exhibits strong spatial sparsity. Such prior sparsity intrinsically paves the way to improved ISAR imaging performance. In this paper, we develop a super-resolution algorithm for forming ISAR images from limited observations. When the amplitude of the target scattered field follows an identical Laplace probability distribution, the approach converts super-resolution imaging into sparsity-driven optimization in the Bayesian statistics sense. We show that improved performance is achievable by taking advantage of the meaningful spatial structure of the scattered field. Further, we use a nonidentical Laplace distribution, with small scale on strong signal components and large scale on noise, to discriminate strong scattering centers from noise. A maximum likelihood estimator combined with a bandwidth extrapolation technique is also developed to estimate the scale parameters. Processing of real measured data indicates that the proposal can reconstruct a high-resolution image from only limited pulses even at low SNR, which shows advantages over current super-resolution imaging methods.
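
    Under a Laplace prior the MAP estimate reduces to l1-regularized least squares, which the classic ISTA iteration solves. The sketch below is an illustrative substitute for the authors' Bayesian algorithm, with invented scattering-center data: it recovers a sparse signal from limited random projections.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for min ||Ax - y||^2/2 + lam*||x||_1,
    i.e. the MAP estimate under a Laplace (sparsity) prior."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return x

# Sparse "scattering centers" observed through limited random projections.
rng = np.random.default_rng(3)
n, m = 128, 48                             # signal length, number of pulses
x_true = np.zeros(n)
x_true[[10, 40, 90]] = [2.0, -1.5, 1.0]
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = ista(A, y, lam=0.02)               # recovers the three spikes
```

    The paper's nonidentical Laplace scales would correspond to a per-component `lam`, which is a one-line change in the threshold step.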

  7. Improved restoration algorithm for weakly blurred and strongly noisy image

    NASA Astrophysics Data System (ADS)

    Liu, Qianshun; Xia, Guo; Zhou, Haiyang; Bai, Jian; Yu, Feihong

    2015-10-01

    In real applications, such as consumer digital imaging, it is very common to record weakly blurred and strongly noisy images. Recently, a state-of-the-art algorithm named geometric locally adaptive sharpening (GLAS) was proposed. By capturing local image structure, it can effectively combine denoising and sharpening. However, two problems remain in practice. On the one hand, two hard thresholds have to be adjusted for each image so as not to produce over-sharpening artifacts. On the other hand, the smoothing parameter must be set precisely by hand; otherwise it will seriously magnify the noise. These parameters have to be set in advance and entirely empirically, which is difficult to achieve in a practical application. To improve restoration in this situation, an improved GLAS (IGLAS) algorithm that introduces the local phase coherence sharpening index (LPCSI) metric is proposed in this paper. With the help of the LPCSI metric, the two hard thresholds can be fixed at constant values for all images: unlike in the original method, they no longer need to change with different images. Based on the proposed IGLAS, an automatic version is also developed to remove the need for manual intervention. Simulated and real experimental results show that the proposed algorithm not only obtains better performance than the original method but is also very easy to apply.

  8. Optical Analysis of Microscope Images

    NASA Astrophysics Data System (ADS)

    Biles, Jonathan R.

    Microscope images were analyzed with coherent and incoherent light using analog optical techniques. These techniques were found to be useful for analyzing large numbers of nonsymbolic, statistical microscope images. In the first part, phase-coherent transparencies having 20-100 human multiple myeloma nuclei were simultaneously photographed at 100 power magnification using high resolution holographic film developed to high contrast. An optical transform was obtained by focusing the laser onto each nuclear image and allowing the diffracted light to propagate onto a one-dimensional photosensor array. This method reduced the data to the position of the first two intensity minima and the intensity of successive maxima. These values were utilized to estimate the four most important cancer detection clues of nuclear size, shape, darkness, and chromatin texture. In the second part, the geometric and holographic methods of phase-incoherent optical processing were investigated for pattern recognition of real-time, diffuse microscope images. The theory and implementation of these processors was discussed in view of their mutual problems of dimness, image bias, and detector resolution. The dimness problem was solved by either using a holographic correlator or a speckle-free laser microscope. The latter was built using a spinning tilted mirror which caused the speckle to change so quickly that it averaged out during the exposure. To solve the bias problem, low image bias templates were generated by four techniques: microphotography of samples, creation of typical shapes by a computer graphics editor, transmission holography of photoplates of samples, and spatially coherent color image bias removal. The first of these templates was used to perform correlations with bacteria images. The aperture bias was successfully removed from the correlation with a video frame subtractor. To overcome the limited detector resolution it is necessary to discover some analog nonlinear intensity

  9. Implementation of a Quality Improvement Bundle Improves Echocardiographic Imaging after Congenital Heart Surgery in Children.

    PubMed

    Parthiban, Anitha; Levine, Jami C; Nathan, Meena; Marshall, Jennifer A; Shirali, Girish S; Simon, Stephen D; Colan, Steven D; Newburger, Jane W; Raghuveer, Geetha

    2016-12-01

    Postoperative echocardiography after congenital heart disease surgery is of prognostic importance, but variable image quality is problematic. We implemented a quality improvement bundle comprising focused imaging protocols, procedural sedation, and sonographer education to improve the rate of optimal imaging (OI). Predischarge echocardiograms were evaluated in 116 children (median age, 0.51 years; range, 0.01-5.6 years) from two centers after tetralogy of Fallot repair, arterial switch operation, and bidirectional Glenn and Fontan procedures. OI rates were compared between the centers before and after the implementation of the quality improvement bundle at center 1, with center 2 serving as the comparator. Echocardiographic images were independently scored by a single reader from each center, blinded to center and time period. For each echocardiographic variable, a quality score was assigned as 0 (not imaged or suboptimally imaged) or 1 (optimally imaged); structures were classified as intra- or extracardiac. The rate of OI was calculated for each variable as the percentage of patients assigned a score of 1. Intracardiac structures had higher OI than extracardiac structures (81% vs 57%; adjusted odds ratio [OR], 3.47; P < .01). Center 1 improved overall OI from 48% to 73% (OR, 4.44; P < .01), intracardiac OI from 69% to 85% (OR, 3.53; P = .01), and extracardiac OI from 35% to 67% (OR, 5.16; P < .01). There was no temporal difference for center 2. After congenital heart disease surgery in children, intracardiac structures are imaged more optimally than extracardiac structures. Focused imaging protocols, patient sedation, and sonographer education can improve OI rates. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.

  10. Cryptanalysis of a spatiotemporal chaotic image/video cryptosystem and its improved version

    NASA Astrophysics Data System (ADS)

    Ge, Xin; Liu, Fenlin; Lu, Bin; Wang, Wei

    2011-01-01

    Recently, a spatiotemporal chaotic image/video cryptosystem was proposed by Lian. Shortly after its publication, Rhouma et al. proposed two attacks on the cryptosystem and introduced an improved cryptosystem that is more secure against their attacks (R. Rhouma, S. Belghith, Phys. Lett. A 372 (2008) 5790) [29]. This Letter re-examines the security of Lian's cryptosystem and its improved version, showing that Rhouma et al.'s attacks cannot recover all details of a ciphered image from Lian's cryptosystem, because part of the sign-bits of the AC coefficients is recovered incorrectly when the chosen image is inappropriate. As a result, modifications of Rhouma et al.'s attacks are proposed in order to recover the ciphered image of Lian's cryptosystem completely; then, based on these modifications, two new attacks are proposed to break the improved version of Lian's cryptosystem. Finally, experimental results illustrate the validity of our analysis.

  11. Improved QD-BRET conjugates for detection and imaging

    SciTech Connect

    Xing Yun; So, Min-kyung; Koh, Ai Leen; Sinclair, Robert; Rao Jianghong

    2008-08-01

    Self-illuminating quantum dots, also known as QD-BRET conjugates, are a new class of quantum dot bioconjugates that do not need external light for excitation. Instead, light emission relies on bioluminescence resonance energy transfer from the attached Renilla luciferase enzyme, which emits light upon the oxidation of its substrate. QD-BRET combines the advantages of QDs (such as superior brightness and photostability, tunable emission, and multiplexing) with the high sensitivity of bioluminescence imaging, thus holding promise for improved deep-tissue in vivo imaging. Although studies have demonstrated the superior sensitivity and deep-tissue imaging potential, the stability of QD-BRET conjugates in biological environments needs to be improved for long-term imaging studies such as in vivo cell tracking. In this study, we sought to improve the stability of QD-BRET probes through polymeric encapsulation with a polyacrylamide gel. Results show that encapsulation caused some activity loss but significantly improved both in vitro serum stability and in vivo stability when the probes were injected subcutaneously into animals. Stable QD-BRET probes should further facilitate applications in both in vitro testing and in vivo cell tracking studies.

  12. X-ray holographic microscopy: Improved images of zymogen granules

    SciTech Connect

    Jacobsen, C.; Howells, M.; Kirz, J.; McQuaid, K.; Rothman, S.

    1988-10-01

    Soft x-ray holography has long been considered as a technique for x-ray microscopy. It has been only recently, however, that sub-micron resolution has been obtained in x-ray holography. This paper will concentrate on recent progress we have made in obtaining reconstructed images of improved quality. 15 refs., 6 figs.

  13. Improved QD-BRET conjugates for detection and imaging

    PubMed Central

    Xing, Yun; So, Min-kyung; Koh, Ai-leen; Sinclair, Robert; Rao, Jianghong

    2008-01-01

    Self-illuminating quantum dots, also known as QD-BRET conjugates, are a new class of quantum dot bioconjugates that do not need external light for excitation. Instead, light emission relies on bioluminescence resonance energy transfer from the attached Renilla luciferase enzyme, which emits light upon the oxidation of its substrate. QD-BRET combines the advantages of QDs (such as superior brightness and photostability, tunable emission, and multiplexing) with the high sensitivity of bioluminescence imaging, thus holding promise for improved deep-tissue in vivo imaging. Although studies have demonstrated the superior sensitivity and deep-tissue imaging potential, the stability of QD-BRET conjugates in biological environments needs to be improved for long-term imaging studies such as in vivo cell trafficking. In this study, we sought to improve the stability of QD-BRET probes through polymeric encapsulation with a polyacrylamide gel. Results show that encapsulation caused some activity loss but significantly improved both in vitro serum stability and in vivo stability when the probes were injected subcutaneously into animals. Stable QD-BRET probes should further facilitate applications in both in vitro testing and in vivo cell tracking studies. PMID:18468518

  14. Improving Black Student Achievement by Enhancing Students' Self-Image.

    ERIC Educational Resources Information Center

    Kuykendall, Crystal

    This guide to improving Black student achievement by enhancing self-image is the third part of a four-part series addressing the essential characteristics of effective instruction that have a positive impact on the academic achievement of Black and Hispanic students. A positive academic identity is crucial for underachieving Black students.…

  15. Analyzing training information from random forests for improved image segmentation.

    PubMed

    Mahapatra, Dwarikanath

    2014-04-01

    Labeled training data are used for challenging medical image segmentation problems to learn different characteristics of the relevant domain. In this paper, we examine random forest (RF) classifiers, their learned knowledge during training and ways to exploit it for improved image segmentation. Apart from learning discriminative features, RFs also quantify their importance in classification. Feature importance is used to design a feature selection strategy critical for high segmentation and classification accuracy, and also to design a smoothness cost in a second-order MRF framework for graph cut segmentation. The cost function combines the contribution of different image features like intensity, texture, and curvature information. Experimental results on medical images show that this strategy leads to better segmentation accuracy than conventional graph cut algorithms that use only intensity information in the smoothness cost.
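
    Random forest feature importance is typically an impurity-decrease average over many randomized trees. As a minimal numpy-only stand-in for that idea (not the paper's method), the sketch below ranks features by their best single-split Gini gain, using invented data in which only feature 0 is informative.

```python
import numpy as np

def gini(y):
    """Gini impurity of a binary label vector."""
    p = np.bincount(y, minlength=2) / len(y)
    return 1 - (p ** 2).sum()

def split_gain(x, y):
    """Best Gini impurity decrease over candidate thresholds of one feature."""
    best = 0.0
    for t in np.quantile(x, np.linspace(0.1, 0.9, 9)):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        g = gini(y) - (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        best = max(best, g)
    return best

# Synthetic "segmentation" data: feature 0 is informative, 1 and 2 are noise.
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
importance = np.array([split_gain(X[:, j], y) for j in range(3)])
importance /= importance.sum()            # feature 0 dominates the ranking
```

    A real RF accumulates such gains over every split of every tree; the ranking is then used, as in the paper, to drop uninformative features before segmentation.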

  16. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3.RTM. digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  17. Adaptive feature enhancement for mammographic images with wavelet multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Chen, Lulin; Chen, Chang W.; Parker, Kevin J.

    1997-10-01

    A novel and computationally efficient approach to adaptive mammographic image feature enhancement using wavelet-based multiresolution analysis is presented. Upon wavelet decomposition of a given mammographic image, we integrate the information of the tree-structured zero crossings of wavelet coefficients with the information of the low-pass-filtered subimage to enhance the desired image features. A discrete wavelet transform with pyramidal structure is employed to speed up the computation of wavelet decomposition and reconstruction. The spatio-frequency localization property of the wavelet transform is exploited based on the spatial coherence of the image and the principles of the human psycho-visual mechanism. Preliminary results show that the proposed approach is able to adaptively enhance local edge features, suppress noise, and improve global visualization of mammographic image features. This wavelet-based multiresolution analysis is therefore promising for computerized mass screening of mammograms.
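
    One level of the wavelet machinery can be sketched with the Haar transform: amplifying the detail subbands sharpens edges, while a gain of 1 reconstructs the image exactly. This is a minimal stand-in, not the paper's pyramidal zero-crossing scheme.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2        # rows: average
    d = (img[0::2] - img[1::2]) / 2        # rows: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (perfect reconstruction)."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

rng = np.random.default_rng(5)
img = rng.normal(size=(8, 8))              # toy stand-in for a mammogram patch
LL, LH, HL, HH = haar2d(img)
gain = 2.0                                 # amplify the detail subbands
enhanced = ihaar2d(LL, gain * LH, gain * HL, gain * HH)
recon = ihaar2d(LL, LH, HL, HH)            # gain of 1 reconstructs img exactly
```

    Adaptive enhancement, as in the paper, would make `gain` vary locally (e.g. boosting coefficients near zero crossings that indicate edges) rather than applying one global factor.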

  18. Imaging flow cytometry for phytoplankton analysis.

    PubMed

    Dashkova, Veronika; Malashenkov, Dmitry; Poulton, Nicole; Vorobjev, Ivan; Barteneva, Natasha S

    2017-01-01

    This review highlights the concepts and instrumentation of imaging flow cytometry technology, and in particular its use for phytoplankton analysis. Imaging flow cytometry, a hybrid technology combining the speed and statistical capabilities of flow cytometry with the imaging features of microscopy, is rapidly advancing as a cell imaging platform that overcomes many of the limitations of current techniques and has contributed significantly to the advancement of phytoplankton analysis in recent years. This review presents the various instrumentation relevant to the field and currently used for assessing the composition and abundance of complex phytoplankton communities, determining size structure, estimating biovolume, detecting harmful algal bloom species, evaluating viability and metabolic activity, and other applications. We also present our data on viability and metabolic assessment of Aphanizomenon sp. cyanobacteria using the ImageStream X Mark II imaging cytometer. Herein, we highlight the immense potential of imaging flow cytometry for microalgal research, but also discuss limitations and future developments.

  19. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an oculist to make a rational pathological diagnosis for patients who have optic diseases such as glaucoma, yet it is always difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volume based on JPEG-formatted image files that have been transformed from medical images acquired with the anterior-chamber optical coherence tomographer (AC-OCT) and the corresponding image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, a series of anterior chamber images of typical patients is analyzed, and the calculated anterior chamber volumes are verified to be in accord with clinical observation. This shows that the measurement method is effective and feasible, and it has the potential to improve the accuracy of ACV calculation. Meanwhile, measures should be taken to simplify the manual preprocessing of the images.
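
    Once the chamber is segmented in each slice, the volume computation reduces to voxel counting. The toy sketch below assumes hypothetical voxel spacings `dx`, `dy`, `dz` (real spacing comes from the AC-OCT device) and a simple intensity threshold as the segmentation.

```python
import numpy as np

# Hypothetical voxel spacing in mm (real values come from the scanner metadata).
dx, dy, dz = 0.02, 0.02, 0.1

rng = np.random.default_rng(6)
# Toy stack of 10 slices: bright chamber region on a dark background.
stack = rng.normal(20, 3, (10, 100, 100))
stack[:, 30:70, 30:70] = rng.normal(120, 5, (10, 40, 40))

masks = stack > 70                         # per-slice threshold segmentation
volume_mm3 = masks.sum() * dx * dy * dz    # voxel count times voxel volume
```

    With these spacings the segmented 40x40 region over 10 slices gives 0.64 mm³; a clinical pipeline would replace the threshold with the paper's image-analysis step and read the spacings from the device.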

  20. Improving RLRN Image Splicing Detection with the Use of PCA and Kernel PCA

    PubMed Central

    Jalab, Hamid A.; Md Noor, Rafidah

    2014-01-01

    Digital image forgery is becoming easier to perform because of the rapid development of various manipulation tools. Image splicing is one of the most prevalent techniques. Digital images have lost their trustability, and researchers have exerted considerable effort to regain it, focusing mostly on algorithms. However, most of the proposed algorithms are incapable of handling high dimensionality and redundancy in the extracted features. Moreover, existing algorithms are limited by high computational time. This study focuses on improving one of the image splicing detection algorithms, the run length run number algorithm (RLRN), by applying two dimension reduction methods, namely principal component analysis (PCA) and kernel PCA. A support vector machine is used to distinguish between authentic and spliced images. Results show that kernel PCA, a nonlinear dimension reduction method, has the best effect on the R, G, B, and Y channels and on gray-scale images. PMID:25295304

  1. Improving RLRN image splicing detection with the Use of PCA and kernel PCA.

    PubMed

    Moghaddasi, Zahra; Jalab, Hamid A; Md Noor, Rafidah; Aghabozorgi, Saeed

    2014-01-01

    Digital image forgery is becoming easier to perform because of the rapid development of various manipulation tools. Image splicing is one of the most prevalent techniques. Digital images have lost their trustability, and researchers have exerted considerable effort to regain it, focusing mostly on algorithms. However, most of the proposed algorithms are incapable of handling high dimensionality and redundancy in the extracted features. Moreover, existing algorithms are limited by high computational time. This study focuses on improving one of the image splicing detection algorithms, the run length run number algorithm (RLRN), by applying two dimension reduction methods, namely principal component analysis (PCA) and kernel PCA. A support vector machine is used to distinguish between authentic and spliced images. Results show that kernel PCA, a nonlinear dimension reduction method, has the best effect on the R, G, B, and Y channels and on gray-scale images.
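
    The linear PCA step used above can be sketched via an SVD of the centered feature matrix (kernel PCA would replace the covariance with a kernel matrix); the data here are invented feature vectors with one dominant direction, not RLRN features.

```python
import numpy as np

def pca(X, n_components):
    """Project features onto the top principal components (SVD of centered data)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T        # scores on the leading components

rng = np.random.default_rng(7)
# Invented feature vectors: one dominant direction plus small isotropic noise.
X = rng.normal(size=(200, 1)) @ rng.normal(size=(1, 10)) \
    + 0.1 * rng.normal(size=(200, 10))
Z = pca(X, 2)   # first column carries far more variance than the second
```

    In the paper's pipeline, the reduced scores `Z` (or their kernel-PCA counterpart) would be fed to an SVM classifier in place of the raw high-dimensional features.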

  2. [Study of color blood image segmentation based on two-stage-improved FCM algorithm].

    PubMed

    Wang, Bin; Chen, Huaiqing; Huang, Hua; Rao, Jie

    2006-04-01

    This paper introduces a new method for color blood cell image segmentation based on the FCM algorithm. By transforming the original blood microscopic image to an indexed image and using its colormap, a fuzzy approach that avoids direct clustering of the image pixel values, the quantity of data to be processed and analyzed is enormously compressed. In accordance with the inherent features of color blood cell images, the segmentation process is divided into two stages: (1) determining the number of clusters and the initial cluster centers; (2) altering the distance measure by means of a distance weighting matrix in order to improve the clustering accuracy. In this way, the problem of difficult convergence of the FCM algorithm is solved, the number of iterations to convergence is reduced, the execution time of the algorithm is decreased, and correct segmentation of the components of color blood cell images is achieved.
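
    The standard FCM updates behind both stages can be sketched as follows; the two-stage improvements (cluster-count initialization and the distance weighting matrix) are omitted, and toy 3-D points stand in for pixel colors.

```python
import numpy as np

def fcm(X, k, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means: alternate the membership and center updates."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        inv = d ** (-2 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)      # fuzzy memberships
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U, centers

# Two color-like clusters stand in for cell and background pixels.
rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0, 0.3, (40, 3)), rng.normal(4, 0.3, (40, 3))])
U, centers = fcm(X, 2)
labels = U.argmax(axis=1)                  # defuzzify: hard assignment
```

    The paper's weighted distance would replace the Euclidean norm in `d` with a Mahalanobis-style form using the weighting matrix.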

  3. Materials characterization through quantitative digital image analysis

    SciTech Connect

    J. Philliber; B. Antoun; B. Somerday; N. Yang

    2000-07-01

    A digital image analysis system has been developed to allow advanced quantitative measurement of microstructural features. This capability is maintained as part of the microscopy facility at Sandia, Livermore. The system records images digitally, eliminating the use of film. Images obtained from other sources may also be imported into the system. Subsequent digital image processing enhances image appearance through contrast and brightness adjustments. The system measures a variety of user-defined microstructural features, including area fraction, particle size and spatial distributions, grain sizes, and orientations of elongated particles. These measurements are made in a semi-automatic mode through the use of macro programs and a computer-controlled translation stage. A routine has been developed to create large montages of 50+ separate images. Individual image frames are matched to the nearest pixel to create seamless montages. Results from three different studies are presented to illustrate the capabilities of the system.

  4. Geopositioning Precision Analysis of Multiple Image Triangulation Using Lro Nac Lunar Images

    NASA Astrophysics Data System (ADS)

    Di, K.; Xu, B.; Liu, B.; Jia, M.; Liu, Z.

    2016-06-01

    This paper presents an empirical analysis of the geopositioning precision of multiple-image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang'e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations, with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least-squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions obtained using all 9 images are 0.60 m, 0.50 m, and 1.23 m in the along-track, cross-track, and height directions, which are better than most combinations of two or more images. However, triangulation with fewer, well-selected images can produce better precision than using all the images.
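
    The least-squares fitting of polynomial coefficients to virtual control points can be sketched as below; for brevity a single quadratic polynomial stands in for the true RPC model, which is a ratio of two cubic polynomials fitted for each image coordinate, and the "rigorous model" here is an arbitrary invented mapping.

```python
import numpy as np

rng = np.random.default_rng(9)
# Virtual "control points": normalized ground coordinates (lat, lon, height).
G = rng.uniform(-1, 1, (500, 3))
lat, lon, h = G.T

def poly_terms(lat, lon, h):
    """Second-order polynomial basis (simplified stand-in for the cubic RPC basis)."""
    return np.column_stack([np.ones_like(lat), lat, lon, h,
                            lat * lon, lat * h, lon * h,
                            lat ** 2, lon ** 2, h ** 2])

# Synthetic "rigorous sensor model": an invented ground-to-image mapping.
line = 1000 + 800 * lat + 5 * lon * h + 3 * lat ** 2

# Fit the coefficients by least squares, as done for RPCs.
M = poly_terms(lat, lon, h)
coef, *_ = np.linalg.lstsq(M, line, rcond=None)
resid = M @ coef - line                    # fit residuals (here near zero)
```

    A full RPC fit repeats this for the sample coordinate and for the numerator and denominator cubics, with the virtual control points generated by projecting a 3-D grid through the rigorous model.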

  5. Image Sharing Technologies and Reduction of Imaging Utilization: A Systematic Review and Meta-analysis.

    PubMed

    Vest, Joshua R; Jung, Hye-Young; Ostrovsky, Aaron; Das, Lala Tanmoy; McGinty, Geraldine B

    2015-12-01

    Image sharing technologies may reduce unneeded imaging by improving provider access to imaging information. A systematic review and meta-analysis were conducted to summarize the impact of image sharing technologies on patient imaging utilization. Quantitative evaluations of the effects of PACS, regional image exchange networks, interoperable electronic heath records, tools for importing physical media, and health information exchange systems on utilization were identified through a systematic review of the published and gray English-language literature (2004-2014). Outcomes, standard effect sizes (ESs), settings, technology, populations, and risk of bias were abstracted from each study. The impact of image sharing technologies was summarized with random-effects meta-analysis and meta-regression models. A total of 17 articles were included in the review, with a total of 42 different studies. Image sharing technology was associated with a significant decrease in repeat imaging (pooled effect size [ES] = -0.17; 95% confidence interval [CI] = [-0.25, -0.09]; P < .001). However, image sharing technology was associated with a significant increase in any imaging utilization (pooled ES = 0.20; 95% CI = [0.07, 0.32]; P = .002). For all outcomes combined, image sharing technology was not associated with utilization. Most studies were at risk for bias. Image sharing technology was associated with reductions in repeat and unnecessary imaging, in both the overall literature and the most-rigorous studies. Stronger evidence is needed to further explore the role of specific technologies and their potential impact on various modalities, patient populations, and settings. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
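
    The pooled effect sizes above come from a random-effects model; a minimal DerSimonian-Laird sketch is shown below with invented study-level effects and variances (not the review's data).

```python
import numpy as np

def dersimonian_laird(es, var):
    """Random-effects pooled effect size with a DerSimonian-Laird tau^2."""
    w = 1 / var                                   # fixed-effect weights
    fixed = (w * es).sum() / w.sum()
    Q = (w * (es - fixed) ** 2).sum()             # heterogeneity statistic
    df = len(es) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (Q - df) / c)                 # between-study variance
    w_star = 1 / (var + tau2)                     # random-effects weights
    pooled = (w_star * es).sum() / w_star.sum()
    se = np.sqrt(1 / w_star.sum())
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Invented study-level standardized effects and variances for illustration.
es = np.array([-0.20, -0.15, -0.10, -0.25, -0.12])
var = np.array([0.004, 0.006, 0.005, 0.008, 0.007])
pooled, lo, hi = dersimonian_laird(es, var)       # pooled ES with 95% CI
```

    When `tau2` is zero the estimate reduces to the fixed-effect inverse-variance average; the review's meta-regression additionally models study-level covariates.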

  6. Image Sharing Technologies and Reduction of Imaging Utilization: A Systematic Review and Meta-analysis

    PubMed Central

    Vest, Joshua R.; Jung, Hye-Young; Ostrovsky, Aaron; Das, Lala Tanmoy; McGinty, Geraldine B.

    2016-01-01

    Introduction Image sharing technologies may reduce unneeded imaging by improving provider access to imaging information. A systematic review and meta-analysis were conducted to summarize the impact of image sharing technologies on patient imaging utilization. Methods Quantitative evaluations of the effects of PACS, regional image exchange networks, interoperable electronic heath records, tools for importing physical media, and health information exchange systems on utilization were identified through a systematic review of the published and gray English-language literature (2004–2014). Outcomes, standard effect sizes (ESs), settings, technology, populations, and risk of bias were abstracted from each study. The impact of image sharing technologies was summarized with random-effects meta-analysis and meta-regression models. Results A total of 17 articles were included in the review, with a total of 42 different studies. Image sharing technology was associated with a significant decrease in repeat imaging (pooled effect size [ES] = −0.17; 95% confidence interval [CI] = [−0.25, −0.09]; P < .001). However, image sharing technology was associated with a significant increase in any imaging utilization (pooled ES = 0.20; 95% CI = [0.07, 0.32]; P = .002). For all outcomes combined, image sharing technology was not associated with utilization. Most studies were at risk for bias. Conclusions Image sharing technology was associated with reductions in repeat and unnecessary imaging, in both the overall literature and the most-rigorous studies. Stronger evidence is needed to further explore the role of specific technologies and their potential impact on various modalities, patient populations, and settings. PMID:26614882

  7. Theory of Image Analysis and Recognition.

    DTIC Science & Technology

    1983-01-24

    Narendra Ahuja, image models; Ramalingam Chellappa, image models; Matti Pietikainen, texture analysis; David G. Morgenthaler, 3D digital geometry; Angela Y. Wu ... "Restoration Parameter Choice: A Quantitative Guide," TR-965, October 1980. 70. Matti Pietikainen, "On the Use of Hierarchically Computed 'Mexican Hat' ... 81. Matti Pietikainen and Azriel Rosenfeld, "Image Segmentation by Texture Using Pyramid Node Linking," TR-1008, February 1981. 82. David G.

  8. On the applicability of numerical image mapping for PIV image analysis near curved interfaces

    NASA Astrophysics Data System (ADS)

    Masullo, Alessandro; Theunissen, Raf

    2017-07-01

    This paper scrutinises the general suitability of image mapping for particle image velocimetry (PIV) applications. Image mapping can improve PIV measurement accuracy by eliminating overlap between the PIV interrogation windows and an interface, as illustrated by some examples in the literature. Image mapping transforms the PIV images using a curvilinear interface-fitted mesh prior to performing the PIV cross correlation. However, degrading effects due to particle image deformation and the Jacobian transformation inherent in the mapping along curvilinear grid lines have never been deeply investigated. Here, the implementation of image mapping from mesh generation to image resampling is presented in detail, and related error sources are analysed. Systematic comparison with standard PIV approaches shows that image mapping is effective only in a very limited set of flow conditions and geometries, and depends strongly on a priori knowledge of the boundary shape and streamlines. In particular, with strongly curved geometries or streamlines that are not parallel to the interface, the image-mapping approach is easily outperformed by more traditional image analysis methodologies invoking suitable spatial relocation of the obtained displacement vector.

  9. Analysis of dynamic brain imaging data.

    PubMed Central

    Mitra, P P; Pesaran, B

    1999-01-01

    Modern imaging techniques for probing brain function, including functional magnetic resonance imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for analysis and visualization of such imaging data to separate the signal from the noise and characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging, and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: "noise" characterization and suppression, and "signal" characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for nonstationarity in the data. Of particular note are 1) the development of a decomposition technique (space-frequency singular value decomposition) that is shown to be a useful means of characterizing the image data, and 2) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources. PMID:9929474
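The multitaper framework mentioned above can be illustrated in a few lines: average the periodograms of copies of the signal windowed by orthogonal Slepian (DPSS) tapers. This is a minimal sketch, not the authors' full analysis protocol, and the test signal is invented.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, fs=1.0, nw=3.0, k=5):
    """Average the periodograms of k orthogonal Slepian (DPSS) tapered copies."""
    n = len(x)
    tapers = dpss(n, nw, k)                          # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return np.fft.rfftfreq(n, d=1.0 / fs), spectra.mean(axis=0) / fs

# A 10 Hz sinusoid in noise, sampled at 100 Hz
rng = np.random.default_rng(0)
t = np.arange(1000) / 100.0
x = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)
freqs, psd = multitaper_psd(x, fs=100.0)
```

Averaging over tapers trades a small amount of spectral resolution (the bandwidth NW·fs/N) for a large reduction in estimator variance, which is the property the paper exploits in short, moving analysis windows.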

  10. A Meta-Analytic Review of Stand-Alone Interventions to Improve Body Image

    PubMed Central

    Alleva, Jessica M.; Sheeran, Paschal; Webb, Thomas L.; Martijn, Carolien; Miles, Eleanor

    2015-01-01

    Objective Numerous stand-alone interventions to improve body image have been developed. The present review used meta-analysis to estimate the effectiveness of such interventions, and to identify the specific change techniques that lead to improvement in body image. Methods The inclusion criteria were that (a) the intervention was stand-alone (i.e., solely focused on improving body image), (b) a control group was used, (c) participants were randomly assigned to conditions, and (d) at least one pretest and one posttest measure of body image was taken. Effect sizes were meta-analysed and moderator analyses were conducted. A taxonomy of 48 change techniques used in interventions targeted at body image was developed; all interventions were coded using this taxonomy. Results The literature search identified 62 tests of interventions (N = 3,846). Interventions produced a small-to-medium improvement in body image (d+ = 0.38), a small-to-medium reduction in beauty ideal internalisation (d+ = -0.37), and a large reduction in social comparison tendencies (d+ = -0.72). However, the effect size for body image was inflated by bias both within and across studies, and was reliable but of small magnitude once corrections for bias were applied. Effect sizes for the other outcomes were no longer reliable once corrections for bias were applied. Several features of the sample, intervention, and methodology moderated intervention effects. Twelve change techniques were associated with improvements in body image, and three techniques were contra-indicated. Conclusions The findings show that interventions engender only small improvements in body image, and underline the need for large-scale, high-quality trials in this area. The review identifies effective techniques that could be deployed in future interventions. PMID:26418470

  11. Improved space object detection via scintillated short exposure image data

    NASA Astrophysics Data System (ADS)

    Cain, Stephen

    2016-09-01

    Since telescopes were first trained on the heavens, nearly every attempt to locate new objects in the sky has been accomplished using long-exposure images (images taken with exposures longer than one second). For many decades astronomers have utilized short-exposure images to achieve high spatial resolution and mitigate the effect of Earth's atmosphere, because short-exposure images contain higher spatial frequency content than their long-exposure counterparts. New objects, such as dim companions of binary stars, have been revealed using these techniques; such short-exposure discoveries, however, have always come in the context of a detailed analysis of a specific area of the sky, not through a synoptic search of the heavens of the kind carried out by asteroid research programs like Space Watch, PAN-STARRS or the Catalina Sky Survey. In this paper simulated short-exposure images are processed to detect simulated stars in the presence of strong atmospheric turbulence. The strong turbulence condition produces long exposures with wide star patterns, while the short-exposure images feature better concentration of the available photons but large random position variance. A new processing strategy involving a matched-filter technique using a short-exposure atmospheric impulse response model is utilized. The results from this process are compared to efforts to detect stars in long-exposure images taken over the same interval in which the short-exposure images are collected. The two techniques are compared, and the short-exposure imaging technique is found to produce the higher probability of detection.
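A minimal sketch of the matched-filter detection idea, using a Gaussian stand-in for the short-exposure atmospheric impulse response (the paper uses a more detailed model): cross-correlate the frame with the PSF template and take the peak of the score map.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """A Gaussian stand-in for the short-exposure atmospheric PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def matched_filter(image, psf):
    """Cross-correlate the image with the PSF template via the FFT."""
    kernel = np.zeros_like(image)
    s = psf.shape[0]
    kernel[:s, :s] = psf
    kernel = np.roll(kernel, (-(s // 2), -(s // 2)), axis=(0, 1))  # centre at (0, 0)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(kernel))))

rng = np.random.default_rng(1)
psf = gaussian_psf(11, sigma=1.5)
scene = 0.02 * rng.standard_normal((64, 64))   # background noise
scene[35:46, 17:28] += 0.8 * psf               # a dim star centred at (40, 22)
score = matched_filter(scene, psf)
peak = np.unravel_index(np.argmax(score), score.shape)
```

Because the template matches the star's blur, the correlation concentrates the source photons into a single high-SNR peak, which is exactly the advantage short exposures offer over the widened long-exposure star patterns.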

  12. The Hinode/XRT Full-Sun Image Corrections and the Improved Synoptic Composite Image Archive

    NASA Astrophysics Data System (ADS)

    Takeda, Aki; Yoshimura, Keiji; Saar, Steven H.

    2016-01-01

    The XRT Synoptic Composite Image Archive (SCIA) is a storage and gallery of X-ray full-Sun images obtained through the synoptic program of the X-Ray Telescope (XRT) onboard the Hinode satellite. The archived images provide a quick history of solar activity through the daily and monthly layout pages and long-term data for morphological and quantitative studies of the X-ray corona. This article serves as an introduction to the SCIA, i.e., to the structure of the archive and specification of the data products included therein. We also describe a number of techniques used to improve the quality of the archived images: preparation of composite images to increase intensity dynamic range, removal of dark spots that are due to contaminants on the CCD, and correction of the visible stray light contamination that has been detected on the Ti-poly and C-poly filter images since May 2012.
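The composite-image idea described above (long exposures for faint structure, short exposures substituted where the long exposure saturates) can be sketched as follows; the exposure ratio and full-well level are made-up illustration parameters, not XRT calibration values.

```python
import numpy as np

def composite(long_exp, short_exp, exposure_ratio, full_well=4000.0):
    """Replace saturated long-exposure pixels with short-exposure data
    scaled by the exposure ratio, extending the intensity dynamic range."""
    out = long_exp.astype(float).copy()
    saturated = long_exp >= full_well
    out[saturated] = short_exp.astype(float)[saturated] * exposure_ratio
    return out

long_exp = np.array([[100.0, 4000.0],
                     [250.0, 4000.0]])   # bright pixels clipped at full well
short_exp = np.array([[12.0, 700.0],
                      [30.0, 900.0]])
merged = composite(long_exp, short_exp, exposure_ratio=8.0)
```

Scaling by the exposure ratio keeps the merged image photometrically consistent, so faint and bright coronal features can be measured on one intensity scale.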

  13. Image guidance improves localization of sonographically occult colorectal liver metastases

    NASA Astrophysics Data System (ADS)

    Leung, Universe; Simpson, Amber L.; Adams, Lauryn B.; Jarnagin, William R.; Miga, Michael I.; Kingham, T. Peter

    2015-03-01

    Assessing the therapeutic benefit of surgical navigation systems is a challenging problem in image-guided surgery. The exact clinical indications for patients that may benefit from these systems are not always clear, particularly for abdominal surgery, where image-guidance systems have failed to take hold in the same way as orthopedic and neurosurgical applications. We report interim analysis of a prospective clinical trial for localizing small colorectal liver metastases using the Explorer system (Path Finder Technologies, Nashville, TN). Colorectal liver metastases are small lesions that can be difficult to identify with conventional intraoperative ultrasound due to echogenicity changes in the liver as a result of chemotherapy and other preoperative treatments. Interim analysis with eighteen patients shows that 9 of 15 (60%) of these occult lesions could be detected with image guidance. Image guidance changed intraoperative management in 3 (17%) cases. These results suggest that image guidance is a promising tool for localization of small occult liver metastases and that the indications for image-guided surgery are expanding.

  14. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  15. Improving image accuracy of region-of-interest in cone-beam CT using prior image.

    PubMed

    Lee, Jiseoc; Kim, Jin Sung; Cho, Seungryong

    2014-03-06

    In diagnostic follow-ups of diseases, such as calcium scoring in the kidney or fat-content assessment in the liver using repeated CT scans, quantitatively accurate and consistent CT values are desirable at a low cost of radiation dose to the patient. The region-of-interest (ROI) imaging technique is considered a reasonable dose-reduction method in CT scans for its shielding geometry outside the ROI. However, image artifacts in the reconstructed images caused by missing data outside the ROI may degrade overall image quality and, more importantly, can decrease image accuracy of the ROI substantially. In this study, we propose a method to increase image accuracy of the ROI and to reduce imaging radiation dose by utilizing the outside-ROI data from prior scans in repeated CT applications. We performed both numerical and experimental studies to validate our proposed method. In a numerical study, we used an XCAT phantom with its liver and stomach changing their sizes from one scan to another. Image accuracy of the liver was improved as the error decreased from 44.4 HU to -0.1 HU with the proposed method, compared to an existing method of data extrapolation to compensate for the missing data outside the ROI. Repeated cone-beam CT (CBCT) images of a patient who went through daily CBCT scans for radiation therapy were also used to demonstrate the performance of the proposed method experimentally. The results showed improved image accuracy inside the ROI. The magnitude of error decreased from -73.2 HU to 18 HU, and image artifacts were effectively reduced throughout the entire image.

  16. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  17. Research on an Improved Medical Image Enhancement Algorithm Based on P-M Model.

    PubMed

    Dong, Beibei; Yang, Jingjing; Hao, Shangfu; Zhang, Xiao

    2015-01-01

    Image enhancement can improve image detail and thereby aid image identification. At present, image enhancement is widely used in medical imaging, where it can assist doctors in diagnosis. IEABPM (Image Enhancement Algorithm Based on P-M Model) is one of the most common image enhancement algorithms. However, it may cause the loss of texture details and other features. To solve these problems, this paper proposes an IIEABPM (Improved Image Enhancement Algorithm Based on P-M Model). Simulation demonstrates that IIEABPM can effectively solve the problems of IEABPM, and improve image clarity, image contrast, and image brightness.
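The P-M (Perona-Malik) model underlying these algorithms is anisotropic diffusion: smooth where gradients are weak, stop where they are strong. A minimal sketch of the classic scheme (not the paper's improved variant) on a noisy step edge:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Classic Perona-Malik diffusion: smooth flat regions, stop at strong edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic borders via np.roll)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping conductance
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                               # a sharp edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
smoothed = perona_malik(noisy)
```

Small differences (noise) see a conductance near 1 and are diffused away, while the unit-height edge sees a conductance near zero and survives; the texture loss the paper addresses arises when fine detail falls on the "noise" side of this threshold.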

  18. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filaments orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts

  19. A Robust Actin Filaments Image Analysis Framework.

    PubMed

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-08-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the 'cartoon' image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filaments orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. 
Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in

  20. Improved Image Fusion Method Based on NSCT and Accelerated NMF

    PubMed Central

    Wang, Juan; Lai, Siyu; Li, Mingdong

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on low-frequency sub-images to get the low-pass coefficients. The low frequency fused image can be generated faster in that the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is performed on the high-frequency part to achieve the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. The simulated experiments prove that our method indeed promotes performance when compared to PCA, NSCT-based, NMF-based and weighted NMF-based algorithms. PMID:22778618

  1. Comparison and improvement of color-based image retrieval techniques

    NASA Astrophysics Data System (ADS)

    Zhang, Yujin; Liu, Zhong W.; He, Yun

    1997-12-01

    With the increasing popularity of content-based image manipulation, many color-based image retrieval techniques have been proposed in the literature. A systematic and comparative study of 8 representative techniques is first presented in this paper, which uses a database of 200 images of flags and trademarks. These techniques were chosen to cover variations in the color models used, in the characteristic color features employed, and in the distance measures calculated for judging the similarity of color images. The results of this comparative study are presented both as lists of retrieved images for subjective visual inspection and as retrieval ratios computed for objective judgement. Both show that cumulative-histogram-based techniques using Euclidean distance measures in two perception-related color spaces give the best results among the 8 techniques under consideration. Starting from the best-performing techniques, further work to improve their retrieval capability was then carried out, resulting in 2 new techniques that use local cumulative histograms. The new techniques have been tested on a database of 400 images of real flowers with quite complicated color content. Satisfactory results, compared to those obtained with existing cumulative-histogram-based techniques, are obtained and presented.
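The cumulative-histogram signature compared with a Euclidean distance can be sketched as below; for simplicity this uses RGB channels directly, whereas the paper finds perception-related color spaces work best. The toy images are invented.

```python
import numpy as np

def cumulative_histogram(channel, bins=16):
    """Normalised cumulative histogram of one colour channel."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    return np.cumsum(hist) / channel.size

def signature(rgb, bins=16):
    """Concatenate the per-channel cumulative histograms into one feature vector."""
    return np.concatenate([cumulative_histogram(rgb[..., c], bins) for c in range(3)])

def distance(a, b):
    """Euclidean distance between image signatures (smaller = more similar)."""
    return float(np.linalg.norm(signature(a) - signature(b)))

red = np.zeros((32, 32, 3), dtype=np.uint8)
red[..., 0] = 200
pink = red.copy()
pink[..., 1] = 60                                  # close in colour to red
blue = np.zeros((32, 32, 3), dtype=np.uint8)
blue[..., 2] = 200
d_similar = distance(red, pink)
d_different = distance(red, blue)
```

Cumulative histograms change smoothly when pixel values shift slightly across a bin boundary, which is why they rank similar colours closer than plain histograms do.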

  2. Improved image fusion method based on NSCT and accelerated NMF.

    PubMed

    Wang, Juan; Lai, Siyu; Li, Mingdong

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on low-frequency sub-images to get the low-pass coefficients. The low frequency fused image can be generated faster in that the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is performed on the high-frequency part to achieve the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. The simulated experiments prove that our method indeed promotes performance when compared to PCA, NSCT-based, NMF-based and weighted NMF-based algorithms.
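The NMF step can be sketched with the classic Lee-Seung multiplicative updates for W and H; the paper's accelerated variant optimizes these rules further, so this shows only the baseline, on an invented low-rank matrix.

```python
import numpy as np

def nmf(V, r, n_iter=300, seed=0):
    """Lee-Seung multiplicative updates for V ≈ W @ H with nonnegative factors."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-9                                     # avoid division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)       # update H; stays nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)       # update W likewise
    return W, H

# A synthetic, exactly rank-2 nonnegative matrix (illustration only)
rng = np.random.default_rng(3)
V = rng.random((20, 2)) @ rng.random((2, 15))
W, H = nmf(V, r=2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because the updates are element-wise ratios of nonnegative quantities, positivity is preserved automatically, with no projection step; acceleration schemes mainly reduce how many of these iterations are needed.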

  3. The Image Transceiver Device: Studies of Improved Physical Design

    PubMed Central

    David, Yitzhak; Efron, Uzi

    2008-01-01

    The Image Transceiver Device (ITD) design is based on combining LCOS micro-display, image processing tools and back illuminated APS imager in a single CMOS chip [1]. The device is under development for Head-Mounted Display applications in augmented and virtual reality systems. The main issues with the present design are a high crosstalk of the backside imager and the need to shield the pixel circuitry from the photo-charges generated in the silicon substrate. In this publication we present a modified, “deep p-well” ITD pixel design, which provides a significantly reduced crosstalk level, as well as an effective shielding of photo-charges for the pixel circuitry. The simulation performed using Silvaco software [ATLAS Silicon Device Simulator, Ray Trace and Light Absorption programs, Silvaco International, 1998] shows that the new approach provides high photo response and allows increasing the optimal thickness of the die over and above the 10-15 micrometers commonly used for back illuminated imaging devices, thereby improving its mechanical ruggedness following the thinning process and also providing a more efficient absorption of the long wavelength photons. The proposed deep p-well pixel structure is also a technology solution for the fabrication of high performance back illuminated CMOS image sensors. PMID:27879940

  4. The Image Transceiver Device: Studies of Improved Physical Design.

    PubMed

    David, Yitzhak; Efron, Uzi

    2008-07-25

    The Image Transceiver Device (ITD) design is based on combining LCOS micro-display, image processing tools and back illuminated APS imager in a single CMOS chip [1]. The device is under development for Head-Mounted Display applications in augmented and virtual reality systems. The main issues with the present design are a high crosstalk of the backside imager and the need to shield the pixel circuitry from the photocharges generated in the silicon substrate. In this publication we present a modified, "deep p-well" ITD pixel design, which provides a significantly reduced crosstalk level, as well as an effective shielding of photo-charges for the pixel circuitry. The simulation performed using Silvaco software [ATLAS Silicon Device Simulator, Ray Trace and Light Absorption programs, Silvaco International, 1998] shows that the new approach provides high photo response and allows increasing the optimal thickness of the die over and above the 10-15 micrometers commonly used for back illuminated imaging devices, thereby improving its mechanical ruggedness following the thinning process and also providing a more efficient absorption of the long wavelength photons. The proposed deep p-well pixel structure is also a technology solution for the fabrication of high performance back illuminated CMOS image sensors.

  5. On image analysis in fractography (Methodological Notes)

    NASA Astrophysics Data System (ADS)

    Shtremel', M. A.

    2015-10-01

    As in other spheres of image analysis, fractography has no universal method for information convolution. An effective characteristic of an image is found by analyzing the essence and origin of every class of objects. As follows from the geometric definition of a fractal curve, its projection onto any straight line covers a certain segment many times; therefore, neither a time series (one-valued function of time) nor an image (one-valued function of plane) can be a fractal. For applications, multidimensional multiscale characteristics of an image are necessary. "Full" wavelet series break the law of conservation of information.

  6. Retinal image analysis: concepts, applications and potential.

    PubMed

    Patton, Niall; Aslam, Tariq M; MacGillivray, Thomas; Deary, Ian J; Dhillon, Baljean; Eikelboom, Robert H; Yogesan, Kanagasingam; Constable, Ian J

    2006-01-01

    As digital imaging and computing power increasingly develop, so too does the potential to use these technologies in ophthalmology. Image processing, analysis and computer vision techniques are increasing in prominence in all fields of medical science, and are especially pertinent to modern ophthalmology, as it is heavily dependent on visually oriented signs. The retinal microvasculature is unique in that it is the only part of the human circulation that can be directly visualised non-invasively in vivo, readily photographed and subject to digital image analysis. Exciting developments in image processing relevant to ophthalmology over the past 15 years include the progress being made towards developing automated diagnostic systems for conditions such as diabetic retinopathy, age-related macular degeneration and retinopathy of prematurity. These diagnostic systems offer the potential to be used in large-scale screening programs, with the potential for significant resource savings, as well as being free from observer bias and fatigue. In addition, quantitative measurements of retinal vascular topography using digital image analysis from retinal photography have been used as research tools to better understand the relationship between the retinal microvasculature and cardiovascular disease. Furthermore, advances in electronic media transmission increase the relevance of using image processing in 'teleophthalmology' as an aid in clinical decision-making, with particular relevance to large rural-based communities. In this review, we outline the principles upon which retinal digital image analysis is based. We discuss current techniques used to automatically detect landmark features of the fundus, such as the optic disc, fovea and blood vessels. We review the use of image analysis in the automated diagnosis of pathology (with particular reference to diabetic retinopathy). 
We also review its role in defining and performing quantitative measurements of vascular topography

  7. Multiresolution morphological analysis of document images

    NASA Astrophysics Data System (ADS)

    Bloomberg, Dan S.

    1992-11-01

    An image-based approach to document image analysis is presented, that uses shape and textural properties interchangeably at multiple scales. Image-based techniques permit a relatively small number of simple and fast operations to be used for a wide variety of analysis problems with document images. The primary binary image operations are morphological and multiresolution. The generalized opening, a morphological operation, allows extraction of image features that have both shape and textural properties, and that are not limited by properties related to image connectivity. Reduction operations are necessary due to the large number of pixels at scanning resolution, and threshold reduction is used for efficient and controllable shape and texture transformations between resolution levels. Aspects of these techniques, which include sequences of threshold reductions, are illustrated by problems such as text/halftone segmentation and word-level extraction. Both the generalized opening and these multiresolution operations are then used to identify italic and bold words in text. These operations are performed without any attempt at identification of individual characters. Their robustness derives from the aggregation of statistical properties over entire words. However, the analysis of the statistical properties is performed implicitly, in large part through nonlinear image processing operations. The approximate computational cost of the basic operations is given, and the importance of operating at the lowest feasible resolution is demonstrated.
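Threshold reduction as described here maps each 2x2 block of a binary image to one output pixel that is ON when at least a chosen number of the four input pixels are ON; a minimal sketch:

```python
import numpy as np

def threshold_reduce(binary, threshold):
    """2x2 threshold reduction: each output pixel is ON when at least
    `threshold` (1..4) of the corresponding four input pixels are ON."""
    h, w = binary.shape
    blocks = binary[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    counts = blocks.sum(axis=(1, 3))
    return (counts >= threshold).astype(np.uint8)

img = np.array([[1, 1, 0, 0],
                [1, 0, 0, 0],
                [1, 1, 1, 0],
                [1, 1, 0, 1]], dtype=np.uint8)
aggressive = threshold_reduce(img, 1)    # union-like: preserves sparse texture
conservative = threshold_reduce(img, 4)  # intersection-like: keeps solid regions
```

Choosing the threshold (1 through 4) tunes the reduction between a union (dilation-like, texture-preserving) and an intersection (erosion-like, shape-preserving), which is what makes sequences of threshold reductions controllable shape and texture transformations.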

  8. Malware analysis using visualized image matrices.

    PubMed

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. In particular, our proposed methods are applicable to packed malware samples, by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively.
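
The abstract does not spell out the exact pixel-mapping scheme, so the hash-based mapping and overlap similarity below are illustrative assumptions only, showing the general opcode-sequence-to-image-matrix idea:

```python
import hashlib
import numpy as np

def opcodes_to_matrix(opcodes, size=8):
    """Map a list of opcode mnemonics onto an RGB image matrix.
    Each opcode is hashed; the hash picks a pixel location and an RGB
    colour, so similar opcode sequences yield similar matrices.
    (Hypothetical mapping -- the paper's exact scheme may differ.)"""
    img = np.zeros((size, size, 3), dtype=np.uint8)
    for op in opcodes:
        d = hashlib.md5(op.encode()).digest()
        y, x = d[0] % size, d[1] % size
        img[y, x] = list(d[2:5])     # RGB taken from hash bytes
    return img

def similarity(a, b):
    """Jaccard-style overlap of the coloured pixels in two matrices."""
    mask_a, mask_b = a.any(axis=2), b.any(axis=2)
    inter = (mask_a & mask_b).sum()
    union = (mask_a | mask_b).sum()
    return inter / union if union else 1.0

m1 = opcodes_to_matrix(["mov", "push", "call", "ret"])
m2 = opcodes_to_matrix(["mov", "push", "call", "jmp"])
```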

  9. Edge enhanced morphology for infrared image analysis

    NASA Astrophysics Data System (ADS)

    Bai, Xiangzhi; Liu, Haonan

    2017-01-01

    Edge information is critical for infrared images. Morphological operators have been widely used for infrared image analysis. However, the edge information in infrared images is weak, and conventional morphological operators cannot fully exploit it. To strengthen the edge information in morphological operators, edge enhanced morphology is proposed in this paper. Firstly, the edge enhanced dilation and erosion operators are given and analyzed. Secondly, the pseudo operators derived from the edge enhanced dilation and erosion operators are defined. Finally, applications to infrared image analysis are shown to verify the effectiveness of the proposed edge enhanced morphological operators. The proposed operators are useful for applications related to edge features and could be extended to a wide range of applications.

  10. Image Analysis of the Tumor Microenvironment.

    PubMed

    Lloyd, Mark C; Johnson, Joseph O; Kasprzak, Agnieszka; Bui, Marilyn M

    2016-01-01

    In the field of pathology it is clear that molecular genomics and digital imaging represent two promising future directions, and both are as relevant to the tumor microenvironment as they are to the tumor itself (Beck AH et al. Sci Transl Med 3(108):108ra113-08ra113, 2011). Digital imaging, or whole slide imaging (WSI), of glass histology slides facilitates a number of value-added competencies which were not previously possible with the traditional analog review of these slides under a microscope by a pathologist. As an important tool for investigational research, digital pathology can leverage the quantification and reproducibility offered by image analysis to add value to the pathology field. This chapter will focus on the application of image analysis to investigate the tumor microenvironment and how quantitative investigation can provide deeper insight into our understanding of the tumor to tumor microenvironment relationship.

  11. Improved diagonal queue medical image steganography using Chaos theory, LFSR, and Rabin cryptosystem.

    PubMed

    Jain, Mamta; Kumar, Anil; Choudhary, Rishabh Charan

    2017-06-01

    In this article, we have proposed an improved diagonal queue medical image steganography scheme for the transmission of patients' secret medical data, using a chaotic standard map, a linear feedback shift register, and the Rabin cryptosystem, improving on a previous technique (Jain and Lenka in Springer Brain Inform 3:39-51, 2016). The proposed algorithm comprises four stages: generation of pseudo-random sequences (by the linear feedback shift register and the standard chaotic map), permutation and XORing using those sequences, encryption using the Rabin cryptosystem, and steganography using the improved diagonal queues. Security analysis has been carried out. Performance is assessed using MSE, PSNR, and maximum embedding capacity, as well as by histogram analysis between various brain-disease stego and cover images.
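
Of the building blocks above, the linear feedback shift register is easy to sketch. This is a generic Fibonacci LFSR, not the paper's specific register (seed, width and tap positions here are illustrative):

```python
def lfsr_stream(seed, taps, nbits, length):
    """Fibonacci LFSR: an `nbits`-wide shift register whose feedback bit
    is the XOR of the tapped bit positions. Emits one output bit (the
    LSB) per step; with a primitive tap polynomial the period is 2^n - 1."""
    state = seed & ((1 << nbits) - 1)
    out = []
    for _ in range(length):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

# 4-bit register with taps at bits 0 and 1 (x^4 + x + 1): period 15.
bits = lfsr_stream(seed=0b1001, taps=(0, 1), nbits=4, length=30)
```

Such a bit stream can then drive the permutation/XOR stage before encryption.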

  12. Topological image texture analysis for quality assessment

    NASA Astrophysics Data System (ADS)

    Asaad, Aras T.; Rashid, Rasber Dh.; Jassim, Sabah A.

    2017-05-01

    Image quality is a major factor influencing pattern recognition accuracy, and it helps detect image tampering in forensics. We are concerned with investigating topological image texture analysis techniques to assess different types of degradation. We use the Local Binary Pattern (LBP) as a texture feature descriptor. For any image, we construct simplicial complexes for selected groups of uniform LBP bins and calculate persistent homology invariants (e.g. the number of connected components). We investigated the image quality discriminating characteristics of these simplicial complexes by computing these models for a large dataset of face images affected by the presence of shadows as a result of variation in illumination conditions. Our tests demonstrate that for specific uniform LBP patterns, the number of connected components not only distinguishes between different levels of shadow effects but also helps detect the affected regions.
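
A minimal sketch of the two ingredients named above, the basic 3x3 LBP code and the "uniform pattern" test (the example image is synthetic; the simplicial-complex construction itself is not shown):

```python
import numpy as np

def lbp_8(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel gets an 8-bit
    code, one bit per neighbour, set when the neighbour >= the centre."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def is_uniform(pattern):
    """A pattern is 'uniform' if its circular bit string has at most two
    0/1 transitions -- these are the bins used to build the complexes."""
    bits = [(pattern >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2

flat = np.full((5, 5), 7)   # a flat patch codes to all-ones (255)
codes = lbp_8(flat)
```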

  13. Image texture analysis of crushed wheat kernels

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.

    1992-03-01

    The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels from 17 wheat varieties was collected after testing and crushing with a single kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize the texture, or spatial distribution of gray levels, of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images differed depending on the class, hardness, and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.
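
The abstract does not list its exact texture parameters, but gray-level co-occurrence statistics are the classic choice for this kind of spatial gray-level analysis. A small illustrative sketch (toy 2-level images, one offset):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for one pixel offset."""
    img = np.asarray(img)
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: large when neighbouring grey levels differ often."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# A smooth (blocky) patch vs. a coarse (checkerboard) patch.
smooth = np.array([[0, 0, 1, 1]] * 4)
coarse = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
```

Feature vectors built from such statistics at several offsets can then feed a hardness or class discriminator.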

  14. Improving color constancy using indoor-outdoor image classification.

    PubMed

    Bianco, Simone; Ciocca, Gianluigi; Cusano, Claudio; Schettini, Raimondo

    2008-12-01

    In this work, we investigate how illuminant estimation techniques can be improved by taking into account automatically extracted information about the content of the images. We considered indoor/outdoor classification because the images of these classes present different content and are usually taken under different illumination conditions. We have designed different strategies for the selection and tuning of the most appropriate algorithm (or combination of algorithms) for each class. We also considered the adoption of an uncertainty class, which corresponds to the images where the indoor/outdoor classifier is not confident enough. The illuminant estimation algorithms considered here are derived from the framework recently proposed by Van de Weijer and Gevers. We present a procedure to automatically tune the algorithms' parameters. We have tested the proposed strategies on a suitable subset of the widely used Funt and Ciurea dataset. Experimental results clearly demonstrate that classification-based strategies outperform general-purpose algorithms.
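
The simplest member of the Van de Weijer et al. family of estimators is Grey-World (the zeroth-order, p=1 case). A minimal sketch with a synthetic grey scene under a hypothetical reddish illuminant:

```python
import numpy as np

def grey_world(img, p=1):
    """Minkowski-norm illuminant estimate: e_c ∝ (mean |f_c|^p)^(1/p).
    p=1 is Grey-World; large p tends toward White-Patch (max-RGB)."""
    est = np.power(np.power(img.reshape(-1, 3), p).mean(axis=0), 1.0 / p)
    return est / np.linalg.norm(est)

def correct(img, e):
    """Von Kries diagonal correction toward a neutral illuminant."""
    return img / (e * np.sqrt(3))

# A neutral grey scene under a reddish illuminant (illustrative values).
scene = np.full((4, 4, 3), 0.5)
cast = scene * np.array([1.2, 1.0, 0.8])
e = grey_world(cast)
```

A classifier-driven strategy would pick the order and p per image class, rather than one fixed setting.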

  15. Analysis of the relationship between the volumetric soil moisture content and the NDVI from high resolution multi-spectral images for definition of vineyard management zones to improve irrigation

    NASA Astrophysics Data System (ADS)

    Martínez-Casasnovas, J. A.; Ramos, M. C.

    2009-04-01

    As suggested by previous research in the field of precision viticulture, intra-field yield variability depends on the variation of soil properties, and in particular the soil moisture content. Since mapping this soil property in detail for precision viticulture applications is highly costly, the objective of the present research is to analyse its relationship with the normalised difference vegetation index (NDVI) from high resolution satellite images, in order to use it in the definition of vineyard management zones. The final aim is to improve irrigation in commercial vineyard blocks for better management of inputs and to deliver a more homogeneous fruit to the winery. The study was carried out in a vineyard block located in Raimat (NE Spain, Costers del Segre Designation of Origin). This is a semi-arid area with a continental Mediterranean climate and a total annual precipitation between 300-400 mm. The vineyard block (4.5 ha) is planted with Syrah vines in a 3x2 m pattern. The vines are irrigated by means of drips under a partial root drying schedule. Initially, the irrigation sectors had a quadrangular distribution, with a size of about 1 ha each. Yield is highly variable within the block, presenting a coefficient of variation of 24.9%. For the measurement of the soil moisture content, a regular sampling grid of 30 x 40 m was defined. This represents a sample density of 8 samples ha-1. At the nodes of the grid, TDR (Time Domain Reflectometer) probe tubes were permanently installed down to 80 cm or until reaching a contrasting layer. Multi-temporal measurements were taken at different depths (every 20 cm) between November 2006 and December 2007. For each date, a map of the variability of the profile soil moisture content was interpolated by means of geostatistical analysis: from the measured values at the grid points, the experimental variograms were computed and modelled, and global block kriging (10 m square blocks) was undertaken with a grid spacing of 3 m x 3 m.
On the
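
The NDVI used above is a simple band ratio. A minimal sketch with hypothetical reflectance values for a canopy pixel and a bare-soil pixel:

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - red)/(NIR + red).
    Values near 1 indicate vigorous canopy, values near 0 bare soil."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)

# Hypothetical reflectances: dense canopy vs. bare inter-row soil.
canopy = ndvi(0.50, 0.05)   # high: healthy vegetation
soil = ndvi(0.30, 0.25)     # low: mostly soil signal
```

Applied per pixel of a multi-spectral image, this yields the NDVI map whose correlation with soil moisture defines the management zones.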

  16. Improved 3D cellular imaging by multispectral focus assessment

    NASA Astrophysics Data System (ADS)

    Zhao, Tong; Xiong, Yizhi; Chung, Alice P.; Wachman, Elliot S.; Farkas, Daniel L.

    2005-03-01

    Biological specimens are three-dimensional structures. However, when capturing their images through a microscope, only one plane in the field of view is in focus, and out-of-focus portions of the specimen affect image quality in the in-focus plane. It is well established that the microscope's point spread function (PSF) can be used for blur quantitation and the restoration of real images. However, this is an ill-posed problem, with no unique solution and high computational complexity. In this work, instead of estimating and using the PSF, we studied focus quantitation in multi-spectral image sets. A gradient map we designed was used to evaluate the degree of sharpness of each pixel, in order to identify blurred areas that should not be considered. Experiments with realistic multi-spectral Pap smear images showed that measurement of their sharp gradients can provide depth information roughly comparable to human perception (through a microscope), while avoiding PSF estimation. Spectrum and morphometrics-based statistical analysis for abnormal cell detection can then be implemented in an image database where the axial structure has been refined.
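
The paper's gradient map is not specified in detail; a minimal stand-in is a squared finite-difference gradient magnitude used as a per-pixel sharpness score (the toy focal stack below is synthetic):

```python
import numpy as np

def gradient_sharpness(img):
    """Per-pixel sharpness from squared finite-difference gradients;
    in-focus regions of a z-stack score higher than blurred ones."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    return gx ** 2 + gy ** 2

def sharpest_plane(stack):
    """Pick, per pixel, which plane of a focal stack scores sharpest."""
    scores = np.stack([gradient_sharpness(p) for p in stack])
    return scores.argmax(axis=0)

rng = np.random.default_rng(0)
sharp = rng.random((16, 16))               # textured, in-focus plane
defocus = np.full((16, 16), sharp.mean())  # featureless, defocused plane
```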

  17. Improving the efficiency of image guided brachytherapy in cervical cancer

    PubMed Central

    Franklin, Adrian; Ajaz, Mazhar; Stewart, Alexandra

    2016-01-01

    Brachytherapy is an essential component of the treatment of locally advanced cervical cancers. It enables the dose to the tumor to be boosted whilst allowing relative sparing of the normal tissues. Traditionally, cervical brachytherapy was prescribed to point A, but since the GEC-ESTRO guidelines were published in 2005, there has been a move towards prescribing the dose to a 3D volume. Image guided brachytherapy has been shown to reduce local recurrence and improve survival, and is optimally predicated on magnetic resonance imaging. Radiological studies, patient workflow, operative parameters, and intensive therapy planning can represent a challenge to clinical resources. This article explores the ways in which 3D conformal brachytherapy can be implemented, and draws findings from recent literature and a well-developed hospital practice in order to suggest ways to improve the efficiency and efficacy of a brachytherapy service. Finally, we discuss relatively underexploited translational research opportunities. PMID:28115963

  18. Improved image method for a holographic description of conical defects

    NASA Astrophysics Data System (ADS)

    Aref'eva, I. Ya.; Khramtsov, M. A.; Tikhanovskaya, M. D.

    2016-11-01

    The geodesics prescription in the holographic approach in the Lorentzian signature is applicable only for geodesics connecting spacelike-separated points at the boundary because there are no timelike geodesics that reach the boundary. Also, generally speaking, there is no direct analytic Euclidean continuation for a general background, such as a moving particle in the AdS space. We propose an improved geodesic image method for two-point Lorentzian correlators that is applicable for arbitrary time intervals when the space-time is deformed by point particles. We show that such a prescription agrees with the case where the analytic continuation exists and also with the previously used prescription for quasigeodesics. We also discuss some other applications of the improved image method: holographic entanglement entropy and multiple particles in the AdS3 space.

  19. Hybrid Expert Systems In Image Analysis

    NASA Astrophysics Data System (ADS)

    Dixon, Mark J.; Gregory, Paul J.

    1987-04-01

    Vision systems capable of inspecting industrial components and assemblies have a large potential market if they can be easily programmed and produced quickly. Currently, vision application software written in conventional high-level languages such as C or Pascal is produced by experts in program design, image analysis, and process control. Applications written this way are difficult to maintain and modify. Unless other similar inspection problems can be found, the final program is essentially one-off redundant code. A general-purpose vision system, targeted at the Visual Machines Ltd. C-VAS 3000 image processing workstation, is described which will make writing image analysis software accessible to non-experts in both computer programming and image analysis. A significant reduction in the effort required to produce vision systems will be gained through a graphically driven interactive application generator. Finally, an expert system will be layered on top to guide the naive user through the process of generating an application.

  20. Technique for image fusion based on nonsubsampled shearlet transform and improved pulse-coupled neural network

    NASA Astrophysics Data System (ADS)

    Kong, Weiwei; Liu, Jianping

    2013-01-01

    A new technique for image fusion based on the nonsubsampled shearlet transform (NSST) and an improved pulse-coupled neural network (PCNN) is proposed. NSST, as a novel multiscale geometric analysis tool, can be optimally efficient in representing images and capturing the geometric features of multidimensional data. As a result, NSST is introduced into the area of image fusion to decompose the source images at any scale and in any direction. Then the basic PCNN model is refined into the improved PCNN (IPCNN), which is more concise and more effective. IPCNN adopts the contrast of each pixel in the images as the linking strength β, and the time matrix T of the subimages can be obtained via the synchronous pulse-burst property. By using IPCNN, the fused subimages can be achieved. Finally, the fused image is obtained by the inverse NSST. Numerical experiments demonstrate that the new technique is competitive in the field of image fusion in terms of both fusion performance and computational efficiency.

  1. Improved Starting Materials for Back-Illuminated Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2009-01-01

    An improved type of starting material for the fabrication of silicon-based imaging integrated circuits that include back-illuminated photodetectors has been conceived, and a process for making these starting materials is undergoing development. These materials are intended to enable reductions in dark currents and increases in quantum efficiencies, relative to those of comparable imagers made from prior silicon-on-insulator (SOI) starting materials. Some background information is prerequisite to a meaningful description of the improved starting materials and process. A prior SOI starting material, depicted in the upper part of the figure, includes: a) A device layer on the front side, typically between 2 and 20 µm thick, made of p-doped silicon (that is, silicon lightly doped with an electron acceptor, which is typically boron); b) A buried oxide (BOX) layer (that is, a buried layer of oxidized silicon) between 0.2 and 0.5 µm thick; and c) A silicon handle layer (also known as a handle wafer) on the back side, between about 600 and 650 µm thick. After fabrication of the imager circuitry in and on the device layer, the handle wafer is etched away, the BOX layer acting as an etch stop. In subsequent operation of the imager, light enters from the back, through the BOX layer. The advantages of back illumination over front illumination have been discussed in prior NASA Tech Briefs articles.

  2. Image analysis in comparative genomic hybridization

    SciTech Connect

    Lundsteen, C.; Maahr, J.; Christensen, B.

    1995-01-01

    Comparative genomic hybridization (CGH) is a new technique by which genomic imbalances can be detected by combining in situ suppression hybridization of whole genomic DNA and image analysis. We have developed software for rapid, quantitative CGH image analysis by a modification and extension of the standard software used for routine karyotyping of G-banded metaphase spreads in the Magiscan chromosome analysis system. The DAPI-counterstained metaphase spread is karyotyped interactively. Corrections for image shifts between the DAPI, FITC, and TRITC images are made manually by moving the three images relative to each other. The fluorescence background is subtracted. A mean filter is applied to smooth the FITC and TRITC images before the fluorescence ratio between the individual FITC- and TRITC-stained chromosomes is computed pixel by pixel inside the area of the chromosomes determined by the DAPI boundaries. Fluorescence intensity ratio profiles are generated, and peaks and valleys indicating possible gains and losses of test DNA are marked if they fall below a ratio of 0.75 or rise above 1.25. By combining the analysis of several metaphase spreads, consistent findings of gains and losses in all or almost all spreads indicate chromosomal imbalance. Chromosomal imbalances are detected either by visual inspection of fluorescence ratio (FR) profiles or by a statistical approach that compares FR measurements of the individual case with measurements of normal chromosomes. The complete analysis of one metaphase can be carried out in approximately 10 minutes. 8 refs., 7 figs., 1 tab.
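
The ratio computation and thresholding step can be sketched directly; the 0.75/1.25 thresholds come from the abstract, while the toy images and mask below are illustrative:

```python
import numpy as np

def cgh_ratio_call(fitc, tritc, mask, lo=0.75, hi=1.25):
    """Pixel-wise FITC/TRITC fluorescence ratio inside the DAPI-derived
    chromosome mask; a mean ratio above `hi` suggests gain of test DNA,
    below `lo` suggests loss."""
    ratio = np.where(mask, fitc / np.maximum(tritc, 1e-9), np.nan)
    mean = np.nanmean(ratio)
    call = "gain" if mean > hi else "loss" if mean < lo else "balanced"
    return mean, call

mask = np.ones((4, 4), dtype=bool)         # stand-in chromosome mask
tritc = np.full((4, 4), 100.0)             # reference-DNA channel
gain_mean, gain_call = cgh_ratio_call(np.full((4, 4), 150.0), tritc, mask)
bal_mean, bal_call = cgh_ratio_call(np.full((4, 4), 100.0), tritc, mask)
```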

  3. Improvement of sidestream dark field imaging with an image acquisition stabilizer.

    PubMed

    Balestra, Gianmarco M; Bezemer, Rick; Boerma, E Christiaan; Yong, Ze-Yie; Sjauw, Krishan D; Engstrom, Annemarie E; Koopmans, Matty; Ince, Can

    2010-07-13

    In the present study we developed, evaluated in volunteers, and clinically validated an image acquisition stabilizer (IAS) for Sidestream Dark Field (SDF) imaging. The IAS is a stainless steel sterilizable ring which fits around the SDF probe tip. The IAS creates adhesion to the imaged tissue by application of negative pressure. The effects of the IAS on the sublingual microcirculatory flow velocities, the force required to induce pressure artifacts (PA), the time to acquire a stable image, and the duration of stable imaging were assessed in healthy volunteers. To demonstrate the clinical applicability of the SDF setup in combination with the IAS, simultaneous bilateral sublingual imaging of the microcirculation was performed during a lung recruitment maneuver (LRM) in mechanically ventilated critically ill patients. One SDF device was operated handheld; the second was fitted with the IAS and held in position by a mechanical arm. Lateral drift, number of losses of image stability, and duration of stable imaging of the two methods were compared. Five healthy volunteers were studied. The IAS did not affect microcirculatory flow velocities. A significantly greater force had to be applied to the tissue to induce PA with the IAS than without it (0.25 +/- 0.15 N without vs. 0.62 +/- 0.05 N with the IAS, p < 0.001). The IAS ensured an increased duration of a stable image sequence (8 +/- 2 s without vs. 42 +/- 8 s with the IAS, p < 0.001). The time required to obtain a stable image sequence was similar with and without the IAS. In eight mechanically ventilated patients undergoing a LRM, the use of the IAS resulted in significantly reduced image drifting and enabled the acquisition of significantly longer stable image sequences (24 +/- 5 s without vs. 67 +/- 14 s with the IAS, p = 0.006). The present study has validated the use of an IAS for improvement of SDF imaging by demonstrating that the IAS did not affect microcirculatory perfusion in the microscopic field of view

  4. An improved corner detection algorithm for image sequence

    NASA Astrophysics Data System (ADS)

    Yan, Minqi; Zhang, Bianlian; Guo, Min; Tian, Guangyuan; Liu, Feng; Huo, Zeng

    2014-11-01

    A SUSAN corner detection algorithm for a sequence of images is proposed in this paper. A correlation matching algorithm is used for coarse positioning of the detection area; after that, SUSAN corner detection is used to obtain the interest points of the target. The SUSAN corner detector has been improved: because points in a small area are often incorrectly detected as corner points, a neighbor-direction filter is applied to reduce the error rate. Experimental results show that the algorithm enhances anti-noise performance and improves detection accuracy.

  5. Improvement of material decomposition and image quality in dual-energy radiography by reducing image noise

    NASA Astrophysics Data System (ADS)

    Lee, D.; Kim, Y.-s.; Choi, S.; Lee, H.; Choi, S.; Jo, B. D.; Jeon, P.-H.; Kim, H.; Kim, D.; Kim, H.; Kim, H.-J.

    2016-08-01

    Although digital radiography has been widely used for screening human anatomical structures in clinical situations, it has several limitations due to anatomical overlapping. To resolve this problem, dual-energy imaging techniques, which provide a method for decomposing overlying anatomical structures, have been suggested as alternative imaging techniques. Previous studies have reported several dual-energy techniques, each resulting in different image qualities. In this study, we compared three dual-energy techniques: simple log subtraction (SLS), simple smoothing of a high-energy image (SSH), and anti-correlated noise reduction (ACNR), with respect to material thickness quantification and image quality. To evaluate dual-energy radiography, we conducted Monte Carlo simulation and experimental phantom studies. The Geant4 Application for Tomographic Emission (GATE) v 6.0 and tungsten anode spectral model using interpolation polynomials (TASMIP) codes were used for the simulation studies, and digital radiography and human chest phantoms were used for the experimental studies. The results of the simulation study showed improved image contrast-to-noise ratio (CNR) and coefficient of variation (COV) values and bone thickness estimation accuracy when applying the ACNR and SSH methods. Furthermore, the chest phantom images showed better image quality with the SSH and ACNR methods compared to the SLS method. In particular, the bone texture characteristics were well described by the SSH and ACNR methods. In conclusion, the SSH and ACNR methods improved the accuracy of material quantification and image quality in dual-energy radiography compared to SLS. Our results can contribute to better diagnostic capabilities of dual-energy images and accurate material quantification in various clinical situations.
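
The SLS baseline above is a weighted difference of log images. A toy two-material sketch (the attenuation coefficients and thicknesses are hypothetical, chosen only to show the cancellation):

```python
import numpy as np

def simple_log_subtraction(low, high, w):
    """Simple log subtraction (SLS): a weighted difference of the
    log-transformed low- and high-energy images cancels the material
    whose high/low attenuation ratio equals the weight w."""
    return np.log(np.maximum(high, 1e-9)) - w * np.log(np.maximum(low, 1e-9))

# Toy Beer-Lambert model: I = exp(-(mu_soft*t_soft + mu_bone*t_bone)).
mu_soft = {"low": 0.30, "high": 0.20}   # hypothetical coefficients
mu_bone = {"low": 0.90, "high": 0.45}
t_soft = np.full((2, 2), 2.0)           # uniform soft tissue
t_bone = np.array([[0.0, 1.0], [0.0, 1.0]])  # bone in the right column

low = np.exp(-(mu_soft["low"] * t_soft + mu_bone["low"] * t_bone))
high = np.exp(-(mu_soft["high"] * t_soft + mu_bone["high"] * t_bone))

# w = mu_soft_high / mu_soft_low removes soft tissue, leaving bone signal.
bone_img = simple_log_subtraction(low, high, w=mu_soft["high"] / mu_soft["low"])
```

SSH and ACNR then add smoothing of the high-energy image and anti-correlated noise suppression on top of this decomposition.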

  6. Ultrasonic particle image velocimetry for improved flow gradient imaging: algorithms, methodology and validation.

    PubMed

    Niu, Lili; Qian, Ming; Wan, Kun; Yu, Wentao; Jin, Qiaofeng; Ling, Tao; Gao, Shen; Zheng, Hairong

    2010-04-07

    This paper presents a new algorithm for ultrasonic particle image velocimetry (Echo PIV) that improves flow velocity measurement accuracy and efficiency in regions with high velocity gradients. The conventional Echo PIV algorithm has been modified by incorporating a multiple iterative algorithm, a sub-pixel method, filtering and interpolation, and a spurious vector elimination algorithm. The new algorithm's performance is assessed by analyzing simulated images with known displacements, and ultrasonic B-mode images of in vitro laminar pipe flow, rotational flow, and in vivo rat carotid arterial flow. Results on the simulated images show that the new algorithm produces a much smaller bias from the known displacements. For laminar flow, the new algorithm results in 1.1% deviation from the analytically derived value, versus 8.8% for the conventional algorithm. The vector quality evaluation for the rotational flow imaging shows that the new algorithm produces better velocity vectors. For in vivo rat carotid arterial flow imaging, the results from the new algorithm deviate on average 6.6% from the Doppler-measured peak velocities, compared to 15% for the conventional algorithm. The new Echo PIV algorithm is able to effectively improve measurement accuracy when imaging flow fields with high velocity gradients.
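
The core of any PIV algorithm, before the paper's sub-pixel, iterative and outlier-rejection refinements, is locating the cross-correlation peak between two interrogation windows. An illustrative FFT-based sketch with a synthetic particle pattern:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement of win_b relative to
    win_a from the peak of their circular 2D cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # unwrap circular shifts into signed displacements
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)  # (dy, dx)

rng = np.random.default_rng(1)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))  # "particles" moved
```

Sub-pixel accuracy is then obtained by fitting (e.g. a Gaussian) around the correlation peak, which is one of the refinements the paper adds.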

  7. Improving space debris detection in GEO ring using image deconvolution

    NASA Astrophysics Data System (ADS)

    Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta

    2015-07-01

    In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose the iterative Richardson-Lucy (R-L) algorithm, as the method that achieves the best results with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations at 7; applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly, while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is on average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that an important economic impact can be envisaged.
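
A minimal FFT-based sketch of the Richardson-Lucy iteration on a toy star field (the Gaussian PSF and image sizes are illustrative; the paper's 7-iteration optimum is used as the default):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=7):
    """Richardson-Lucy deconvolution with circular (FFT) convolution:
    est <- est * ( (observed / (est*psf)) correlated with psf )."""
    psf = psf / psf.sum()
    otf = np.fft.fft2(np.fft.ifftshift(psf), s=observed.shape)
    est = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        conv = np.fft.ifft2(np.fft.fft2(est) * otf).real
        ratio = observed / np.maximum(conv, 1e-12)
        est *= np.fft.ifft2(np.fft.fft2(ratio) * otf.conj()).real
    return est

# Two point sources blurred by a small Gaussian PSF.
size = 32
truth = np.zeros((size, size))
truth[10, 12] = truth[20, 25] = 100.0
y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
psf = np.exp(-(x ** 2 + y ** 2) / 4.0)
otf0 = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()), s=truth.shape)
blurred = np.fft.ifft2(np.fft.fft2(truth) * otf0).real
restored = richardson_lucy(blurred, psf, iterations=7)
```

Concentrating the flux of faint sources back into fewer pixels is what raises the effective limiting magnitude.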

  8. Unsupervised texture image segmentation by improved neural network ART2

    NASA Technical Reports Server (NTRS)

    Wang, Zhiling; Labini, G. Sylos; Mugnuolo, R.; Desario, Marco

    1994-01-01

    Here we propose a segmentation algorithm for texture images, intended for a computer vision system on a space robot. An improved adaptive resonance theory network (ART2) for analog input patterns is adapted to classify the image based on a set of texture features extracted by a fast spatial gray level dependence method (SGLDM). The nonlinear thresholding functions in the input layer of the neural network have been constructed in two parts: firstly, to reduce the effect of image noise on the features, a set of sigmoid functions is chosen depending on the type of feature; secondly, to enhance the contrast of the features, we adopt fuzzy mapping functions. The number of clusters in the output layer can be increased by an auto-growing mechanism whenever a new pattern appears. Experimental results and original and segmented pictures are shown, including a comparison between this approach and the K-means algorithm. The system, written in C, runs on a SUN-4/330 SPARCstation with an IT-150 image board and a CCD camera.

  9. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging

    PubMed Central

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method’s applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method’s advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114

  10. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging.

    PubMed

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-05-07

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method's applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method's advantages in improving the accuracy of RHS reconstruction and imaging.

  11. Retinal imaging analysis based on vessel detection.

    PubMed

    Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila

    2017-03-13

    With the advancement of digital imaging and computing power, computationally intelligent technologies are in high demand in ophthalmology care and treatment. In the current research, Retina Image Analysis (RIA) is developed for optometrists at the Eye Care Center in Management and Science University. This research aims to analyze the retina through vessel detection. The RIA assists in the analysis of retinal images, and specialists are provided with various options such as saving, processing, and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation, and length, and displaying the detected vessels on the retina. The Agile Unified Process was adopted as the methodology in developing this research. To conclude, Retina Image Analysis might help the optometrist to gain a better understanding when analyzing the patient's retina. The Retina Image Analysis procedure was developed using MATLAB (R2011b). Promising results are attained that are comparable to the state of the art.

  12. MRI Image Processing Based on Fractal Analysis

    PubMed

    Marusina, Mariya Y; Mochalina, Alexandra P; Frolova, Ekaterina P; Satikov, Valentin I; Barchuk, Anton A; Kuznetcov, Vladimir I; Gaidukov, Vadim S; Tarakanov, Segrey A

    2017-01-01

    Background: Cancer is one of the most common causes of human mortality, with about 14 million new cases and 8.2 million deaths reported in 2012. Early diagnosis of cancer through screening allows interventions to reduce mortality. Fractal analysis of medical images may be useful for this purpose. Materials and Methods: In this study, we examined magnetic resonance (MR) images of healthy livers and livers containing metastases from colorectal cancer. The fractal dimension and the Hurst exponent were chosen as diagnostic features for tomographic imaging. The ImageJ software package was used for image processing, and its FracLac plugin was applied for fractal analysis over a 120x150 pixel area. Calculations of the fractal dimensions of pathological and healthy tissue samples were performed using the box-counting method. Results: In pathological cases (foci formation), the Hurst exponent was less than 0.5 (the region of unstable statistical characteristics). For healthy tissue, the Hurst exponent was greater than 0.5 (the zone of stable characteristics). Conclusions: The study indicated the possibility of employing rapid fractal analysis for the detection of focal lesions of the liver. The Hurst exponent can be used as an important diagnostic characteristic for the analysis of medical images.
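The box-counting method mentioned above can be sketched in a few lines of numpy; the image and box sizes below are illustrative, not the paper's FracLac settings:

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary image:
    count the boxes of side s that contain foreground, then fit the
    slope of log N(s) versus log(1/s)."""
    img = np.asarray(img, dtype=bool)
    counts = []
    for s in sizes:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # slope of log N versus log(1/s) is the dimension estimate
    D, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return D

# Sanity check: a filled square is 2-dimensional
solid = np.ones((64, 64), dtype=bool)
D = box_counting_dimension(solid)      # 2.0 for a solid region
```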

  13. Knowledge based imaging for terrain analysis

    NASA Technical Reports Server (NTRS)

    Holben, Rick; Westrom, George; Rossman, David; Kurrasch, Ellie

    1992-01-01

    A planetary rover will have various vision based requirements for navigation, terrain characterization, and geological sample analysis. In this paper we describe a knowledge-based controller and sensor development system for terrain analysis. The sensor system consists of a laser ranger and a CCD camera. The controller, under the input of high-level commands, performs such functions as multisensor data gathering, data quality monitoring, and automatic extraction of sample images meeting various criteria. In addition to large scale terrain analysis, the system's ability to extract useful geological information from rock samples is illustrated. Image and data compression strategies are also discussed in light of the requirements of earth bound investigators.

  14. Body image change and improved eating self-regulation in a weight management intervention in women

    PubMed Central

    2011-01-01

    Background Successful weight management involves the regulation of eating behavior. However, the specific mechanisms underlying its successful regulation remain unclear. This study examined one potential mechanism by testing a model in which improved body image mediated the effects of obesity treatment on eating self-regulation. Further, this study explored the role of different body image components. Methods Participants were 239 overweight women (age: 37.6 ± 7.1 yr; BMI: 31.5 ± 4.1 kg/m2) engaged in a 12-month behavioral weight management program, which included a body image module. Self-reported measures were used to assess evaluative and investment body image, and eating behavior. Measurements occurred at baseline and at 12 months. Baseline-residualized scores were calculated to report change in the dependent variables. The model was tested using partial least squares analysis. Results The model explained 18-44% of the variance in the dependent variables. Treatment significantly improved both body image components, particularly by decreasing its investment component (f2 = .32 vs. f2 = .22). Eating behavior was positively predicted by investment body image change (p < .001) and to a lesser extent by evaluative body image (p < .05). Treatment had significant effects on 12-month eating behavior change, which were fully mediated by investment and partially mediated by evaluative body image (effect ratios: .68 and .22, respectively). Conclusions Results suggest that improving body image, particularly by reducing its salience in one's personal life, might play a role in enhancing eating self-regulation during weight control. Accordingly, future weight loss interventions could benefit from proactively addressing body image-related issues as part of their protocols. PMID:21767360

  15. Body image change and improved eating self-regulation in a weight management intervention in women.

    PubMed

    Carraça, Eliana V; Silva, Marlene N; Markland, David; Vieira, Paulo N; Minderico, Cláudia S; Sardinha, Luís B; Teixeira, Pedro J

    2011-07-18

    Successful weight management involves the regulation of eating behavior. However, the specific mechanisms underlying its successful regulation remain unclear. This study examined one potential mechanism by testing a model in which improved body image mediated the effects of obesity treatment on eating self-regulation. Further, this study explored the role of different body image components. Participants were 239 overweight women (age: 37.6 ± 7.1 yr; BMI: 31.5 ± 4.1 kg/m²) engaged in a 12-month behavioral weight management program, which included a body image module. Self-reported measures were used to assess evaluative and investment body image, and eating behavior. Measurements occurred at baseline and at 12 months. Baseline-residualized scores were calculated to report change in the dependent variables. The model was tested using partial least squares analysis. The model explained 18-44% of the variance in the dependent variables. Treatment significantly improved both body image components, particularly by decreasing its investment component (f² = .32 vs. f² = .22). Eating behavior was positively predicted by investment body image change (p < .001) and to a lesser extent by evaluative body image (p < .05). Treatment had significant effects on 12-month eating behavior change, which were fully mediated by investment and partially mediated by evaluative body image (effect ratios: .68 and .22, respectively). Results suggest that improving body image, particularly by reducing its salience in one's personal life, might play a role in enhancing eating self-regulation during weight control. Accordingly, future weight loss interventions could benefit from proactively addressing body image-related issues as part of their protocols.

  16. Rock fracture image acquisition and analysis

    NASA Astrophysics Data System (ADS)

    Wang, W.; Zongpu, Jia; Chen, Liwan

    2007-12-01

    As a cooperation project between Sweden and China, this paper presents rock fracture image acquisition and analysis. Rock fracture images are acquired using both UV and visible optical illumination. To represent the fracture network reasonably, we set up models to characterize it; based on these models, a best-fit Feret method is used to automatically determine the fracture zone, and the fractures are then skeletonized to obtain endpoints, junctions, holes, particles, and branches. From these new parameters, together with several common ones, the fracture network density, porosity, connectivity, and complexity can be obtained, and the fracture network is thereby characterized. In the following, we first present basic considerations and parameters for fractures (a primary study of the characteristics of rock fractures), then set up a model for fracture network analysis, apply the model to analyze fracture networks in different images (two-dimensional fracture network analysis based on slices), and finally give conclusions and suggestions.
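As a hedged illustration of the skeleton-based extraction of endpoints and junctions described above (assuming a precomputed one-pixel-wide skeleton and using 4-connectivity for simplicity; fracture software often uses 8-connectivity):

```python
import numpy as np

def classify_skeleton(skel):
    """Classify pixels of a one-pixel-wide binary skeleton by the number
    of 4-connected neighbours: 1 neighbour -> endpoint, >=3 -> junction."""
    s = np.asarray(skel, dtype=bool)
    p = np.pad(s, 1).astype(int)
    # 4-neighbour count at every pixel
    nbrs = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    endpoints = s & (nbrs == 1)
    junctions = s & (nbrs >= 3)
    return endpoints, junctions

# A T-shaped skeleton: three endpoints and one junction
T = np.zeros((5, 5), dtype=bool)
T[2, :] = True       # horizontal branch
T[3:, 2] = True      # vertical stem
ends, joins = classify_skeleton(T)
```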

  17. Quantitative analysis of qualitative images

    NASA Astrophysics Data System (ADS)

    Hockney, David; Falco, Charles M.

    2005-03-01

    We show optical evidence that demonstrates artists as early as Jan van Eyck and Robert Campin (c1425) used optical projections as aids for producing their paintings. We also have found optical evidence within works by later artists, including Bermejo (c1475), Lotto (c1525), Caravaggio (c1600), de la Tour (c1650), Chardin (c1750) and Ingres (c1825), demonstrating a continuum in the use of optical projections by artists, along with an evolution in the sophistication of that use. However, even for paintings where we have been able to extract unambiguous, quantitative evidence of the direct use of optical projections for producing certain of the features, this does not mean that paintings are effectively photographs. Because the hand and mind of the artist are intimately involved in the creation process, understanding these complex images requires more than can be obtained from only applying the equations of geometrical optics.

  18. GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.

    PubMed

    Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A

    2017-03-01

    We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, build and execute an array of image analysis routines, and include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.

  19. A new ultrasonic transducer for improved contrast nonlinear imaging.

    PubMed

    Bouakaz, Ayache; Cate, Folkert ten; de Jong, Nico

    2004-08-21

    Second harmonic imaging has provided significant improvement in contrast detection over fundamental imaging. This improvement is a result of a higher contrast-to-tissue ratio (CTR) achievable at the second harmonic frequency. Nevertheless, the differentiation between contrast and tissue at the second harmonic frequency is still cumbersome in many situations, and contrast detection remains one of the main challenges, especially in the capillaries. The reduced CTR is mainly caused by the generation of second harmonic energy from nonlinear propagation effects in tissue, which obscures the echoes from contrast bubbles. In a previous study, we demonstrated theoretically that the CTR increases with the harmonic number. The purpose of our study was therefore to increase the CTR by selectively looking at the higher harmonic frequencies. In order to receive these high-frequency components (third up to the fifth harmonic), a new ultrasonic phased array transducer has been constructed. The main advantage of the new design is its wide frequency bandwidth. The new array transducer contains two different types of elements arranged in an interleaved pattern (odd and even elements). This design enables separate transmission and reception modes. The odd elements operate at 2.8 MHz with 80% bandwidth, whereas the even elements have a centre frequency of 900 kHz with a bandwidth of 50%. The probe is connected to a Vivid 5 system (GE-Vingmed), and dedicated software was developed to drive it. The total bandwidth of such a transducer is estimated to be more than 150%, which enables higher harmonic imaging at adequate sensitivity and signal-to-noise ratio compared to standard medical array transducers. We describe in this paper the design and fabrication of the array transducer. Moreover, its acoustic properties are measured and its performance for nonlinear contrast imaging is evaluated in vitro and in vivo.
The preliminary results demonstrate the advantages of
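The idea of receiving the third to fifth harmonics can be illustrated with a toy numpy sketch: a synthetic RF line with hypothetical harmonic amplitudes, from which one harmonic is isolated by masking an FFT band around it (frequencies are normalised for simplicity, not the probe's actual 900 kHz / 2.8 MHz bands):

```python
import numpy as np

n = 1024
fs = 1.0                 # normalised sampling rate
f0 = 16 / n              # fundamental, chosen to fall exactly on an FFT bin
t = np.arange(n) / fs

# Synthetic RF line: fundamental plus weaker harmonics (amplitudes hypothetical)
amps = {1: 1.0, 2: 0.30, 3: 0.10, 4: 0.05, 5: 0.02}
rf = sum(a * np.sin(2 * np.pi * k * f0 * t) for k, a in amps.items())

def harmonic_band(x, k, f0, fs):
    """Zero all FFT content except a band of width f0 centred on the
    k-th harmonic (mirrored for negative frequencies)."""
    f = np.fft.fftfreq(len(x), d=1.0 / fs)
    mask = np.abs(np.abs(f) - k * f0) < f0 / 2
    return np.fft.ifft(np.fft.fft(x) * mask).real

third = harmonic_band(rf, 3, f0, fs)   # isolated 3rd-harmonic component
```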

  20. Validation of an improved 'diffeomorphic demons' algorithm for deformable image registration in image-guided radiation therapy.

    PubMed

    Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao

    2014-01-01

    Deformable image registration (DIR) is widely used in radiation therapy, for example in automatic contour generation, dose accumulation, and tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm was proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey-value gradient similarity term and a transformation error term were added to the demons energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function so that the iteration number could be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the proposed registration method achieves higher precision and faster convergence. Because scanning conditions differ across treatment fractions, the density ranges of the treatment image and the planning image may differ; in such cases, the improved demons algorithm enables faster and more accurate registration for radiotherapy.
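The paper's improved energy function is not reproduced here; as a baseline sketch, the classic Thirion demons force that such algorithms build on can be written directly in numpy (sign conventions vary between formulations):

```python
import numpy as np

def demons_force(fixed, moving):
    """One classic Thirion demons update field (not the paper's improved
    energy with gradient-constancy/ESM terms): a displacement along the
    fixed image's gradient, scaled by the intensity difference."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)
    # small epsilon keeps the division stable where gradient and diff vanish
    denom = gx**2 + gy**2 + diff**2 + 1e-12
    ux = diff * gx / denom
    uy = diff * gy / denom
    return ux, uy

# Identical images produce a zero update field
img = np.random.default_rng(0).random((32, 32))
ux, uy = demons_force(img, img)
```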

  1. Single particle raster image analysis of diffusion.

    PubMed

    Longfils, M; Schuster, E; Lorén, N; Särkkä, A; Rudemo, M

    2017-04-01

    As a complement to the standard RICS method of analysing Raster Image Correlation Spectroscopy images by estimating the image correlation function, we introduce SPRIA, Single Particle Raster Image Analysis. Here, we start by identifying individual particles and estimate the diffusion coefficient of each particle by a maximum likelihood method. Averaging over the particles gives a diffusion coefficient estimate for the whole image. In examples with both simulated and experimental data, we show that the new method gives accurate estimates; it also directly provides standard error estimates. It should be possible to extend the method to study heterogeneous materials and systems of particles with varying diffusion coefficients, as demonstrated in a simple simulation example. A requirement for applying SPRIA is that the particle concentration is low enough that individual particles can be identified. We also describe a bootstrap method for estimating the standard error of standard RICS.
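For pure Brownian motion, a per-particle maximum likelihood estimator of the kind alluded to above has a closed form; a minimal numpy sketch with simulated data (all parameters hypothetical):

```python
import numpy as np

def mle_diffusion(track, dt):
    """Maximum-likelihood diffusion coefficient from a 2-D track
    (N x 2 positions): for Brownian motion each displacement component
    is N(0, 2*D*dt), so D_hat = sum(dx^2 + dy^2) / (4*N*dt)."""
    d = np.diff(track, axis=0)
    n = len(d)
    return (d**2).sum() / (4.0 * n * dt)

# Simulated track with a known diffusion coefficient
rng = np.random.default_rng(1)
D_true, dt, n = 0.5, 0.01, 20000
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n, 2))
track = np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])
D_hat = mle_diffusion(track, dt)   # close to D_true
```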

  2. Embedded signal approach to image texture reproduction analysis

    NASA Astrophysics Data System (ADS)

    Burns, Peter D.; Baxter, Donald

    2014-01-01

    Since image processing aimed at reducing image noise can also remove important texture, standard methods for evaluating the capture and retention of image texture are currently being developed. Concurrently, the evolution of the intelligence and performance of camera noise-reduction (NR) algorithms poses a challenge for these protocols. Many NR algorithms are `content-aware', which can lead to different levels of NR being applied to various regions within the same digital image. We review the requirements for improved texture measurement. The challenge is to evaluate image signal (texture) content without having a test signal interfere with the processing of the natural scene. We describe an approach to texture reproduction analysis that uses embedded periodic test signals within image texture regions. We describe a target that uses natural image texture combined with a multi-frequency periodic signal. This low-amplitude signal region is embedded in the texture image. Two approaches for embedding periodic test signals in image texture are described. The stacked sine-wave method uses a single combined, or stacked, region with several frequency components. The second method uses a low-amplitude version of the IEC-61146-1 sine-wave multi-burst chart, combined with image texture. A 3x3 grid of smaller regions, each with a single frequency, constitutes the test target. Both methods were evaluated using a simulated digital camera capture-path that included detector noise and optical MTF, for a range of camera exposure/ISO settings. Two types of image texture were used with the method, natural grass and a computed `dead-leaves' region composed of random circles. The embedded-signal methods were tested for accuracy with respect to image noise over a wide range of levels, and then further evaluated with adaptive noise-reduction image processing.
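A toy version of the embedded-signal idea: add a low-amplitude sine of known frequency to a texture region and measure how much of its DFT amplitude survives a stand-in noise-reduction filter (here a simple box blur; all amplitudes and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
k = 8                                           # embedded frequency, cycles/row
x = np.arange(n)

texture = rng.normal(0.0, 0.05, size=(n, n))    # stand-in for natural texture
signal_amp = 0.02                               # low-amplitude embedded sine
target = texture + signal_amp * np.sin(2 * np.pi * k * x / n)[None, :]

def embedded_amplitude(img, k):
    """Mean per-row DFT amplitude at the embedded-signal bin k."""
    X = np.fft.fft(img, axis=1)
    return (2.0 * np.abs(X[:, k]) / img.shape[1]).mean()

before = embedded_amplitude(target, k)          # roughly signal_amp
# A 1-D box blur along rows stands in for a noise-reduction filter
kernel = np.ones(5) / 5.0
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"),
                              1, target)
retention = embedded_amplitude(blurred, k) / before   # < 1: signal attenuated
```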

  3. Image guided dose escalated prostate radiotherapy: still room to improve

    PubMed Central

    Martin, Jarad M; Bayley, Andrew; Bristow, Robert; Chung, Peter; Gospodarowicz, Mary; Menard, Cynthia; Milosevic, Michael; Rosewall, Tara; Warde, Padraig R; Catton, Charles N

    2009-01-01

    Background Prostate radiotherapy (RT) dose escalation has been reported to result in improved biochemical control at the cost of greater late toxicity. We report on the application of 79.8 Gy in 42 fractions of prostate image guided RT (IGRT). The primary objective was to assess 5-year biochemical control and potential prognostic factors by the Phoenix definition. Secondary endpoints included acute and late toxicity by the Radiotherapy Oncology Group (RTOG) scoring scales. Methods Between October 2001 and June 2003, 259 men were treated, with at least 2 years of follow-up. 59 patients had low-, 163 intermediate- and 37 high-risk disease. 43 had adjuvant hormonal therapy (HT), mostly for high- or multiple-risk-factor intermediate-risk disease (n = 25). They received either 3-dimensional conformal RT (3DCRT, n = 226) or intensity modulated RT (IMRT), including daily on-line IGRT with intraprostatic fiducial markers. Results Median follow-up was 67.8 months (range 24.4-84.7). There was no severe (grade 3-4) acute toxicity, and grade 2 acute gastrointestinal (GI) toxicity was unusual (10.1%). The 5-year incidence of grade 2-3 late GI and genitourinary (GU) toxicity was 13.7% and 12.1%, with corresponding grade 3 figures of 3.5% and 2.0% respectively. HT was associated with an increased risk of grade 2-3 late GI toxicity (11% v 21%, p = 0.018). Using the Phoenix definition for biochemical failure, the 5-year bNED is 88.4%, 76.5% and 77.9% for low, intermediate and high risk patients respectively. On univariate analysis, T-category and Gleason grade correlated with Phoenix bNED (p = 0.006 and 0.039 respectively). Hormonal therapy was not a significant prognostic factor on uni- or multi-variate analysis. Men with positive prostate biopsies following RT had a lower chance of bNED at 5 years (34.4% v 64.3%; p = 0.147). Conclusion IGRT to 79.8 Gy results in favourable rates of late toxicity compared with published non-IGRT treated cohorts. Future avenues of investigation for

  4. Particle Pollution Estimation Based on Image Analysis.

    PubMed

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction.
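The paper's six image features are not specified here; a hedged sketch of the general approach, regressing a PM index on per-image features by least squares, with synthetic data and hypothetical feature values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-image features (the paper's six features are not
# reproduced here): e.g. luminance, contrast, saturation statistics.
n_images, n_features = 200, 6
X = rng.normal(size=(n_images, n_features))
true_w = np.array([8.0, -3.0, 5.0, 0.5, -1.0, 2.0])   # synthetic ground truth
pm25 = X @ true_w + 50.0 + rng.normal(0.0, 1.0, n_images)

# Least-squares fit with an intercept column
A = np.hstack([X, np.ones((n_images, 1))])
w, *_ = np.linalg.lstsq(A, pm25, rcond=None)
pred = A @ w
```

In the paper, additional covariates (sun position, date, time, geography, weather) are appended to the feature vector in the same way.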

  5. Image Processing for Galaxy Ellipticity Analysis

    NASA Astrophysics Data System (ADS)

    Stankus, Paul

    2015-04-01

    Shape analysis of statistically large samples of galaxy images can be used to reveal the imprint of weak gravitational lensing by dark matter distributions. As new, large-scale surveys expand the potential catalog, galaxy shape analysis suffers the (coupled) problems of high noise and uncertainty in the prior morphology. We investigate a new image processing technique to help mitigate these problems, in which repeated auto-correlations and auto-convolutions are employed to push the true shape toward a universal (Gaussian) attractor while relatively suppressing uncorrelated pixel noise. The goal is reliable reconstruction of original image moments, independent of image shape. First test evaluations of the technique on small control samples will be presented, and future applicability discussed. Supported by the US-DOE.
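The Gaussian-attractor behaviour of repeated auto-convolution follows from the central limit theorem; a small numpy demonstration on a 1-D top-hat profile (excess kurtosis, which is zero for a Gaussian, shrinks in proportion to the number of convolved copies):

```python
import numpy as np

def excess_kurtosis(pmf, x):
    """Excess kurtosis (zero for a Gaussian) of a discrete profile on grid x."""
    pmf = pmf / pmf.sum()
    mu = (x * pmf).sum()
    var = ((x - mu) ** 2 * pmf).sum()
    m4 = ((x - mu) ** 4 * pmf).sum()
    return m4 / var**2 - 3.0

# Start from a flat (top-hat) profile: strongly non-Gaussian
p = np.ones(11)
k0 = excess_kurtosis(p, np.arange(len(p)))   # -1.22 for this grid

# Each self-convolution adds independent copies, pushing the profile
# towards the Gaussian attractor
q = np.convolve(p, p)                        # sum of 2 copies
q = np.convolve(q, q)                        # sum of 4 copies
k2 = excess_kurtosis(q, np.arange(len(q)))   # k0 / 4: much closer to Gaussian
```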

  6. Particle Pollution Estimation Based on Image Analysis

    PubMed Central

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction. PMID:26828757

  7. Functional data analysis in brain imaging studies.

    PubMed

    Tian, Tian Siva

    2010-01-01

    Functional data analysis (FDA) considers the continuity of the curves or functions, and is a topic of increasing interest in the statistics community. FDA is commonly applied to time-series and spatial-series studies. The development of functional brain imaging techniques in recent years made it possible to study the relationship between brain and mind over time. Consequently, an enormous amount of functional data is collected and needs to be analyzed. Functional techniques designed for these data are in strong demand. This paper discusses three statistically challenging problems utilizing FDA techniques in functional brain imaging analysis. These problems are dimension reduction (or feature extraction), spatial classification in functional magnetic resonance imaging studies, and the inverse problem in magneto-encephalography studies. The application of FDA to these issues is relatively new but has been shown to be considerably effective. Future efforts can further explore the potential of FDA in functional brain imaging studies.

  8. Image Restoration Based on Statistic PSF Modeling for Improving the Astrometry of Space Debris

    NASA Astrophysics Data System (ADS)

    Sun, Rongyu; Jia, Peng

    2017-04-01

    Space debris constitutes a special class of fast-moving near-Earth objects and is an interesting topic in time-domain astronomy. Optical survey is the main technique for observing space debris, which contributes much to the studies of the space environment. However, due to the motion of registered objects, image degradation is critical in optical space debris observations, as it affects the efficiency of data reduction and lowers the precision of astrometry. Therefore, image restoration in the form of deconvolution can be applied to improve the data quality and reduction accuracy. To promote the image processing and optimize the reduction, the image degradation across the field of view is modeled statistically with principal component analysis, and the effective mean point-spread function (PSF) is derived from raw images, which is further used in the image restoration. To test the efficiency and reliability, trial observations were made for both low-Earth orbital and high-Earth orbital objects. The positions of all targets were measured using our novel approach and compared with the reference positions. The performance of image restoration employing our estimated PSF was compared with several competitive approaches. The proposed image restoration outperformed the others, so that the influence of image degradation was distinctly reduced, which resulted in a higher signal-to-noise ratio and more precise astrometric measurements.
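The paper's statistical PSF model is not reproduced here; a minimal sketch of the underlying idea, recovering a common PSF shape from noisy star stamps via PCA (SVD), with synthetic Gaussian stamps:

```python
import numpy as np

rng = np.random.default_rng(4)

# "True" PSF: a 15x15 Gaussian spot (stand-in for the field-averaged PSF)
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()

# Noisy star stamps sharing the same PSF with varying brightness
stamps = np.array([f * psf + rng.normal(0.0, 1e-4, psf.shape)
                   for f in rng.uniform(0.5, 2.0, 50)])

# PCA via SVD on the vectorised stamps: the leading component captures
# the common PSF shape while minor components absorb noise
U, S, Vt = np.linalg.svd(stamps.reshape(50, -1), full_matrices=False)
psf_est = Vt[0].reshape(psf.shape)
psf_est *= np.sign(psf_est.sum())   # fix the sign ambiguity of SVD
psf_est /= psf_est.sum()            # renormalise like a PSF
```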

  9. Missile placement analysis based on improved SURF feature matching algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Kaida; Zhao, Wenjie; Li, Dejun; Gong, Xiran; Sheng, Qian

    2015-03-01

    Precise battle damage assessment from video images used to analyze missile placement is a new study area. This article proposes an improved speeded-up robust features algorithm, named restricted speeded-up robust features (RSURF), which combines the combat application of TV-command-guided missiles with the characteristics of video imagery. Its restrictions are reflected in two aspects: first, restricting the extraction area of feature points; second, restricting the number of feature points. The process of missile placement analysis based on video images was designed, and a video splicing process and random sample consensus purification were implemented. The RSURF algorithm is shown to achieve good real-time performance while guaranteeing accuracy.
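The random sample consensus purification step can be illustrated with a minimal RANSAC sketch for a pure-translation match model (thresholds, iteration counts, and data are synthetic and hypothetical; the paper's matches come from RSURF features):

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, thresh=1.0, seed=0):
    """Minimal RANSAC for a translation between matched point sets:
    hypothesise a translation from one random match, keep the hypothesis
    with the most inliers, then refit on the inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # one-point hypothesis
        resid = np.linalg.norm(dst - (src + t), axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

# Synthetic matches: a known shift, small noise, and 25% bad matches
rng = np.random.default_rng(5)
src = rng.uniform(0, 100, (60, 2))
dst = src + np.array([12.0, -7.0]) + rng.normal(0, 0.2, (60, 2))
dst[:15] = rng.uniform(0, 100, (15, 2))            # outlier matches
t, inliers = ransac_translation(src, dst)
```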

  10. Prospective Evaluation of Multimodal Optical Imaging with Automated Image Analysis to Detect Oral Neoplasia In Vivo.

    PubMed

    Quang, Timothy; Tran, Emily Q; Schwarz, Richard A; Williams, Michelle D; Vigneswaran, Nadarajah; Gillenwater, Ann M; Richards-Kortum, Rebecca

    2017-10-01

    The 5-year survival rate for patients with oral cancer remains low, in part because diagnosis often occurs at a late stage. Early and accurate identification of oral high-grade dysplasia and cancer can help improve patient outcomes. Multimodal optical imaging is an adjunctive diagnostic technique in which autofluorescence imaging is used to identify high-risk regions within the oral cavity, followed by high-resolution microendoscopy to confirm or rule out the presence of neoplasia. Multimodal optical images were obtained from 206 sites in 100 patients. Histologic diagnosis, either from a punch biopsy or an excised surgical specimen, was used as the gold standard for all sites. Histopathologic diagnoses of moderate dysplasia or worse were considered neoplastic. Images from 92 sites in the first 30 patients were used as a training set to develop automated image analysis methods for identification of neoplasia. Diagnostic performance was evaluated prospectively using images from 114 sites in the remaining 70 patients as a test set. In the training set, multimodal optical imaging with automated image analysis correctly classified 95% of nonneoplastic sites and 94% of neoplastic sites. Among the 56 sites in the test set that were biopsied, multimodal optical imaging correctly classified 100% of nonneoplastic sites and 85% of neoplastic sites. Among the 58 sites in the test set that corresponded to a surgical specimen, multimodal imaging correctly classified 100% of nonneoplastic sites and 61% of neoplastic sites. These findings support the potential of multimodal optical imaging to aid in the early detection of oral cancer. Cancer Prev Res; 10(10); 563-70. ©2017 AACR.

  11. Integral-geometry morphological image analysis

    NASA Astrophysics Data System (ADS)

    Michielsen, K.; De Raedt, H.

    2001-07-01

    This paper reviews a general method to characterize the morphology of two- and three-dimensional patterns in terms of geometrical and topological descriptors. Based on concepts of integral geometry, it involves the calculation of the Minkowski functionals of black-and-white images representing the patterns. The result of this approach is an objective, numerical characterization of a given pattern. We briefly review the basic elements of morphological image processing, a technique to transform images to patterns that are amenable to further morphological image analysis. The image processing technique is applied to electron microscope images of nano-ceramic particles and metal-oxide precipitates. The emphasis of this review is on the practical aspects of the integral-geometry-based morphological image analysis but we discuss its mathematical foundations as well. Applications to simple lattice structures, triply periodic minimal surfaces, and the Klein bottle serve to illustrate the basic steps of the approach. More advanced applications include random point sets, percolation and complex structures found in block copolymers.
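For a binary image whose foreground pixels are treated as closed unit squares, the three 2-D Minkowski functionals (area, perimeter, Euler characteristic) can be computed by counting faces, edges, and vertices of the pixel complex; a numpy-only sketch of this standard construction (not the review's own implementation):

```python
import numpy as np

def minkowski_2d(img):
    """Area, perimeter and Euler characteristic of a binary image,
    treating each foreground pixel as a closed unit square."""
    b = np.asarray(img, dtype=bool)
    F = int(b.sum())                                  # faces = area
    # pairs of 4-adjacent foreground pixels (each pair shares one edge)
    A4 = int((b[:-1, :] & b[1:, :]).sum() + (b[:, :-1] & b[:, 1:]).sum())
    E = 4 * F - A4                                    # distinct edges
    perimeter = 4 * F - 2 * A4                        # edges on the boundary
    # distinct vertices: lattice points touched by at least one pixel
    p = np.pad(b, 1)
    V = int((p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:]).sum())
    return F, perimeter, V - E + F                    # chi = V - E + F

# A 3x3 ring: area 8, perimeter 16 (outer 12 + inner 4), Euler number 0
ring = np.ones((3, 3), dtype=bool)
ring[1, 1] = False
```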

  12. Data analysis for GOPEX image frames

    NASA Technical Reports Server (NTRS)

    Levine, B. M.; Shaik, K. S.; Yan, T.-Y.

    1993-01-01

    The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. Laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and better agreement would have been found, given more received laser pulses. Future experiments should consider methods to reliably measure low-intensity pulses, and through experimental planning to geometrically locate pulse positions with greater certainty.

  13. Quantitative imaging biomarkers: the application of advanced image processing and analysis to clinical and preclinical decision making.

    PubMed

    Prescott, Jeffrey William

    2013-02-01

    The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.

  14. Chromatic Image Analysis For Quantitative Thermal Mapping

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1995-01-01

    Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
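
    The two-wavelength ratio principle can be sketched as follows. The calibration table here is invented for illustration; a real CIAS would be calibrated against phosphor emission at known temperatures.

```python
# Sketch of two-colour phosphor thermometry: the per-pixel ratio of the
# brightnesses at two wavelengths is mapped to temperature through a
# monotonic calibration curve (values below are assumed, not measured).

from bisect import bisect_left

# (intensity ratio I_w1 / I_w2, temperature in kelvin)
CALIBRATION = [(0.5, 300.0), (1.0, 400.0), (1.5, 500.0), (2.0, 600.0)]

def temperature_from_ratio(ratio):
    ratios = [r for r, _ in CALIBRATION]
    temps = [t for _, t in CALIBRATION]
    i = bisect_left(ratios, ratio)
    if i == 0:
        return temps[0]
    if i == len(ratios):
        return temps[-1]
    # Linear interpolation between the bracketing calibration points.
    r0, r1 = ratios[i - 1], ratios[i]
    t0, t1 = temps[i - 1], temps[i]
    return t0 + (t1 - t0) * (ratio - r0) / (r1 - r0)

def temperature_map(img_w1, img_w2):
    # Per-pixel relative brightness at the two wavelengths -> temperature.
    return [[temperature_from_ratio(a / b) for a, b in zip(row1, row2)]
            for row1, row2 in zip(img_w1, img_w2)]

print(temperature_from_ratio(1.25))  # 450.0
```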

  15. Systematic infrared image quality improvement using deep learning based techniques

    NASA Astrophysics Data System (ADS)

    Zhang, Huaizhong; Casaseca-de-la-Higuera, Pablo; Luo, Chunbo; Wang, Qi; Kitchin, Matthew; Parmley, Andrew; Monge-Alvarez, Jesus

    2016-10-01

    Infrared thermography (IRT, or thermal video) uses thermographic cameras to detect and record radiation in the long-wavelength infrared range of the electromagnetic spectrum. It allows sensing environments beyond the limitations of visual perception, and thus has been widely used in many civilian and military applications. Even though current thermal cameras are able to provide high resolution and bit-depth images, there are significant challenges to be addressed in specific applications, such as poor contrast and low target signature resolution. This paper addresses quality improvement in IRT images for object recognition. A systematic approach based on image bias correction and deep learning is proposed to increase target signature resolution and optimise the baseline quality of inputs for object recognition. Our main objective is to maximise the useful information on the object to be detected even when the number of pixels on target is very small. The experimental results show that our approach can significantly improve target resolution and thus helps make object recognition more efficient in automatic target detection/recognition systems (ATD/R).
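
    The abstract does not detail its bias-correction stage; a standard two-point non-uniformity correction, sketched below under that assumption, is one common way to normalise raw IR frames before they reach a recognition stage.

```python
# Hedged sketch of a two-point non-uniformity correction (NUC), a common
# bias-correction scheme in IR imaging (not necessarily the paper's method):
# per-pixel gain and offset are derived from dark and bright reference frames.

def two_point_nuc(raw, dark, bright, target_dark=0.0, target_bright=1.0):
    """Map each pixel so dark-frame values -> target_dark, bright -> target_bright."""
    out = []
    for r_row, d_row, b_row in zip(raw, dark, bright):
        row = []
        for r, d, b in zip(r_row, d_row, b_row):
            gain = (target_bright - target_dark) / (b - d)
            row.append(target_dark + gain * (r - d))
        out.append(row)
    return out

# A pixel halfway between its dark and bright references maps to 0.5.
print(two_point_nuc([[2.0]], [[1.0]], [[3.0]]))  # [[0.5]]
```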

  16. EPA guidance on improving the image of psychiatry.

    PubMed

    Möller-Leimkühler, A M; Möller, H-J; Maier, W; Gaebel, W; Falkai, P

    2016-03-01

    This paper explores causes, explanations and consequences of the negative image of psychiatry and develops recommendations for improvement. It is primarily based on a WPA guidance paper on how to combat the stigmatization of psychiatry and psychiatrists and a Medline search on related publications since 2010. Furthermore, focussing on potential causes and explanations, the authors performed a selective literature search regarding additional image-related issues such as mental health literacy and diagnostic and treatment issues. Underestimation of psychiatry results from both unjustified prejudices of the general public, mass media and healthcare professionals and psychiatry's own unfavourable coping with external and internal concerns. Issues related to unjustified devaluation of psychiatry include overestimation of coercion, associative stigma, lack of public knowledge, need to simplify complex mental issues, problem of the continuum between normality and psychopathology, competition with medical and non-medical disciplines and psychopharmacological treatment. Issues related to psychiatry's own contribution to being underestimated include lack of a clear professional identity, lack of biomarkers supporting clinical diagnoses, limited consensus about best treatment options, lack of collaboration with other medical disciplines and low recruitment rates among medical students. Recommendations are proposed for creating and representing a positive self-concept with different components. The negative image of psychiatry is not only due to unfavourable communication with the media, but is basically a problem of self-conceptualization. Much can be improved. However, psychiatry will remain a profession with an exceptional position among the medical disciplines, which should be seen as its specific strength.

  17. Advanced automated char image analysis techniques

    SciTech Connect

    Tao Wu; Edward Lester; Michael Cloke

    2006-05-15

    Char morphology is an important characteristic when attempting to understand coal behavior and coal burnout. In this study, an augmented algorithm has been proposed to identify char types using image analysis. On the basis of a series of image processing steps, a char image is singled out from the whole image, which then allows the important major features of the char particle to be measured, including size, porosity, and wall thickness. The techniques for automated char image analysis have been tested against char images taken from the ICCP Char Atlas as well as actual char particles derived from pyrolyzed char samples. Thirty different chars were prepared in a drop tube furnace operating at 1300°C, 1% oxygen, and 100 ms from 15 different world coals sieved into two size fractions (53-75 and 106-125 μm). The results from this automated technique are comparable with those from manual analysis, and the additional detail from the automated system has potential use in applications such as combustion modeling systems. Obtaining highly detailed char information with automated methods has traditionally been hampered by the difficulty of automatic recognition of individual char particles. 20 refs., 10 figs., 3 tabs.

  18. Computer assisted analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Sawicki, M.; Munhutu, P.; DaPonte, J.; Caragianis-Broadbridge, C.; Lehman, A.; Sadowski, T.; Garcia, E.; Heyden, C.; Mirabelle, L.; Benjamin, P.

    2009-01-01

    The use of Transmission Electron Microscopy (TEM) to characterize the microstructure of a material continues to grow in importance as technological advancements become increasingly dependent on nanotechnology [1]. Since nanoparticle properties such as size (diameter) and size distribution are often important in determining potential applications, a particle analysis is often performed on TEM images. Traditionally done manually, this has the potential to be labor intensive, time consuming, and subjective [2]. To resolve these issues, automated particle analysis routines are becoming more widely accepted within the community [3]. When using such programs, it is important to compare their performance, in terms of functionality and cost. The primary goal of this study was to apply one such software package, ImageJ, to grayscale TEM images of nanoparticles with known size. A secondary goal was to compare this popular open-source general purpose image processing program to two commercial software packages. After a brief investigation of performance and price, ImageJ was identified as the software best suited for the particle analysis conducted in the study. While many ImageJ functions were used, the ability to break agglomerations that occur in specimen preparation into separate particles using a watershed algorithm was particularly helpful [4].
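
    As a rough stand-in for the ImageJ particle-analysis workflow described above (without the watershed splitting of touching particles), connected-component labelling of a thresholded image yields per-particle areas:

```python
# Label 4-connected foreground regions in a binary (thresholded) image and
# report the area of each particle, scanning in row-major order.

from collections import deque

def particle_areas(binary):
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # Flood-fill one 4-connected particle.
                area, q = 0, deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return areas

img = [[1, 1, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 1]]
print(particle_areas(img))  # [3, 3]
```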

  19. VAICo: visual analysis for image comparison.

    PubMed

    Schmidt, Johanna; Gröller, M Eduard; Bruckner, Stefan

    2013-12-01

    Scientists, engineers, and analysts are confronted with ever larger and more complex sets of data, whose analysis poses special challenges. In many situations it is necessary to compare two or more datasets. Hence there is a need for comparative visualization tools to help analyze differences or similarities among datasets. In this paper an approach for comparative visualization for sets of images is presented. Well-established techniques for comparing images frequently place them side-by-side. A major drawback of such approaches is that they do not scale well. Other image comparison methods encode differences in images by abstract parameters like color. In this case information about the underlying image data gets lost. This paper introduces a new method for visualizing differences and similarities in large sets of images which preserves contextual information, but also allows the detailed analysis of subtle variations. Our approach identifies local changes and applies cluster analysis techniques to embed them in a hierarchy. The results of this process are then presented in an interactive web application which allows users to rapidly explore the space of differences and drill-down on particular features. We demonstrate the flexibility of our approach by applying it to multiple distinct domains.

  20. An improved hyperspectral image classification approach based on ISODATA and SKR method

    NASA Astrophysics Data System (ADS)

    Hong, Pu; Ye, Xiao-feng; Yu, Hui; Zhang, Zhi-jie; Cai, Yu-fei; Tang, Xin; Tang, Wei; Wang, Chensheng

    2016-11-01

    Hyper-spectral images can not only provide spatial information but also a wealth of spectral information. A short list of applications includes environmental mapping, global change research, geological research, wetlands mapping, assessment of trafficability, plant and mineral identification and abundance estimation, crop analysis, and bathymetry. A crucial aspect of hyperspectral image analysis is the identification of materials present in an object or scene being imaged. Classification of a hyperspectral image sequence amounts to identifying which pixels contain various spectrally distinct materials that have been specified by the user. Several techniques for classification of hyperspectral pixels have been used, from minimum distance and maximum likelihood classifiers to correlation matched filter-based approaches such as spectral signature matching and the spectral angle mapper. In this paper, an improved hyperspectral image classification algorithm is proposed. In the proposed method, an improved similarity measurement is applied, in which both spectrum similarity and spatial similarity are considered. We use two different weighting matrices to estimate the spectrum similarity and spatial similarity between two pixels, respectively; whether the two pixels represent the same material can then be determined. To reduce the computational cost, the wavelet transform is applied before extracting the spectral and spatial features. The proposed method is tested using hyperspectral imagery collected by the National Aeronautics and Space Administration Jet Propulsion Laboratory. Experimental results demonstrate the efficiency of this new method on hyperspectral images associated with space object material identification.
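
    The abstract does not give the exact weighting scheme; the sketch below assumes one plausible form, combining a spectral-angle term with a Gaussian spatial term, with the weights w_spec and w_spat and scale sigma as free parameters invented for illustration.

```python
# Combined spectral + spatial similarity between two hyperspectral pixels:
# spectral closeness from the spectral angle, spatial closeness from a
# Gaussian kernel over pixel coordinates (an assumed, illustrative form).

import math

def spectral_angle(s1, s2):
    dot = sum(a * b for a, b in zip(s1, s2))
    n1 = math.sqrt(sum(a * a for a in s1))
    n2 = math.sqrt(sum(b * b for b in s2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def combined_similarity(s1, s2, p1, p2, w_spec=0.7, w_spat=0.3, sigma=2.0):
    spec = math.exp(-spectral_angle(s1, s2))        # spectral closeness in (0, 1]
    dist = math.dist(p1, p2)
    spat = math.exp(-(dist / sigma) ** 2)           # spatial closeness in (0, 1]
    return w_spec * spec + w_spat * spat

# Parallel spectra at the same location score the maximum, 1.0.
same = combined_similarity([1, 2, 3], [2, 4, 6], (0, 0), (0, 0))
print(round(same, 3))  # 1.0
```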

  1. Image quality improvement in MDCT cardiac imaging via SMART-RECON method

    NASA Astrophysics Data System (ADS)

    Li, Yinsheng; Cao, Ximiao; Xing, Zhanfeng; Sun, Xuguang; Hsieh, Jiang; Chen, Guang-Hong

    2017-03-01

    Coronary CT angiography (CCTA) is a challenging imaging task currently limited by the achievable temporal resolution of modern Multi-Detector CT (MDCT) scanners. In this paper, the recently proposed SMART-RECON method has been applied in MDCT-based CCTA imaging to improve image quality without any prior knowledge of cardiac motion. After the prospective ECG-gated data acquisition from a short-scan angular span, the acquired data were sorted into several sub-sectors of view angles, each corresponding to one quarter of the short-scan angular range. Information about the cardiac motion was thus encoded into the data in each view angle sub-sector. The SMART-RECON algorithm was then applied to jointly reconstruct several image volumes, each of which is temporally consistent with the data acquired in the corresponding view angle sub-sector. Extensive numerical simulations were performed to validate the proposed technique and investigate its performance dependence.

  2. On Two-Dimensional ARMA Models for Image Analysis.

    DTIC Science & Technology

    1980-03-24

    This report investigates two-dimensional (2-D) ARMA models for image analysis. Particular emphasis is placed on restoration of noisy images using 2-D ARMA models. Computer results are presented, and it is concluded that the models are very effective linear models for image analysis. (Author)

  3. Improving image classification in a complex wetland ecosystem through image fusion techniques

    NASA Astrophysics Data System (ADS)

    Kumar, Lalit; Sinha, Priyakant; Taylor, Subhashni

    2014-01-01

    The aim of this study was to evaluate the impact of image fusion techniques on vegetation classification accuracies in a complex wetland system. Fusion of panchromatic (PAN) and multispectral (MS) Quickbird satellite imagery was undertaken using four image fusion techniques: Brovey, hue-saturation-value (HSV), principal components (PC), and Gram-Schmidt (GS) spectral sharpening. These four fusion techniques were compared in terms of their mapping accuracy to a normal MS image using maximum-likelihood classification (MLC) and support vector machine (SVM) methods. The Gram-Schmidt fusion technique yielded the highest overall accuracy and kappa value with both MLC (67.5% and 0.63, respectively) and SVM methods (73.3% and 0.68, respectively). This compared favorably with the accuracies achieved using the MS image. Overall, improvements of 4.1%, 3.6%, 5.8%, 5.4%, and 7.2% in overall accuracies were obtained in the case of SVM over MLC for the Brovey, HSV, GS, PC, and MS images, respectively. Visual and statistical analyses of the fused images showed that the Gram-Schmidt spectral sharpening technique preserved spectral quality much better than the principal component, Brovey, and HSV fused images. Other factors, such as the growth stage of species and the presence of extensive background water in many parts of the study area, had an impact on classification accuracies.
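
    Of the four techniques compared, the Brovey transform has the simplest per-pixel form: each MS band is scaled by the ratio of the PAN value to the sum of the MS bands. A minimal sketch (band ordering and radiometric scaling assumed):

```python
# Brovey transform for one pixel: fused_i = MS_i * PAN / sum(MS bands).
# The PAN intensity redistributes spatial detail while preserving the
# relative proportions (colour) of the multispectral bands.

def brovey_pixel(ms, pan):
    total = sum(ms)
    return [b * pan / total for b in ms]

print(brovey_pixel([10, 20, 30], 120))  # [20.0, 40.0, 60.0]
```

    Note that the band ratios are unchanged (1:2:3 before and after), which is why Brovey sharpening preserves hue but can distort absolute radiometry.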

  4. Visible and infrared reflectance imaging spectroscopy of paintings: pigment mapping and improved infrared reflectography

    NASA Astrophysics Data System (ADS)

    Delaney, John K.; Zeibel, Jason G.; Thoury, Mathieu; Littleton, Roy; Morales, Kathryn M.; Palmer, Michael; de la Rie, E. René

    2009-07-01

    Reflectance imaging spectroscopy, the collection of images in narrow spectral bands, has been developed for remote sensing of the Earth. In this paper we present findings on the use of imaging spectroscopy to identify and map artist pigments as well as to improve the visualization of preparatory sketches. Two novel hyperspectral cameras, one operating from the visible to near-infrared (VNIR) and the other in the shortwave infrared (SWIR), have been used to collect diffuse reflectance spectral image cubes on a variety of paintings. The resulting image cubes (VNIR 417 to 973 nm, 240 bands, and SWIR 970 to 1650 nm, 85 bands) were calibrated to reflectance and the resulting spectra compared with results from a fiber optics reflectance spectrometer (350 to 2500 nm). The results show good agreement between the spectra acquired with the hyperspectral cameras and those from the fiber reflectance spectrometer. For example, the primary blue pigments and their distribution in Picasso's Harlequin Musician (1924) are identified from the reflectance spectra and agree with results from X-ray fluorescence data and dispersed sample analysis. False color infrared reflectograms, obtained from the SWIR hyperspectral images, of extensively reworked paintings such as Picasso's The Tragedy (1903) are found to give improved visualization of changes made by the artist. These results show that including the NIR and SWIR spectral regions along with the visible provides for a more robust identification and mapping of artist pigments than using visible imaging spectroscopy alone.

  5. Selecting an image analysis minicomputer system

    NASA Technical Reports Server (NTRS)

    Danielson, R.

    1981-01-01

    Factors to be weighed when selecting a minicomputer system as the basis for an image analysis computer facility vary depending on whether the user organization procures a new computer or selects an existing facility to serve as an image analysis host. Some conditions not directly related to hardware or software should be considered such as the flexibility of the computer center staff, their encouragement of innovation, and the availability of the host processor to a broad spectrum of potential user organizations. Particular attention must be given to: image analysis software capability; the facilities of a potential host installation; the central processing unit; the operating system and languages; main memory; disk storage; tape drives; hardcopy output; and other peripherals. The operational environment, accessibility; resource limitations; and operational supports are important. Charges made for program execution and data storage must also be examined.

  6. Improving reliability of live/dead cell counting through automated image mosaicing.

    PubMed

    Piccinini, Filippo; Tesei, Anna; Paganelli, Giulia; Zoli, Wainer; Bevilacqua, Alessandro

    2014-12-01

    Cell counting is one of the basic needs of most biological experiments. Numerous methods and systems have been studied to improve the reliability of counting. However, at present, manual cell counting performed with a hemocytometer still represents the gold standard, despite several problems limiting reproducibility and repeatability of the counts and, in the end, jeopardizing their reliability in general. We present our own approach based on image processing techniques to improve counting reliability. It works in two stages: first building a high-resolution image of the hemocytometer's grid, then counting the live and dead cells by tagging the image with flags of different colours. In particular, we introduce GridMos (http://sourceforge.net/p/gridmos), a fully-automated mosaicing method to obtain a mosaic representing the whole hemocytometer's grid. In addition to offering more significant statistics, the mosaic "freezes" the culture status, thus permitting analysis by more than one operator. Finally, the mosaic achieved can be tagged using an image editor, markedly improving counting reliability. The experiments performed confirm the improvements brought about by the proposed counting approach in terms of both reproducibility and repeatability, also suggesting the use of a mosaic of an entire hemocytometer's grid, then labelled through an image editor, as the best candidate for the new gold standard method in cell counting. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. Quantitative sonographic image analysis for hepatic nodules: a pilot study.

    PubMed

    Matsumoto, Naoki; Ogawa, Masahiro; Takayasu, Kentaro; Hirayama, Midori; Miura, Takao; Shiozawa, Katsuhiko; Abe, Masahisa; Nakagawara, Hiroshi; Moriyama, Mitsuhiko; Udagawa, Seiichi

    2015-10-01

    The aim of this study was to investigate the feasibility of quantitative image analysis to differentiate hepatic nodules on gray-scale sonographic images. We retrospectively evaluated 35 nodules from 31 patients with hepatocellular carcinoma (HCC), 60 nodules from 58 patients with liver hemangioma, and 22 nodules from 22 patients with liver metastasis. Gray-scale sonographic images were evaluated with subjective judgment and image analysis using ImageJ software. Reviewers classified the shape of nodules as irregular or round, and the surface of nodules as rough or smooth. Circularity values were lower in the irregular group than in the round group (median 0.823, 0.892; range 0.641-0.915, 0.784-0.932, respectively; P = 3.21 × 10^-10). Solidity values were lower in the rough group than in the smooth group (median 0.957, 0.968; range 0.894-0.986, 0.933-0.988, respectively; P = 1.53 × 10^-4). The HCC group had higher circularity and solidity values than the hemangioma group. The HCC and liver metastasis groups had lower median, mean, modal, and minimum gray values than the hemangioma group. Multivariate analysis showed circularity [standardized odds ratio (OR), 2.077; 95 % confidence interval (CI) = 1.295-3.331; P = 0.002] and minimum gray value (OR 0.482; 95 % CI = 0.956-0.990; P = 0.001) as factors predictive of malignancy. The combination of subjective judgment and image analysis provided 58.3 % sensitivity and 89.5 % specificity with AUC = 0.739, representing an improvement over subjective judgment alone (68.4 % sensitivity, 75.0 % specificity, AUC = 0.701) (P = 0.008). Quantitative image analysis for ultrasonic images of hepatic nodules may correlate with subjective judgment in predicting malignancy.
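
    The two shape descriptors used above follow the standard ImageJ definitions (circularity = 4πA/P², solidity = area divided by convex-hull area). A self-contained sketch on polygon outlines:

```python
# Circularity and solidity of a closed polygon outline, ImageJ-style:
# circularity = 4*pi*area / perimeter^2, solidity = area / convex-hull area.

import math

def shoelace_area(pts):
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def perimeter(pts):
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def convex_hull(pts):
    # Andrew's monotone chain.
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        h = []
        for p in points:
            while len(h) >= 2 and \
                  (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) - \
                  (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0]) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def circularity(pts):
    return 4 * math.pi * shoelace_area(pts) / perimeter(pts) ** 2

def solidity(pts):
    return shoelace_area(pts) / shoelace_area(convex_hull(pts))

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
ell = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]  # concave L-shape
print(round(circularity(square), 3))  # 0.785 (= pi/4 for any square)
print(round(solidity(ell), 3))        # 0.857 (concavity lowers solidity)
```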

  8. Percutaneous Thermal Ablation with Ultrasound Guidance. Fusion Imaging Guidance to Improve Conspicuity of Liver Metastasis.

    PubMed

    Hakime, Antoine; Yevich, Steven; Tselikas, Lambros; Deschamps, Frederic; Petrover, David; De Baere, Thierry

    2017-05-01

    To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve visibility and targeting of liver metastases that were deemed inconspicuous on ultrasound (US). MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. The conspicuity before and after the fusion imaging was graded on a five-point scale, and significance was assessed by Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5-14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0-2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1-5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US. Fusion imaging is an effective tool to increase the conspicuity of liver metastases that were initially deemed not visualizable on conventional US imaging.

  9. Probabilistic region growing method for improving magnetic resonance image segmentation

    NASA Astrophysics Data System (ADS)

    Zanaty, E. A.; Asaad, Ayman

    2013-12-01

    In this paper, we propose a new region growing algorithm called 'probabilistic region growing' (PRG) which can improve magnetic resonance image (MRI) segmentation. The proposed approach includes a threshold based on estimating the probability of pixel intensities of a given image. This threshold uses a homogeneity criterion which is obtained automatically from characteristics of the regions. The homogeneity criterion is calculated for each pixel along with the probability of the pixel value. The proposed PRG algorithm selects pixels sequentially in a random walk starting at the seed point, and the homogeneity criterion is updated continuously. The proposed PRG algorithm is applied to a challenging application: grey matter/white matter segmentation in MRI data sets. The experimental results, compared with other segmentation techniques, show that the proposed PRG method produces more accurate and stable results.
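
    A minimal region-growing sketch in the spirit of the abstract; the paper's probabilistic threshold is replaced here by a fixed homogeneity tolerance, so this illustrates the general scheme rather than the PRG algorithm itself. Note how the homogeneity criterion (here, the region mean) is updated continuously as pixels are accepted.

```python
# Grow a region from a seed pixel, accepting neighbours whose intensity is
# within `tol` of the running region mean; the mean is updated on each accept.

from collections import deque

def region_grow(img, seed, tol):
    rows, cols = len(img), len(img[0])
    region = {seed}
    mean = float(img[seed[0]][seed[1]])
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region \
               and abs(img[nr][nc] - mean) <= tol:
                region.add((nr, nc))
                # Homogeneity criterion updated as the region grows.
                mean += (img[nr][nc] - mean) / len(region)
                q.append((nr, nc))
    return region

img = [[10, 11, 50],
       [12, 10, 52],
       [51, 53, 55]]
grown = region_grow(img, (0, 0), tol=5)
print(sorted(grown))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```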

  10. Improvement of heterogeneous deformation measurement in digital image correlation

    NASA Astrophysics Data System (ADS)

    Shen, Huan; Liang, Zhonghan; Cheng, Baishun

    2017-07-01

    Heterogeneous deformation measured with traditional digital image correlation (DIC) suffers several times the error of homogeneous deformation measurement because of localized complexity. With a small strain window, displacement field error substantially corrupts the derived strain; conversely, a large strain window induces a considerable loss of information, particularly for heterogeneous deformation. In this paper, a novel parameter is put forward to correct the displacement field and select the strain subset size dynamically. This parameter, determined from the localized displacement field, is called the localized displacement non-uniform intensity (λ). In addition, a simple and effective method is presented to eliminate the impact of rigid body rotation on strain measurement. A series of speckle images containing different heterogeneous deformations are finally simulated. Results show that the accuracy of the displacement and strain fields can be substantially improved, especially in heterogeneous deformation fields.
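
    The abstract does not define λ precisely; the measure below (local standard deviation of the displacement field normalised by its mean magnitude) is an assumed stand-in, used only to illustrate how such a parameter could drive strain-window selection.

```python
# Assumed, illustrative non-uniformity measure for a local displacement
# sample, and a window-size rule driven by it: strong non-uniformity ->
# small strain window (preserve detail); near-uniform field -> large window
# (suppress displacement noise). Thresholds below are invented.

import statistics

def non_uniform_intensity(displacements):
    mags = [abs(u) for u in displacements]
    mean = statistics.fmean(mags)
    return statistics.pstdev(mags) / mean if mean else 0.0

def strain_window_size(lam, small=5, large=15, threshold=0.1):
    return small if lam > threshold else large

uniform = [1.0, 1.0, 1.0, 1.0]
varying = [0.2, 1.0, 1.8, 2.6]
print(strain_window_size(non_uniform_intensity(uniform)))  # 15
print(strain_window_size(non_uniform_intensity(varying)))  # 5
```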

  11. Image analysis of insulation mineral fibres.

    PubMed

    Talbot, H; Lee, T; Jeulin, D; Hanton, D; Hobbs, L W

    2000-12-01

    We present two methods for measuring the diameter and length of man-made vitreous fibres based on the automated image analysis of scanning electron microscopy images. The fibres we want to measure are used in materials such as glass wool, which in turn are used for thermal and acoustic insulation. The measurement of the diameters and lengths of these fibres is used by the glass wool industry for quality control purposes. To obtain reliable quality estimators, the measurement of several hundred images is necessary. These measurements are usually obtained manually by operators. Manual measurements, although reliable when performed by skilled operators, are slow due to the need for the operators to rest often to retain their ability to spot faint fibres on noisy backgrounds. Moreover, the task of measuring thousands of fibres every day, even with the help of semi-automated image analysis systems, is dull and repetitive. The need for an automated procedure which could replace manual measurements is quite real. For each of the two methods that we propose to accomplish this task, we present the sample preparation, the microscope setting and the image analysis algorithms used for the segmentation of the fibres and for their measurement. We also show how a statistical analysis of the results can alleviate most measurement biases, and how we can estimate the true distribution of fibre lengths by diameter class by measuring only the lengths of the fibres visible in the field of view.

  12. Conducting a SWOT Analysis for Program Improvement

    ERIC Educational Resources Information Center

    Orr, Betsy

    2013-01-01

    A SWOT (strengths, weaknesses, opportunities, and threats) analysis of a teacher education program, or any program, can be the driving force for implementing change. A SWOT analysis is used to assist faculty in initiating meaningful change in a program and to use the data for program improvement. This tool is useful in any undergraduate or degree…

  13. From Image Analysis to Computer Vision: Motives, Methods, and Milestones.

    DTIC Science & Technology

    1998-07-01

    Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial photographs; but by the 1960's, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence community took up scene understanding, descriptions were derived not only from a single image of a scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision.

  14. Improving PET spatial resolution and detectability for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Bal, H.; Guerin, L.; Casey, M. E.; Conti, M.; Eriksson, L.; Michel, C.; Fanti, S.; Pettinato, C.; Adler, S.; Choyke, P.

    2014-08-01

    Prostate cancer, one of the most common forms of cancer among men, can benefit from recent improvements in positron emission tomography (PET) technology. In particular, better spatial resolution, lower noise and higher detectability of small lesions could be greatly beneficial for early diagnosis and could provide strong support for guiding biopsy and surgery. In this article, the impact of improved PET instrumentation with superior spatial resolution and high sensitivity is discussed, together with the latest developments in PET technology: resolution recovery and time-of-flight reconstruction. Using simulated cancer lesions inserted in clinical PET images obtained with conventional protocols, we show that visual identification of the lesions and detectability via numerical observers can already be improved using state-of-the-art PET reconstruction methods. This was achieved using both resolution recovery and time-of-flight reconstruction, and a high resolution image with 2 mm pixel size. Channelized Hotelling numerical observers showed an increase in the area under the LROC curve from 0.52 to 0.58. In addition, a relationship between the simulated input activity and the area under the LROC curve showed that the minimum detectable activity was reduced by more than 23%.

  15. Automated eXpert Spectral Image Analysis

    SciTech Connect

    Keenan, Michael R.

    2003-11-25

    AXSIA performs automated factor analysis of hyperspectral images. In such images, a complete spectrum is collected at each point in a 1-, 2- or 3-dimensional spatial array. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful information. Multivariate factor analysis techniques have proven effective for extracting the essential information from high dimensional data sets into a limited number of factors that describe the spectral characteristics and spatial distributions of the pure components comprising the sample. AXSIA provides tools to estimate different types of factor models including Singular Value Decomposition (SVD), Principal Component Analysis (PCA), PCA with factor rotation, and Alternating Least Squares-based Multivariate Curve Resolution (MCR-ALS). As part of the analysis process, AXSIA can automatically estimate the number of pure components that comprise the data and can scale the data to account for Poisson noise. The data analysis methods are fundamentally based on eigenanalysis of the data crossproduct matrix coupled with orthogonal eigenvector rotation and constrained alternating least squares refinement. A novel method for automatically determining the number of significant components, which is based on the eigenvalues of the crossproduct matrix, has also been devised and implemented. The data can be compressed spectrally via PCA and spatially through wavelet transforms, and algorithms have been developed that perform factor analysis in the transform domain while retaining full spatial and spectral resolution in the final result. These latter innovations enable the analysis of larger-than-core-memory spectrum-images. AXSIA was designed to perform automated chemical phase analysis of spectrum-images acquired by a variety of chemical imaging techniques. Successful applications include Energy Dispersive X-ray Spectroscopy, X-ray Fluorescence
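
    The core eigenanalysis idea can be illustrated on a toy two-channel "spectrum image"; this is not the AXSIA code, and the rank test below is a simplified stand-in for its automated component-count estimate.

```python
# Toy illustration of eigenanalysis of the data crossproduct matrix: pixels
# that are all scalings of one spectrum yield a single dominant eigenvalue,
# i.e. one significant pure component.

import math

def crossproduct_eigenvalues(data):
    """data: list of 2-channel pixel spectra -> eigenvalues of D^T D (2x2)."""
    a = sum(x * x for x, _ in data)
    b = sum(x * y for x, y in data)
    d = sum(y * y for _, y in data)
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return (tr + disc) / 2, (tr - disc) / 2

def n_significant(data, rel_tol=1e-6):
    # Simplified component count: eigenvalues above a fraction of the largest.
    e1, e2 = crossproduct_eigenvalues(data)
    return sum(1 for e in (e1, e2) if e > rel_tol * e1)

rank1 = [(1, 2), (2, 4), (3, 6)]  # every pixel is a multiple of (1, 2)
print(n_significant(rank1))       # 1
```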

  16. Objective facial photograph analysis using imaging software.

    PubMed

    Pham, Annette M; Tollefson, Travis T

    2010-05-01

    Facial analysis is an integral part of the surgical planning process. Clinical photography has long been an invaluable tool in the surgeon's practice not only for accurate facial analysis but also for enhancing communication between the patient and surgeon, for evaluating postoperative results, for medicolegal documentation, and for educational and teaching opportunities. From 35-mm slide film to the digital technology of today, clinical photography has benefited greatly from technological advances. With the development of computer imaging software, objective facial analysis becomes easier to perform and less time consuming. Thus, while the original purpose of facial analysis remains the same, the process becomes much more efficient and allows for some objectivity. Although clinical judgment and artistry of technique is never compromised, the ability to perform objective facial photograph analysis using imaging software may become the standard in facial plastic surgery practices in the future.

  17. Motion Analysis From Television Images

    NASA Astrophysics Data System (ADS)

    Silberberg, George G.; Keller, Patrick N.

    1982-02-01

    The Department of Defense ranges have relied on photographic instrumentation for gathering data on firings of all types of ordnance. A large inventory of cameras that can be used for these tasks is available on the market. A new set of optical instrumentation is beginning to appear which, in many cases, can directly replace photographic cameras for a great deal of the work being performed now. These are television cameras modified so they can stop motion, see in the dark, perform under hostile environments, and provide real-time information. This paper discusses techniques for modifying television cameras so they can be used for motion analysis.

  18. Measurements and analysis of active/passive multispectral imaging

    NASA Astrophysics Data System (ADS)

    Grönwall, Christina; Hamoir, Dominique; Steinvall, Ove; Larsson, Håkan; Amselem, Elias; Lutzmann, Peter; Repasi, Endre; Göhler, Benjamin; Barbé, Stéphane; Vaudelin, Olivier; Fracès, Michel; Tanguy, Bernard; Thouin, Emmanuelle

    2013-10-01

    This paper describes a data collection on passive and active imaging and the preliminary analysis. It is part of an ongoing work on active and passive imaging for target identification using different wavelength bands. We focus on data collection at NIR-SWIR wavelengths, but we also include the visible and the thermal region. Active imaging in NIR-SWIR will support the passive imaging by eliminating shadows during daytime and allowing night operation. Among the applications that are most likely for active multispectral imaging, we focus on long range human target identification. We also study the combination of active and passive sensing. The target scenarios of interest include persons carrying different objects and their associated activities. We investigated laser imaging for target detection and classification up to 1 km, assuming that another cueing sensor - passive EO and/or radar - is available for target acquisition and detection. Broadband or multispectral operation will reduce the effects of target speckle and atmospheric turbulence. Longer wavelengths will improve performance in low visibility conditions due to haze, clouds and fog. We are currently performing indoor and outdoor tests to further investigate the target/background phenomena that are emphasized in these wavelengths. We also investigate how these effects can be used for target identification and image fusion. The field tests performed and the results of the preliminary data analysis are reported.

  19. Improved recognition of figures containing fluorescence microscope images in online journal articles using graphical models.

    PubMed

    Qian, Yuntao; Murphy, Robert F

    2008-02-15

    There is extensive interest in automating the collection, organization and analysis of biological data. Data in the form of images in online literature present special challenges for such efforts. The first steps in understanding the contents of a figure are decomposing it into panels and determining the type of each panel. In biological literature, panel types include many kinds of images collected by different techniques, such as photographs of gels or images from microscopes. We have previously described the SLIF system (http://slif.cbi.cmu.edu) that identifies panels containing fluorescence microscope images among figures in online journal articles as a prelude to further analysis of the subcellular patterns in such images. This system contains a pretrained classifier that uses image features to assign a type (class) to each separate panel. However, the types of panels in a figure are often correlated, so that we can consider the class of a panel to be dependent not only on its own features but also on the types of the other panels in a figure. In this article, we introduce the use of a type of probabilistic graphical model, a factor graph, to represent the structured information about the images in a figure, and permit more robust and accurate inference about their types. We obtain significant improvement over results for considering panels separately. The code and data used for the experiments described here are available from http://murphylab.web.cmu.edu/software.

  20. Actual drawing of histological images improves knowledge retention.

    PubMed

    Balemans, Monique C M; Kooloos, Jan G M; Donders, A Rogier T; Van der Zee, Catharina E E M

    2016-01-01

    Medical students have to process a large amount of information during the first years of their study, which has to be retained over long periods of nonuse. Therefore, it would be beneficial if knowledge were gained in a way that promotes long-term retention. Paper-and-pencil drawing for learning the form-function relationships of basic tissues has long been a teaching tool, but now seems redundant given virtual microscopy on computer screens and printers everywhere. Several studies claimed that, apart from learning from pictures, actual drawing of images significantly improved knowledge retention. However, these studies applied only immediate post-tests. We investigated the effects of actual drawing of histological images, using a randomized cross-over design and different retention periods. The first part of the study concerned esophageal and tracheal epithelium, with 384 medical and biomedical sciences students randomly assigned to either the drawing or the nondrawing group. For the second part of the study, concerning heart muscle cells, students from the previous drawing group were now assigned to the nondrawing group and vice versa. One, four, and six weeks after the experimental intervention, the students were given a free recall test and a questionnaire or drawing exercise, to determine the amount of knowledge retention. The data from this study showed that knowledge retention was significantly improved in the drawing groups compared with the nondrawing groups, even after four or six weeks. This suggests that actual drawing of histological images can be used as a tool to improve long-term knowledge retention.

  1. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    DTIC Science & Technology

    2007-11-02

    The Dynamic Chest Image Analysis project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion, based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18,5,17,6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for

  2. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for the improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  3. Improved document image segmentation algorithm using multiresolution morphology

    NASA Astrophysics Data System (ADS)

    Bukhari, Syed Saqib; Shafait, Faisal; Breuel, Thomas M.

    2011-01-01

    Page segmentation into text and non-text elements is an essential preprocessing step before optical character recognition (OCR). In case of poor segmentation, an OCR classification engine produces garbage characters due to the presence of non-text elements. This paper describes modifications to the text/non-text segmentation algorithm presented by Bloomberg, which is also available in his open-source Leptonica library. The modifications result in significant improvements, achieving better segmentation accuracy than the original algorithm on the UW-III, UNLV, and ICDAR 2009 page segmentation competition test images and on circuit diagram datasets.

  4. Improved proton computed tomography by dual modality image reconstruction

    SciTech Connect

    Hansen, David C.; Bassler, Niels; Petersen, Jørgen Breede Baltzer; Sørensen, Thomas Sangild

    2014-03-15

    Purpose: Proton computed tomography (CT) is a promising image modality for improving the stopping power estimates and dose calculations for particle therapy. However, the finite range of about 33 cm of water of most commercial proton therapy systems limits the sites that can be scanned from a full 360° rotation. In this paper the authors propose a method to overcome the problem using a dual modality reconstruction (DMR) combining the proton data with a cone-beam x-ray prior. Methods: A Catphan 600 phantom was scanned using a cone-beam x-ray CT scanner. A digital replica of the phantom was created in the Monte Carlo code Geant4 and a 360° proton CT scan was simulated, storing the entrance and exit position and momentum vector of every proton. Proton CT images were reconstructed using a varying number of angles from the scan, with a constrained nonlinear conjugate gradient algorithm that minimizes total variation and deviation from the x-ray CT prior while remaining consistent with the proton projection data. The proton histories were reconstructed along curved cubic-spline paths. Results: The spatial resolution of the cone-beam CT prior was retained for the fully sampled case and the 90° interval case, with the spatial frequency at a modulation transfer function (MTF) of 0.5 ranging from 5.22 to 5.65 line pairs/cm. In the 45° interval case, this dropped to 3.91 line pairs/cm. For the fully sampled DMR, the maximal root mean square (RMS) error was 0.006 in units of relative stopping power. For the limited angle cases the maximal RMS error was 0.18, an almost five-fold improvement over the cone-beam CT estimate. Conclusions: Dual modality reconstruction yields the high spatial resolution of cone-beam x-ray CT while maintaining the improved stopping power estimation of proton CT. In the case of limited angles, the use of a prior image greatly improves the resolution and stopping power estimate, but does not fully achieve the quality of a 360

  5. Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.

    1991-01-01

    Digital image processing techniques are typically used to produce improved digital images through the application of successive enhancement techniques to a given image or to generate quantitative data about the objects within that image. In support of and to assist researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in the three main phases of the application, categorized as image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed, as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.
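
    A minimal pure-Python sketch of the object-recognition and quantitative-analysis phases — connected-component labeling followed by geometric measurement — might look like this (illustrative only; the report's actual algorithms are not reproduced here):

```python
from collections import deque

def label_components(img):
    """4-connected component labeling of a binary image (list of lists of 0/1)."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not labels[r][c]:
                current += 1                      # start a new object
                q = deque([(r, c)])
                labels[r][c] = current
                while q:                          # breadth-first flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                           and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current

def geometry(labels, n):
    """Area and centroid (row, col) of each labeled object."""
    stats = {k: [0, 0.0, 0.0] for k in range(1, n + 1)}  # area, sum_y, sum_x
    for y, row in enumerate(labels):
        for x, k in enumerate(row):
            if k:
                stats[k][0] += 1
                stats[k][1] += y
                stats[k][2] += x
    return {k: (a, sy / a, sx / a) for k, (a, sy, sx) in stats.items()}

img = [[1, 1, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 1]]
labels, n = label_components(img)
print(n, geometry(labels, n))   # two objects, each of area 3
```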

  6. Measurements and analysis in imaging for biomedical applications

    NASA Astrophysics Data System (ADS)

    Hoeller, Timothy L.

    2009-02-01

    A Total Quality Management (TQM) approach can be used to analyze data from biomedical optical and imaging platforms of tissues. A shift from individuals to teams, partnerships, and total participation are necessary from health care groups for improved prognostics using measurement analysis. Proprietary measurement analysis software is available for calibrated, pixel-to-pixel measurements of angles and distances in digital images. Feature size, count, and color are determinable on an absolute and comparative basis. Although changes in images of histomics are based on complex and numerous factors, the variation of changes in imaging analysis to correlations of time, extent, and progression of illness can be derived. Statistical methods are preferred. Applications of the proprietary measurement software are available for any imaging platform. Quantification of results provides improved categorization of illness towards better health. As health care practitioners try to use quantified measurement data for patient diagnosis, the techniques reported can be used to track and isolate causes better. Comparisons, norms, and trends are available from processing of measurement data which is obtained easily and quickly from Scientific Software and methods. Example results for the class actions of Preventative and Corrective Care in Ophthalmology and Dermatology, respectively, are provided. Improved and quantified diagnosis can lead to better health and lower costs associated with health care. Systems support improvements towards Lean and Six Sigma affecting all branches of biology and medicine. As an example for use of statistics, the major types of variation involving a study of Bone Mineral Density (BMD) are examined. Typically, special causes in medicine relate to illness and activities; whereas, common causes are known to be associated with gender, race, size, and genetic make-up. 
Such a strategy of Continuous Process Improvement (CPI) involves comparison of patient results

  7. A rate adaptive control method for Improving the imaging speed of atomic force microscopy.

    PubMed

    Wang, Yanyan; Wan, Jiahuan; Hu, Xiaodong; Xu, Linyan; Wu, Sen; Hu, Xiaotang

    2015-08-01

    In this paper, a simple rate adaptive control method is proposed to improve the imaging speed of the atomic force microscope (AFM). Conventionally, the AFM probe scans the sample surface at a constant rate, resulting in low time efficiency. Numerous attempts have been made to realize high-speed AFMs, but little effort has been put into changing constant-rate scanning. Here we report a rate adaptive control method based on variable-rate scanning. The method automatically sets the imaging speed of the x scanner through analysis of the tracking error in the z direction at each scanning point, thus improving the dynamic tracking performance of the z scanner. The development and functioning of the rate adaptive method are demonstrated, as is how the approach achieves significantly faster scans and higher-resolution AFM imaging.
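
    The idea of slowing the x scan where the z-tracking error grows can be sketched as follows; the first-order z response and the specific adaptation law are illustrative assumptions, not the authors' controller:

```python
def adaptive_scan(heights, base_rate, err_limit, gain=0.5, min_rate=0.1):
    """Scan a 1-D height profile, slowing down where z-tracking error is large.

    The z 'tracker' is a simple first-order lag; the x rate at each point is
    reduced in proportion to the previous tracking error (illustrative only).
    """
    z = heights[0]
    rates, errors = [], []
    rate = base_rate
    for h in heights:
        # First-order z response: a higher x rate leaves less settling time,
        # so the effective update fraction shrinks as the rate grows.
        alpha = min(1.0, gain / max(rate, 1e-9))
        z += alpha * (h - z)
        err = abs(h - z)
        errors.append(err)
        # Adapt: slow down when the error exceeds the limit, else full speed.
        rate = max(min_rate, base_rate * err_limit / max(err, err_limit))
        rates.append(rate)
    return rates, errors

# Flat surface, a step, then flat again: the rate dips only at the step.
rates, errors = adaptive_scan([0.0] * 5 + [1.0] * 6,
                              base_rate=1.0, err_limit=0.05)
```

    On the flat stretches the error is negligible and the scan runs at the base rate; at the step the error spikes and the rate drops to its floor, giving the z scanner time to settle.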

  8. The electronic image stabilization technology research based on improved optical-flow motion vector estimation

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Ji, Ming; Zhang, Ying; Jiang, Wentao; Lu, Xiaoyan; Wang, Jiaoying; Yang, Heng

    2016-01-01

    Electronic image stabilization based on an improved optical-flow motion vector estimation technique can effectively correct abnormal image shifts such as jitter and rotation. First, ORB features are extracted from the image and a set of regions is built on these features. Second, the optical-flow vector is computed in the feature regions; to reduce the computational complexity, a multiresolution pyramid strategy is used to calculate the motion vector of the frame. Finally, qualitative and quantitative analyses of the algorithm's effect are carried out. The results show that the proposed algorithm has better stability than image stabilization based on the traditional optical-flow motion vector estimation method.

  9. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Photoacoustic Image Analysis for Cancer Detection and Building a Novel Ultrasound Imaging System

    NASA Astrophysics Data System (ADS)

    Sinha, Saugata

    Photoacoustic (PA) imaging is a rapidly emerging non-invasive soft tissue imaging modality which has the potential to detect tissue abnormality at an early stage. Photoacoustic images map the spatially varying optical absorption property of tissue. In multiwavelength photoacoustic imaging, the soft tissue is imaged with different wavelengths, tuned to the absorption peaks of the specific light absorbing tissue constituents or chromophores, to obtain images with different contrasts of the same tissue sample. From those images, the spatially varying concentration of the chromophores can be recovered. As multiwavelength PA images can provide important physiological information related to the function and molecular composition of the tissue, they can be used for diagnosis of cancer lesions and differentiation of malignant tumors from benign tumors. In this research, a number of parameters have been extracted from multiwavelength 3D PA images of freshly excised human prostate and thyroid specimens, imaged at five different wavelengths. Using marked histology slides as ground truths, regions of interest (ROI) corresponding to cancer, benign and normal regions have been identified in the PA images. The extracted parameters belong to different categories, namely chromophore concentration, frequency parameters and PA image pixels, and they represent different physiological and optical properties of the tissue specimens. Statistical analysis has been performed to test whether the extracted parameters are significantly different between cancer, benign and normal regions. A multidimensional [29 dimensional] feature set, built with the extracted parameters from the 3D PA images, has been divided randomly into training and testing sets. The training set has been used to train support vector machine (SVM) and neural network (NN) classifiers, while the performance of the classifiers in differentiating different tissue pathologies has been determined with the testing dataset. Using the NN

  11. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent works applying the Fourier approach to the analysis of invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constitutive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.

  12. Curvelet Based Offline Analysis of SEM Images

    PubMed Central

    Shirazi, Syed Hamad; Haq, Nuhman ul; Hayat, Khizar; Naz, Saeeda; Haque, Ihsan ul

    2014-01-01

    Manual offline analysis of a scanning electron microscopy (SEM) image is a time-consuming process and requires continuous human intervention and effort. This paper presents an image processing based method for automated offline analysis of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step aimed at noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state-of-the-art Curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm for fractal dimension (FD) calculations, with the ultimate goal of measuring parameters like surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known watershed segmentation algorithm. PMID:25089617
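
    The box-counting step for fractal-dimension estimation can be sketched in a few lines (a generic implementation, not the authors' code):

```python
import math

def box_count(points, size):
    """Count boxes of a given size containing at least one foreground pixel,
    where the image is given as a set of (row, col) coordinates."""
    return len({(r // size, c // size) for r, c in points})

def fractal_dimension(points, n):
    """Estimate FD as the least-squares slope of log(count) vs log(1/size)
    over a dyadic ladder of box sizes for an n x n image."""
    sizes = []
    s = n
    while s >= 1:
        sizes.append(s)
        s //= 2
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Sanity check: a filled square has fractal dimension 2.
n = 64
filled = {(r, c) for r in range(n) for c in range(n)}
print(round(fractal_dimension(filled, n), 2))   # 2.0
```

    The perimeter estimate mentioned in the abstract follows the same counting idea, applied only to boundary boxes of the filled shapes.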

  13. Image is everything. New York Hospital's institution-wide digital imaging lowers costs and improves care.

    PubMed

    Scalzi, G; Sostman, H D

    1998-03-01

    The New York Hospital-Cornell Medical Center, Manhattan, is a 1,242-licensed-bed voluntary non-profit hospital. Conventional imaging technology created expensive logistical problems between the radiology facility and the Greenberg Pavilion, a new 850,000-square-foot, 11-floor inpatient tower located two city blocks away. The solution was to use a picture archiving and communications system (PACS) to transmit, store and archive digital images. The results: increased staff efficiency, improved patient care and reduced costs. The keys to success: "Many years of planning and a full commitment from the staff."

  14. An improved PolSAR image speckle reduction algorithm based on LMMSE and RICA

    NASA Astrophysics Data System (ADS)

    Jiang, Chang; He, Xiufeng

    2017-07-01

    Although the linear minimum mean square error (LMMSE) filter removes speckle in polarimetric synthetic aperture radar (PolSAR) images, it has the disadvantage of losing edge detail. In this paper, we propose a new filter based on robust independent component analysis (RICA) and LMMSE. This approach describes edge features in a span image by selecting the adaptive direction window and calculating the edge weight value of the spatial domain, and improves the objective function by using a step polynomial to extract the estimate of the source image with minimum noise. This technique preserves not only the edge information in the images, but also the polarimetric information. Experiments were conducted on the NASA/JPL AIRSAR L-band of the San Francisco area, and evaluated by means of the speckle reduction index and the edge preservation index. The experimental results show that the proposed method effectively reduces speckle, retains edges, and preserves the polarimetric scattering mechanisms.

  15. Improved interpretation of gated cardiac images by use of digital filters

    SciTech Connect

    Miller, T.R.; Goldman, K.J.; Epstein, D.M.; Biello, D.R.; Sampathkumaran, K.S.; Kumar, B.; Siegel, B.A.

    1984-09-01

    The authors describe a digital filter that greatly enhances the quality of gated cardiac blood-pool images. Spatial filtering is accomplished with a minimum-mean-square-error (Wiener) filter incorporating measured camera blur and Poisson noise statistics. A low-pass temporal filter is then applied to each pixel, with the cutoff frequency determined from measurements of frequency spectra in 20 patients. This filter was evaluated in routine clinical use for nearly one year and found to significantly improve chamber definition, delineate wall motion abnormalities better, and reduce noise. To quantitatively assess the effect of the filter on image interpretation, four experienced observers evaluated wall motion in a series of mathematically simulated left ventricular images. ROC analysis revealed that accuracy in assessing wall motion was significantly greater with the filtered images.
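
    A minimal sketch of the two filtering stages described above — a frequency-domain Wiener filter followed by a per-pixel temporal low-pass — is shown below. The noise-to-signal ratio and cutoff parameters are illustrative assumptions, not the authors' measured values:

```python
import numpy as np

def wiener_deblur(image, psf, nsr):
    """Minimum-mean-square-error (Wiener) deconvolution in the frequency
    domain: W = conj(H) / (|H|^2 + NSR), NSR = noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=image.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

def temporal_lowpass(frames, cutoff_frac):
    """Zero out per-pixel temporal frequencies above a cutoff fraction,
    for a (time, rows, cols) stack of gated frames."""
    F = np.fft.rfft(frames, axis=0)
    k = max(1, int(cutoff_frac * F.shape[0]))
    F[k:] = 0
    return np.fft.irfft(F, n=frames.shape[0], axis=0)
```

    As a quick check, blurring a point source with a known point-spread function and applying the Wiener filter with a small NSR restores the peak to its original location.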

  16. Measuring toothbrush interproximal penetration using image analysis

    NASA Astrophysics Data System (ADS)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.
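
    The measurement reduces to counting stained pixels inside the interproximal mask before and after brushing. A minimal sketch, with a hypothetical binary-map data layout:

```python
def stain_removed_fraction(before, after, mask):
    """Fraction of stained pixels inside a masked region that were removed.

    before/after are binary stain maps (1 = stain present); mask selects the
    interproximal region. All are equally sized lists of lists of 0/1.
    """
    stained = removed = 0
    for b_row, a_row, m_row in zip(before, after, mask):
        for b, a, m in zip(b_row, a_row, m_row):
            if m and b:
                stained += 1        # stained pixel within the masked region
                if not a:
                    removed += 1    # stain gone after brushing
    return removed / stained if stained else 0.0

# Tiny example: two stained pixels in the mask, one removed by brushing.
frac = stain_removed_fraction(before=[[1, 1], [1, 0]],
                              after=[[0, 1], [1, 0]],
                              mask=[[1, 1], [0, 0]])
```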

  17. Piecewise flat embeddings for hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Hayes, Tyler L.; Meinhold, Renee T.; Hamilton, John F.; Cahill, Nathan D.

    2017-05-01

    Graph-based dimensionality reduction techniques such as Laplacian Eigenmaps (LE), Local Linear Embedding (LLE), Isometric Feature Mapping (ISOMAP), and Kernel Principal Components Analysis (KPCA) have been used in a variety of hyperspectral image analysis applications for generating smooth data embeddings. Recently, Piecewise Flat Embeddings (PFE) were introduced in the computer vision community as a technique for generating piecewise constant embeddings that make data clustering / image segmentation a straightforward process. In this paper, we show how PFE arises by modifying LE, yielding a constrained ℓ1-minimization problem that can be solved iteratively. Using publicly available data, we carry out experiments to illustrate the implications of applying PFE to pixel-based hyperspectral image clustering and classification.

  18. A method for improved clustering and classification of microscopy images using quantitative co-localization coefficients

    PubMed Central

    2012-01-01

    Background The localization of proteins to specific subcellular structures in eukaryotic cells provides important information with respect to their function. Fluorescence microscopy approaches to determine localization distribution have proved to be an essential tool in the characterization of unknown proteins, and are now particularly pertinent as a result of the wide availability of fluorescently-tagged constructs and antibodies. However, there are currently very few image analysis options able to effectively discriminate proteins with apparently similar distributions in cells, despite this information being important for protein characterization. Findings We have developed a novel method for combining two existing image analysis approaches, which results in highly efficient and accurate discrimination of proteins with seemingly similar distributions. We have combined image texture-based analysis with quantitative co-localization coefficients, a method that has traditionally only been used to study the spatial overlap between two populations of molecules. Here we describe and present a novel application for quantitative co-localization, as applied to the study of Rab family small GTP binding proteins localizing to the endomembrane system of cultured cells. Conclusions We show how quantitative co-localization can be used alongside texture feature analysis, resulting in improved clustering of microscopy images. The use of co-localization as an additional clustering parameter is non-biased and highly applicable to high-throughput image data sets. PMID:22681635
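
    Quantitative co-localization coefficients of the kind combined with texture features here include Pearson's correlation and the Manders coefficients; generic implementations (not the authors' code) might look like this:

```python
import math

def pearson_coloc(ch1, ch2):
    """Pearson correlation between two equally sized intensity lists."""
    n = len(ch1)
    m1, m2 = sum(ch1) / n, sum(ch2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(ch1, ch2))
    v1 = math.sqrt(sum((a - m1) ** 2 for a in ch1))
    v2 = math.sqrt(sum((b - m2) ** 2 for b in ch2))
    return cov / (v1 * v2)

def manders_m1(ch1, ch2, thresh2=0):
    """Manders M1: fraction of channel-1 intensity found where channel 2
    is above threshold (M2 is the symmetric counterpart)."""
    num = sum(a for a, b in zip(ch1, ch2) if b > thresh2)
    den = sum(ch1)
    return num / den if den else 0.0
```

    Pearson's coefficient measures linear correlation of intensities, while the Manders coefficients measure intensity overlap; both reduce an image pair to a single number usable as a clustering feature.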

  19. Improving cross-modal face recognition using polarimetric imaging.

    PubMed

    Short, Nathaniel; Hu, Shuowen; Gurram, Prudhvi; Gurton, Kristan; Chan, Alex

    2015-03-15

    We investigate the performance of polarimetric imaging in the long-wave infrared (LWIR) spectrum for cross-modal face recognition. For this work, polarimetric imagery is generated as stacks of three components: the conventional thermal intensity image (referred to as S0), and the two Stokes images, S1 and S2, which contain combinations of different polarizations. The proposed face recognition algorithm extracts and combines local gradient magnitude and orientation information from S0, S1, and S2 to generate a robust feature set that is well-suited for cross-modal face recognition. Initial results show that polarimetric LWIR-to-visible face recognition achieves an 18% increase in Rank-1 identification rate compared to conventional LWIR-to-visible face recognition. We conclude that a substantial improvement in automatic face recognition performance can be achieved by exploiting the polarization-state of radiance, as compared to using conventional thermal imagery.
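
    Extracting and combining local gradient magnitude and orientation information from the S0, S1 and S2 images can be sketched as follows; the histogram-of-gradients form shown is an illustrative simplification, not the paper's exact feature set:

```python
import numpy as np

def gradient_features(stokes, n_bins=8):
    """Concatenate gradient-orientation histograms (weighted by gradient
    magnitude) from the S0, S1 and S2 images into one feature vector."""
    feats = []
    for img in stokes:                      # each img is a 2-D array
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx)            # orientation in [-pi, pi]
        bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
        hist = np.bincount(bins.ravel(), weights=mag.ravel(),
                           minlength=n_bins)
        s = hist.sum()
        feats.append(hist / s if s else hist)   # normalize per component
    return np.concatenate(feats)

# Demo: a flat S0, a horizontal ramp for S1, a vertical ramp for S2.
f = gradient_features([np.zeros((4, 4)),
                       np.tile(np.arange(4.0), (4, 1)),
                       np.tile(np.arange(4.0), (4, 1)).T])
print(f.shape)
```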

  20. Improving image quality in laboratory x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    De Marco, F.; Marschner, M.; Birnbacher, L.; Viermetz, M.; Noël, P.; Herzen, J.; Pfeiffer, F.

    2017-03-01

    Grating-based X-ray phase-contrast (gbPC) imaging is known to provide significant benefits for biomedical imaging. To investigate these benefits, a high-sensitivity gbPC micro-CT setup for small (≈ 5 cm) biological samples has been constructed. Unfortunately, high differential-phase sensitivity leads to an increased magnitude of data processing artifacts, limiting the quality of tomographic reconstructions. Most importantly, processing of phase-stepping data with incorrect stepping positions can introduce artifacts resembling Moiré fringes into the projections. Additionally, the focal spot size of the X-ray source limits the resolution of tomograms. Here we present a set of algorithms to minimize artifacts, increase resolution and improve the visual impression of projections and tomograms from the examined setup. We assessed two algorithms for artifact reduction: Firstly, a correction algorithm exploiting correlations between the artifacts and differential-phase data was developed and tested. Artifacts were reliably removed without compromising image data. Secondly, we implemented a new algorithm for flat-field selection, which was shown to exclude flat-fields with strong artifacts. Both procedures successfully improved image quality of projections and tomograms. Deconvolution of all projections of a CT scan can minimize blurring introduced by the finite size of the X-ray source focal spot. Application of the Richardson-Lucy deconvolution algorithm to gbPC-CT projections resulted in an improved resolution of phase-contrast tomograms. Additionally, we found that nearest-neighbor interpolation of projections can improve the visual impression of very small features in phase-contrast tomograms. In conclusion, we achieved an increase in image resolution and quality for the investigated setup, which may lead to an improved detection of very small sample features, thereby maximizing the setup's utility.
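
    Richardson-Lucy deconvolution, used above to counter focal-spot blurring, can be sketched in 1-D (a generic textbook form with circular convolution, not the authors' implementation):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50, eps=1e-12):
    """1-D Richardson-Lucy deconvolution using circular (FFT) convolution.
    Multiplicative updates preserve non-negativity of the estimate."""
    H = np.fft.rfft(psf, n=observed.size)
    conv = lambda x: np.fft.irfft(np.fft.rfft(x) * H, n=x.size)
    corr = lambda x: np.fft.irfft(np.fft.rfft(x) * np.conj(H), n=x.size)
    estimate = np.full(observed.shape, observed.mean())
    for _ in range(iterations):
        ratio = observed / (conv(estimate) + eps)   # data / model prediction
        estimate = estimate * corr(ratio)           # correlate with the PSF
    return estimate

# Blur a point source with a known kernel, then recover its location.
signal = np.zeros(16)
signal[5] = 1.0
psf = np.zeros(16)
psf[:3] = [0.25, 0.5, 0.25]                 # normalized blur kernel
observed = np.fft.irfft(np.fft.rfft(signal) * np.fft.rfft(psf), n=16)
restored = richardson_lucy(observed, psf)
print(int(np.argmax(restored)))
```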

  1. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. Unlike most ICA methods, it requires neither a full-rank separating matrix nor orthogonality. More importantly, the learning algorithm is designed based on the independence of the material abundance vector rather than the independence of the separating matrix generally used to constrain standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis, since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. Experiments on AVIRIS data have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.
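
    For intuition, the linear spectral mixture model underlying this approach can be sketched with two endmembers whose signatures are known. Note this supervised least-squares unmixing is only the baseline the record generalizes: the paper's ICA instead treats the abundances as unknown, possibly non-orthogonal independent sources (blind unmixing). All names and values below are illustrative.

```python
# Toy linear spectral mixture model: pixel = sum of abundance-weighted
# endmember signatures. Illustrative only; not the paper's ICA algorithm.
def mix(signatures, abundances):
    """Synthesize a pixel spectrum from endmember signatures."""
    bands = len(signatures[0])
    return [sum(a * s[b] for a, s in zip(abundances, signatures))
            for b in range(bands)]

def unmix_two(signatures, pixel):
    """Least-squares abundances for two known endmembers (2x2 normal eqs.)."""
    s1, s2 = signatures
    a11 = sum(x * x for x in s1)
    a12 = sum(x * y for x, y in zip(s1, s2))
    a22 = sum(y * y for y in s2)
    b1 = sum(x * p for x, p in zip(s1, pixel))
    b2 = sum(y * p for y, p in zip(s2, pixel))
    det = a11 * a22 - a12 * a12  # nonzero for linearly independent signatures
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

    With noiseless data and independent signatures, the abundances are recovered exactly; the ICA setting drops the assumption that the signatures are known.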

  2. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    manner similar to that of a baseline hyperspectral- image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also the spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
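
    A single-stage 2D Haar analysis illustrates the kind of decomposition being modified. This is a deliberate simplification of the 3D case (one axis would additionally stand in for the spectral direction), assumes even image dimensions, and uses illustrative names throughout.

```python
# One-stage separable Haar analysis on a 2D list-of-lists image.
def haar_pairs(seq):
    """Averages (low-pass) and differences (high-pass) of adjacent samples."""
    lo = [(seq[2 * i] + seq[2 * i + 1]) / 2 for i in range(len(seq) // 2)]
    hi = [(seq[2 * i] - seq[2 * i + 1]) / 2 for i in range(len(seq) // 2)]
    return lo, hi

def haar2d(image):
    """Transform rows, then columns; returns the LL, LH, HL, HH subbands."""
    row_results = [haar_pairs(r) for r in image]
    lo_rows = [r[0] for r in row_results]
    hi_rows = [r[1] for r in row_results]

    def along_columns(mat):
        h, w = len(mat), len(mat[0])
        cols = [haar_pairs([mat[r][c] for r in range(h)]) for c in range(w)]
        lo = [[cols[c][0][r] for c in range(w)] for r in range(h // 2)]
        hi = [[cols[c][1][r] for c in range(w)] for r in range(h // 2)]
        return lo, hi

    ll, lh = along_columns(lo_rows)
    hl, hh = along_columns(hi_rows)
    return ll, lh, hl, hh
```

    A Mallat-style recursion would apply haar2d again only to the LL subband; the modified decomposition described above would, loosely speaking, also further decompose some high-pass subbands spatially.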

  3. Digital image analysis of haematopoietic clusters.

    PubMed

    Benzinou, A; Hojeij, Y; Roudot, A-C

    2005-02-01

    Counting and differentiating cell clusters is a tedious task when performed with a light microscope. Moreover, biased counts and interpretations are difficult to avoid because of the difficulty of evaluating the limits between different types of clusters. Presented here is a computer-based application able to solve these problems. The image analysis system is entirely automatic, from the stage screening to the statistical analysis of the results of each experimental plate. Good correlations are found with measurements made by a specialised technician.

  4. Imaging spectroscopic analysis at the Advanced Light Source

    SciTech Connect

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-05-12

    One of the major advances at the high-brightness third-generation synchrotrons is the dramatic improvement of imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV and infrared spectroscopic analysis on a spatial scale from a few microns to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS. The following table lists all the currently available microscopes at the ALS. This article will describe some of the microscopes and some of the early applications.

  5. Multispectral Imager With Improved Filter Wheel and Optics

    NASA Technical Reports Server (NTRS)

    Bremer, James C.

    2007-01-01

    Figure 1 schematically depicts an improved multispectral imaging system of the type that utilizes a filter wheel that contains multiple discrete narrow-band-pass filters and that is rotated at a constant high speed to acquire images in rapid succession in the corresponding spectral bands. The improvement, relative to prior systems of this type, consists of the measures taken to prevent the exposure of a focal-plane array (FPA) of photodetectors to light in more than one spectral band at any given time and to prevent exposure of the array to any light during readout. In prior systems, these measures have included, variously, the use of mechanical shutters or the incorporation of wide opaque sectors (equivalent to mechanical shutters) into filter wheels. These measures introduce substantial dead times into each operating cycle: intervals during which image information cannot be collected and incoming light is thus wasted. In contrast, the present improved design does not involve shutters or wide opaque sectors, and it reduces dead times substantially. The improved multispectral imaging system is preceded by an afocal telescope and includes a filter wheel positioned so that its rotation brings each filter, in its turn, into the exit pupil of the telescope. The filter wheel contains an even number of narrow-band-pass filters separated by narrow, spoke-like opaque sectors. The geometric width of each filter exceeds the cross-sectional width of the light beam coming out of the telescope. The light transmitted by the sequence of narrow-band filters is incident on a dichroic beam splitter that reflects in a broad shorter-wavelength spectral band that contains half of the narrow bands and transmits in a broad longer-wavelength spectral band that contains the other half of the narrow spectral bands. The filters are arranged on the wheel so that if the pass band of a given filter is in the reflection band of the dichroic beam splitter, then the pass band of the adjacent filter is in the transmission band.

  6. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  7. Visualization of parameter space for image analysis.

    PubMed

    Pretorius, A Johannes; Bray, Mark-Anthony P; Carpenter, Anne E; Ruddle, Roy A

    2011-12-01

    Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step--initialization of sampling--and the last step--visual analysis of output. This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler--a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach.
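
    The sampling-then-tagging idea can be sketched as exhaustive combination of user-chosen parameter values followed by labeling of the outputs. This is an illustration of the concept only, not the CellProfiler plug-in's actual interface; all names are hypothetical.

```python
# Sketch of parameter-space sampling and result tagging (illustrative names).
from itertools import product

def sample_parameter_space(param_ranges):
    """Yield every combination of the user-supplied parameter values."""
    names = sorted(param_ranges)
    for values in product(*(param_ranges[n] for n in names)):
        yield dict(zip(names, values))

def tag_results(samples, quality_fn):
    """Mimic interactive tagging: label each sampled setting high/low quality."""
    return [(s, "high" if quality_fn(s) else "low") for s in samples]
```

    In the interactive workflow described above, the quality function is the user's visual judgment rather than code; here it is a stand-in predicate.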

  8. Visualization of Parameter Space for Image Analysis

    PubMed Central

    Pretorius, A. Johannes; Bray, Mark-Anthony P.; Carpenter, Anne E.; Ruddle, Roy A.

    2013-01-01

    Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step - initialization of sampling - and the last step - visual analysis of output. This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler - a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach. PMID:22034361

  9. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory

    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  10. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  11. Improving image contrast and material discrimination with nonlinear response in bimodal atomic force microscopy

    PubMed Central

    Forchheimer, Daniel; Forchheimer, Robert; Haviland, David B.

    2015-01-01

    Atomic force microscopy has recently been extended to bimodal operation, where increased image contrast is achieved through excitation and measurement of two cantilever eigenmodes. This enhanced material contrast is advantageous in the analysis of complex heterogeneous materials with phase separation on the micro- or nanometre scale. Here we show that much greater image contrast results from analysis of the nonlinear response to the bimodal drive, at harmonics and mixing frequencies. The amplitude and phase of up to 17 frequencies are simultaneously measured in a single scan. Using a machine-learning algorithm we demonstrate an almost threefold improvement in the ability to separate material components of a polymer blend when including this nonlinear response. Beyond the statistical analysis performed here, analysis of nonlinear response could be used to obtain quantitative material properties at high speeds and with enhanced resolution. PMID:25665933

  12. Improving image contrast and material discrimination with nonlinear response in bimodal atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Forchheimer, Daniel; Forchheimer, Robert; Haviland, David B.

    2015-02-01

    Atomic force microscopy has recently been extended to bimodal operation, where increased image contrast is achieved through excitation and measurement of two cantilever eigenmodes. This enhanced material contrast is advantageous in the analysis of complex heterogeneous materials with phase separation on the micro- or nanometre scale. Here we show that much greater image contrast results from analysis of the nonlinear response to the bimodal drive, at harmonics and mixing frequencies. The amplitude and phase of up to 17 frequencies are simultaneously measured in a single scan. Using a machine-learning algorithm we demonstrate an almost threefold improvement in the ability to separate material components of a polymer blend when including this nonlinear response. Beyond the statistical analysis performed here, analysis of nonlinear response could be used to obtain quantitative material properties at high speeds and with enhanced resolution.

  13. Improving image contrast and material discrimination with nonlinear response in bimodal atomic force microscopy.

    PubMed

    Forchheimer, Daniel; Forchheimer, Robert; Haviland, David B

    2015-02-10

    Atomic force microscopy has recently been extended to bimodal operation, where increased image contrast is achieved through excitation and measurement of two cantilever eigenmodes. This enhanced material contrast is advantageous in the analysis of complex heterogeneous materials with phase separation on the micro- or nanometre scale. Here we show that much greater image contrast results from analysis of the nonlinear response to the bimodal drive, at harmonics and mixing frequencies. The amplitude and phase of up to 17 frequencies are simultaneously measured in a single scan. Using a machine-learning algorithm we demonstrate an almost threefold improvement in the ability to separate material components of a polymer blend when including this nonlinear response. Beyond the statistical analysis performed here, analysis of nonlinear response could be used to obtain quantitative material properties at high speeds and with enhanced resolution.

  14. Postoperative increase in cerebral white matter fractional anisotropy on diffusion tensor magnetic resonance imaging is associated with cognitive improvement after uncomplicated carotid endarterectomy: tract-based spatial statistics analysis.

    PubMed

    Sato, Yuiko; Ito, Kenji; Ogasawara, Kuniaki; Sasaki, Makoto; Kudo, Kohsuke; Murakami, Toshiyuki; Nanba, Takamasa; Nishimoto, Hideaki; Yoshida, Kenji; Kobayashi, Masakazu; Kubo, Yoshitaka; Mase, Tomohiko; Ogawa, Akira

    2013-10-01

    Carotid endarterectomy (CEA) might improve cognitive function. Fractional anisotropy (FA) values in the cerebral white matter derived from diffusion tensor magnetic resonance imaging (DTI) correlate with cognitive function in patients with various central nervous system diseases. To use tract-based spatial statistics to determine whether postoperative changes of FA values in the cerebral white matter derived from DTI are associated with cognitive improvement after uncomplicated CEA. In 80 patients undergoing CEA for ipsilateral internal carotid artery stenosis (≥70%), FA values in the cerebral white matter were derived from DTI before and 1 month after surgery and were analyzed by using tract-based spatial statistics. Neuropsychological testing, consisting of the Wechsler Adult Intelligence Scale Revised, the Wechsler Memory Scale and the Rey-Osterreith Complex Figure test, was also performed preoperatively and after the first postoperative month. Based on the neuropsychological assessments, 11 (14%) patients were defined as having postoperatively improved cognition. The difference between the 2 mean FA values (postoperative values minus preoperative values) in the cerebral hemisphere ipsilateral to surgery was significantly associated with postoperative cognitive improvement (95% confidence intervals, 2.632-9.877; P = .008). White matter FA values in patients with postoperative cognitive improvement were significantly increased after surgery in the whole ipsilateral cerebral hemisphere, in the contralateral anterior cerebral artery territory, and in the watershed zone between the contralateral anterior and middle cerebral arteries. Postoperative increase in cerebral white matter FA on DTI is associated with cognitive improvement after uncomplicated CEA.

  15. Note: An improved 3D imaging system for electron-electron coincidence measurements

    SciTech Connect

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-15

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast-frame complementary metal-oxide-semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedentedly small dead time (<0.7 ns) and dead space (<1 mm) when detecting two electrons. True zero-dead-time detection is also demonstrated.

  16. Creating the "desired mindset": Philip Morris's efforts to improve its corporate image among women.

    PubMed

    McDaniel, Patricia A; Malone, Ruth E

    2009-01-01

    Through analysis of tobacco company documents, we explored how and why Philip Morris sought to enhance its corporate image among American women. Philip Morris regarded women as an influential political group. To improve its image among women, while keeping tobacco off their organizational agendas, the company sponsored women's groups and programs. It also sought to appeal to women it defined as "active moms" by advertising its commitment to domestic violence victims. It was more successful in securing women's organizations as allies than active moms. Increasing tobacco's visibility as a global women's health issue may require addressing industry influence.

  17. Methods of Improving a Digital Image Having White Zones

    NASA Technical Reports Server (NTRS)

    Wodell, Glenn A. (Inventor); Jobson, Daniel J. (Inventor); Rahman, Zia-Ur (Inventor)

    2005-01-01

    The present invention is a method of processing a digital image that is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value for each position in each i-th spectral band in accordance with Σ_{n=1}^{N} W_n (log I_i(x,y) - log[I_i(x,y) * F_n(x,y)]), i = 1, ..., S, where W_n is a weighting factor, "*" is the convolution operator and S is the total number of unique spectral bands. For each n, the function F_n(x,y) is a unique surround function applied to each position (x,y) and N is the total number of unique surround functions. Each unique surround function is scaled to improve some aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band of the image is then filtered with a filter function to generate a filtered intensity value R_i(x,y). To prevent graying of white zones in the image, the maximum of the original intensity value I_i(x,y) and the filtered intensity value R_i(x,y) is selected for display.
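
    The multi-surround adjustment formula can be sketched in one dimension, with box blurs standing in for the surround functions F_n. This is a hypothetical simplification: the patent's subsequent filter function and display mapping are omitted, and only the max-selection idea for white zones is kept.

```python
import math

# 1D sketch of the multi-surround adjustment (box blurs as surround functions;
# weights W_n are assumed to sum to 1; intensities must be positive).
def box_blur(signal, radius):
    """Local mean over a (2*radius + 1)-sample window, clipped at the edges."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def adjust(intensity, radii, weights):
    """R(x) = sum_n W_n * (log I(x) - log (I * F_n)(x))."""
    blurs = [box_blur(intensity, r) for r in radii]
    return [sum(w * (math.log(intensity[i]) - math.log(b[i]))
                for w, b in zip(weights, blurs))
            for i in range(len(intensity))]

def preserve_white(intensity, filtered):
    """Keep the larger of the original and filtered values at each position,
    mimicking the patent's white-zone step (value domains simplified here)."""
    return [max(a, b) for a, b in zip(intensity, filtered)]
```

    Pixels brighter than their surround get positive adjusted values; pixels in uniform regions dimmer than their surround go negative, which is the dynamic-range-compression effect the record describes.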

  19. A method for normalizing pathology images to improve feature extraction for quantitative pathology.

    PubMed

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-01

    With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
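
    The two ICHE ingredients can be sketched for a flat list of grayscale intensities. This is a minimal illustration, not the authors' method: ICHE uses a contrast-limited adaptive variant of histogram equalization, whereas the sketch below uses plain global equalization, and all names are illustrative.

```python
# Sketch of intensity centering + plain histogram equalization (not ICHE's
# contrast-limited adaptive variant). Intensities assumed in [0, 255].
def center_intensities(image, target_mean=128.0):
    """Shift the intensity histogram so its centroid sits at a common point."""
    mean = sum(image) / len(image)
    shift = target_mean - mean
    return [min(255.0, max(0.0, v + shift)) for v in image]

def equalize(image, levels=256):
    """Plain global histogram equalization via the empirical CDF."""
    n = len(image)
    hist = [0] * levels
    for v in image:
        hist[int(v)] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total / n)
    return [round((levels - 1) * cdf[int(v)]) for v in image]
```

    Centering makes two differently stained "slides" directly comparable in brightness before equalization, which is the batch-effect reduction the record describes.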

  20. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    SciTech Connect

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-15

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.

  1. A grayscale image color transfer method based on region texture analysis using GLCM

    NASA Astrophysics Data System (ADS)

    Zhao, Yuanmeng; Wang, Lingxue; Jin, Weiqi; Luo, Yuan; Li, Jiakun

    2011-08-01

    In order to improve the performance of grayscale image colorization based on color transfer, this paper proposes a novel method by which pixels are matched accurately between images through region texture analysis using the Gray Level Co-occurrence Matrix (GLCM). This method consists of six steps: reference image selection, color space transformation, grayscale linear transformation and compression, texture analysis using the GLCM, pixel matching through texture value comparison, and color value transfer between pixels. We applied this method to various kinds of grayscale images, and they gained a natural color appearance like the reference images. Experimental results proved that this method is more effective than the conventional method at accurately transferring color to grayscale images.
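
    A GLCM for a single horizontal offset, together with one Haralick texture value, can be sketched as follows. This is a toy version of the per-region texture analysis described above (a real implementation would handle multiple offsets, directions, and regions); names are illustrative.

```python
# Minimal GLCM for pixel pairs one step to the right, plus Haralick contrast.
def glcm(image, levels):
    """Normalized co-occurrence matrix for the (0, +1) horizontal offset."""
    counts = [[0] * levels for _ in range(levels)]
    pairs = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            pairs += 1
    return [[c / pairs for c in row] for row in counts]

def contrast(matrix):
    """Haralick contrast: sum of p(i, j) * (i - j)^2 over all level pairs."""
    return sum(p * (i - j) ** 2
               for i, row in enumerate(matrix)
               for j, p in enumerate(row))
```

    Comparing such texture values between a reference-image region and a grayscale-image region is the matching criterion the method uses before transferring color values.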

  2. An improved viscous characteristics analysis program

    NASA Technical Reports Server (NTRS)

    Jenkins, R. V.

    1978-01-01

    An improved two-dimensional characteristics analysis program is presented. The program is built upon the foundation of a FORTRAN program entitled Analysis of Supersonic Combustion Flow Fields With Embedded Subsonic Regions. The major improvements are described and a listing of the new program is provided. The subroutines and their functions are given, as well as the input required for the program. Several applications of the program to real problems are qualitatively described. Three runs obtained in the investigation of a real problem are presented to provide insight into the input and output of the program.

  3. Improving sensitivity in nonlinear Raman microspectroscopy imaging and sensing

    PubMed Central

    Arora, Rajan; Petrov, Georgi I.; Liu, Jian; Yakovlev, Vladislav V.

    2011-01-01

    Nonlinear Raman microspectroscopy based on broadband coherent anti-Stokes Raman scattering is an emerging technique for noninvasive, chemically specific, microscopic analysis of tissues and large populations of cells and particles. The sensitivity of this imaging is a critical aspect of a number of the proposed biomedical applications. It is shown that the incident laser power is the major parameter controlling this sensitivity. By carefully optimizing the laser system, acquisition of high-quality vibrational spectra at multi-kHz rates becomes feasible. PMID:21361677

  4. Poster - Thur Eve - 39: SBRT imaging analysis - patient results and QA of imaging systems.

    PubMed

    Mason, D; Neath, C

    2012-07-01

    Our centre began offering stereotactic body radiation therapy (SBRT) treatments for peripheral lung lesions in 2011. As a high-precision technique, SBRT requires precise positioning of the target, and precise quality assurance (QA) of the imaging systems; these may be dependent on local equipment and procedures. We aimed to maintain target position within 3 mm throughout each treatment, and imaging and mechanical systems to at least 2 mm accuracy. A retrospective analysis was done of patient cone-beam (CB) data, and of our imaging system QA, to assess our spatial objectives and look for opportunities for improvement. The data indicated that, using our immobilization and imaging procedures, target position was maintained within 3 mm 96% of the time, and within 2 mm 75% of the time, similar to results from other centres. Imaging system QA using the standard ball-bearing test showed system accuracy was maintained well within 1 mm. These results were compared with a simpler daily QA procedure using a Pentaguide phantom. The mean and standard deviation of the radial difference in the kV-MV isocenter coincidence for the two techniques were 0.62 mm +/- 0.23 mm. With appropriate choice of tolerance and action level, the morning QA was sufficient for identifying outliers requiring further investigation. This analysis gives us confidence in understanding the performance of our SBRT lung treatments, and gives baselines for analyzing changes to patient immobilization or imaging procedures. © 2012 American Association of Physicists in Medicine.
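
    The reported statistic, the mean and standard deviation of the radial kV-MV isocentre difference, can be reproduced from per-session (dx, dy) offsets. The offsets in the usage below are made up for illustration; only the arithmetic mirrors the record.

```python
import math
from statistics import mean, stdev

# Radial kV-MV isocentre difference per QA session, from 2D offsets (mm).
def radial_differences(offsets):
    """Radial distance between kV and MV isocentre estimates per session."""
    return [math.hypot(dx, dy) for dx, dy in offsets]

def summarize(offsets):
    """Mean and standard deviation of the radial differences."""
    r = radial_differences(offsets)
    return mean(r), stdev(r)
```

    With a chosen tolerance and action level, sessions whose radial difference exceeds the action level would be flagged for further investigation, as described above.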

  5. Improving transient analysis technology for aircraft structures

    NASA Technical Reports Server (NTRS)

    Melosh, R. J.; Chargin, Mladen

    1989-01-01

    Aircraft dynamic analyses are demanding of computer simulation capabilities. The modeling complexities of semi-monocoque construction, irregular geometry, high-performance materials, and high-accuracy analysis are present. At issue are the safety of the passengers and the integrity of the structure for a wide variety of flight-operating and emergency conditions. The technology which supports engineering of aircraft structures using computer simulation is examined. Available computer support is briefly described and improvement of accuracy and efficiency are recommended. Improved accuracy of simulation will lead to a more economical structure. Improved efficiency will result in lowering development time and expense.

  6. Good relationships between computational image analysis and radiological physics

    SciTech Connect

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-30

    Good relationships between computational image analysis and radiological physics have been established to increase the accuracy of medical diagnostic imaging and radiation therapy. Computational image analysis is built on applied mathematics, physics, and engineering. This review paper introduces ways in which computational image analysis is useful in radiation therapy with respect to radiological physics.

  7. Good relationships between computational image analysis and radiological physics

    NASA Astrophysics Data System (ADS)

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-01

    Good relationships between computational image analysis and radiological physics have been established to increase the accuracy of medical diagnostic imaging and radiation therapy. Computational image analysis is built on applied mathematics, physics, and engineering. This review paper introduces ways in which computational image analysis is useful in radiation therapy with respect to radiological physics.

  8. Automated retinal image analysis over the internet.

    PubMed

    Tsai, Chia-Ling; Madore, Benjamin; Leotta, Matthew J; Sofka, Michal; Yang, Gehua; Majerovics, Anna; Tanenbaum, Howard L; Stewart, Charles V; Roysam, Badrinath

    2008-07-01

    Retinal clinicians and researchers make extensive use of images, and the current emphasis is on digital imaging of the retinal fundus. The goal of this paper is to introduce a system, known as the retinal image vessel extraction and registration system, which provides the community of retinal clinicians, researchers, and study directors with an integrated suite of advanced digital retinal image analysis tools over the Internet. The capabilities include vasculature tracing and morphometry, joint (simultaneous) montaging of multiple retinal fields, cross-modality registration (color/red-free fundus photographs and fluorescein angiograms), and generation of flicker animations for visualization of changes from longitudinal image sequences. Each capability has been carefully validated in our previous research work. The integrated Internet-based system can enable significant advances in retina-related clinical diagnosis, visualization of the complete fundus at full resolution from multiple low-angle views, analysis of longitudinal changes, research on the retinal vasculature, and objective, quantitative computer-assisted scoring of clinical trials imagery. It could pave the way for future screening services from optometry facilities.
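    The cross-modality registration step described above can be illustrated with a minimal landmark-based sketch: estimating a 2D similarity transform (scale, rotation, translation) from matched point pairs by linear least squares. This is a generic stand-in under assumed synthetic landmarks, not the vessel-based algorithm the paper validates.

    ```python
    import numpy as np

    def fit_similarity(src, dst):
        """Least-squares 2D similarity transform mapping src -> dst point sets.
        Model per point: x' = a*x - b*y + tx,  y' = b*x + a*y + ty."""
        n = len(src)
        A = np.zeros((2 * n, 4))
        rhs = dst.reshape(-1)                       # [x0', y0', x1', y1', ...]
        A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
        A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
        a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
        M = np.array([[a, -b], [b, a]])             # scaled rotation
        return M, np.array([tx, ty])

    # Synthetic landmarks standing in for matched vessel branch points
    # between, e.g., a fundus photograph and an angiogram frame.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 100, (6, 2))
    theta, scale, t = 0.1, 1.02, np.array([3.0, -2.0])
    R = scale * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
    warped = pts @ R.T + t

    M, trans = fit_similarity(pts, warped)
    print(np.allclose(pts @ M.T + trans, warped))   # transform recovered exactly
    ```

    With noise-free correspondences the least-squares fit recovers the transform exactly; in practice robust estimation (e.g. RANSAC over candidate matches) is layered on top.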

  9. Improving depth resolution in digital breast tomosynthesis by iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Roth, Erin G.; Kraemer, David N.; Sidky, Emil Y.; Reiser, Ingrid S.; Pan, Xiaochuan

    2015-03-01

    Digital breast tomosynthesis (DBT) is currently enjoying tremendous growth in its application to screening for breast cancer. This is because it addresses a major weakness of mammographic projection imaging: a cancer can be hidden by overlapping fibroglandular tissue structures, or the same normal structures can mimic a malignant mass. DBT addresses these issues by acquiring a few projections over a limited-angle scanning arc, which provides some depth resolution. As DBT is a relatively new modality, there is potential to improve its performance significantly with improved image reconstruction algorithms. Previously, we reported a variation of adaptive steepest descent - projection onto convex sets (ASD-POCS) for DBT, which employed a finite differencing filter to enhance edges, improving the visibility of tissue structures and allowing for volume-of-interest reconstruction. In the present work we present a singular value decomposition (SVD) analysis to demonstrate the gain in depth resolution for DBT afforded by use of the finite differencing filter.

  10. Development of a Reference Image Collection Library for Histopathology Image Processing, Analysis and Decision Support Systems Research.

    PubMed

    Kostopoulos, Spiros; Ravazoula, Panagiota; Asvestas, Pantelis; Kalatzis, Ioannis; Xenogiannopoulos, George; Cavouras, Dionisis; Glotsos, Dimitris

    2017-06-01

    Histopathology image processing, analysis and computer-aided diagnosis have been shown to be effective assistive tools towards reliable and intra-/inter-observer invariant decisions in traditional pathology. Especially for cancer patients, decisions need to be as accurate as possible in order to increase the probability of optimal treatment planning. In this study, we propose a new image collection library (H