Sample records for image analysis applied

  1. Medical Image Analysis by Cognitive Information Systems - a Review.

    PubMed

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

    This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are illustrated by examples of medical image analysis systems, and the semantic processes involved are presented as they apply to different types of medical images. Cognitive information systems were defined on the basis of methods for the semantic analysis and interpretation of information - here, medical images - aimed at the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis was proposed to analyze the meaning of the data, since meaning is contained in information such as medical images. Medical image analysis is presented and discussed as applied to various types of medical images showing selected human organs with different pathologies. These images were analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis were also defined for decision-support tasks, which is important, for example, in diagnostic and therapeutic processes and in the selection of semantic aspects/features from the analyzed data sets. Those features enable a new way of analysis.

  2. Error analysis of filtering operations in pixel-duplicated images of diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2010-08-01

    In this paper, diabetic retinopathy is chosen as a sample target image to demonstrate the effectiveness of image enlargement through pixel duplication in identifying regions of interest. Pixel duplication is presented as a simpler alternative to data interpolation techniques for detecting small structures in the images. A comparative analysis is performed on different image processing schemes applied to both original and pixel-duplicated images. Structures of interest are detected, and classification parameters are optimized for minimum false-positive detection in the original and enlarged retinal images. The error analysis demonstrates the advantages as well as shortcomings of pixel duplication in image enhancement when spatial averaging operations (smoothing filters) are also applied.

  3. New developments of X-ray fluorescence imaging techniques in laboratory

    NASA Astrophysics Data System (ADS)

    Tsuji, Kouichi; Matsuno, Tsuyoshi; Takimoto, Yuki; Yamanashi, Masaki; Kometani, Noritsugu; Sasaki, Yuji C.; Hasegawa, Takeshi; Kato, Shuichi; Yamada, Takashi; Shoji, Takashi; Kawahara, Naoki

    2015-11-01

    X-ray fluorescence (XRF) analysis is a well-established analytical technique with a long research history. Many applications have been reported in various fields, such as the environmental, archeological, biological, and forensic sciences as well as industry. This is because XRF has the unique advantage of being a nondestructive analytical tool with good precision for quantitative analysis. Recent advances in XRF analysis have been realized by the development of new x-ray optics and x-ray detectors. Advanced x-ray focusing optics enable the generation of a micro x-ray beam, leading to micro-XRF analysis and XRF imaging. A confocal micro-XRF technique has been applied for the visualization of elemental distributions inside samples. This technique was applied to liquid samples and to monitoring chemical reactions such as the corrosion of steel samples in NaCl solutions. In addition, principal component analysis was applied to reduce the background intensity in XRF spectra obtained during XRF mapping, leading to improved spatial resolution of confocal micro-XRF images. In parallel, the authors have proposed a wavelength dispersive XRF (WD-XRF) imaging spectrometer for fast elemental imaging. A new two-dimensional x-ray detector, the Pilatus detector, was applied for WD-XRF imaging. Fast XRF imaging in 1 s or even less was demonstrated for Euro coins and industrial samples. In this review paper, these recent advances in laboratory-based XRF imaging are introduced.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arimura, Hidetaka, E-mail: arimurah@med.kyushu-u.ac.jp; Kamezawa, Hidemi; Jin, Ze

    Good relationships between computational image analysis and radiological physics have been constructed for increasing the accuracy of medical diagnostic imaging and radiation therapy in radiological physics. Computational image analysis has been established based on applied mathematics, physics, and engineering. This review paper will introduce how computational image analysis is useful in radiation therapy with respect to radiological physics.

  5. Unsupervised analysis of small animal dynamic Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Spinelli, Antonello E.; Boschi, Federico

    2011-12-01

    Clustering analysis (CA) and principal component analysis (PCA) were applied to dynamic Cerenkov luminescence images (dCLI). In order to investigate the performance of the proposed approaches, two distinct dynamic data sets obtained by injecting mice with 32P-ATP and 18F-FDG were acquired using the IVIS 200 optical imager. The k-means clustering algorithm was applied to dCLI and implemented using Interactive Data Language 8.1. We show that cluster analysis allows us to obtain good agreement between the clustered regions and the corresponding emission regions such as the bladder, the liver, and the tumor. We also show a good correspondence between the time activity curves of the different regions obtained by using CA and manual region of interest analysis on dCLI and PCA images. We conclude that CA provides an automatic unsupervised method for the analysis of preclinical dynamic Cerenkov luminescence image data.
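
    As a rough illustration of the unsupervised analysis described above, the sketch below clusters per-pixel time-activity curves from a dynamic image stack with k-means; the array names, normalization, and number of clusters are illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch: k-means clustering of per-pixel time-activity curves from a
# dynamic luminescence image stack (shape: frames x height x width).
import numpy as np
from sklearn.cluster import KMeans

def cluster_dynamic_stack(frames: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    t, h, w = frames.shape
    curves = frames.reshape(t, h * w).T                               # one time-activity curve per pixel
    curves = curves / (curves.max(axis=1, keepdims=True) + 1e-12)     # normalize curve shape, not amplitude
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(curves)
    return labels.reshape(h, w)                                       # cluster map aligned with the image grid
```

    Averaging the curves within each cluster then gives a time-activity curve per region, analogous to the manual region-of-interest curves mentioned in the abstract.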

  6. Computer assisted analysis of auroral images obtained from high altitude polar satellites

    NASA Technical Reports Server (NTRS)

    Samadani, Ramin; Flynn, Michael

    1993-01-01

    Automatic techniques that allow the extraction of physically significant parameters from auroral images were developed. This allows the processing of a much larger number of images than is currently possible with manual techniques. Our techniques were applied to diverse auroral image datasets. These results were made available to geophysicists at NASA and at universities in the form of a software system that performs the analysis. After some feedback from users, an upgraded system was transferred to NASA and to two universities. The feasibility of user-trained search and retrieval of large amounts of data using our automatically derived parameter indices was demonstrated. Techniques based on classification and regression trees (CART) were developed and applied to broaden the types of images to which the automated search and retrieval may be applied. Our techniques were tested with DE-1 auroral images.

  7. Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1985-01-01

    Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.

  8. Application of principal component analysis for improvement of X-ray fluorescence images obtained by polycapillary-based micro-XRF technique

    NASA Astrophysics Data System (ADS)

    Aida, S.; Matsuno, T.; Hasegawa, T.; Tsuji, K.

    2017-07-01

    Micro X-ray fluorescence (micro-XRF) analysis, performed repeatedly across a sample, is a means of producing elemental maps. In some cases, however, the XRF images of trace elements that are obtained are not clear due to high background intensity. To solve this problem, we applied principal component analysis (PCA) to XRF spectra. We focused on improving the quality of XRF images by applying PCA. XRF images of the dried residue of a standard solution on a glass substrate were taken. The XRF intensities for the dried residue were analyzed before and after PCA. Standard deviations of XRF intensities in the PCA-filtered images were improved, leading to clear contrast of the images. This improvement of the XRF images was effective in cases where the XRF intensity was weak.
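
    The sketch below shows, under stated assumptions, the general idea of PCA filtering of per-pixel XRF spectra: project the map spectra onto a few dominant components and reconstruct, suppressing background variance before forming elemental images. The cube layout and the number of retained components are illustrative.

```python
# Minimal sketch: PCA filtering of per-pixel XRF spectra to suppress background
# before forming an elemental map.
import numpy as np
from sklearn.decomposition import PCA

def pca_filter_xrf(cube: np.ndarray, n_components: int = 5) -> np.ndarray:
    """cube: (rows, cols, channels) XRF spectrum image; returns a denoised cube."""
    r, c, ch = cube.shape
    spectra = cube.reshape(-1, ch)
    pca = PCA(n_components=n_components).fit(spectra)
    denoised = pca.inverse_transform(pca.transform(spectra))   # keep only the dominant components
    return denoised.reshape(r, c, ch)
```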

  9. Network analysis of mesoscale optical recordings to assess regional, functional connectivity.

    PubMed

    Lim, Diana H; LeDue, Jeffrey M; Murphy, Timothy H

    2015-10-01

    With modern optical imaging methods, it is possible to map structural and functional connectivity. Optical imaging studies that aim to describe large-scale neural connectivity often need to handle large and complex datasets. In order to interpret these datasets, new methods for analyzing structural and functional connectivity are being developed. Recently, network analysis, based on graph theory, has been used to describe and quantify brain connectivity in both experimental and clinical studies. We outline how to apply regional, functional network analysis to mesoscale optical imaging using voltage-sensitive-dye imaging and channelrhodopsin-2 stimulation in a mouse model. We include links to sample datasets and an analysis script. The analyses we employ can be applied to other types of fluorescence wide-field imaging, including genetically encoded calcium indicators, to assess network properties. We discuss the benefits and limitations of using network analysis for interpreting optical imaging data and define network properties that may be used to compare across preparations or other manipulations such as animal models of disease.
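
    A minimal sketch of the graph-theoretic step, assuming regional time series have already been extracted from the imaging data: build a correlation matrix, threshold it into an adjacency matrix, and compute a few network properties. The threshold and metric choices are illustrative.

```python
# Minimal sketch: regional functional-network analysis from mesoscale imaging time series.
import numpy as np
import networkx as nx

def functional_network(region_ts: np.ndarray, threshold: float = 0.6) -> dict:
    """region_ts: (regions x timepoints) array of regional signals."""
    corr = np.corrcoef(region_ts)                      # region-by-region correlation matrix
    adj = (np.abs(corr) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    return {
        "mean_degree": float(np.mean([d for _, d in g.degree()])),
        "clustering": nx.average_clustering(g),
        "density": nx.density(g),
    }
```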

  10. Automated three-dimensional quantification of myocardial perfusion and brain SPECT.

    PubMed

    Slomka, P J; Radau, P; Hurwitz, G A; Dey, D

    2001-01-01

    To allow automated and objective reading of nuclear medicine tomography, we have developed a set of tools for clinical analysis of myocardial perfusion tomography (PERFIT) and Brain SPECT/PET (BRASS). We exploit algorithms for image registration and use three-dimensional (3D) "normal models" for individual patient comparisons to composite datasets on a "voxel-by-voxel basis" in order to automatically determine the statistically significant abnormalities. A multistage, 3D iterative inter-subject registration of patient images to normal templates is applied, including automated masking of the external activity before final fit. In separate projects, the software has been applied to the analysis of myocardial perfusion SPECT, as well as brain SPECT and PET data. Automatic reading was consistent with visual analysis; it can be applied to the whole spectrum of clinical images, and aid physicians in the daily interpretation of tomographic nuclear medicine images.
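
    A minimal sketch of the voxel-by-voxel comparison idea, assuming the patient volume has already been registered to the normal template: compute a z-score map against the composite mean and standard deviation volumes and flag statistically abnormal voxels. The threshold is an illustrative assumption.

```python
# Minimal sketch: voxel-wise comparison of a registered patient volume against
# a composite normal database (mean and standard deviation volumes).
import numpy as np

def abnormality_zmap(patient: np.ndarray, normal_mean: np.ndarray,
                     normal_std: np.ndarray, z_thresh: float = 2.5):
    z = (patient - normal_mean) / (normal_std + 1e-6)
    defect_mask = z < -z_thresh          # e.g. uptake below the normal range
    return z, defect_mask
```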

  11. Research of second harmonic generation images based on texture analysis

    NASA Astrophysics Data System (ADS)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

    Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, as a potential noninvasive tool for imaging biological tissues, has been widely used in medicine, with reduced phototoxicity and photobleaching. In this paper, we clarified the principles of texture analysis, including statistical, transform, structural and model-based methods, and gave examples of its applications, reviewing studies of the technique. Moreover, we applied texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary pattern (LBP) and wavelet transform was used to extract texture features of SHG images from collagen in normal and abnormal scars, and the scar SHG images were then classified as normal or abnormal. Compared with other texture analysis methods with respect to receiver operating characteristic analysis, LBP combined with wavelet transform was demonstrated to achieve higher accuracy. It can provide a new way for clinical diagnosis of scar types. Finally, future developments of texture analysis of SHG images are discussed.
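
    A minimal sketch of the feature-extraction step, assuming a 2D SHG intensity image: combine a local binary pattern histogram with wavelet-subband energies into one texture descriptor. The LBP radius, wavelet, and decomposition level are illustrative, not the study's tuned parameters.

```python
# Minimal sketch: LBP histogram + wavelet-subband energies as a texture descriptor.
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def shg_texture_features(img: np.ndarray, radius: int = 3, wavelet: str = "db2") -> np.ndarray:
    lbp = local_binary_pattern(img, P=8 * radius, R=radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=int(lbp.max()) + 1, density=True)      # LBP code histogram
    coeffs = pywt.wavedec2(img, wavelet, level=2)
    energies = [np.mean(np.square(c)) for level in coeffs[1:] for c in level]  # subband energies
    return np.concatenate([hist, energies])
```

    The resulting feature vectors can be fed to any standard classifier to separate normal from abnormal scar images.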

  12. Scanning electron microscopy combined with image processing technique: Analysis of microstructure, texture and tenderness in Semitendinous and Gluteus Medius bovine muscles.

    PubMed

    Pieniazek, Facundo; Messina, Valeria

    2016-11-01

    In this study the effect of freeze drying on the microstructure, texture, and tenderness of Semitendinous and Gluteus Medius bovine muscles was analyzed by applying scanning electron microscopy combined with image analysis. Samples were analyzed by scanning electron microscopy at different magnifications (250, 500, and 1,000×). Texture parameters were analyzed with a texture analyzer and by image analysis, and tenderness by Warner-Bratzler shear force. Significant differences (p < 0.05) were obtained for image and instrumental texture features. A linear trend with a linear correlation was found between instrumental and image features. Image texture features calculated from the Gray Level Co-occurrence Matrix (homogeneity, contrast, entropy, correlation and energy) at 1,000× in both muscles had high correlations with instrumental features (chewiness, hardness, cohesiveness, and springiness). Tenderness showed a positive correlation in both muscles with image features (energy and homogeneity). Combining scanning electron microscopy with image analysis can be a useful tool to analyze quality parameters in meat. SCANNING 38:727-734, 2016. © 2016 Wiley Periodicals, Inc.
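
    A minimal sketch of the Gray Level Co-occurrence Matrix features mentioned above (homogeneity, contrast, entropy, correlation, and energy), assuming an 8-bit grayscale micrograph; the distance and angle choices are illustrative.

```python
# Minimal sketch: GLCM texture features from an 8-bit grayscale image.
# Requires scikit-image >= 0.19 (older releases name these greycomatrix/greycoprops).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img_u8: np.ndarray) -> dict:
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))        # entropy is not in graycoprops
    return {
        "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
        "contrast": graycoprops(glcm, "contrast")[0, 0],
        "correlation": graycoprops(glcm, "correlation")[0, 0],
        "energy": graycoprops(glcm, "energy")[0, 0],
        "entropy": entropy,
    }
```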

  13. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2009-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image ... analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast

  14. Snapshot hyperspectral imaging probe with principal component analysis and confidence ellipse for classification

    NASA Astrophysics Data System (ADS)

    Lim, Hoong-Ta; Murukeshan, Vadakke Matham

    2017-06-01

    Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in the image. This gives a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed so that they can be used in regions where conventional table-top platforms would find it difficult to access. A fiber bundle, which is made up of specially-arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube using the information from each scan. Compared to the other configurations, which require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications where motion artifacts and pixel misregistration can be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and for classification. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. Confidence ellipse was then applied to the principal components of each sample and used as the classification criteria. The results show that the applied analysis can be used to perform classification of the spectral data acquired using the snapshot hyperspectral imaging probe.
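
    A minimal sketch of the classification rule described above: reduce the spectra to two principal components, fit a per-class confidence ellipse (mean and covariance), and assign a new spectrum to a class only if it falls inside that class's ellipse. Data shapes and the two-component choice are assumptions for illustration.

```python
# Minimal sketch: PCA + confidence-ellipse classification in the PC1/PC2 plane.
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA

def fit_class_ellipses(spectra: np.ndarray, labels: np.ndarray):
    """spectra: (samples x bands); labels: (samples,) class labels."""
    pca = PCA(n_components=2).fit(spectra)
    scores = pca.transform(spectra)
    ellipses = {}
    for c in np.unique(labels):
        pts = scores[labels == c]
        ellipses[c] = (pts.mean(axis=0), np.linalg.inv(np.cov(pts.T)))  # mean and inverse covariance
    return pca, ellipses

def classify(spectrum: np.ndarray, pca, ellipses, conf: float = 0.95):
    s = pca.transform(spectrum.reshape(1, -1))[0]
    limit = chi2.ppf(conf, df=2)                    # squared Mahalanobis radius of the ellipse
    dists = {c: float((s - m) @ icov @ (s - m)) for c, (m, icov) in ellipses.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] <= limit else None   # outside all ellipses -> unclassified
```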

  15. Structural Image Analysis of the Brain in Neuropsychology Using Magnetic Resonance Imaging (MRI) Techniques.

    PubMed

    Bigler, Erin D

    2015-09-01

    Magnetic resonance imaging (MRI) of the brain provides exceptional image quality for visualization and neuroanatomical classification of brain structure. A variety of image analysis techniques provide both qualitative as well as quantitative methods to relate brain structure with neuropsychological outcome and are reviewed herein. Of particular importance are more automated methods that permit analysis of a broad spectrum of anatomical measures including volume, thickness and shape. The challenge for neuropsychology is which metric to use, for which disorder and the timing of when image analysis methods are applied to assess brain structure and pathology. A basic overview is provided as to the anatomical and pathoanatomical relations of different MRI sequences in assessing normal and abnormal findings. Some interpretive guidelines are offered including factors related to similarity and symmetry of typical brain development along with size-normalcy features of brain anatomy related to function. The review concludes with a detailed example of various quantitative techniques applied to analyzing brain structure for neuropsychological outcome studies in traumatic brain injury.

  16. An explorative chemometric approach applied to hyperspectral images for the study of illuminated manuscripts

    NASA Astrophysics Data System (ADS)

    Catelli, Emilio; Randeberg, Lise Lyngsnes; Alsberg, Bjørn Kåre; Gebremariam, Kidane Fanta; Bracci, Silvano

    2017-04-01

    Hyperspectral imaging (HSI) is a fast non-invasive imaging technology recently applied in the field of art conservation. With the help of chemometrics, important information about the spectral properties and spatial distribution of pigments can be extracted from HSI data. With the intent of expanding the applications of chemometrics to the interpretation of hyperspectral images of historical documents, and, at the same time, to study the colorants and their spatial distribution on ancient illuminated manuscripts, an explorative chemometric approach is here presented. The method makes use of chemometric tools for spectral de-noising (minimum noise fraction (MNF)) and image analysis (multivariate image analysis (MIA) and iterative key set factor analysis (IKSFA)/spectral angle mapper (SAM)) which have given an efficient separation, classification and mapping of colorants from visible-near-infrared (VNIR) hyperspectral images of an ancient illuminated fragment. The identification of colorants was achieved by extracting and interpreting the VNIR spectra as well as by using a portable X-ray fluorescence (XRF) spectrometer.
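
    A minimal sketch of the spectral angle mapper (SAM) step, assuming a calibrated reflectance cube and a small set of reference colorant spectra; the angle threshold is an illustrative assumption. The MNF de-noising and IKSFA endmember selection steps are not shown.

```python
# Minimal sketch: spectral angle mapper (SAM) classification of a VNIR cube
# against reference (endmember) spectra.
import numpy as np

def sam_map(cube: np.ndarray, references: np.ndarray, max_angle: float = 0.15) -> np.ndarray:
    """cube: (rows, cols, bands); references: (n_classes, bands). Returns class map (-1 = no match)."""
    r, c, b = cube.shape
    pix = cube.reshape(-1, b)
    pix_n = pix / (np.linalg.norm(pix, axis=1, keepdims=True) + 1e-12)
    ref_n = references / (np.linalg.norm(references, axis=1, keepdims=True) + 1e-12)
    angles = np.arccos(np.clip(pix_n @ ref_n.T, -1.0, 1.0))   # spectral angle to each reference
    best = angles.argmin(axis=1)
    best[angles.min(axis=1) > max_angle] = -1
    return best.reshape(r, c)
```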

  17. Analysis of autostereoscopic three-dimensional images using multiview wavelets.

    PubMed

    Saveljev, Vladimir; Palchikova, Irina

    2016-08-10

    We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images.

  18. Quantification of applied dose in irradiated citrus fruits by DNA Comet Assay together with image analysis.

    PubMed

    Cetinkaya, Nurcan; Ercin, Demet; Özvatan, Sümer; Erel, Yakup

    2016-02-01

    The experiments were conducted for quantification of the applied dose for quarantine control in irradiated citrus fruits. Citrus fruits were exposed to doses of 0.1 to 1.5 kGy and analyzed by DNA Comet Assay. Observed comets were evaluated by image analysis. The tail length, tail moment and tail DNA% of the comets were used for their interpretation. Irradiated citrus fruits showed tails separated from the comet head with increasing applied doses from 0.1 to 1.5 kGy. The mean tail length and mean tail moment% levels of irradiated citrus fruits at all doses are significantly different (p < 0.01) from the control, even for the lowest dose of 0.1 kGy. Thus, DNA Comet Assay may be a practical quarantine control method for irradiated citrus fruits, since it was possible to estimate applied doses as low as 0.1 kGy when combined with image analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
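
    A simplified sketch of wavelet-domain fusion of two registered source images: average the low-frequency (approximation) band and keep the larger-magnitude high-frequency (detail) coefficients. The paper's lifting transform, matrix completion, RPCA, and regional-variance steps are not reproduced here; the wavelet and level are illustrative assumptions.

```python
# Simplified sketch: wavelet-domain fusion of two registered images
# (average the approximation band, max-abs rule for detail bands).
import numpy as np
import pywt

def fuse_wavelet(img_a: np.ndarray, img_b: np.ndarray,
                 wavelet: str = "haar", level: int = 2) -> np.ndarray:
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                                    # low-frequency: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)     # high-frequency: max-abs
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)
```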

  20. Optical Fourier diffractometry applied to degraded bone structure recognition

    NASA Astrophysics Data System (ADS)

    Galas, Jacek; Godwod, Krzysztof; Szawdyn, Jacek; Sawicki, Andrzej

    1993-09-01

    Image processing and recognition methods are useful in many fields. This paper presents a hybrid optical and digital method applied to the recognition of pathological changes in bones caused by metabolic bone diseases. The trabecular bone structure, registered by x-ray on photographic film, is analyzed in a new type of computer-controlled diffractometer. The set of image parameters extracted from the diffractogram is evaluated by statistical analysis. The synthetic image descriptors in discriminant space, constructed on the basis of three training groups of images (control, osteoporosis, and osteomalacia) by discriminant analysis, allow us to recognize bone samples with degraded bone structure and to identify the disease. About 89% of the images were classified correctly. After an optimization process, this method will be verified in medical investigations.

  1. An open data mining framework for the analysis of medical images: application on obstructive nephropathy microscopy images.

    PubMed

    Doukas, Charalampos; Goudas, Theodosis; Fischer, Simon; Mierswa, Ingo; Chatziioannou, Aristotle; Maglogiannis, Ilias

    2010-01-01

    This paper presents an open image-mining framework that provides access to tools and methods for the characterization of medical images. Several image processing and feature extraction operators have been implemented and exposed through Web Services. Rapid-Miner, an open-source data mining system, has been utilized for applying classification operators and creating the essential processing workflows. The proposed framework has been applied for the detection of salient objects in obstructive nephropathy microscopy images. Initial classification results are quite promising, demonstrating the feasibility of automated characterization of kidney biopsy images.

  2. Rotation covariant image processing for biomedical applications.

    PubMed

    Skibbe, Henrik; Reisert, Marco

    2013-01-01

    With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands an automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from medical and biological sciences.

  3. TOF-SIMS imaging technique with information entropy

    NASA Astrophysics Data System (ADS)

    Aoyagi, Satoka; Kawashima, Y.; Kudo, Masahiro

    2005-05-01

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is, in principle, capable of chemical imaging of proteins on insulating samples. However, selection of the specific peaks related to a particular protein, which are necessary for chemical imaging, out of numerous candidates had been difficult without an appropriate spectrum analysis technique. Therefore, multivariate analysis techniques, such as principal component analysis (PCA), and analysis with mutual information defined by information theory have been applied to interpret SIMS spectra of protein samples. In this study, mutual information was applied to select specific peaks related to proteins in order to obtain chemical images. Proteins on insulating materials were measured with TOF-SIMS, and the SIMS spectra were then analyzed by means of a comparison method based on mutual information. Chemical mapping of each protein was obtained using the specific peaks related to that protein, selected on the basis of mutual information values. The resulting TOF-SIMS images of proteins on the materials provide useful information on the properties of protein adsorption, the optimality of immobilization processes, and reactions between proteins. Thus, TOF-SIMS chemical images of proteins help in understanding interactions between material surfaces and proteins and in developing sophisticated biomaterials.

  4. Mapping and monitoring changes in vegetation communities of Jasper Ridge, CA, using spectral fractions derived from AVIRIS images

    NASA Technical Reports Server (NTRS)

    Sabol, Donald E., Jr.; Roberts, Dar A.; Adams, John B.; Smith, Milton O.

    1993-01-01

    An important application of remote sensing is to map and monitor changes over large areas of the land surface. This is particularly significant with the current interest in monitoring vegetation communities. Most traditional methods for mapping different types of plant communities are based upon statistical classification techniques (i.e., parallelepiped, nearest-neighbor, etc.) applied to uncalibrated multispectral data. Classes from these techniques are typically difficult to interpret (particularly to a field ecologist/botanist). Also, classes derived for one image can be very different from those derived from another image of the same area, making interpretation of observed temporal changes nearly impossible. More recently, neural networks have been applied to classification. Neural network classification, based upon spectral matching, is weak in dealing with spectral mixtures (a condition prevalent in images of natural surfaces). Another approach to mapping vegetation communities is based on spectral mixture analysis, which can provide a consistent framework for image interpretation. Roberts et al. (1990) mapped vegetation using the band residuals from a simple mixing model (the same spectral endmembers applied to all image pixels). Sabol et al. (1992b) and Roberts et al. (1992) used different methods to apply the most appropriate spectral endmembers to each image pixel, thereby allowing mapping of vegetation based upon the different endmember spectra. In this paper, we describe a new approach to classification of vegetation communities based upon the spectral fractions derived from spectral mixture analysis. This approach was applied to three 1992 AVIRIS images of Jasper Ridge, California to observe seasonal changes in surface composition.
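
    A minimal sketch of the underlying spectral mixture analysis step, assuming a reflectance cube and a fixed endmember matrix: solve a per-pixel nonnegative least-squares unmixing to obtain the spectral fractions used for classification. The per-pixel endmember selection described above is not shown, and fractions are not constrained to sum to one.

```python
# Minimal sketch: per-pixel linear spectral unmixing into endmember fractions.
import numpy as np
from scipy.optimize import nnls

def unmix(cube: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """cube: (rows, cols, bands); endmembers: (bands, n_endmembers) -> (rows, cols, n_endmembers)."""
    r, c, b = cube.shape
    fractions = np.zeros((r * c, endmembers.shape[1]))
    for i, spectrum in enumerate(cube.reshape(-1, b)):
        fractions[i], _ = nnls(endmembers, spectrum)      # nonnegative fractions per pixel
    return fractions.reshape(r, c, -1)
```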

  5. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    PubMed

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker of the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate the clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from background by using local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to identify isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
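
    A minimal sketch of the thresholding-and-watershed portion of such a pipeline, assuming a grayscale nuclei image: Otsu foreground extraction, distance transform, marker detection, and watershed splitting of touching nuclei. The Bayesian-network classification of isolated versus clustered nuclei is omitted, and the minimum peak distance is an illustrative assumption.

```python
# Minimal sketch: Otsu thresholding + distance-transform watershed for touching nuclei.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(img: np.ndarray, min_dist: int = 10) -> np.ndarray:
    mask = img > threshold_otsu(img)                       # foreground (nuclei) mask
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=min_dist, labels=mask)
    markers = np.zeros(img.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1) # one marker per candidate nucleus
    return watershed(-distance, markers, mask=mask)        # labeled image, one label per nucleus
```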

  6. Relevant Scatterers Characterization in SAR Images

    NASA Astrophysics Data System (ADS)

    Chaabouni, Houda; Datcu, Mihai

    2006-11-01

    Recognizing scenes in single-look, meter-resolution Synthetic Aperture Radar (SAR) images requires the capability to identify relevant signal signatures under conditions of variable image acquisition geometry and arbitrary object poses and configurations. Among the methods to detect relevant scatterers in SAR images, we can mention the internal coherence. The SAR spectrum split in azimuth generates a series of images which preserve high coherence only for particular object scattering. The detection of relevant scatterers can be done by correlation study or Independent Component Analysis (ICA) methods. The present article deals with the state of the art for SAR internal correlation analysis and proposes further extensions using elements of inference based on information theory applied to complex-valued signals. The set of azimuth looks images is analyzed using mutual information measures and an equivalent channel capacity is derived. The localization of the "target" requires analysis in a small image window, thus resulting in imprecise estimation of the second-order statistics of the signal. For better precision, a Hausdorff measure is introduced. The method is applied to detect and characterize relevant objects in urban areas.

  7. Advanced imaging techniques in brain tumors

    PubMed Central

    2009-01-01

    Perfusion, permeability and magnetic resonance spectroscopy (MRS) are now widely used in the research and clinical settings. In the clinical setting, qualitative, semi-quantitative and quantitative approaches, ranging from review of color-coded maps to region-of-interest analysis and analysis of signal intensity curves, are being applied in practice. There are several pitfalls with all of these approaches. Some of these shortcomings are reviewed, such as the relatively low sensitivity of metabolite ratios from MRS and the effect of leakage on the appearance of color-coded maps from dynamic susceptibility contrast (DSC) magnetic resonance (MR) perfusion imaging, and what correction and normalization methods can be applied. Combining and applying these different imaging techniques in a multi-parametric algorithmic fashion in the clinical setting can be shown to increase diagnostic specificity and confidence. PMID:19965287

  8. Analysis of airborne MAIS imaging spectrometric data for mineral exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Jinnian; Zheng Lanfen; Tong Qingxi

    1996-11-01

    The high spectral resolution imaging spectrometric system made quantitative analysis and mapping of surface composition possible. The key issue will be the quantitative approach for analysis of surface parameters from imaging spectrometer data. This paper describes the methods and the stages of quantitative analysis. (1) Extracting surface reflectance from the imaging spectrometer image. Laboratory and in-flight field measurements are conducted for calibration of imaging spectrometer data, and atmospheric correction has also been used to obtain ground reflectance by using the empirical line method and radiative transfer modeling. (2) Determining the quantitative relationship between absorption band parameters from the imaging spectrometer data and the chemical composition of minerals. (3) Spectral comparison between the spectra of a spectral library and the spectra derived from the imagery. A wavelet-analysis-based spectrum-matching technique for quantitative analysis of imaging spectrometer data has been developed. Airborne MAIS imaging spectrometer data were used for analysis, and the analysis results have been applied to mineral and petroleum exploration in the Tarim Basin area, China. 8 refs., 8 figs.

  9. Effective use of principal component analysis with high resolution remote sensing data to delineate hydrothermal alteration and carbonate rocks

    NASA Technical Reports Server (NTRS)

    Feldman, Sandra C.

    1987-01-01

    Methods of applying principal component (PC) analysis to high resolution remote sensing imagery were examined. Using Airborne Imaging Spectrometer (AIS) data, PC analysis was found to be useful for removing the effects of albedo and noise and for isolating the significant information on argillic alteration, zeolite, and carbonate minerals. An effective technique for PC analysis used as input the first 16 AIS bands, 7 intermediate bands, and the last 16 AIS bands from the 32 flat-field-corrected bands between 2048 and 2337 nm. Most of the significant mineralogical information resided in the second PC. PC color composites and density-sliced images provided a good mineralogical separation when applied to an AIS data set. Although computationally intensive, the advantage of PC analysis is that it employs algorithms which already exist on most image processing systems.

  10. Rotation Covariant Image Processing for Biomedical Applications

    PubMed Central

    Reisert, Marco

    2013-01-01

    With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands an automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on mathematical concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from medical and biological sciences. PMID:23710255

  11. Political leaders and the media. Can we measure political leadership images in newspapers using computer-assisted content analysis?

    PubMed

    Aaldering, Loes; Vliegenthart, Rens

    Despite the large amount of research into both media coverage of politics as well as political leadership, surprisingly little research has been devoted to the ways political leaders are discussed in the media. This paper studies whether computer-aided content analysis can be applied in examining political leadership images in Dutch newspaper articles. First, it provides a conceptualization of political leader character traits that integrates different perspectives in the literature. Moreover, this paper measures twelve political leadership images in media coverage, based on a large-scale computer-assisted content analysis of Dutch media coverage (including almost 150,000 newspaper articles), and systematically tests the quality of the employed measurement instrument by assessing the relationship between the images, the variance in the measurement, the over-time development of images for two party leaders and by comparing the computer results with manual coding. We conclude that the computerized content analysis provides a valid measurement for the leadership images in Dutch newspapers. Moreover, we find that the dimensions political craftsmanship, vigorousness, integrity, communicative performances and consistency are regularly applied in discussing party leaders, but that portrayal of party leaders in terms of responsiveness is almost completely absent in Dutch newspapers.

  12. MRI-compatible pipeline for three-dimensional MALDI imaging mass spectrometry using PAXgene fixation.

    PubMed

    Oetjen, Janina; Aichler, Michaela; Trede, Dennis; Strehlow, Jan; Berger, Judith; Heldmann, Stefan; Becker, Michael; Gottschalk, Michael; Kobarg, Jan Hendrik; Wirtz, Stefan; Schiffler, Stefan; Thiele, Herbert; Walch, Axel; Maass, Peter; Alexandrov, Theodore

    2013-09-02

    MALDI imaging mass spectrometry (MALDI-imaging) has emerged as a spatially-resolved label-free bioanalytical technique for direct analysis of biological samples and was recently introduced for analysis of 3D tissue specimens. We present a new experimental and computational pipeline for molecular analysis of tissue specimens which integrates 3D MALDI-imaging, magnetic resonance imaging (MRI), and histological staining and microscopy, and evaluate the pipeline by applying it to analysis of a mouse kidney. To ensure sample integrity and reproducible sectioning, we utilized PAXgene fixation and paraffin embedding and proved their compatibility with MRI. Altogether, 122 serial sections of the kidney were analyzed using MALDI-imaging, resulting in a 3D dataset of 200 GB comprising 2 million spectra. We show that elastic image registration better compensates for local distortions of tissue sections. The computational analysis of 3D MALDI-imaging data was performed using our spatial segmentation pipeline which determines regions of distinct molecular composition and finds m/z-values co-localized with these regions. To facilitate interpretation of the 3D distribution of ions, we evaluated isosurfaces, which provide simplified visualization. We present the data in a multimodal fashion combining 3D MALDI-imaging with the MRI volume rendering and with light microscopic images of histologically stained sections. Our novel experimental and computational pipeline for 3D MALDI-imaging can be applied to address clinical questions such as proteomic analysis of tumor morphologic heterogeneity. Examining the protein distribution as well as the drug distribution throughout an entire tumor using our pipeline will facilitate understanding of the molecular mechanisms of carcinogenesis. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Video image processing to create a speed sensor

    DOT National Transportation Integrated Search

    1999-11-01

    Image processing has been applied to traffic analysis in recent years, with different goals. In the report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  14. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    PubMed

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. White blood cell counting analysis of blood smear images using various segmentation strategies

    NASA Astrophysics Data System (ADS)

    Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza

    2017-09-01

    In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC count. Such information is widely used to evaluate the effectiveness of cancer therapy and to diagnose several hidden infections within the human body. The current practice of manual WBC counting is laborious and highly subjective, which has led to the development of computer-aided systems (CAS) with rigorous image processing solutions. In CAS counting, segmentation is the crucial step for ensuring the accuracy of the cell count. The optimal segmentation strategy that can work under various blood smear image acquisition conditions remains a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis to get the best counting outcome is elaborated. Initially, color space correction is applied to the original blood smear image to standardize the image color intensity level. Next, white blood cell segmentation is performed by using a combination of several color space subtractions (RGB, CMYK and HSV) and Otsu thresholding. Noise and unwanted regions present after the segmentation process are eliminated by applying a combination of morphological and Connected Component Labelling (CCL) filters. Finally, the Circle Hough Transform (CHT) method is applied to the segmented image to estimate the number of WBCs, including those within clumped regions. From the experiment, it is found that G-S yields the best performance.
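
    A minimal sketch of one such segmentation strategy, assuming a BGR blood smear image: saturation-channel Otsu thresholding, morphological cleanup, connected-component labelling, and a Circle Hough Transform estimate of the count. Kernel sizes and Hough parameters are illustrative assumptions, not the study's tuned values.

```python
# Minimal sketch: threshold + morphology + connected components + Hough circles
# to estimate a WBC count from a stained smear image.
import cv2
import numpy as np

def count_wbc(bgr: np.ndarray) -> int:
    s = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 1]            # WBC nuclei are strongly saturated
    _, mask = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    n_labels, _ = cv2.connectedComponents(mask)                   # rough count of isolated cells
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=50, param2=15, minRadius=8, maxRadius=40)
    return circles.shape[1] if circles is not None else n_labels - 1
```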

  16. A theoretical-experimental methodology for assessing the sensitivity of biomedical spectral imaging platforms, assays, and analysis methods.

    PubMed

    Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C

    2018-01-01

    Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Noise distribution and denoising of current density images

    PubMed Central

    Beheshti, Mohammadali; Foomany, Farbod H.; Magtibay, Karl; Jaffray, David A.; Krishnan, Sridhar; Nanthakumar, Kumaraswamy; Umapathy, Karthikeyan

    2015-01-01

    Current density imaging (CDI) is a magnetic resonance (MR) imaging technique that could be used to study current pathways inside the tissue. The current distribution is measured indirectly as phase changes. The inherent noise in the MR imaging technique degrades the accuracy of phase measurements leading to imprecise current variations. The outcome can be affected significantly, especially at a low signal-to-noise ratio (SNR). We have shown the residual noise distribution of the phase to be Gaussian-like and the noise in CDI images approximated as a Gaussian. This finding matches experimental results. We further investigated this finding by performing comparative analysis with denoising techniques, using two CDI datasets with two different currents (20 and 45 mA). We found that the block-matching and three-dimensional (BM3D) technique outperforms other techniques when applied on current density (J). The minimum gain in noise power by BM3D applied to J compared with the next best technique in the analysis was found to be around 2 dB per pixel. We characterize the noise profile in CDI images and provide insights on the performance of different denoising techniques when applied at two different stages of current density reconstruction. PMID:26158100

  18. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer's disease.

    PubMed

    Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong

    2016-07-01

    Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  19. Multispectral medical image fusion in Contourlet domain for computer based diagnosis of Alzheimer’s disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhateja, Vikrant, E-mail: bhateja.vikrant@gmail.com, E-mail: nhuongld@hus.edu.vn; Moin, Aisha; Srivastava, Anuja

    Computer based diagnosis of Alzheimer’s disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).

  20. Blind image quality assessment via probabilistic latent semantic analysis.

    PubMed

    Yang, Xichen; Sun, Quansen; Wang, Tianshu

    2016-01-01

    We propose a blind image quality assessment that is highly unsupervised and training free. The new method is based on the hypothesis that the effect caused by distortion can be expressed by certain latent characteristics. Combined with probabilistic latent semantic analysis, the latent characteristics can be discovered by applying a topic model over a visual word dictionary. Four distortion-affected features are extracted to form the visual words in the dictionary: (1) the block-based local histogram; (2) the block-based local mean value; (3) the mean value of contrast within a block; (4) the variance of contrast within a block. Based on the dictionary, the latent topics in the images can be discovered. The discrepancy between the frequency of the topics in an unfamiliar image and a large number of pristine images is applied to measure the image quality. Experimental results for four open databases show that the newly proposed method correlates well with human subjective judgments of diversely distorted images.

  1. A framework for joint image-and-shape analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Tannenbaum, Allen; Bouix, Sylvain

    2014-03-01

    Techniques in medical image analysis are often used for comparison or regression on image intensities. In general, the domain of the image is a given Cartesian grid. Shape analysis, on the other hand, studies the similarities and differences among spatial objects of arbitrary geometry and topology. Usually, there is no function defined on the domain of shapes. Recently, there has been a growing need for defining and analyzing functions defined on the shape space, and for a coupled analysis of both the shapes and the functions defined on them. Following this direction, in this work we present a coupled analysis for both images and shapes. As a result, statistically significant discrepancies in both the image intensities and the underlying shapes are detected. The method is applied to brain images of schizophrenia patients and heart images of atrial fibrillation patients.

  2. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  3. Dental application of novel finite element analysis software for three-dimensional finite element modeling of a dentulous mandible from its computed tomography images.

    PubMed

    Nakamura, Keiko; Tajima, Kiyoshi; Chen, Ker-Kong; Nagamatsu, Yuki; Kakigawa, Hiroshi; Masumi, Shin-ich

    2013-12-01

    This study focused on the application of novel finite-element analysis software for constructing a finite-element model from the computed tomography data of a human dentulous mandible. The finite-element model is necessary for evaluating the mechanical response of the alveolar part of the mandible, resulting from occlusal force applied to the teeth during biting. Commercially available patient-specific general computed tomography-based finite-element analysis software was solely applied to the finite-element analysis for the extraction of computed tomography data. The mandibular bone with teeth was extracted from the original images. Both the enamel and the dentin were extracted after image processing, and the periodontal ligament was created from the segmented dentin. The constructed finite-element model was reasonably accurate using a total of 234,644 nodes and 1,268,784 tetrahedral and 40,665 shell elements. The elastic moduli of the heterogeneous mandibular bone were determined from the bone density data of the computed tomography images. The results suggested that the software applied in this study is both useful and powerful for creating a more accurate three-dimensional finite-element model of a dentulous mandible from the computed tomography data without the need for any other software.

  4. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction.

  5. Hyperspectral imaging and multivariate analysis in the dried blood spots investigations

    NASA Astrophysics Data System (ADS)

    Majda, Alicja; Wietecha-Posłuszny, Renata; Mendys, Agata; Wójtowicz, Anna; Łydżba-Kopczyńska, Barbara

    2018-04-01

    The aim of this study was to apply a new methodology combining hyperspectral imaging with dried blood spot (DBS) collection. Hyperspectral imaging is fast and non-destructive, and the DBS method offers the additional advantages of minimally invasive blood collection and a low required sample volume. During the experimental step, the reflected light was recorded by two hyperspectral systems, collecting 776 spectral bands in the VIS-NIR range (400-1000 nm) and 256 spectral bands in the SWIR range (970-2500 nm). The pixel size was 8 × 8 µm for the VIS-NIR camera and 30 × 30 µm for the SWIR camera. The obtained data, in the form of hyperspectral cubes, were treated with chemometric methods, i.e., minimum noise fraction and principal component analysis. It has been shown that applying these methods to this type of data and analyzing the scatter plots allows a rapid analysis of the homogeneity of a DBS and the selection of representative areas for further analysis. It also makes it possible to track the dynamics of changes occurring in biological traces applied to a surface. For the 28 blood samples analyzed, the described method allowed the blood stains to be distinguished by their time of deposition.
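
    The chemometric step (principal component analysis of the hyperspectral cube) can be sketched as follows; the cube dimensions and data are placeholders, and the minimum noise fraction step is omitted.

```python
# Sketch of PCA applied to a hyperspectral cube so that score images and scatter
# plots of the leading components can reveal homogeneous regions of a stain.
import numpy as np
from sklearn.decomposition import PCA

rows, cols, bands = 120, 100, 256          # e.g. a SWIR cube (placeholder sizes)
cube = np.random.rand(rows, cols, bands)   # stand-in for measured reflectance

pixels = cube.reshape(-1, bands)           # one spectrum per pixel
scores = PCA(n_components=3).fit_transform(pixels)

# Score images: each principal component mapped back to the spatial grid.
score_images = scores.reshape(rows, cols, 3)
print(score_images.shape)                  # (120, 100, 3); inspect PC1 vs PC2 scatter
```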

  6. OIPAV: an integrated software system for ophthalmic image processing, analysis and visualization

    NASA Astrophysics Data System (ADS)

    Zhang, Lichun; Xiang, Dehui; Jin, Chao; Shi, Fei; Yu, Kai; Chen, Xinjian

    2018-03-01

    OIPAV (Ophthalmic Images Processing, Analysis and Visualization) is a cross-platform software package specially oriented to ophthalmic images. It provides a wide range of functionalities, including data I/O, image processing, interaction, ophthalmic disease detection, data analysis and visualization, to help researchers and clinicians deal with various ophthalmic images such as optical coherence tomography (OCT) images and color fundus photographs. It enables users to easily access ophthalmic image data produced by different imaging devices, facilitates workflows for processing ophthalmic images, and improves quantitative evaluations. In this paper, we present the system design and functional modules of the platform and demonstrate various applications. With satisfying functional scalability and expandability, we believe that the software can be widely applied in the ophthalmology field.

  7. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have made efforts toward image analysis systems. The main problems in such systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed. In the box, the distances between the lights and the dish, between the camera lens and the lights, and between the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All of the above visual colony parameters can be selected and combined to form new engineering parameters. The colony analysis can be applied to different applications.
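
    The genetic segmentation idea, posing segmentation as a global optimization, can be illustrated with a deliberately minimal example in which the genome is a single grey-level threshold and the fitness is the between-class variance; the actual colony system is considerably more elaborate.

```python
# Minimal sketch of segmentation as a global optimisation solved with a genetic
# algorithm: selection, crossover and mutation over candidate thresholds.
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(128, 128))        # stand-in colony image

def fitness(t, img):
    fg, bg = img[img >= t], img[img < t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / img.size, bg.size / img.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2     # between-class variance

pop = rng.integers(1, 255, size=20)                   # initial population of thresholds
for _ in range(50):                                   # generations
    scores = np.array([fitness(t, image) for t in pop])
    parents = pop[np.argsort(scores)[-10:]]           # selection: keep the fittest half
    children = (parents[rng.integers(0, 10, 10)] +    # crossover: average two parents
                parents[rng.integers(0, 10, 10)]) // 2
    children = np.clip(children + rng.integers(-5, 6, 10), 1, 254)  # mutation
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(t, image) for t in pop])]
segmented = image >= best
print("best threshold:", best)
```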

  8. FUNSTAT and statistical image representations

    NASA Technical Reports Server (NTRS)

    Parzen, E.

    1983-01-01

    General ideas of functional statistical inference for one-sample and two-sample analyses, both univariate and bivariate, are outlined. The ONESAM program is applied to analyze the univariate probability distributions of multispectral image data.

  9. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Hierarchical classification strategy for Phenotype extraction from epidermal growth factor receptor endocytosis screening.

    PubMed

    Cao, Lu; Graauw, Marjo de; Yan, Kuan; Winkel, Leah; Verbeek, Fons J

    2016-05-03

    Endocytosis is regarded as a mechanism of attenuating the epidermal growth factor receptor (EGFR) signaling and of receptor degradation. There is increasing evidence becoming available showing that breast cancer progression is associated with a defect in EGFR endocytosis. In order to find related Ribonucleic acid (RNA) regulators in this process, high-throughput imaging with fluorescent markers is used to visualize the complex EGFR endocytosis process. Subsequently a dedicated automatic image and data analysis system is developed and applied to extract the phenotype measurement and distinguish different developmental episodes from a huge amount of images acquired through high-throughput imaging. For the image analysis, a phenotype measurement quantifies the important image information into distinct features or measurements. Therefore, the manner in which prominent measurements are chosen to represent the dynamics of the EGFR process becomes a crucial step for the identification of the phenotype. In the subsequent data analysis, classification is used to categorize each observation by making use of all prominent measurements obtained from image analysis. Therefore, a better construction for a classification strategy will support to raise the performance level in our image and data analysis system. In this paper, we illustrate an integrated analysis method for EGFR signalling through image analysis of microscopy images. Sophisticated wavelet-based texture measurements are used to obtain a good description of the characteristic stages in the EGFR signalling. A hierarchical classification strategy is designed to improve the recognition of phenotypic episodes of EGFR during endocytosis. Different strategies for normalization, feature selection and classification are evaluated. The results of performance assessment clearly demonstrate that our hierarchical classification scheme combined with a selected set of features provides a notable improvement in the temporal analysis of EGFR endocytosis. Moreover, it is shown that the addition of the wavelet-based texture features contributes to this improvement. Our workflow can be applied to drug discovery to analyze defected EGFR endocytosis processes.

  11. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

    The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) Provide automated testing for quantitative analysis; and d) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.
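
    One standard technique for image-to-image co-registration is phase correlation; the sketch below estimates an integer translation with NumPy and is not the SDK described above (sub-pixel precision would require additional interpolation or spectrum upsampling).

```python
# Hedged sketch of co-registration by phase correlation.
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (dy, dx) offset of mov relative to ref."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(mov)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.random((256, 256))
mov = np.roll(ref, shift=(7, -12), axis=(0, 1))      # simulate a known offset
print(phase_correlation_shift(ref, mov))              # -> (7, -12)
```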

  12. Deep learning of symmetrical discrepancies for computer-aided detection of mammographic masses

    NASA Astrophysics Data System (ADS)

    Kooi, Thijs; Karssemeijer, Nico

    2017-03-01

    When humans identify objects in images, context is an important cue; a cheetah is more likely to be a domestic cat when a television set is recognised in the background. Similar principles apply to the analysis of medical images. The detection of diseases that manifest unilaterally in symmetrical organs or organ pairs can in part be facilitated by a search for symmetrical discrepancies in or between the organs in question. During a mammographic exam, images are recorded of each breast and absence of a certain structure around the same location in the contralateral image will render the area under scrutiny more suspicious and conversely, the presence of similar tissue less so. In this paper, we present a fusion scheme for a deep Convolutional Neural Network (CNN) architecture with the goal to optimally capture such asymmetries. The method is applied to the domain of mammography CAD, but can be relevant to other medical image analysis tasks where symmetry is important such as lung, prostate or brain images.

  13. An Analysis of the Magneto-Optic Imaging System

    NASA Technical Reports Server (NTRS)

    Nath, Shridhar

    1996-01-01

    The Magneto-Optic Imaging system is being used for the detection of defects in airframes and other aircraft structures. The system has been successfully applied to detecting surface cracks, but has difficulty in the detection of sub-surface defects such as corrosion. The intent of the grant was to understand the physics of the MOI better, in order to use it effectively for detecting corrosion and for classifying surface defects. Finite element analysis, image classification, and image processing are addressed.

  14. Velocity filtering applied to optical flow calculations

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1990-01-01

    Optical flow is a method by which a stream of two-dimensional images obtained from a forward-looking passive sensor is used to map the three-dimensional volume in front of a moving vehicle. Passive ranging via optical flow is applied here to the helicopter obstacle-avoidance problem. Velocity filtering is used as a field-based method to determine range to all pixels in the initial image. The theoretical understanding and performance analysis of velocity filtering as applied to optical flow is expanded and experimental results are presented.

  15. Quantification of protein expression in cells and cellular subcompartments on immunohistochemical sections using a computer supported image analysis system.

    PubMed

    Braun, Martin; Kirsten, Robert; Rupp, Niels J; Moch, Holger; Fend, Falko; Wernert, Nicolas; Kristiansen, Glen; Perner, Sven

    2013-05-01

    Quantification of protein expression based on immunohistochemistry (IHC) is an important step for translational research and clinical routine. Several manual ('eyeballing') scoring systems are used in order to semi-quantify protein expression based on chromogenic intensities and distribution patterns. However, manual scoring systems are time-consuming and subject to significant intra- and interobserver variability. The aim of our study was to explore whether new image analysis software proves to be sufficient as an alternative tool to quantify protein expression. For IHC experiments, one nucleus-specific marker (i.e., ERG antibody), one cytoplasm-specific marker (i.e., SLC45A3 antibody), and one marker expressed in both compartments (i.e., TMPRSS2 antibody) were chosen. Stainings were applied on TMAs containing tumor material from 630 prostate cancer patients. A pathologist visually quantified all IHC stainings in a blinded manner, applying a four-step scoring system. For digital quantification, image analysis software (Tissue Studio v.2.1, Definiens AG, Munich, Germany) was applied to obtain a continuous spectrum of average staining intensity. For each of the three antibodies we found a strong correlation of the manual protein expression score and the score of the image analysis software. Spearman's rank correlation coefficient was 0.94, 0.92, and 0.90 for ERG, SLC45A3, and TMPRSS2, respectively (p < 0.01). Our data suggest that the image analysis software Tissue Studio is a powerful tool for quantification of protein expression in IHC stainings. Further, since the digital analysis is precise and reproducible, computer-supported protein quantification might help to overcome intra- and interobserver variability and increase objectivity of IHC-based protein assessment.

  16. Automated MicroSPECT/MicroCT Image Analysis of the Mouse Thyroid Gland.

    PubMed

    Cheng, Peng; Hollingsworth, Brynn; Scarberry, Daniel; Shen, Daniel H; Powell, Kimerly; Smart, Sean C; Beech, John; Sheng, Xiaochao; Kirschner, Lawrence S; Menq, Chia-Hsiang; Jhiang, Sissy M

    2017-11-01

    The ability of thyroid follicular cells to take up iodine enables the use of radioactive iodine (RAI) for imaging and targeted killing of RAI-avid thyroid cancer following thyroidectomy. To facilitate identifying novel strategies to improve 131I therapeutic efficacy for patients with RAI refractory disease, it is desired to optimize image acquisition and analysis for preclinical mouse models of thyroid cancer. A customized mouse cradle was designed and used for microSPECT/CT image acquisition at 1 hour (t1) and 24 hours (t24) post injection of 123I, which mainly reflect RAI influx/efflux equilibrium and RAI retention in the thyroid, respectively. FVB/N mice with normal thyroid glands and TgBRAF(V600E) mice with thyroid tumors were imaged. In-house CTViewer software was developed to streamline image analysis with new capabilities, along with display of 3D voxel-based 123I gamma photon intensity in MATLAB. The customized mouse cradle facilitates consistent tissue configuration among image acquisitions such that rigid body registration can be applied to align serial images of the same mouse via the in-house CTViewer software. CTViewer is designed specifically to streamline SPECT/CT image analysis with functions tailored to quantify thyroid radioiodine uptake. Automatic segmentation of thyroid volumes of interest (VOI) from adjacent salivary glands in t1 images is enabled by superimposing the thyroid VOI from the t24 image onto the corresponding aligned t1 image. The extent of heterogeneity in 123I accumulation within thyroid VOIs can be visualized by 3D display of voxel-based 123I gamma photon intensity. MicroSPECT/CT image acquisition and analysis for thyroidal RAI uptake is greatly improved by the cradle and the CTViewer software, respectively. Furthermore, the approach of superimposing thyroid VOIs from t24 images to select thyroid VOIs on corresponding aligned t1 images can be applied to studies in which the target tissue has differential radiotracer retention from surrounding tissues.

  17. Quantitative analyses for elucidating mechanisms of cell fate commitment in the mouse blastocyst

    NASA Astrophysics Data System (ADS)

    Saiz, Néstor; Kang, Minjung; Puliafito, Alberto; Schrode, Nadine; Xenopoulos, Panagiotis; Lou, Xinghua; Di Talia, Stefano; Hadjantonakis, Anna-Katerina

    2015-03-01

    In recent years we have witnessed a shift from qualitative image analysis towards higher resolution, quantitative analyses of imaging data in developmental biology. This shift has been fueled by technological advances in both imaging and analysis software. We have recently developed a tool for accurate, semi-automated nuclear segmentation of imaging data from early mouse embryos and embryonic stem cells. We have applied this software to the study of the first lineage decisions that take place during mouse development and established analysis pipelines for both static and time-lapse imaging experiments. In this paper we summarize the conclusions from these studies to illustrate how quantitative, single-cell level analysis of imaging data can unveil biological processes that cannot be revealed by traditional qualitative studies.

  18. Korean coastal water depth/sediment and land cover mapping (1:25,000) by computer analysis of LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Park, K. Y.; Miller, L. D.

    1978-01-01

    Computer analysis was applied to single date LANDSAT MSS imagery of a sample coastal area near Seoul, Korea equivalent to a 1:50,000 topographic map. Supervised image processing yielded a test classification map from this sample image containing 12 classes: 5 water depth/sediment classes, 2 shoreline/tidal classes, and 5 coastal land cover classes at a scale of 1:25,000 and with a training set accuracy of 76%. Unsupervised image classification was applied to a subportion of the site analyzed and produced classification maps comparable in results in a spatial sense. The results of this test indicated that it is feasible to produce such quantitative maps for detailed study of dynamic coastal processes given a LANDSAT image data base at sufficiently frequent time intervals.

  19. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach

    PubMed Central

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo

    2016-01-01

    Objective: Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source imaging algorithms to both find the network nodes (regions of interest) and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods: Source imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results: Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ~20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion: Our study indicates that combined source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network nodes location and internodal connectivity). Significance: The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473

  20. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach.

    PubMed

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin

    2016-12-01

    Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have been used successfully to either determine the source of activity or to extract source time-courses for Granger causality analysis, previously. In this work, we utilize source-imaging algorithms to both find the network nodes [regions of interest (ROI)] and then extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulations studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or Magnetoencephalography (MEG). Localization errors of network nodes are less than 5 mm and normalized connectivity errors of ∼20% in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combined source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network nodes location and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.
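
    The final step, Granger causality analysis on the extracted source time-courses, can be sketched with synthetic signals; the lag order and test statistic below are arbitrary choices, not necessarily those used in the cited study.

```python
# Sketch of pairwise Granger causality between two ROI time-courses.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
driver = rng.standard_normal(n)                    # time-course of node A
receiver = np.zeros(n)                             # node B, driven by A with a 2-sample lag
for t in range(2, n):
    receiver[t] = 0.6 * driver[t - 2] + 0.2 * receiver[t - 1] + 0.3 * rng.standard_normal()

# Does "driver" (second column) Granger-cause "receiver" (first column)?
res = grangercausalitytests(np.column_stack([receiver, driver]), maxlag=4)
p_value = res[2][0]["ssr_ftest"][1]                # F-test p-value at lag 2
print(f"p-value (lag 2): {p_value:.3g}")           # expected to be very small
```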

  1. Hierarchical Factoring Based On Image Analysis And Orthoblique Rotations.

    PubMed

    Stankov, L

    1979-07-01

    The procedure for hierarchical factoring suggested by Schmid and Leiman (1957) is applied within the framework of image analysis and orthoblique rotational procedures. It is shown that this approach necessarily leads to correlated higher order factors. Also, one can obtain a smaller number of factors than produced by typical hierarchical procedures.

  2. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  3. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    PubMed Central

    Cho, Nam-Hoon; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system. PMID:25371701
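
    A 3D grey-level co-occurrence matrix for a single voxel offset, together with two Haralick-style features, can be computed directly with NumPy; the cited work uses multiple offsets, wavelet features, and several classifiers, none of which are reproduced here, and the volume below is synthetic.

```python
# Illustrative sketch of a 3D GLCM for one voxel offset with energy and contrast.
import numpy as np

def glcm_3d(volume, levels=16, offset=(0, 0, 1)):
    """Count co-occurrences of quantised grey levels for voxel pairs at 'offset'."""
    q = np.floor(volume / volume.max() * (levels - 1)).astype(int)
    dz, dy, dx = offset
    a = q[:q.shape[0] - dz, :q.shape[1] - dy, :q.shape[2] - dx]
    b = q[dz:, dy:, dx:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    return glcm / glcm.sum()

volume = np.random.rand(32, 32, 32)               # stand-in for a reconstructed 3D volume
P = glcm_3d(volume)
i, j = np.indices(P.shape)
print("energy  :", np.sum(P ** 2))                # angular second moment
print("contrast:", np.sum(P * (i - j) ** 2))
```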

  4. A survey of MRI-based medical image analysis for brain tumor studies

    NASA Astrophysics Data System (ADS)

    Bauer, Stefan; Wiest, Roland; Nolte, Lutz-P.; Reyes, Mauricio

    2013-07-01

    MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.

  5. Analysis of identification of digital images from a map of cosmic microwaves

    NASA Astrophysics Data System (ADS)

    Skeivalas, J.; Turla, V.; Jurevicius, M.; Viselga, G.

    2018-04-01

    This paper discusses identification of digital images from the cosmic microwave background radiation map formed from the data of the European Space Agency's Planck telescope, by applying covariance functions and wavelet theory. The estimates of the covariance functions of two digital images, or of single images, are calculated from the random functions formed from the digital images in the form of pixel vectors. The pixel vectors are formed by expanding the pixel arrays of the digital images into single vectors. When the scale of a digital image is varied, the frequencies of the single-pixel color waves remain constant and the procedure for calculating the covariance functions is not affected. For identification of the images, the RGB format spectrum has been applied, and the impact of the RGB spectrum components and the color tensor on the estimates of the covariance functions was analyzed. The identity of digital images is assessed according to the changes in the values of the correlation coefficients within a certain range of values, using the developed computer program.

  6. Influence of Images on the Evaluation of Jams Using Conjoint Analysis Combined with Check-All-That-Apply (CATA) Questions.

    PubMed

    Miraballes, Marcelo; Gámbaro, Adriana

    2018-01-01

    A study of the influence of the use of images in a conjoint analysis combined with check-all-that-apply (CATA) questions on jams was carried out. The relative importance of flavor and of the information presented on the label for the willingness to purchase and for the perception of how healthy the product is was evaluated. Sixty consumers evaluated the stimuli presented only in text format (session 1), and another group of 60 consumers did so by receiving the stimuli in text format along with an image of the product (session 2). In addition, for each stimulus, consumers answered a CATA question consisting of 20 terms related to their involvement with the product. The perception of healthiness increased when the texts were accompanied by images and also increased when the text included information. Willingness to purchase was only influenced by the flavor of the jams. The presence of images did not influence the choice of terms in the CATA question, which were influenced by the information presented in the text. The use of check-all-that-apply questions with concepts provided an interesting possibility when combined with the results from the conjoint analysis, improving the comprehension of consumers' perception. Using CATA questions as an alternative way of evaluating consumer involvement seems to be beneficial and should be evaluated much further. © 2017 Institute of Food Technologists®.

  7. Analyses of S-Box in Image Encryption Applications Based on Fuzzy Decision Making Criterion

    NASA Astrophysics Data System (ADS)

    Rehman, Inayatur; Shah, Tariq; Hussain, Iqtadar

    2014-06-01

    In this manuscript, we put forward a standard based on a fuzzy decision-making criterion to examine current substitution boxes and study their strengths and weaknesses in order to decide their appropriateness in image encryption applications. The proposed standard utilizes the results of correlation analysis, entropy analysis, contrast analysis, homogeneity analysis, energy analysis, and mean of absolute deviation analysis. These analyses are applied to well-known substitution boxes. The outcomes of these analyses are then examined further, and a fuzzy soft set decision-making criterion is used to decide the suitability of an S-box for image encryption applications.
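
    The individual analyses feeding the fuzzy criterion (adjacent-pixel correlation, entropy, contrast, homogeneity, energy, and mean of absolute deviation) are straightforward to compute; the sketch below applies them to a random stand-in image and omits the fuzzy soft set aggregation.

```python
# Sketch of the statistical analyses applied to an S-box-encrypted test image.
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256))        # stand-in for an encrypted image

# Correlation of horizontally adjacent pixels (low for a good cipher image).
x, y = img[:, :-1].ravel(), img[:, 1:].ravel()
correlation = np.corrcoef(x, y)[0, 1]

# Shannon entropy (ideally close to 8 bits for 8-bit cipher images).
p = np.bincount(img.ravel(), minlength=256) / img.size
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

# Mean of absolute deviation of the histogram from a uniform distribution.
mad = np.mean(np.abs(np.bincount(img.ravel(), minlength=256) - img.size / 256))

# GLCM-based contrast, homogeneity and energy for a horizontal offset.
glcm = np.zeros((256, 256))
np.add.at(glcm, (x, y), 1)
glcm /= glcm.sum()
i, j = np.indices(glcm.shape)
contrast = np.sum(glcm * (i - j) ** 2)
homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
energy = np.sum(glcm ** 2)

print(correlation, entropy, mad, contrast, homogeneity, energy)
```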

  8. Preprocessing of 2-Dimensional Gel Electrophoresis Images Applied to Proteomic Analysis: A Review.

    PubMed

    Goez, Manuel Mauricio; Torres-Madroñero, Maria Constanza; Röthlisberger, Sarah; Delgado-Trejos, Edilson

    2018-02-01

    Various methods and specialized software programs are available for processing two-dimensional gel electrophoresis (2-DGE) images. However, due to the anomalies present in these images, a reliable, automated, and highly reproducible system for 2-DGE image analysis has still not been achieved. The most common anomalies found in 2-DGE images include vertical and horizontal streaking, fuzzy spots, and background noise, which greatly complicate computational analysis. In this paper, we review the preprocessing techniques applied to 2-DGE images for noise reduction, intensity normalization, and background correction. We also present a quantitative comparison of non-linear filtering techniques applied to synthetic gel images, through analyzing the performance of the filters under specific conditions. Synthetic proteins were modeled into a two-dimensional Gaussian distribution with adjustable parameters for changing the size, intensity, and degradation. Three types of noise were added to the images: Gaussian, Rayleigh, and exponential, with signal-to-noise ratios (SNRs) ranging 8-20 decibels (dB). We compared the performance of wavelet, contourlet, total variation (TV), and wavelet-total variation (WTTV) techniques using parameters SNR and spot efficiency. In terms of spot efficiency, contourlet and TV were more sensitive to noise than wavelet and WTTV. Wavelet worked the best for images with SNR ranging 10-20 dB, whereas WTTV performed better with high noise levels. Wavelet also presented the best performance with any level of Gaussian noise and low levels (20-14 dB) of Rayleigh and exponential noise in terms of SNR. Finally, the performance of the non-linear filtering techniques was evaluated using a real 2-DGE image with previously identified proteins marked. Wavelet achieved the best detection rate for the real image. Copyright © 2018 Beijing Institute of Genomics, Chinese Academy of Sciences and Genetics Society of China. Production and hosting by Elsevier B.V. All rights reserved.
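
    The synthetic-gel evaluation can be illustrated by generating a Gaussian spot, adding Gaussian noise, denoising with a wavelet filter, and scoring the result by SNR; the spot parameters, noise level, and choice of wavelet filter are assumptions, and the contourlet, TV, and WTTV alternatives are not shown.

```python
# Minimal sketch of the synthetic-spot experiment with wavelet denoising and SNR.
import numpy as np
from skimage.restoration import denoise_wavelet

def gaussian_spot(size=128, amplitude=1.0, sigma=6.0):
    yy, xx = np.mgrid[:size, :size]
    c = size / 2
    return amplitude * np.exp(-((xx - c) ** 2 + (yy - c) ** 2) / (2 * sigma ** 2))

def snr_db(clean, estimate):
    noise_power = np.mean((clean - estimate) ** 2)
    return 10 * np.log10(np.mean(clean ** 2) / noise_power)

rng = np.random.default_rng(0)
clean = gaussian_spot()
noisy = clean + rng.normal(scale=0.1, size=clean.shape)   # additive Gaussian noise
denoised = denoise_wavelet(noisy)                         # wavelet filtering

print(f"SNR noisy   : {snr_db(clean, noisy):.1f} dB")
print(f"SNR denoised: {snr_db(clean, denoised):.1f} dB")
```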

  9. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

    The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes; it was observed that the K-means clustering method gave better results than the ISODATA method. Using the algorithm developed for Radar Tools, the code for the decomposition and classification techniques was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural land, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the Earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest; post-classification, the overall accuracy was observed to be higher in the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited for analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique appears to produce better results and interpretation than the Pauli decomposition, although more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.
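
    The Pauli decomposition mentioned above has a closed form: the scattering-matrix channels are combined into three components that are usually displayed as an RGB composite. The sketch below uses synthetic complex arrays in place of calibrated RISAT-1 data.

```python
# Sketch of the Pauli decomposition for a fully polarimetric SAR pixel grid.
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)
S_hh = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
S_vv = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
S_hv = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Pauli basis: surface (odd-bounce), double-bounce and volume scattering.
alpha = np.abs(S_hh + S_vv) / np.sqrt(2)     # blue channel
beta  = np.abs(S_hh - S_vv) / np.sqrt(2)     # red channel
gamma = np.sqrt(2) * np.abs(S_hv)            # green channel

rgb = np.dstack([beta, gamma, alpha])
rgb /= rgb.max()                             # normalise for display
print(rgb.shape)
```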

  10. Terahertz analysis of an East Asian historical mural painting

    NASA Astrophysics Data System (ADS)

    Fukunaga, K.; Hosako, I.; Kohdzuma, Y.; Koezuka, T.; Kim, M.-J.; Ikari, T.; Du, X.

    2010-05-01

    Terahertz (THz) spectroscopy and imaging techniques are expected to have great potential for the non-invasive analysis of artworks. We have applied THz imaging to analyse the historic mural painting of a Lamaist temple by using a transportable time-domain THz imaging system; such an attempt is the first in the world. The reflection image revealed that there are two orange colours in the painting, although they appear the same to the naked eye. THz imaging can also estimate the depth of cracks. The colours were examined by X-ray fluorescence and Raman spectroscopy, and the results were found to be in good agreement. This work proved that THz imaging can contribute to the non-invasive analysis of cultural heritage.

  11. Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images

    PubMed Central

    Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz

    2016-01-01

    The conjunctiva is a densely vascularized mucous membrane covering the sclera of the eye, with the unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least squares regression and Fisher linear discriminant analysis. Conjunctival images from groups of non-diabetic and diabetic subjects at different stages of DR were discriminated. The automated method's discrimination rates were higher than those determined by human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692

  12. Imaging Heterogeneity in Lung Cancer: Techniques, Applications, and Challenges.

    PubMed

    Bashir, Usman; Siddique, Muhammad Musib; Mclean, Emma; Goh, Vicky; Cook, Gary J

    2016-09-01

    Texture analysis involves the mathematic processing of medical images to derive sets of numeric quantities that measure heterogeneity. Studies on lung cancer have shown that texture analysis may have a role in characterizing tumors and predicting patient outcome. This article outlines the mathematic basis of and the most recent literature on texture analysis in lung cancer imaging. We also describe the challenges facing the clinical implementation of texture analysis. Texture analysis of lung cancer images has been applied successfully to FDG PET and CT scans. Different texture parameters have been shown to be predictive of the nature of disease and of patient outcome. In general, it appears that more heterogeneous tumors on imaging tend to be more aggressive and to be associated with poorer outcomes and that tumor heterogeneity on imaging decreases with treatment. Despite these promising results, there is a large variation in the reported data and strengths of association.

  13. Spectral mixture modeling: Further analysis of rock and soil types at the Viking Lander sites

    NASA Technical Reports Server (NTRS)

    Adams, John B.; Smith, Milton O.

    1987-01-01

    A new image processing technique was applied to Viking Lander multispectral images. Spectral endmembers were defined that included soil, rock and shade. Mixtures of these endmembers were found to account for nearly all the spectral variance in a Viking Lander image.
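
    Linear spectral mixture modelling treats each pixel spectrum as a non-negative combination of endmember spectra (soil, rock, shade); a minimal unmixing sketch with made-up endmember and pixel spectra follows.

```python
# Sketch of linear spectral unmixing by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

bands = 6
endmembers = np.array([                       # columns: soil, rock, shade (made up)
    [0.30, 0.18, 0.02],
    [0.35, 0.20, 0.02],
    [0.42, 0.22, 0.03],
    [0.50, 0.25, 0.03],
    [0.55, 0.27, 0.04],
    [0.60, 0.30, 0.04],
])

true_fractions = np.array([0.5, 0.3, 0.2])
pixel = endmembers @ true_fractions + 0.005 * np.random.default_rng(0).standard_normal(bands)

fractions, residual = nnls(endmembers, pixel)  # non-negative least squares unmixing
fractions /= fractions.sum()                   # normalise so fractions sum to one
print(fractions)                               # close to the true soil/rock/shade mix
```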

  14. Feasibility Analysis and Demonstration of High-Speed Digital Imaging Using Micro-Arrays of Vertical Cavity Surface-Emitting Lasers

    DTIC Science & Technology

    2011-04-01

    Proceedings, Bristol, UK (2006). 5. M. A. Mentzer, Applied Optics Fundamentals and Device Applications: Nano, MOEMS, and Biotechnology, CRC Taylor...ballistic sensing, flash x-ray cineradiography, digital image correlation, image processing algorithms, and applications of MOEMS to nano- and

  15. Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study

    NASA Astrophysics Data System (ADS)

    Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.

    2017-12-01

    Diatom identification and enumeration by high-resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings that correctly segmented 67% of the total possible diatom valves and fragments from broad fields of view (183 light microscope images containing 255 diatom particles were examined; of the 255 diatom particles present, 216 diatom valves and valve fragments were processed, with 170 properly analyzed and focused upon by the software). Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, highlighting that the software has an approximately five-fold efficiency advantage in particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.

  16. Supervised graph hashing for histopathology image retrieval and classification.

    PubMed

    Shi, Xiaoshuang; Xing, Fuyong; Xu, KaiDi; Xie, Yuanpu; Su, Hai; Yang, Lin

    2017-12-01

    In pathology image analysis, morphological characteristics of cells are critical to grade many diseases. With the development of cell detection and segmentation techniques, it is possible to extract cell-level information for further analysis in pathology images. However, it is challenging to conduct efficient analysis of cell-level information on a large-scale image dataset because each image usually contains hundreds or thousands of cells. In this paper, we propose a novel image retrieval based framework for large-scale pathology image analysis. For each image, we encode each cell into binary codes to generate image representation using a novel graph based hashing model and then conduct image retrieval by applying a group-to-group matching method to similarity measurement. In order to improve both computational efficiency and memory requirement, we further introduce matrix factorization into the hashing model for scalable image retrieval. The proposed framework is extensively validated with thousands of lung cancer images, and it achieves 97.98% classification accuracy and 97.50% retrieval precision with all cells of each query image used. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Budget impact of applying appropriateness criteria for myocardial perfusion scintigraphy: The perspective of a developing country.

    PubMed

    Dos Santos, Mauro Augusto; Santos, Marisa Silva; Tura, Bernardo Rangel; Félix, Renata; Brito, Adriana Soares X; De Lorenzo, Andrea

    2016-10-01

    Myocardial perfusion imaging (MPI) is widely used for the risk stratification of coronary artery disease. In view of its cost, besides radiation issues, judicious evaluation of the appropriateness of its indications is essential to prevent an unnecessary economic burden on the health system. We evaluated, at a tertiary-care, public Brazilian hospital, the appropriateness of myocardial perfusion scintigraphy indications, and estimated the budget impact of applying appropriateness criteria. An observational, cross-sectional study of 190 patients with suspected or known coronary artery disease referred for myocardial perfusion imaging was conducted. The appropriateness of myocardial perfusion imaging indications was evaluated with the Appropriate Use Criteria for Cardiac Radionuclide Imaging published in 2009. Budget impact analysis was performed with a deterministic model. The prevalence of appropriate requests was 78%; of inappropriate indications, 12%; and of uncertain indications, 10%. Budget impact analysis showed that the use of appropriateness criteria, applied to the population referred to myocardial perfusion scintigraphy within 1 year, could generate savings of $64,252.04. The 12% rate of inappropriate requests for myocardial perfusion scintigraphy at a tertiary-care hospital suggests that a reappraisal of MPI indications is needed. Budget impact analysis estimated resource savings of 18.6% with the establishment of appropriateness criteria for MPI.

  18. A Model-Based Approach for Microvasculature Structure Distortion Correction in Two-Photon Fluorescence Microscopy Images

    PubMed Central

    Dao, Lam; Glancy, Brian; Lucotte, Bertrand; Chang, Lin-Ching; Balaban, Robert S; Hsu, Li-Yueh

    2015-01-01

    This paper investigates a post-processing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at in vivo imaging of large field-of-view, deep-tissue studies of vascular structures. Based on simple geometric modeling of the object-of-interest, a distortion function is directly estimated from the image volume by deconvolution analysis. This distortion function is then applied to subvolumes of the image stack to adaptively adjust for spatially varying distortion and reduce the image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres that are comparable in size to the underlying capillary vascular structures. The effectiveness of restoring three-dimensional spherical geometry of the microspheres using the estimated distortion function was compared with an empirically measured point-spread function. Next, the proposed approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the image distortion of the capillary structures. We show that the proposed method effectively improves image quality and reduces the spatially varying distortion that occurs in large field-of-view, deep-tissue vascular datasets. The proposed method will help in qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images. PMID:26224257

  19. Evaluation of three methods for retrospective correction of vignetting on medical microscopy images utilizing two open source software tools.

    PubMed

    Babaloukas, Georgios; Tentolouris, Nicholas; Liatis, Stavros; Sklavounou, Alexandra; Perrea, Despoina

    2011-12-01

    Correction of vignetting on images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for retrospective correction of vignetting on medical microscopy images and compare them with a prospective correction method. One digital image from four different tissues was used and a vignetting effect was applied on each of these images. The resulting vignetted image was replicated four times and in each replica a different method for vignetting correction was applied with the Fiji and GIMP software tools. The highest peak signal-to-noise ratio from the comparison of each method to the original image was obtained from the prospective method in all tissues. The morphological filtering method provided the highest peak signal-to-noise ratio value amongst the retrospective methods. The prospective method is suggested as the method of choice for correction of vignetting and, if it is not applicable, then morphological filtering may be suggested as the retrospective alternative method. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.

  20. Performance of an image analysis processing system for hen tracking in an environmental preference chamber.

    PubMed

    Kashiha, Mohammad Amin; Green, Angela R; Sales, Tatiana Glogerley; Bahr, Claudia; Berckmans, Daniel; Gates, Richard S

    2014-10-01

    Image processing systems have been widely used in monitoring livestock for many applications, including identification, tracking, behavior analysis, occupancy rates, and activity calculations. The primary goal of this work was to quantify image processing performance when monitoring laying hens by comparing length of stay in each compartment as detected by the image processing system with the actual occurrences registered by human observations. In this work, an image processing system was implemented and evaluated for use in an environmental animal preference chamber to detect hen navigation between 4 compartments of the chamber. One camera was installed above each compartment to produce top-view images of the whole compartment. An ellipse-fitting model was applied to captured images to detect whether the hen was present in a compartment. During a choice-test study, mean ± SD success detection rates of 95.9 ± 2.6% were achieved when considering total duration of compartment occupancy. These results suggest that the image processing system is currently suitable for determining the response measures for assessing environmental choices. Moreover, the image processing system offered a comprehensive analysis of occupancy while substantially reducing data processing time compared with the time-intensive alternative of manual video analysis. The above technique was used to monitor ammonia aversion in the chamber. As a preliminary pilot study, different levels of ammonia were applied to different compartments while hens were allowed to navigate between compartments. Using the automated monitor tool to assess occupancy, a negative trend of compartment occupancy with ammonia level was revealed, though further examination is needed. ©2014 Poultry Science Association Inc.

  1. Temporal Fourier analysis applied to equilibrium radionuclide cineangiography. Importance in the study of global and regional left ventricular wall motion.

    PubMed

    Cardot, J C; Berthout, P; Verdenet, J; Bidet, A; Faivre, R; Bassand, J P; Bidet, R; Maurat, J P

    1982-01-01

    Regional and global left ventricular wall motion was assessed in 120 patients using radionuclide cineangiography (RCA) and contrast angiography. Functional imaging procedures based on a temporal Fourier analysis of dynamic image sequences were applied to the study of cardiac contractility. Two images were constructed by taking the phase and amplitude values of the first harmonic in the Fourier transform for each pixel. These two images aided in determining the perimeter of the left ventricle to calculate the global ejection fraction. Regional left ventricular wall motion was studied by analyzing the phase value and by examining the distribution histogram of these values. The accuracy of global ejection fraction calculation was improved by the Fourier technique. This technique increased the sensitivity of RCA for determining segmental abnormalities especially in the left anterior oblique view (LAO).
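
    The pixel-wise first-harmonic analysis has a compact implementation: an FFT along the time axis of the gated image sequence, keeping the amplitude and phase of the first harmonic per pixel. The synthetic cine sequence below, with a region of delayed contraction, is only an illustration of this construction.

```python
# Sketch of pixel-wise first-harmonic Fourier analysis of a gated image sequence.
import numpy as np

frames, h, w = 16, 64, 64
t = np.arange(frames)
rng = np.random.default_rng(0)

phase_map_true = np.zeros((h, w))
phase_map_true[:, 32:] = np.pi / 3                 # right half contracts later
cine = (100 + 20 * np.cos(2 * np.pi * t[:, None, None] / frames
                          - phase_map_true[None, :, :])
        + rng.normal(scale=2.0, size=(frames, h, w)))

spectrum = np.fft.fft(cine, axis=0)                # FFT along the time axis
first_harmonic = spectrum[1]                       # one complex value per pixel
amplitude_image = np.abs(first_harmonic) * 2 / frames
phase_image = np.angle(first_harmonic)

print(amplitude_image.mean(),
      np.rad2deg(phase_image[:, :32].mean()),      # phase of the "normal" half
      np.rad2deg(phase_image[:, 32:].mean()))      # phase of the delayed half
```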

  2. Watershed identification of polygonal patterns in noisy SAR images.

    PubMed

    Moreels, Pierre; Smrekar, Suzanne E

    2003-01-01

    This paper describes a new approach to pattern recognition in synthetic aperture radar (SAR) images. A visual analysis of the images provided by NASA's Magellan mission to Venus has revealed a number of zones showing polygonal-shaped faults on the surface of the planet. The goal of the paper is to provide a method to automate the identification of such zones. The high level of noise in SAR images and its multiplicative nature make automated image analysis difficult and conventional edge detectors, like those based on gradient images, inefficient. We present a scheme based on an improved watershed algorithm and a two-scale analysis. The method extracts potential edges in the SAR image, analyzes the patterns obtained, and decides whether or not the image contains a "polygon area". This scheme can also be applied to other SAR or visual images, for instance in observation of Mars and Jupiter's satellite Europa.

  3. Single-Cell Analysis Using Hyperspectral Imaging Modalities.

    PubMed

    Mehta, Nishir; Shaik, Shahensha; Devireddy, Ram; Gartia, Manas Ranjan

    2018-02-01

    Almost a decade ago, hyperspectral imaging (HSI) was employed by NASA in satellite imaging applications such as remote sensing technology. This technology has since been extensively used in the exploration of minerals, agricultural purposes, water resources, and urban development needs. Due to recent advancements in optical reconstruction and imaging, HSI can now be applied down to micro- and nanometer scales, possibly allowing for exquisite control and analysis from single cells to complex biological systems. This short review provides a description of the working principle of the HSI technology and how HSI can be used to assist, substitute, and validate traditional imaging technologies. This is followed by a description of the use of HSI for biological analysis and medical diagnostics with emphasis on single-cell analysis using HSI.

  4. Coastal modification of a scene employing multispectral images and vector operators.

    PubMed

    Lira, Jorge

    2017-05-01

    Changes in sea level, wind patterns, sea current patterns, and tide patterns have produced morphologic transformations in the coastline area of Tamaulipas State in northeast Mexico. Such changes generated a modification of the coastline and variations of the texture-relief and texture of the continental area of Tamaulipas. Two high-resolution multispectral Satellite Pour l'Observation de la Terre (SPOT) images were employed to quantify the morphologic change of this continental area. The images cover a time span close to 10 years. A variant of principal component analysis was used to delineate the modification of the land-water line. To quantify changes in texture-relief and texture, principal component analysis was applied to the multispectral images. The first principal components of each image were modeled as a discrete bidimensional vector field. The divergence and Laplacian vector operators were applied to the discrete vector field. The divergence provided the change of texture, while the Laplacian produced the change of texture-relief in the area of study.
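
    The vector-operator step can be sketched by treating the two first principal components as a discrete 2D vector field and computing its divergence and Laplacian; the pairing of the components into a field and the random stand-in images below are assumptions made for illustration only.

```python
# Sketch of divergence and Laplacian applied to a discrete 2D vector field.
import numpy as np
from scipy.ndimage import laplace

rng = np.random.default_rng(0)
pc_date1 = rng.random((200, 200))        # first principal component, earlier image
pc_date2 = rng.random((200, 200))        # first principal component, later image

# Divergence of the field F = (pc_date1, pc_date2): dFx/dx + dFy/dy.
dFx_dx = np.gradient(pc_date1, axis=1)
dFy_dy = np.gradient(pc_date2, axis=0)
divergence = dFx_dx + dFy_dy             # interpreted here as texture change

# Laplacian applied component-wise, interpreted here as texture-relief change.
laplacian = laplace(pc_date1) + laplace(pc_date2)

print(divergence.mean(), laplacian.mean())
```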

  5. CT Dose Optimization in Pediatric Radiology: A Multiyear Effort to Preserve the Benefits of Imaging While Reducing the Risks.

    PubMed

    Greenwood, Taylor J; Lopez-Costa, Rodrigo I; Rhoades, Patrick D; Ramírez-Giraldo, Juan C; Starr, Matthew; Street, Mandie; Duncan, James; McKinstry, Robert C

    2015-01-01

    The marked increase in radiation exposure from medical imaging, especially in children, has caused considerable alarm and spurred efforts to preserve the benefits but reduce the risks of imaging. Applying the principles of the Image Gently campaign, data-driven process and quality improvement techniques such as process mapping and flowcharting, cause-and-effect diagrams, Pareto analysis, statistical process control (control charts), failure mode and effects analysis, "lean" or Six Sigma methodology, and closed feedback loops led to a multiyear program that has reduced overall computed tomographic (CT) examination volume by more than fourfold and concurrently decreased radiation exposure per CT study without compromising diagnostic utility. This systematic approach involving education, streamlining access to magnetic resonance imaging and ultrasonography, auditing with comparison with benchmarks, applying modern CT technology, and revising CT protocols has led to a more than twofold reduction in CT radiation exposure between 2005 and 2012 for patients at the authors' institution while maintaining diagnostic utility. (©)RSNA, 2015.

  6. Mapping urban impervious surface using object-based image analysis with WorldView-3 satellite imagery

    NASA Astrophysics Data System (ADS)

    Iabchoon, Sanwit; Wongsai, Sangdao; Chankon, Kanoksuk

    2017-10-01

    Land use and land cover (LULC) data are important to monitor and assess environmental change. LULC classification using satellite images is a method widely used on a global and local scale. Especially, urban areas that have various LULC types are important components of the urban landscape and ecosystem. This study aims to classify urban LULC using WorldView-3 (WV-3) very high-spatial resolution satellite imagery and the object-based image analysis method. A decision rules set was applied to classify the WV-3 images in Kathu subdistrict, Phuket province, Thailand. The main steps were as follows: (1) the image was ortho-rectified with ground control points and using the digital elevation model, (2) multiscale image segmentation was applied to divide the image pixel level into image object level, (3) development of the decision ruleset for LULC classification using spectral bands, spectral indices, spatial and contextual information, and (4) accuracy assessment was computed using testing data, which sampled by statistical random sampling. The results show that seven LULC classes (water, vegetation, open space, road, residential, building, and bare soil) were successfully classified with overall classification accuracy of 94.14% and a kappa coefficient of 92.91%.

  7. CEST Analysis: Automated Change Detection from Very-High-Resolution Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Ehlers, M.; Klonus, S.; Jarmer, T.; Sofina, N.; Michel, U.; Reinartz, P.; Sirmacek, B.

    2012-08-01

    Fast detection, visualization and assessment of change in areas of crisis or catastrophe are important requirements for the coordination and planning of help. Through the availability of new satellite and/or airborne sensors with very high spatial resolution (e.g., WorldView, GeoEye), new remote sensing data are available for better detection, delineation and visualization of change. For automated change detection, a large number of algorithms have been proposed and developed. From previous studies, however, it is evident that to date no single algorithm has the potential to be a reliable change detector for all possible scenarios. This paper introduces the Combined Edge Segment Texture (CEST) analysis, a decision-tree based cooperative suite of algorithms for automated change detection that is especially designed for the new generation of satellites with very high spatial resolution. The method incorporates frequency-based filtering, texture analysis, and image segmentation techniques. For the frequency analysis, different band-pass filters can be applied to identify the relevant frequency information for change detection. After transforming the multitemporal images via a fast Fourier transform (FFT) and applying the most suitable band-pass filter, different methods are available to extract changed structures: differencing and correlation in the frequency domain, and correlation and edge detection in the spatial domain. Best results are obtained using edge extraction. For the texture analysis, different 'Haralick' parameters can be calculated (e.g., energy, correlation, contrast, inverse distance moment), with 'energy' so far providing the most accurate results. These algorithms are combined with a prior segmentation of the image data as well as with morphological operations for a final binary change result. A rule-based combination (CEST) of the change algorithms is applied to calculate the probability of change for a particular location. CEST was tested with high-resolution satellite images of the crisis areas of Darfur (Sudan). CEST results are compared with a number of standard algorithms for automated change detection such as image differencing, image ratioing, principal component analysis, the delta cue technique and post-classification change detection. The new combined method shows superior results, with improvements in accuracy averaging between 15% and 45%.
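
    A compact sketch of the frequency-domain branch described above (FFT, band-pass filtering, differencing of the filtered images); the annular mask radii are illustrative assumptions rather than CEST's tuned parameters.

```python
import numpy as np

def bandpass_difference(img_t1, img_t2, r_low=10, r_high=60):
    """Band-pass both dates in the Fourier domain and difference the results.

    img_t1, img_t2 : co-registered grayscale images of the two dates.
    r_low, r_high  : inner/outer radii (in frequency bins) of the annular mask;
                     the values here are illustrative, not CEST defaults.
    """
    rows, cols = img_t1.shape
    cy, cx = rows // 2, cols // 2
    y, x = np.ogrid[:rows, :cols]
    radius = np.hypot(y - cy, x - cx)
    mask = (radius >= r_low) & (radius <= r_high)      # annular band-pass mask

    def filtered(img):
        spectrum = np.fft.fftshift(np.fft.fft2(img))
        return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

    # the difference of the band-pass filtered images highlights changed structures
    return np.abs(filtered(img_t1) - filtered(img_t2))
```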

  8. Textural and Mineralogical Analysis of Volcanic Rocks by µ-XRF Mapping.

    PubMed

    Germinario, Luigi; Cossio, Roberto; Maritan, Lara; Borghi, Alessandro; Mazzoli, Claudio

    2016-06-01

    In this study, µ-XRF was applied as a novel surface technique for quick acquisition of elemental X-ray maps of rocks, image analysis of which provides quantitative information on texture and rock-forming minerals. Bench-top µ-XRF is cost-effective, fast, and non-destructive, can be applied to both large (up to a few tens of cm) and fragile samples, and yields major and trace element analysis with good sensitivity. Here, X-ray mapping was performed with a resolution of 103.5 µm and spot size of 30 µm over sample areas of about 5×4 cm of Euganean trachyte, a volcanic porphyritic rock from the Euganean Hills (NE Italy) traditionally used in cultural heritage. The relative abundance of phenocrysts and groundmass, as well as the size and shape of the various mineral phases, were obtained from image analysis of the elemental maps. The quantified petrographic features allowed identification of various extraction sites, revealing an objective method for archaeometric provenance studies exploiting µ-XRF imaging.

  9. Accelerated Optical Projection Tomography Applied to In Vivo Imaging of Zebrafish

    PubMed Central

    Correia, Teresa; Yin, Jun; Ramel, Marie-Christine; Andrews, Natalie; Katan, Matilda; Bugeon, Laurence; Dallman, Margaret J.; McGinty, James; Frankel, Paul; French, Paul M. W.; Arridge, Simon

    2015-01-01

    Optical projection tomography (OPT) provides a non-invasive 3-D imaging modality that can be applied to longitudinal studies of live disease models, including in zebrafish. Current limitations include the requirement of a minimum number of angular projections for reconstruction of reasonable OPT images using filtered back projection (FBP), which is typically several hundred, leading to acquisition times of several minutes. It is highly desirable to decrease the number of required angular projections to decrease both the total acquisition time and the light dose to the sample. This is particularly important to enable longitudinal studies, which involve measurements of the same fish at different time points. In this work, we demonstrate that the use of an iterative algorithm to reconstruct sparsely sampled OPT data sets can provide useful 3-D images with 50 or fewer projections, thereby significantly decreasing the minimum acquisition time and light dose while maintaining image quality. A transgenic zebrafish embryo with fluorescent labelling of the vasculature was imaged to acquire densely sampled (800 projections) and under-sampled data sets of transmitted and fluorescence projection images. The under-sampled OPT data sets were reconstructed using an iterative total variation-based image reconstruction algorithm and compared against FBP reconstructions of the densely sampled data sets. To illustrate the potential for quantitative analysis following rapid OPT data acquisition, a Hessian-based method was applied to automatically segment the reconstructed images to select the vasculature network. Results showed that 3-D images of the zebrafish embryo and its vasculature of sufficient visual quality for quantitative analysis can be reconstructed using the iterative algorithm from only 32 projections—achieving up to 28 times improvement in imaging speed and leading to total acquisition times of a few seconds. PMID:26308086

  10. The Tonya Harding Controversy: An Analysis of Image Restoration Strategies.

    ERIC Educational Resources Information Center

    Benoit, William L.; Hanczor, Robert S.

    1994-01-01

    Analyzes Tonya Harding's defense of her image in "Eye to Eye with Connie Chung," applying the theory of image restoration discourse. Finds that the principal strategies employed in her behalf were bolstering, denial, and attacking her accuser, but that these strategies were not developed very effectively in this instance. (SR)

  11. Apply radiomics approach for early stage prognostic evaluation of ovarian cancer patients: a preliminary study

    NASA Astrophysics Data System (ADS)

    Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille; Moxley, Katherine; Moore, Kathleen; Mannel, Robert; Liu, Hong; Zheng, Bin; Qiu, Yuchen

    2017-03-01

    Predicting metastatic tumor response to chemotherapy at an early stage is critically important for improving the efficacy of clinical trials testing new chemotherapy drugs. However, using the current Response Evaluation Criteria In Solid Tumors (RECIST) guidelines yields only a limited accuracy in predicting tumor response. In order to address this clinical challenge, we applied a radiomics approach to develop a new quantitative image analysis scheme aiming to accurately assess the tumor response to new chemotherapy treatment for advanced ovarian cancer patients. For the experiment, a retrospective dataset containing 57 patients was assembled, each of whom had two sets of CT images: pre-therapy and 4-6 week follow-up CT images. A radiomics-based image analysis scheme was then applied to these images, composed of three steps. First, the tumors depicted on the CT images were segmented by a hybrid tumor segmentation scheme. Then, a total of 115 features were computed from the segmented tumors, which can be grouped as (1) volume-based features, (2) density-based features, and (3) wavelet features. Finally, an optimal feature cluster was selected based on single-feature performance, and an equal-weighted fusion rule was applied to generate the final predicting score. The results demonstrated that the single feature achieved an area under the receiver operating characteristic curve (AUC) of 0.838 ± 0.053. This investigation demonstrates that the radiomics approach may have potential for developing high-accuracy prediction models for early-stage prognostic assessment of ovarian cancer patients.
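
    The equal-weighted fusion rule and AUC evaluation can be sketched as follows; the feature values, normalization choice and helper name are hypothetical, not the study's implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fused_prediction_score(feature_matrix, responses):
    """Equal-weighted fusion of normalized features and its AUC.

    feature_matrix : (n_patients, n_selected_features) array of selected
                     radiomic features (hypothetical values).
    responses      : binary ground truth (1 = responder, 0 = non-responder).
    """
    # min-max normalize each feature so the equal weights are meaningful
    mins = feature_matrix.min(axis=0)
    spans = np.ptp(feature_matrix, axis=0) + 1e-12
    normalized = (feature_matrix - mins) / spans

    fused = normalized.mean(axis=1)            # equal-weighted fusion rule
    return fused, roc_auc_score(responses, fused)

# illustrative use with random data (not the study's 57-patient dataset)
rng = np.random.default_rng(0)
X = rng.normal(size=(57, 4))
y = np.r_[np.zeros(28, dtype=int), np.ones(29, dtype=int)]
scores, auc = fused_prediction_score(X, y)
```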

  12. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include geometric and densitometric volume and ejection fraction calculation from radionuclide and cine-angiograms, Fourier analysis of cardiac wall motion, vascular stenosis measurement, color-coded parametric display of regional flow distribution from dynamic coronary angiograms, and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. Rationale and objectives of the project: developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images.

  13. Temporal Processing of Dynamic Positron Emission Tomography via Principal Component Analysis in the Sinogram Domain

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Parker, B. J.; Feng, D. D.; Fulton, R.

    2004-10-01

    In this paper, we compare various temporal analysis schemes applied to dynamic PET for improved quantification, image quality and temporal compression purposes. We compare an optimal sampling schedule (OSS) design, principal component analysis (PCA) applied in the image domain, and principal component analysis applied in the sinogram domain; for region-of-interest quantification, sinogram-domain PCA is combined with the Huesman algorithm to quantify from the sinograms directly without requiring reconstruction of all PCA channels. Using a simulated phantom FDG brain study and three clinical studies, we evaluate the fidelity of the compressed data for estimation of local cerebral metabolic rate of glucose by a four-compartment model. Our results show that using a noise-normalized PCA in the sinogram domain gives similar compression ratio and quantitative accuracy to OSS, but with substantially better precision. These results indicate that sinogram-domain PCA for dynamic PET can be a useful preprocessing stage for PET compression and quantification applications.
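
    A bare-bones sketch of sinogram-domain PCA for a dynamic PET series via the SVD; the noise normalization discussed in the paper is omitted, and the array shapes are assumptions for illustration.

```python
import numpy as np

def sinogram_pca(sinograms, n_components=4):
    """PCA of a dynamic PET series directly in the sinogram domain.

    sinograms : array of shape (n_frames, n_bins), each row one time frame's
                sinogram flattened to a vector.
    Returns the principal sinogram channels, the per-frame weights, and a
    rank-reduced (compressed) reconstruction of the series.
    """
    mean_frame = sinograms.mean(axis=0)
    centered = sinograms - mean_frame
    # SVD of the centered data; rows of vt are the principal sinogram channels
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]
    scores = centered @ components.T                  # temporal weights per frame
    compressed = scores @ components + mean_frame     # rank-reduced series
    return components, scores, compressed
```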

  14. Digital diagnosis of medical images

    NASA Astrophysics Data System (ADS)

    Heinonen, Tomi; Kuismin, Raimo; Jormalainen, Raimo; Dastidar, Prasun; Frey, Harry; Eskola, Hannu

    2001-08-01

    The popularity of digital imaging devices and PACS installations has increased during recent years. Still, images are analyzed and diagnosed using conventional techniques. Our research group began to study the requirements for digital image diagnostic methods to be applied together with PACS systems. The research was focused on various image analysis procedures (e.g., segmentation, volumetry, 3D visualization, image fusion, anatomic atlas, etc.) that could be useful in medical diagnosis. We have developed Image Analysis software (www.medimag.net) that enables several image-processing applications in medical diagnosis, such as volumetry, multimodal visualization, and 3D visualization. We have also developed a commercial scalable image archive system (ActaServer, supports DICOM) based on component technology (www.acta.fi), and several telemedicine applications. All the software and systems operate in a Windows NT environment and are in clinical use in several hospitals. The analysis software has been applied in clinical work and utilized in numerous patient cases (500 patients). This method has been used in the diagnosis, therapy and follow-up of various diseases of the central nervous system (CNS), respiratory system (RS) and human reproductive system (HRS). In many of these diseases, e.g., systemic lupus erythematosus (CNS), nasal airway diseases (RS) and ovarian tumors (HRS), these methods have been used for the first time in clinical work. According to our results, digital diagnosis improves diagnostic capabilities, and together with PACS installations it will become a standard tool during the next decade by enabling more accurate diagnosis and patient follow-up.

  15. Lucky Imaging: Improved Localization Accuracy for Single Molecule Imaging

    PubMed Central

    Cronin, Bríd; de Wet, Ben; Wallace, Mark I.

    2009-01-01

    We apply the astronomical data-analysis technique, Lucky imaging, to improve resolution in single molecule fluorescence microscopy. We show that by selectively discarding data points from individual single-molecule trajectories, imaging resolution can be improved by a factor of 1.6 for individual fluorophores and up to 5.6 for more complex images. The method is illustrated using images of fluorescent dye molecules and quantum dots, and the in vivo imaging of fluorescently labeled linker for activation of T cells. PMID:19348772
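
    The generic lucky-imaging selection principle, frames scored and the sharpest fraction averaged, can be sketched as below; the gradient-energy score and keep fraction are assumptions, whereas the paper itself selects points along single-molecule localization trajectories.

```python
import numpy as np

def lucky_average(frames, keep_fraction=0.1):
    """Select the sharpest frames from a stack and average them.

    frames        : array of shape (n_frames, height, width).
    keep_fraction : fraction of frames retained (illustrative default).
    The sharpness score here is the image-gradient energy, a generic stand-in
    for the quality criterion used in lucky imaging.
    """
    gy, gx = np.gradient(frames.astype(float), axis=(1, 2))
    sharpness = (gx**2 + gy**2).sum(axis=(1, 2))        # one score per frame

    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(sharpness)[-n_keep:]              # indices of sharpest frames
    return frames[best].mean(axis=0)
```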

  16. Visible-regime polarimetric imager: a fully polarimetric, real-time imaging system.

    PubMed

    Barter, James D; Thompson, Harold R; Richardson, Christine L

    2003-03-20

    A fully polarimetric optical camera system has been constructed to obtain polarimetric information simultaneously from four synchronized charge-coupled device imagers at video frame rates of 60 Hz and a resolution of 640 x 480 pixels. The imagers view the same scene along the same optical axis by means of a four-way beam-splitting prism similar to ones used for multiple-imager, common-aperture color TV cameras. Appropriate polarizing filters in front of each imager provide the polarimetric information. Mueller matrix analysis of the polarimetric response of the prism, analyzing filters, and imagers is applied to the detected intensities in each imager as a function of the applied state of polarization over a wide range of linear and circular polarization combinations to obtain an average polarimetric calibration consistent to approximately 2%. Higher accuracies can be obtained by improvement of the polarimetric modeling of the splitting prism and by implementation of a pixel-by-pixel calibration.
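
    Assuming the four channels carry 0°, 45°, 90° linear and right-circular analyzer states, a per-pixel Stokes vector can be formed as below; the real system instead inverts a calibrated Mueller-matrix model of the prism and filters, so this is only an idealized sketch.

```python
import numpy as np

def stokes_from_channels(i0, i45, i90, i_rcp):
    """Per-pixel Stokes vector from four analyzer channels.

    i0, i45, i90 : intensities behind linear polarizers at 0, 45 and 90 degrees.
    i_rcp        : intensity behind a right-circular analyzer.
    Assumes ideal, perfectly co-registered analyzers.
    """
    s0 = i0 + i90                 # total intensity
    s1 = i0 - i90                 # horizontal vs vertical linear
    s2 = 2.0 * i45 - s0           # +45 vs -45 linear
    s3 = 2.0 * i_rcp - s0         # right vs left circular
    return np.stack([s0, s1, s2, s3])

# degree of polarization from the Stokes maps, e.g.:
# dop = np.sqrt(s[1]**2 + s[2]**2 + s[3]**2) / np.maximum(s[0], 1e-12)
```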

  17. Magneto-optical imaging technique for hostile environments: The ghost imaging approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meda, A.; Caprile, A.; Avella, A.

    2015-06-29

    In this paper, we develop an approach to magneto-optical imaging (MOI), applying a ghost imaging (GI) protocol to perform Faraday microscopy. MOI is of the utmost importance for the investigation of magnetic properties of material samples, through analysis of the shape, dimensions and dynamics of Weiss domains. Nevertheless, in some extreme conditions such as cryogenic temperatures or high magnetic field applications, there exists a lack of domain images due to the difficulty in creating an efficient imaging system in such environments. Here, we present an innovative MOI technique that separates the imaging optical path from the one illuminating the object. The technique is based on thermal light GI and exploits correlations between light beams to retrieve the image of magnetic domains. As a proof of principle, the proposed technique is applied to the Faraday magneto-optical observation of the remanence domain structure of an yttrium iron garnet sample.
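
    A minimal sketch of the thermal-light ghost-imaging estimator, which correlates the reference-arm speckle patterns with the bucket signal from the object arm; the inputs and names are assumed for illustration.

```python
import numpy as np

def ghost_image(reference_patterns, bucket_signals):
    """Ghost-imaging reconstruction by intensity correlation.

    reference_patterns : array (n_shots, height, width) of the speckle
                         patterns measured in the reference arm.
    bucket_signals     : array (n_shots,) of single-pixel (bucket)
                         intensities from the object arm.
    The image is the covariance <I_ref * B> - <I_ref><B> per pixel.
    """
    patterns = reference_patterns.astype(float)
    bucket = np.asarray(bucket_signals, dtype=float)

    mean_pattern = patterns.mean(axis=0)
    mean_bucket = bucket.mean()
    correlation = np.tensordot(bucket, patterns, axes=(0, 0)) / len(bucket)
    return correlation - mean_bucket * mean_pattern
```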

  18. Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment

    NASA Astrophysics Data System (ADS)

    David, S.; Visvikis, D.; Roux, C.; Hatt, M.

    2011-09-01

    In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to the maximum SUV measured in PET scans during the treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently to both images led to higher errors than the ASEM fusion, and on clinical datasets it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist in extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on biological tumor volume definition for radiotherapy applications.

  19. Evaluation of the veracity of one work by the artist Di Cavalcanti through non-destructive techniques: XRF, imaging and brush stroke analysis

    NASA Astrophysics Data System (ADS)

    Kajiya, E. A. M.; Campos, P. H. O. V.; Rizzutto, M. A.; Appoloni, C. R.; Lopes, F.

    2014-02-01

    This paper presents systematic studies and analysis that contributed to the identification of the forgery of a work by the artist Emiliano Augusto Cavalcanti de Albuquerque e Melo, known as Di Cavalcanti. The use of several areas of expertise such as brush stroke analysis ("pinacologia"), applied physics, and art history resulted in an accurate diagnosis for ascertaining the authenticity of the work entitled "Violeiro" (1950). For this work we used non-destructive methods such as techniques of infrared, ultraviolet, visible and tangential light imaging combined with chemical analysis of the pigments by portable X-Ray Fluorescence (XRF) and graphic gesture analysis. Each applied method of analysis produced specific information that made possible the identification of materials and techniques employed and we concluded that this work is not consistent with patterns characteristic of the artist Di Cavalcanti.

  20. Image contrast of diffraction-limited telescopes for circular incoherent sources of uniform radiance

    NASA Technical Reports Server (NTRS)

    Shackleford, W. L.

    1980-01-01

    A simple approximate formula is derived for the background intensity beyond the edge of the image of a uniform incoherent circular light source, relative to the irradiance near the center of the image. The analysis applies to diffraction-limited telescopes with or without central beam obscuration due to a secondary mirror. Scattering off optical surfaces is neglected. The analysis is expected to be most applicable to spaceborne IR telescopes, for which diffraction can be the major source of off-axis response.

  1. X-Ray Imaging Applied to Problems in Planetary Materials

    NASA Technical Reports Server (NTRS)

    Jurewicz, A. J. G.; Mih, D. T.; Jones, S. M.; Connolly, H.

    2000-01-01

    Real-time radiography (X-ray imaging) can be a useful tool for tasks such as (1) the non-destructive, preliminary examination of opaque samples and (2) optimizing how to section opaque samples for more traditional microscopy and chemical analysis.

  2. Higuchi Dimension of Digital Images

    PubMed Central

    Ahammer, Helmut

    2011-01-01

    There exist several methods for calculating the fractal dimension of objects represented as 2D digital images. For example, box counting, Minkowski dilation or Fourier analysis can be employed. However, there appear to be some limitations. It is not possible to calculate only the fractal dimension of an irregular region of interest in an image, or to perform the calculations in a particular direction along a line at an arbitrary angle through the image. The calculations must be made for the whole image. In this paper, a new method to overcome these limitations is proposed. 2D images are appropriately prepared in order to apply 1D signal analyses, originally developed to investigate nonlinear time series. The Higuchi dimension of these 1D signals is calculated using Higuchi's algorithm, and it is shown that both regions of interest and directional dependencies can be evaluated independently of the whole picture. A thorough validation of the proposed technique and a comparison of the new method to the Fourier dimension, a common two-dimensional method for digital images, are given. The main result is that Higuchi's algorithm allows a direction-dependent as well as a direction-independent analysis. Actual values for the fractal dimensions are reliable and an effective treatment of regions of interest is possible. Moreover, the proposed method is not restricted to Higuchi's algorithm, as any 1D method of analysis can be applied. PMID:21931854
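
    Higuchi's algorithm itself is compact; a sketch for a 1-D signal (such as a grey-level profile along an arbitrary line through an image) is given below, with k_max chosen arbitrarily.

```python
import numpy as np

def higuchi_fd(signal, k_max=8):
    """Higuchi fractal dimension of a 1-D signal (e.g. a grey-level profile
    extracted from an image row, column or arbitrary line)."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    log_k, log_l = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                       # k sub-curves, offsets m = 0..k-1
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length for this offset
            length = np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k)
            lengths.append(length / k)
        log_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(lengths)))
    # slope of log L(k) against log(1/k) gives the Higuchi dimension
    slope, _ = np.polyfit(log_k, log_l, 1)
    return slope
```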

  3. Digital imaging and image analysis applied to numerical applications in forensic hair examination.

    PubMed

    Brooks, Elizabeth; Comber, Bruce; McNaught, Ian; Robertson, James

    2011-03-01

    A non-destructive method that provides objective data to complement the hair analyst's microscopic observations would be of obvious benefit in the forensic examination of hairs. This paper reports on the use of objective colour measurement and image analysis techniques on auto-montaged images. Brown Caucasian telogen scalp hairs were chosen as a stern test of the utility of these approaches. The results show the value of using auto-montaged images and the potential for the use of objective numerical measures of colour and pigmentation to complement microscopic observations. © 2010. Published by Elsevier Ireland Ltd. All rights reserved.

  4. ImagePy: an open-source, Python-based and platform-independent software package for bioimage analysis.

    PubMed

    Wang, Anliang; Yan, Xiaolong; Wei, Zhijun

    2018-04-27

    This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution is concentrated on facilitating the extensibility and interoperability of the software by decoupling the data model from the user interface. Especially with assistance from the Python ecosystem, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. Contact: wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.

  5. Practical quantification of necrosis in histological whole-slide images.

    PubMed

    Homeyer, André; Schenk, Andrea; Arlt, Janine; Dahmen, Uta; Dirsch, Olaf; Hahn, Horst K

    2013-06-01

    Since the histological quantification of necrosis is a common task in medical research and practice, we evaluate different image analysis methods for quantifying necrosis in whole-slide images. In a practical usage scenario, we assess the impact of different classification algorithms and feature sets on both accuracy and computation time. We show how a well-chosen combination of multiresolution features and an efficient postprocessing step enables the accurate quantification of necrosis in gigapixel images in less than a minute. The results are general enough to be applied to other areas of histological image analysis as well. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Quantitative analysis for peripheral vascularity assessment based on clinical photoacoustic and ultrasound images

    NASA Astrophysics Data System (ADS)

    Murakoshi, Dai; Hirota, Kazuhiro; Ishii, Hiroyasu; Hashimoto, Atsushi; Ebata, Tetsurou; Irisawa, Kaku; Wada, Takatsugu; Hayakawa, Toshiro; Itoh, Kenji; Ishihara, Miya

    2018-02-01

    Photoacoustic (PA) imaging technology is expected to be applied to the clinical assessment of peripheral vascularity. We started a clinical evaluation with the prototype PA imaging system we recently developed. The prototype PA imaging system was composed of an in-house Q-switched Alexandrite laser emitting short laser pulses at a wavelength of 750 nm, a handheld ultrasound transducer with integrated illumination optics, and signal processing for PA image reconstruction implemented in the clinical ultrasound (US) system. For the purpose of quantitative assessment of PA images, an image analysis function was developed and applied to clinical PA images. In this function, vascularity, derived from the PA signal intensity within a prescribed threshold range, was defined as a numerical index of vessel fulfillment and calculated for a prescribed region of interest (ROI). The skin surface was automatically detected by utilizing the B-mode image acquired simultaneously with the PA image. The skin-surface position is used to place the ROI objectively while avoiding unwanted signals such as artifacts caused by melanin pigment in the epidermal layer, which absorbs the laser emission and generates strong PA signals. Multiple images were available to support the scanned image set for 3D viewing. PA images of several fingers of patients with systemic sclerosis (SSc) were quantitatively assessed. Since the artifact region is trimmed off in the PA images, the visibility of vessels with rather low PA signal intensity on the 3D projection image was enhanced and the reliability of the quantitative analysis was improved.
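
    A simple reading of the vascularity index described above, the fraction of above-threshold PA pixels inside a skin-referenced ROI, could look like the following; the ROI rule, names and parameters are assumptions, not the prototype's actual implementation.

```python
import numpy as np

def vascularity_index(pa_image, skin_rows, depth_px, threshold):
    """Fraction of PA pixels above a threshold inside a skin-referenced ROI.

    pa_image  : 2-D photoacoustic intensity image (depth x lateral).
    skin_rows : per-column row index of the detected skin surface
                (e.g. taken from the simultaneous B-mode image).
    depth_px  : ROI depth in pixels measured from the skin surface.
    threshold : PA intensity above which a pixel counts as vessel signal.
    """
    rows, cols = pa_image.shape
    row_idx = np.arange(rows)[:, None]
    # ROI mask: a band starting just below the skin line in every column
    roi = (row_idx > skin_rows[None, :]) & (row_idx <= skin_rows[None, :] + depth_px)
    vessel = roi & (pa_image >= threshold)
    return vessel.sum() / max(roi.sum(), 1)
```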

  7. Imaging techniques in digital forensic investigation: a study using neural networks

    NASA Astrophysics Data System (ADS)

    Williams, Godfried

    2006-09-01

    Imaging techniques have been applied to a number of applications, such as translation and classification problems in medicine and defence. This paper examines the application of imaging techniques in digital forensics investigation using neural networks. A review of applications of digital image processing is presented, while a pedagogical analysis of computer forensics is also highlighted. A data set describing selected images in different forms is used in the simulation and experimentation.

  8. Automated X-ray quality control of catalytic converters

    NASA Astrophysics Data System (ADS)

    Shashishekhar, N.; Veselitza, D.

    2017-02-01

    Catalytic converters are devices attached to the exhaust system of automobile or other engines to eliminate or substantially reduce polluting emissions. They consist of coated substrates enclosed in a stainless steel housing. The substrate is typically made of ceramic honeycombs; however, stainless steel foil honeycombs are also used. The coating is usually a slurry of alumina, silica, rare earth oxides and platinum group metals. The slurry, also known as the washcoat, is applied to the substrate in two doses, one on each end of the substrate; in some cases multiple layers of coating are applied. X-ray imaging is used to inspect the applied coating depth on a substrate to confirm compliance with quality requirements. Automated image analysis techniques are employed to measure the coating depth from the X-ray image. Coating depth is assessed by analysis of attenuation line profiles in the image. Edge detection algorithms with noise reduction and outlier rejection are used to calculate the coating depth at a specified point along an attenuation line profile. Quality control of the product is accomplished using several attenuation line profile regions for coating depth measurements, with individual pass or fail criteria specified for each region.
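
    The per-profile measurement can be sketched as smoothing followed by gradient-based edge detection; the boxcar filter and steepest-gradient rule below are illustrative stand-ins for the production algorithm's noise reduction and outlier rejection.

```python
import numpy as np

def coating_edge_position(profile, smooth_width=5):
    """Locate the coating edge along an attenuation line profile.

    profile      : 1-D array of X-ray attenuation values sampled along the
                   substrate axis (illustrative input).
    smooth_width : boxcar width used for noise reduction before differencing.
    Returns the index of the steepest attenuation change.
    """
    p = np.asarray(profile, dtype=float)
    kernel = np.ones(smooth_width) / smooth_width
    smoothed = np.convolve(p, kernel, mode="same")     # noise reduction
    gradient = np.gradient(smoothed)
    return int(np.argmax(np.abs(gradient)))            # steepest edge location

# coating depth: edge position times the pixel pitch of the detector, e.g.
# depth_mm = coating_edge_position(profile) * pixel_pitch_mm
```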

  9. Detection of Fungus Infection on Petals of Rapeseed (Brassica napus L.) Using NIR Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Yan-Ru; Yu, Ke-Qiang; Li, Xiaoli; He, Yong

    2016-12-01

    Infected petals are often regarded as the source for the spread of fungi Sclerotinia sclerotiorum in all growing process of rapeseed (Brassica napus L.) plants. This research aimed to detect fungal infection of rapeseed petals by applying hyperspectral imaging in the spectral region of 874-1734 nm coupled with chemometrics. Reflectance was extracted from regions of interest (ROIs) in the hyperspectral image of each sample. Firstly, principal component analysis (PCA) was applied to conduct a cluster analysis with the first several principal components (PCs). Then, two methods including X-loadings of PCA and random frog (RF) algorithm were used and compared for optimizing wavebands selection. Least squares-support vector machine (LS-SVM) methodology was employed to establish discriminative models based on the optimal and full wavebands. Finally, area under the receiver operating characteristics curve (AUC) was utilized to evaluate classification performance of these LS-SVM models. It was found that LS-SVM based on the combination of all optimal wavebands had the best performance with AUC of 0.929. These results were promising and demonstrated the potential of applying hyperspectral imaging in fungus infection detection on rapeseed petals.

  10. Score-level fusion of two-dimensional and three-dimensional palmprint for personal recognition systems

    NASA Astrophysics Data System (ADS)

    Chaa, Mourad; Boukezzoula, Naceur-Eddine; Attia, Abdelouahab

    2017-01-01

    Two types of scores extracted from two-dimensional (2-D) and three-dimensional (3-D) palmprints for personal recognition systems are merged, introducing a local image descriptor for 2-D palmprint-based recognition systems, named bank of binarized statistical image features (B-BSIF). The main idea of B-BSIF is that the histograms extracted from the binarized statistical image features (BSIF) code images (the results of applying BSIF descriptors of different sizes with length 12) are concatenated into one to produce a large feature vector. The 3-D palmprint contains the depth information of the palm surface. The self-quotient image (SQI) algorithm is applied for reconstructing illumination-invariant 3-D palmprint images. To extract discriminative Gabor features from the SQI images, Gabor wavelets are defined and used. Indeed, dimensionality reduction methods have shown their ability in biometric systems; given this, a principal component analysis (PCA) + linear discriminant analysis (LDA) technique is employed. For the matching process, the cosine Mahalanobis distance is applied. Extensive experiments were conducted on a 2-D and 3-D palmprint database with 10,400 range images from 260 individuals. Then, a comparison was made between the proposed algorithm and other existing methods in the literature. The results clearly show that the proposed framework provides a higher correct recognition rate. Furthermore, the best results were obtained by merging the score of the B-BSIF descriptor with the score of the SQI + Gabor wavelets + PCA + LDA method, yielding an equal error rate of 0.00% and a rank-1 recognition rate of 100.00%.

  11. Quantifying and visualizing variations in sets of images using continuous linear optimal transport

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil; Rohde, Gustavo K.

    2014-03-01

    Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes of variations, in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, that otherwise cannot be done by existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to the normal cells.

  12. MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics-Based Image Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wootton, L; Nyflot, M; Ford, E

    2016-06-15

    Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information which thresholding mostly discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitudes of 17 intensity-histogram and size-zone radiomic features were derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features to classify images as with or without errors on 180 gamma images. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors. This work was partially supported by a grant from the Agency for Healthcare Research and Quality, grant number R18 HS022244-01.
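
    A sketch of the intensity-histogram part of the feature extraction described above (the grey-level size-zone features are omitted); the function and key names are illustrative.

```python
import numpy as np
from scipy.stats import kurtosis

def gamma_histogram_features(gamma_image):
    """A few intensity-histogram features of a 2-D gamma distribution.

    This only illustrates the radiomics idea; the study also used grey-level
    size-zone features, which are not shown here.
    """
    values = np.asarray(gamma_image, dtype=float).ravel()
    return {
        "mean": values.mean(),
        "std": values.std(),
        "kurtosis": kurtosis(values),               # heaviness of the histogram tails
        "fraction_above_1": (values > 1.0).mean(),  # pixels failing gamma <= 1
    }
```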

  13. Magnetic Resonance Imaging Assessment of the Velopharyngeal Mechanism at Rest and during Speech in Chinese Adults and Children

    ERIC Educational Resources Information Center

    Tian, Wei; Yin, Heng; Redett, Richard J.; Shi, Bing; Shi, Jin; Zhang, Rui; Zheng, Qian

    2010-01-01

    Purpose: Recent applications of the magnetic resonance imaging (MRI) technique introduced accurate 3-dimensional measurements of the velopharyngeal mechanism. Further standardization of the data acquisition and analysis protocol was successfully applied to imaging adults at rest and during phonation. This study was designed to test and modify a…

  14. A Multimode Optical Imaging System for Preclinical Applications In Vivo: Technology Development, Multiscale Imaging, and Chemotherapy Assessment

    PubMed Central

    Hwang, Jae Youn; Wachsmann-Hogiu, Sebastian; Ramanujan, V. Krishnan; Ljubimova, Julia; Gross, Zeev; Gray, Harry B.; Medina-Kauwe, Lali K.; Farkas, Daniel L.

    2012-01-01

    Purpose: Several established optical imaging approaches have been applied, usually in isolation, to preclinical studies; however, truly useful in vivo imaging may require a simultaneous combination of imaging modalities to examine dynamic characteristics of cells and tissues. We developed a new multimode optical imaging system designed to be application-versatile, yielding high-sensitivity and high-specificity molecular imaging. Procedures: We integrated several optical imaging technologies, including fluorescence intensity, spectral, lifetime, intravital confocal, two-photon excitation, and bioluminescence, into a single system that enables functional multiscale imaging in animal models. Results: The approach offers a comprehensive imaging platform for kinetic, quantitative, and environmental analysis of highly relevant information, with micro-to-macroscopic resolution. Applied to small animals in vivo, this provides superior monitoring of processes of interest, represented here by chemo-/nanoconstruct therapy assessment. Conclusions: This new system is versatile and can be optimized for various applications, of which cancer detection and targeted treatment are emphasized here. PMID:21874388

  15. Deferred slanted-edge analysis: a unified approach to spatial frequency response measurement on distorted images and color filter array subsets.

    PubMed

    van den Bergh, F

    2018-03-01

    The slanted-edge method of spatial frequency response (SFR) measurement is usually applied to grayscale images under the assumption that any distortion of the expected straight edge is negligible. By decoupling the edge orientation and position estimation step from the edge spread function construction step, it is shown in this paper that the slanted-edge method can be extended to allow it to be applied to images suffering from significant geometric distortion, such as produced by equiangular fisheye lenses. This same decoupling also allows the slanted-edge method to be applied directly to Bayer-mosaicked images so that the SFR of the color filter array subsets can be measured directly without the unwanted influence of demosaicking artifacts. Numerical simulation results are presented to demonstrate the efficacy of the proposed deferred slanted-edge method in relation to existing methods.

  16. On the importance of mathematical methods for analysis of MALDI-imaging mass spectrometry data.

    PubMed

    Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore

    2012-03-21

    In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also called MALDI-imaging, has proven its potential in proteomics and has been successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular for cancer studies. A typical MALDI-imaging data set contains 10⁸ to 10⁹ intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we give an overview of some computational methods for the analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.

  17. On the Importance of Mathematical Methods for Analysis of MALDI-Imaging Mass Spectrometry Data.

    PubMed

    Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore

    2012-03-01

    In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also called MALDI-imaging, has proven its potential in proteomics and has been successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular for cancer studies. A typical MALDI-imaging data set contains 10⁸ to 10⁹ intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we give an overview of some computational methods for the analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.
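
    The preprocessing chain named in the abstract (normalization, baseline removal, peak picking) can be sketched per spectrum as below; the window sizes, thresholds and rolling-minimum baseline are simplifying assumptions, not a validated MALDI pipeline.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import minimum_filter1d, uniform_filter1d

def preprocess_spectrum(intensities, tic_target=1.0, baseline_window=101,
                        min_prominence=0.01):
    """Simplified preprocessing of one MALDI-imaging spectrum."""
    y = np.asarray(intensities, dtype=float)

    # total-ion-count (TIC) normalization
    y = y * (tic_target / max(y.sum(), 1e-12))

    # crude rolling-minimum baseline, smoothed and subtracted
    baseline = uniform_filter1d(minimum_filter1d(y, baseline_window), baseline_window)
    y = np.clip(y - baseline, 0.0, None)

    # peak picking on the baseline-corrected spectrum
    peaks, _ = find_peaks(y, prominence=min_prominence * y.max())
    return y, peaks
```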

  18. DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

    NASA Astrophysics Data System (ADS)

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-03-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows, causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  19. DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy.

    PubMed

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A; Kapur, Tina; Wells, William M; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-02-11

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  20. DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy

    PubMed Central

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-01-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose “DeepInfer” – an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections. PMID:28615794

  1. Object-based land cover classification based on fusion of multifrequency SAR data and THAICHOTE optical imagery

    NASA Astrophysics Data System (ADS)

    Sukawattanavijit, Chanika; Srestasathiern, Panu

    2017-10-01

    Land Use and Land Cover (LULC) information is significant for observing and evaluating environmental change. LULC classification applying remotely sensed data is a technique popularly employed at global and local scales, particularly in urban areas, which have diverse land cover types and are essential components of the urban terrain and ecosystem. At present, object-based image analysis (OBIA) is becoming widely popular for land cover classification using high-resolution images. COSMO-SkyMed SAR data were fused with THAICHOTE (namely THEOS, Thailand Earth Observation Satellite) optical data for object-based land cover classification. This paper presents a comparison between object-based and pixel-based approaches to image fusion. For the per-pixel method, support vector machines (SVM) were applied to the fused image based on principal component analysis (PCA). For the object-based classification, a nearest neighbor (NN) classifier was applied to the fused images to separate land cover classes. Finally, accuracy assessment was performed by comparing the land cover maps generated from the fused image dataset and from the THAICHOTE image alone. The object-based classification of the fused COSMO-SkyMed and THAICHOTE data demonstrated the best classification accuracies, well over 85%. As a result, object-based data fusion provides higher land cover classification accuracy than per-pixel data fusion.
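
    One common PCA-based fusion recipe, substituting the first principal component of the optical bands with the SAR band and inverting the transform, is sketched below; it illustrates PCA fusion in general and is not necessarily the exact variant used in the study.

```python
import numpy as np

def pca_fusion(optical_bands, sar_band):
    """PCA-based fusion of multispectral optical bands with a SAR band.

    optical_bands : array (n_bands, height, width) of co-registered optical bands.
    sar_band      : array (height, width); in a full workflow it would be
                    histogram-matched to the first PC beforehand (omitted here).
    """
    n_bands, h, w = optical_bands.shape
    flat = optical_bands.reshape(n_bands, -1).astype(float)
    mean = flat.mean(axis=1, keepdims=True)
    centered = flat - mean

    cov = centered @ centered.T / centered.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                # principal components, largest first
    eigvecs = eigvecs[:, order]

    pcs = eigvecs.T @ centered                       # project bands onto the PCs
    pcs[0] = sar_band.ravel() - sar_band.mean()      # substitute PC1 with the SAR band
    fused = eigvecs @ pcs + mean                     # inverse transform back to band space
    return fused.reshape(n_bands, h, w)
```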

  2. Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder

    2017-12-01

    The purpose of this paper is to present a variational method for geometric contours that keeps the level set function close to a signed distance function, thereby removing the need for an expensive re-initialization procedure. The level set method is applied to magnetic resonance images (MRI) to track irregularities in them, as medical imaging plays a substantial part in the treatment, therapy and diagnosis of various organs, tumors and abnormalities. It benefits the patient with speedier and more decisive disease control and fewer side effects. The geometrical shape, the tumor's size and the tissue's abnormal growth can be calculated from the segmentation of the image. Automatic segmentation in medical imaging is still a great challenge for researchers. Based on texture analysis, different images are processed by optimizing the level set segmentation. Traditionally, optimization was manual for every image, where each parameter was selected one after another. By applying fuzzy logic, the segmentation of the image is correlated with texture features to make it automatic and more effective. There is no initialization of parameters, and it works like an intelligent system. It segments different MRI images without tuning the level set parameters and gives optimized results for all MRIs.

  3. CNN-Based Retinal Image Upscaling Using Zero Component Analysis

    NASA Astrophysics Data System (ADS)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of this paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods such as DCCI, SI-3 and SRCNN on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential in establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
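
    Zero-phase component analysis (ZCA) whitening of training patches, which the dataset preparation applies before learning, can be sketched as follows; the patch layout and epsilon value are assumptions.

```python
import numpy as np

def zca_whitening(patches, epsilon=1e-5):
    """Zero-phase component analysis (ZCA) whitening of image patches.

    patches : array (n_patches, patch_size) of flattened training patches.
    epsilon : regularizer added to the eigenvalues (illustrative value).
    Returns the whitened patches and the whitening matrix, which can be
    applied to low-/high-resolution pairs before training the CNN.
    """
    x = patches.astype(float)
    x = x - x.mean(axis=0)
    cov = x.T @ x / x.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)
    # W = U diag(1/sqrt(lambda + eps)) U^T keeps whitened data close to the input
    whitening = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + epsilon)) @ eigvecs.T
    return x @ whitening, whitening
```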

  4. Optic disc segmentation: level set methods and blood vessels inpainting

    NASA Astrophysics Data System (ADS)

    Almazroa, A.; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-03-01

    Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of ONH abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique is applied. The algorithm is evaluated using a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). In the case of low quality images, a double level set is applied in which the first level set is considered to be a localization for the OD. Five hundred and fifty images are used to test the algorithm accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement is observed between the results of the algorithm and manual markings in 379 images.

  5. Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography

    PubMed Central

    Sidky, Emil Y.; Kraemer, David N.; Roth, Erin G.; Ullberg, Christer; Reiser, Ingrid S.; Pan, Xiaochuan

    2014-01-01

    Abstract. One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden for IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data. PMID:25685824

  6. Analysis of iterative region-of-interest image reconstruction for x-ray computed tomography.

    PubMed

    Sidky, Emil Y; Kraemer, David N; Roth, Erin G; Ullberg, Christer; Reiser, Ingrid S; Pan, Xiaochuan

    2014-10-03

    One of the challenges for iterative image reconstruction (IIR) is that such algorithms solve an imaging model implicitly, requiring a complete representation of the scanned subject within the viewing domain of the scanner. This requirement can place a prohibitively high computational burden for IIR applied to x-ray computed tomography (CT), especially when high-resolution tomographic volumes are required. In this work, we aim to develop an IIR algorithm for direct region-of-interest (ROI) image reconstruction. The proposed class of IIR algorithms is based on an optimization problem that incorporates a data fidelity term, which compares a derivative of the estimated data with the available projection data. In order to characterize this optimization problem, we apply it to computer-simulated two-dimensional fan-beam CT data, using both ideal noiseless data and realistic data containing a level of noise comparable to that of the breast CT application. The proposed method is demonstrated for both complete field-of-view and ROI imaging. To demonstrate the potential utility of the proposed ROI imaging method, it is applied to actual CT scanner data.

  7. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques.

    PubMed

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-12-01

    Liver ultrasound images are widely used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. In this study, several fuzzy-logic-based contrast enhancement algorithms were applied in MATLAB 2013b to liver ultrasound images in which the kidney is visible, since image contrast and quality have an inherently fuzzy definition: contrast improvement using a fuzzy intensification operator, fuzzy image histogram hyperbolization, and fuzzy IF-THEN rules. Based on the Mean Squared Error and Peak Signal to Noise Ratio measured on different images, the fuzzy methods provided better results; compared with histogram equalization, they improved both the contrast and visual quality of the images and the results of liver segmentation algorithms. Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was the strongest algorithm according to the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and other image processing and analysis applications.

  8. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques

    PubMed Central

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-01-01

    Background: Liver ultrasound images are widely used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. Methods: In this study, several fuzzy-logic-based contrast enhancement algorithms were applied in MATLAB 2013b to liver ultrasound images in which the kidney is visible, since image contrast and quality have an inherently fuzzy definition: contrast improvement using a fuzzy intensification operator, fuzzy image histogram hyperbolization, and fuzzy IF-THEN rules. Results: Based on the Mean Squared Error and Peak Signal to Noise Ratio measured on different images, the fuzzy methods provided better results; compared with histogram equalization, they improved both the contrast and visual quality of the images and the results of liver segmentation algorithms. Conclusion: Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was the strongest algorithm according to the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and other image processing and analysis applications. PMID:28077898
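
    The fuzzy intensification (INT) operator mentioned in both records above can be sketched in a few lines of NumPy: pixel values are fuzzified to [0, 1], pushed away from 0.5, and defuzzified back to 8-bit gray levels. The membership mapping and the number of passes are illustrative assumptions, not the study's MATLAB settings.

      import numpy as np

      def fuzzy_intensify(img, passes=1):
          """img: 2-D uint8 image; returns a contrast-enhanced uint8 image."""
          mu = img.astype(float) / 255.0                       # fuzzify to [0, 1]
          for _ in range(passes):
              low = mu < 0.5
              mu = np.where(low, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)  # INT operator
          return np.round(mu * 255.0).astype(np.uint8)         # defuzzify

      # Two passes push memberships away from 0.5: dark regions get darker, bright ones brighter.
      test = np.linspace(0, 255, 256).astype(np.uint8).reshape(16, 16)
      enhanced = fuzzy_intensify(test, passes=2)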

  9. Image analysis to evaluate the browning degree of banana (Musa spp.) peel.

    PubMed

    Cho, Jeong-Seok; Lee, Hyeon-Jeong; Park, Jung-Hoon; Sung, Jun-Hyung; Choi, Ji-Young; Moon, Kwang-Deog

    2016-03-01

    Image analysis was applied to examine banana peel browning. The banana samples were divided into 3 treatment groups: no treatment and normal packaging (Cont); CO2 gas exchange packaging (CO); normal packaging with an ethylene generator (ET). We confirmed that the browning of banana peels developed more quickly in the CO group than in the other groups, based on a sensory test and enzyme assays. The G (green) and CIE L*, a*, and b* values obtained from the image analysis sharply increased or decreased in the CO group. These colour values showed high correlation coefficients (>0.9) with the sensory test results. CIE L*a*b* values measured with a colorimeter also showed high correlation coefficients, but comparatively lower than those of image analysis. Based on this analysis, browning of the banana occurred more quickly for CO2 gas exchange packaging, and image analysis can be used to evaluate the browning of banana peels. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Perceptual security of encrypted images based on wavelet scaling analysis

    NASA Astrophysics Data System (ADS)

    Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.

    2016-08-01

    The scaling behavior of the pixel fluctuations of encrypted images is evaluated by using the detrended fluctuation analysis based on wavelets, a modern technique that has been successfully used recently for a wide range of natural phenomena and technological processes. As encryption algorithms, we use the Advanced Encryption Standard (AES) in RBT mode and two versions of a cryptosystem based on cellular automata, with the encryption process applied both fully and partially by selecting different bitplanes. In all cases, the results show that the encrypted images in which no understandable information can be visually appreciated and whose pixels look totally random present a persistent scaling behavior with the scaling exponent α close to 0.5, implying no correlation between pixels when the DFA with wavelets is applied. This suggests that the scaling exponents of the encrypted images can be used as a perceptual security criterion in the sense that when their values are close to 0.5 (the white noise value) the encrypted images are more secure also from the perceptual point of view.

  11. A new automated assessment method for contrast-detail images by applying support vector machine and its robustness to nonlinear image processing.

    PubMed

    Takei, Takaaki; Ikeda, Mitsuru; Imai, Kuniharu; Yamauchi-Kawaura, Chiyo; Kato, Katsuhiko; Isoda, Haruo

    2013-09-01

    The automated contrast-detail (C-D) analysis methods developed so far cannot be expected to work well on images processed with nonlinear methods, such as noise reduction methods. Therefore, we have devised a new automated C-D analysis method by applying a support vector machine (SVM), and tested it for robustness to nonlinear image processing. We acquired the CDRAD (a commercially available C-D test object) images at a tube voltage of 120 kV and a milliampere-second product (mAs) of 0.5-5.0. A partial diffusion equation based technique was used as the noise reduction method. Three radiologists and three university students participated in the observer performance study. The training data for our SVM method were the classification data scored by one radiologist for the CDRAD images acquired at 1.6 and 3.2 mAs and their noise-reduced images. We also compared the performance of our SVM method with the CDRAD Analyser algorithm. The mean C-D diagrams (that is, a plot of the mean of the smallest visible hole diameter vs. hole depth) obtained from our SVM method agreed well with the ones averaged across the six human observers for both original and noise-reduced CDRAD images, whereas the mean C-D diagrams from the CDRAD Analyser algorithm disagreed with the ones from the human observers for both original and noise-reduced CDRAD images. In conclusion, our proposed SVM method for C-D analysis will work well for images processed with the nonlinear noise reduction method as well as for the original radiographic images.
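
    The SVM step itself can be sketched with scikit-learn as below; the per-hole feature vector (contrast, diameter, depth), the synthetic training labels, and the RBF kernel settings are assumptions for illustration only, not the authors' trained model.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      X_train = rng.normal(size=(300, 3))              # [contrast, diameter, depth] per hole (synthetic)
      y_train = (X_train.sum(axis=1) > 0).astype(int)  # 1 = "visible" (synthetic observer scores)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      clf.fit(X_train, y_train)

      X_new = rng.normal(size=(10, 3))                 # holes from a new CDRAD image
      visible = clf.predict(X_new)                     # smallest visible diameter per depth -> C-D diagram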

  12. Computer-aided, multi-modal, and compression diffuse optical studies of breast tissue

    NASA Astrophysics Data System (ADS)

    Busch, David Richard, Jr.

    Diffuse Optical Tomography and Spectroscopy permit measurement of important physiological parameters non-invasively through ˜10 cm of tissue. I have applied these techniques in measurements of human breast and breast cancer. My thesis integrates three loosely connected themes in this context: multi-modal breast cancer imaging, automated data analysis of breast cancer images, and microvascular hemodynamics of breast under compression. As per the first theme, I describe construction, testing, and the initial clinical usage of two generations of imaging systems for simultaneous diffuse optical and magnetic resonance imaging. The second project develops a statistical analysis of optical breast data from many spatial locations in a population of cancers to derive a novel optical signature of malignancy; I then apply this data-derived signature for localization of cancer in additional subjects. Finally, I construct and deploy diffuse optical instrumentation to measure blood content and blood flow during breast compression; besides optics, this research has implications for any method employing breast compression, e.g., mammography.

  13. Image processing developments and applications for water quality monitoring and trophic state determination

    NASA Technical Reports Server (NTRS)

    Blackwell, R. J.

    1982-01-01

    Remote sensing data analysis for water quality monitoring is evaluated. Data analysis and image processing techniques are applied to LANDSAT remote sensing data to produce an effective operational tool for lake water quality surveying and monitoring. Digital image processing and analysis techniques were designed, developed, tested, and applied to LANDSAT multispectral scanner (MSS) data and conventional surface-acquired data. Utilization of these techniques facilitates the surveying and monitoring of large numbers of lakes in an operational manner. Supervised multispectral classification, when used in conjunction with surface-acquired water quality indicators, is used to characterize water body trophic status. Unsupervised multispectral classification, when interpreted by lake scientists familiar with a specific water body, yields classifications of equal validity to supervised methods and in a more cost-effective manner. Image database technology is used to great advantage in characterizing other effects contributing to water quality. These effects include drainage basin configuration, terrain slope, soil, precipitation, and land cover characteristics.

  14. Hyperspectral image analysis for the determination of alteration minerals in geothermal fields: Çürüksu (Denizli) Graben, Turkey

    NASA Astrophysics Data System (ADS)

    Uygur, Merve; Karaman, Muhittin; Kumral, Mustafa

    2016-04-01

    Çürüksu (Denizli) Graben hosts various geothermal fields such as Kızıldere, Yenice, Gerali, Karahayıt, and Tekkehamam. Neotectonic activities, which are caused by extensional tectonism, and deep circulation in sub-volcanic intrusions are the heat sources of the hydrothermal solutions. The temperature of the hydrothermal solutions is between 53 and 260 degrees Celsius. Phyllic, argillic, silicic, and carbonatization alterations and various hydrothermal minerals have been identified in various research studies of these areas. Surfaced hydrothermal alteration minerals are one set of potential indicators of geothermal resources. Developing exploration tools to define the surface indicators of geothermal fields can assist in the recognition of geothermal resources. Thermal and hyperspectral imaging and analysis can be used for defining the surface indicators of geothermal fields. This study tests the hypothesis that hyperspectral image analysis based on EO-1 Hyperion images can be used for the delineation and definition of surfaced hydrothermal alteration in geothermal fields. Hyperspectral image analyses were applied to images covering the geothermal fields whose alteration characteristics are known. To reduce data dimensionality and identify spectral endmembers, Kruse's multi-step process was applied to atmospherically and geometrically corrected hyperspectral images. Minimum Noise Fraction was used to reduce the spectral dimensions and isolate noise in the images. Extreme pixels were identified from high-order MNF bands using the Pixel Purity Index. n-Dimensional Visualization was utilized for unique pixel identification. Spectral similarities between pixel spectral signatures and known endmember spectra (USGS Spectral Library) were compared with Spectral Angle Mapper classification. EO-1 Hyperion hyperspectral images and hyperspectral analysis are sensitive to hydrothermal alteration minerals, as the diagnostic spectral signatures of the alteration minerals seen in geothermal fields span the visible and shortwave infrared regions. Hyperspectral analysis results indicated that kaolinite, smectite, illite, montmorillonite, and sepiolite minerals were distributed over a wide area, which covered the hot spring outlet. Rectorite, lizardite, richterite, dumortierite, nontronite, erionite, and clinoptilolite were observed occasionally.
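
    The Spectral Angle Mapper step mentioned above reduces to computing the angle between each pixel spectrum and a library endmember spectrum. A minimal NumPy sketch follows; the cube size, the stand-in endmember, and the angle threshold are illustrative assumptions.

      import numpy as np

      def spectral_angle(cube, endmember):
          """cube: (rows, cols, bands); endmember: (bands,). Returns an angle map in radians."""
          flat = cube.reshape(-1, cube.shape[-1]).astype(float)
          num = flat @ endmember
          den = np.linalg.norm(flat, axis=1) * np.linalg.norm(endmember) + 1e-12
          return np.arccos(np.clip(num / den, -1.0, 1.0)).reshape(cube.shape[:2])

      cube = np.random.rand(50, 50, 120)             # stand-in for a corrected Hyperion subset
      kaolinite = np.random.rand(120)                # stand-in for a USGS library spectrum
      angles = spectral_angle(cube, kaolinite)
      mapped = angles < 0.10                         # pixels assigned to this endmember (assumed cutoff)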

  15. Bayesian Analysis of Hmi Images and Comparison to Tsi Variations and MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.; Tran, T. V.

    2015-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96; Ulrich et al., 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high-resolution images capture four observables (magnetic field, continuum intensity, line depth, and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from June 2010 to December 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to automatically categorize surface features in the HMI images holds out the promise of consistent, relatively quick, and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Reference: Ulrich, R.K., Parker, D., Bertello, L., and Boyden, J. 2010, Solar Phys., 261, 11.

  16. Direct fusion of geostationary meteorological satellite visible and infrared images based on thermal physical properties.

    PubMed

    Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing

    2015-01-05

    This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion.
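
    The energy-matching idea can be sketched as a block-wise rescaling of the VIS image so that each block's mean tracks the radiant energy implied by the IR brightness temperature via the Stefan-Boltzmann law. The block size and the assumption that the IR image provides brightness temperature in kelvin are illustrative; this is not the authors' exact processing chain.

      import numpy as np

      SIGMA = 5.670374419e-8                         # Stefan-Boltzmann constant, W m^-2 K^-4
      BLOCK = 4                                      # VIS pixels per IR pixel along each axis (assumed)

      def fuse(vis, ir_temp):
          """vis: (H, W) visible image; ir_temp: (H//BLOCK, W//BLOCK) brightness temperature [K]."""
          fused = vis.astype(float).copy()
          energy = SIGMA * ir_temp.astype(float) ** 4
          for i in range(ir_temp.shape[0]):
              for j in range(ir_temp.shape[1]):
                  blk = fused[i * BLOCK:(i + 1) * BLOCK, j * BLOCK:(j + 1) * BLOCK]
                  blk *= energy[i, j] / (blk.mean() + 1e-12)   # match regional thermal energy
          return fused

      vis = np.random.rand(64, 64)
      ir_temp = 250.0 + 50.0 * np.random.rand(16, 16)
      ir_like_high_res = fuse(vis, ir_temp)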

  17. Direct Fusion of Geostationary Meteorological Satellite Visible and Infrared Images Based on Thermal Physical Properties

    PubMed Central

    Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing

    2015-01-01

    This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion. PMID:25569749

  18. Gabor filter for the segmentation of skin lesions from ultrasonographic images

    NASA Astrophysics Data System (ADS)

    Petrella, Lorena I.; Gómez, W.; Alvarenga, André V.; Pereira, Wagner C. A.

    2012-05-01

    The present work applies a Gabor filter bank for texture analysis of skin lesion images obtained by ultrasound biomicroscopy. The regions affected by the lesions were differentiated from the surrounding tissue in all the analyzed cases; however, the accuracy of the traced borders showed some limitations in part of the images. Future steps are being contemplated to enhance the technique's performance.
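
    A small Gabor filter bank of the kind used for such texture analysis can be sketched with scikit-image; the chosen frequencies, orientations, and the mean/variance features are illustrative assumptions.

      import numpy as np
      from skimage.filters import gabor

      def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orient=4):
          """Mean and variance of the Gabor magnitude response per (frequency, orientation)."""
          feats = []
          for f in frequencies:
              for k in range(n_orient):
                  real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orient)
                  mag = np.hypot(real, imag)
                  feats.extend([mag.mean(), mag.var()])
          return np.array(feats)

      img = np.random.rand(128, 128)                 # stand-in for an ultrasound biomicroscopy image
      features = gabor_features(img)                 # fed to the subsequent segmentation step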

  19. Image distortion analysis using polynomial series expansion.

    PubMed

    Baggenstoss, Paul M

    2004-11-01

    In this paper, we derive a technique for analysis of local distortions which affect data in real-world applications. In the paper, we focus on image data, specifically handwritten characters. Given a reference image and a distorted copy of it, the method is able to efficiently determine the rotations, translations, scaling, and any other distortions that have been applied. Because the method is robust, it is also able to estimate distortions for two unrelated images, thus determining the distortions that would be required to cause the two images to resemble each other. The approach is based on a polynomial series expansion using matrix powers of linear transformation matrices. The technique has applications in pattern recognition in the presence of distortions.

  20. Two-dimensional DFA scaling analysis applied to encrypted images

    NASA Astrophysics Data System (ADS)

    Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.

    2015-01-01

    The technique of detrended fluctuation analysis (DFA) has been widely used to unveil scaling properties of many different signals. In this paper, we determine scaling properties in the encrypted images by means of a two-dimensional DFA approach. To carry out the image encryption, we use an enhanced cryptosystem based on a rule-90 cellular automaton and we compare the results obtained with its unmodified version and the encryption system AES. The numerical results show that the encrypted images present a persistent behavior which is close to that of the 1/f-noise. These results point to the possibility that the DFA scaling exponent can be used to measure the quality of the encrypted image content.
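
    A minimal two-dimensional DFA sketch is given below, following the common box-wise formulation (within-box cumulative sum, bilinear detrending, log-log fit of the fluctuation function); the scales and detrending order are illustrative choices and not necessarily the exact variant used in the paper.

      import numpy as np

      def dfa2d(img, scales):
          X = img.astype(float)
          F = []
          for s in scales:
              u = np.arange(s)
              U, V = np.meshgrid(u, u, indexing="ij")
              G = np.column_stack([np.ones(s * s), U.ravel(), V.ravel(), (U * V).ravel()])
              res2 = []
              for i in range(0, X.shape[0] - s + 1, s):
                  for j in range(0, X.shape[1] - s + 1, s):
                      Y = np.cumsum(np.cumsum(X[i:i + s, j:j + s], axis=0), axis=1).ravel()
                      coef, *_ = np.linalg.lstsq(G, Y, rcond=None)        # bilinear detrending
                      res2.append(np.mean((Y - G @ coef) ** 2))
              F.append(np.sqrt(np.mean(res2)))
          return np.polyfit(np.log(scales), np.log(F), 1)[0]              # scaling exponent

      encrypted = np.random.randint(0, 256, (256, 256))                   # stand-in encrypted image
      print("scaling exponent:", round(dfa2d(encrypted, scales=[8, 16, 32, 64]), 3))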

  1. Applications of magnetic resonance image segmentation in neurology

    NASA Astrophysics Data System (ADS)

    Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu

    1999-05-01

    After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, making segmentation essential in modern image analysis. In this research project, several PC-based software tools were developed to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals were integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.

  2. [Image processing applied to the analysis of motion features of cultured rat cardiac myocytes].

    PubMed

    Teng, Qizhi; He, Xiaohai; Luo, Daisheng; Wang, Zhengrong; Zhou, Beiyi; Yuan, Zhirun; Tao, Dachang

    2007-02-01

    Studying the mechanisms of drug action through quantitative analysis of cultured cardiac myocytes is a cutting-edge research topic in myocyte dynamics and molecular biology. The fact that cardiac myocytes beat spontaneously, without external stimulation, makes such research meaningful. Analysis of the morphology and motion of cardiac myocytes using image analysis can reveal the fundamental mechanisms of drug action, increase the accuracy of drug screening, and help design optimal drug formulations for treatment. A hardware and software system has been built with a complete set of functions, including living cardiac myocyte image acquisition, image processing, motion image analysis, and image recognition. In this paper, theories and approaches are introduced for analyzing living cardiac myocyte motion images and implementing quantitative analysis of cardiac myocyte features. A motion estimation algorithm is used to detect the motion vectors of particular points and the beating amplitude and frequency of a cardiac myocyte. The beating of a cardiac myocyte is sometimes very small; in such cases, it is difficult to detect the motion vectors of the particular points in a time sequence of images. For this reason, image correlation is employed to detect the beating frequencies. An active contour algorithm based on an energy function is proposed to approximate the boundary and detect changes in the myocyte edge.
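
    The correlation-based frequency detection mentioned above can be sketched by correlating every frame with a reference frame and reading the dominant peak of the resulting time series' spectrum; the frame rate and stack shape below are assumptions.

      import numpy as np

      def beating_frequency(frames, fps):
          """frames: (T, H, W) image stack of one myocyte; fps: frames per second."""
          ref = frames[0].astype(float).ravel()
          ref -= ref.mean()
          corr = []
          for f in frames:
              v = f.astype(float).ravel()
              v -= v.mean()
              corr.append(np.dot(v, ref) / (np.linalg.norm(v) * np.linalg.norm(ref) + 1e-12))
          corr = np.asarray(corr) - np.mean(corr)
          spec = np.abs(np.fft.rfft(corr))
          freqs = np.fft.rfftfreq(len(corr), d=1.0 / fps)
          return freqs[1:][np.argmax(spec[1:])]          # dominant non-DC frequency

      stack = np.random.rand(300, 64, 64)                # stand-in for a 10 s recording at 30 fps
      print("beat frequency [Hz]:", beating_frequency(stack, fps=30))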

  3. Spatiotemporal image correlation analysis of blood flow in branched vessel networks of zebrafish embryos

    NASA Astrophysics Data System (ADS)

    Ceffa, Nicolo G.; Cesana, Ilaria; Collini, Maddalena; D'Alfonso, Laura; Carra, Silvia; Cotelli, Franco; Sironi, Laura; Chirico, Giuseppe

    2017-10-01

    Ramification of blood circulation is relevant in a number of physiological and pathological conditions. The oxygen exchange occurs largely in the capillary bed, and the cancer progression is closely linked to the angiogenesis around the tumor mass. Optical microscopy has made impressive improvements in in vivo imaging and dynamic studies based on correlation analysis of time stacks of images. Here, we develop and test advanced methods that allow mapping the flow fields in branched vessel networks at the resolution of 10 to 20 μm. The methods, based on the application of spatiotemporal image correlation spectroscopy and its extension to cross-correlation analysis, are applied here to the case of early stage embryos of zebrafish.

  4. Multiscale Analysis of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C. A.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also cursed us with a larger amount of more complex data than previous missions. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade. Many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales. So these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-d wavelet transform and related transforms with EIT, LASCO and TRACE images. This work was supported by NASA contract NAS5-00220.
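
    A minimal multiscale decomposition of a solar image with PyWavelets might look like the sketch below; the wavelet, decomposition level, and noise-scaled threshold are illustrative assumptions rather than the authors' choices.

      import numpy as np
      import pywt

      img = np.random.rand(512, 512)                     # stand-in for a solar EUV image
      coeffs = pywt.wavedec2(img, wavelet="bior4.4", level=4)

      # Keep only significant detail coefficients at each scale (simple hard threshold).
      kept = [coeffs[0]]
      for (cH, cV, cD) in coeffs[1:]:
          k = 3.0 * np.median(np.abs(cD)) / 0.6745       # noise-scaled threshold (assumed)
          kept.append(tuple(np.where(np.abs(c) > k, c, 0.0) for c in (cH, cV, cD)))

      support = pywt.waverec2(kept, wavelet="bior4.4")   # multiresolution support image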

  5. Topologic analysis and comparison of brain activation in children with epilepsy versus controls: an fMRI study

    NASA Astrophysics Data System (ADS)

    Oweis, Khalid J.; Berl, Madison M.; Gaillard, William D.; Duke, Elizabeth S.; Blackstone, Kaitlin; Loew, Murray H.; Zara, Jason M.

    2010-03-01

    This paper describes the development of novel computer-aided analysis algorithms to identify language activation patterns in a given region of interest (ROI) in functional magnetic resonance imaging (fMRI). Previous analysis techniques have been used to compare typical and pathologic activation patterns in fMRI images resulting from identical tasks, but none of them analyzed activation topographically in a quantitative manner. This paper presents new analysis techniques and algorithms capable of identifying a pattern of language activation associated with localization-related epilepsy. fMRI images of 64 healthy individuals and 31 patients with localization-related epilepsy have been studied and analyzed on an ROI basis. All subjects are right handed with normal MRI scans and have been classified into three age groups (4-6, 7-9, 10-12 years). Our initial efforts have focused on investigating activation in the Left Inferior Frontal Gyrus (LIFG). A number of volumetric features have been extracted from the data. The LIFG has been cut into slices and the activation has been investigated topographically on a slice-by-slice basis. Overall, a total of 809 features have been extracted, and correlation analysis was applied to eliminate highly correlated features. Principal component analysis was then applied to retain only the major components in the data, and one-way analysis of variance (ANOVA) was applied to test for significantly different features between the normal and patient groups. Twenty-nine features were found to be significantly different (p < 0.05) between the patient and control groups.
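
    The feature-reduction chain (correlation pruning, PCA, one-way ANOVA) can be sketched with scikit-learn as follows; the synthetic data, the correlation cutoff, and the explained-variance target are illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.feature_selection import f_classif

      rng = np.random.default_rng(0)
      X = rng.normal(size=(95, 809))                 # 95 subjects x 809 LIFG features (synthetic)
      y = np.r_[np.zeros(64), np.ones(31)]           # 0 = control, 1 = epilepsy (synthetic)

      # 1) drop one feature of every highly correlated pair (|r| > 0.9)
      upper = np.triu(np.abs(np.corrcoef(X, rowvar=False)), k=1)
      X_red = X[:, ~(upper > 0.9).any(axis=0)]

      # 2) keep the principal components explaining 95 % of the variance
      X_pc = PCA(n_components=0.95, svd_solver="full").fit_transform(X_red)

      # 3) one-way ANOVA per component between patient and control groups
      F, p = f_classif(X_pc, y)
      significant = np.where(p < 0.05)[0]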

  6. Component pattern analysis of chemicals using multispectral THz imaging system

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki

    2004-04-01

    We have developed a novel basic technology for terahertz (THz) imaging, which allows detection and identification of chemicals by introducing the component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.

  7. SAR image change detection using watershed and spectral clustering

    NASA Astrophysics Data System (ADS)

    Niu, Ruican; Jiao, L. C.; Wang, Guiting; Feng, Jie

    2011-12-01

    A new method of change detection in SAR images based on spectral clustering is presented in this paper. Spectral clustering is employed to extract change information from a pair of images acquired over the same geographical area at different times. The watershed transform is first applied to segment the image into non-overlapping local regions, which reduces the complexity. Experimental results and analysis confirm the effectiveness of the proposed algorithm.
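
    A compact sketch of the two-stage idea (watershed over-segmentation of the log-ratio image, then spectral clustering of per-region statistics into changed/unchanged) is given below; the log-ratio difference image, the marker count, and the two-feature region descriptor are assumptions, not the authors' exact formulation.

      import numpy as np
      from skimage.filters import sobel
      from skimage.segmentation import watershed
      from sklearn.cluster import SpectralClustering

      img1 = np.random.rand(128, 128) + 0.1          # SAR image at time 1 (stand-in)
      img2 = np.random.rand(128, 128) + 0.1          # SAR image at time 2 (stand-in)
      log_ratio = np.abs(np.log(img1 / img2))        # difference image

      # Over-segment into local regions to reduce the problem size.
      labels = watershed(sobel(log_ratio), markers=200)

      # One small feature vector (mean, std of the log-ratio) per region.
      feats = np.array([[log_ratio[labels == r].mean(), log_ratio[labels == r].std()]
                        for r in np.unique(labels)])

      changed = SpectralClustering(n_clusters=2, affinity="rbf").fit_predict(feats)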

  8. Segmentation of radiologic images with self-organizing maps: the segmentation problem transformed into a classification task

    NASA Astrophysics Data System (ADS)

    Pelikan, Erich; Vogelsang, Frank; Tolxdorff, Thomas

    1996-04-01

    The texture-based segmentation of x-ray images of focal bone lesions using topological maps is introduced. Texture characteristics are described by image-point correlation of feature images to feature vectors. For the segmentation, the topological map is labeled using an improved labeling strategy. Results of the technique are demonstrated on original and synthetic x-ray images and quantified with the aid of quality measures. In addition, a classifier-specific contribution analysis is applied for assessing the feature space.

  9. Contrast improvement of terahertz images of thin histopathologic sections

    PubMed Central

    Formanek, Florian; Brun, Marc-Aurèle; Yasuda, Akio

    2011-01-01

    We present terahertz images of 10 μm thick histopathologic sections obtained in reflection geometry with a time-domain spectrometer, and demonstrate improved contrast for sections measured in paraffin with water. Automated segmentation is applied to the complex refractive index data to generate clustered terahertz images distinguishing cancer from healthy tissues. The degree of classification of pixels is then evaluated using registered visible microscope images. Principal component analysis and propagation simulations are employed to investigate the origin and the gain of image contrast. PMID:21326635

  10. Contrast improvement of terahertz images of thin histopathologic sections.

    PubMed

    Formanek, Florian; Brun, Marc-Aurèle; Yasuda, Akio

    2010-12-03

    We present terahertz images of 10 μm thick histopathologic sections obtained in reflection geometry with a time-domain spectrometer, and demonstrate improved contrast for sections measured in paraffin with water. Automated segmentation is applied to the complex refractive index data to generate clustered terahertz images distinguishing cancer from healthy tissues. The degree of classification of pixels is then evaluated using registered visible microscope images. Principal component analysis and propagation simulations are employed to investigate the origin and the gain of image contrast.

  11. Detection of Pigment Networks in Dermoscopy Images

    NASA Astrophysics Data System (ADS)

    Eltayef, Khalid; Li, Yongmin; Liu, Xiaohui

    2017-02-01

    One of the most important structures in dermoscopy images is the pigment network, whose detection is one of the most challenging and fundamental tasks for dermatologists in the early detection of melanoma. This paper presents an automatic system to detect pigment networks in dermoscopy images. The design of the proposed algorithm consists of four stages. First, a pre-processing algorithm is carried out in order to remove noise and improve the quality of the image. Second, a bank of directional filters and morphological connected component analysis are applied to detect the pigment networks. Third, features are extracted from the detected image, which can be used in the subsequent stage. Fourth, the classification process is performed by applying a feed-forward neural network, in order to classify the region as either normal or abnormal skin. The method was tested on a dataset of 200 dermoscopy images from Hospital Pedro Hispano (Matosinhos), and better results were produced compared to previous studies.

  12. Follow-up of solar lentigo depigmentation with a retinaldehyde-based cream by clinical evaluation and calibrated colour imaging.

    PubMed

    Questel, E; Durbise, E; Bardy, A-L; Schmitt, A-M; Josse, G

    2015-05-01

    To assess an objective method for evaluating the effects of a retinaldehyde-based cream (RA-cream) on solar lentigines, 29 women randomly applied RA-cream to lentigines of one hand and a control cream to the other, once daily for 3 months. A specific method enabling a reliable visualisation of the lesions was proposed, using high-magnification colour-calibrated camera imaging. Assessment was performed using clinical evaluation by Physician Global Assessment score and image analysis. Luminance determination on the numeric images was performed either on the basis of five independent experts' consensus borders or by probability map analysis via an algorithm automatically detecting the pigmented area. Both image analysis methods showed a similar lightening of ΔL* = 2 after a 3-month treatment with RA-cream, in agreement with single-blind clinical evaluation. High-magnification colour-calibrated camera imaging combined with probability map analysis is a fast and precise method to follow lentigo depigmentation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Tissue classification for laparoscopic image understanding based on multispectral texture analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Wirkert, Sebastian J.; Iszatt, Justin; Kenngott, Hannes; Wagner, Martin; Mayer, Benjamin; Stock, Christian; Clancy, Neil T.; Elson, Daniel S.; Maier-Hein, Lena

    2016-03-01

    Intra-operative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study we show (1) that multispectral imaging data is superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) that combining the tissue texture with the reflectance spectrum improves the classification performance. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.

  14. Quantitative Analysis Tools and Digital Phantoms for Deformable Image Registration Quality Assurance.

    PubMed

    Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W

    2015-08-01

    This article proposes quantitative analysis tools and digital phantoms to quantify the intrinsic errors of deformable image registration (DIR) systems and to establish quality assurance (QA) procedures for clinical use of DIR systems, utilizing local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set, R, and a test image set, T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref to the original reference image deforms T into a new image R'. The data set, R', T, and DVFref, forms a realistic truth set and therefore can be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis, i.e., calculating and delineating the differences between DVFs, two methods were used: (1) a local error analysis tool that displays deformation error magnitudes with color mapping on each image slice, and (2) a global error analysis tool that calculates a deformation error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head and neck, lung, and liver cancers. The DIR QA procedure was evaluated using the head and neck case. © The Author(s) 2014.
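
    Both DVF comparison tools reduce to simple array operations, sketched below in NumPy: a voxel-wise error magnitude map for slice-by-slice color display, and a cumulative error histogram per structure; the array shapes, noise level, and structure mask are illustrative assumptions.

      import numpy as np

      dvf_ref = np.random.rand(64, 64, 32, 3)        # reference DVF, displacements in mm (synthetic)
      dvf_test = dvf_ref + 0.5 * np.random.randn(64, 64, 32, 3)   # DVF from the system under test
      structure = np.zeros((64, 64, 32), bool)
      structure[20:40, 20:40, 10:20] = True          # mask of one anatomical structure (assumed)

      # 1) local analysis: voxel-wise deformation error magnitude, displayable slice by slice
      err_mag = np.linalg.norm(dvf_test - dvf_ref, axis=-1)

      # 2) global analysis: cumulative probability of error within the structure
      errs = np.sort(err_mag[structure])
      cum_prob = np.arange(1, errs.size + 1) / errs.size
      p95 = errs[np.searchsorted(cum_prob, 0.95)]    # e.g. a 95th-percentile error for QA tolerances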

  15. Detection of explosives on the surface of banknotes by Raman hyperspectral imaging and independent component analysis.

    PubMed

    Almeida, Mariana R; Correa, Deleon N; Zacca, Jorge J; Logrado, Lucio Paulo Lima; Poppi, Ronei J

    2015-02-20

    The aim of this study was to develop a methodology using Raman hyperspectral imaging and chemometric methods for identification of pre- and post-blast explosive residues on banknote surfaces. The explosives studied were of military, commercial and propellant uses. After the acquisition of the hyperspectral imaging, independent component analysis (ICA) was applied to extract the pure spectra and the distribution of the corresponding image constituents. The performance of the methodology was evaluated by the explained variance and the lack of fit of the models, by comparing the ICA recovered spectra with the reference spectra using correlation coefficients and by the presence of rotational ambiguity in the ICA solutions. The methodology was applied to forensic samples to solve an automated teller machine explosion case. Independent component analysis proved to be a suitable method of resolving curves, achieving equivalent performance with the multivariate curve resolution with alternating least squares (MCR-ALS) method. At low concentrations, MCR-ALS presents some limitations, as it did not provide the correct solution. The detection limit of the methodology presented in this study was 50 μg cm(-2). Copyright © 2014 Elsevier B.V. All rights reserved.
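
    The ICA unmixing step can be sketched with scikit-learn's FastICA applied to the unfolded hyperspectral cube; the cube size and the number of components are illustrative assumptions, and recovered spectra are only defined up to sign and scale.

      import numpy as np
      from sklearn.decomposition import FastICA

      cube = np.random.rand(60, 60, 800)             # rows x cols x Raman shifts (stand-in)
      X = cube.reshape(-1, cube.shape[-1])           # one spectrum per pixel

      ica = FastICA(n_components=4, random_state=0)
      scores = ica.fit_transform(X)                  # per-pixel contribution of each component
      spectra = ica.components_                      # recovered "pure" spectra (up to sign/scale)

      maps = scores.reshape(60, 60, -1)              # spatial distribution of each component
      # Components are then identified by correlating each recovered spectrum with references.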

  16. Strategy for reliable strain measurement in InAs/GaAs materials from high-resolution Z-contrast STEM images

    NASA Astrophysics Data System (ADS)

    Vatanparast, Maryam; Vullum, Per Erik; Nord, Magnus; Zuo, Jian-Min; Reenaas, Turid W.; Holmestad, Randi

    2017-09-01

    Geometric phase analysis (GPA), a fast and simple Fourier-space method for strain analysis, can give useful information on accumulated strain and defect propagation in multiple layers of semiconductors, including quantum dot materials. In this work, GPA has been applied to high-resolution Z-contrast scanning transmission electron microscopy (STEM) images. Strain maps determined from different g vectors of these images are compared to each other in order to analyze and assess the accuracy of the GPA technique. The SmartAlign tool has been used to improve the STEM image quality, yielding more reliable results. Strain maps from template matching, a real-space approach, are compared with strain maps from GPA, and it is argued that real-space analysis is a better approach than GPA for aberration-corrected STEM images.

  17. Nanoscale deformation analysis with high-resolution transmission electron microscopy and digital image correlation

    DOE PAGES

    Wang, Xueju; Pan, Zhipeng; Fan, Feifei; ...

    2015-09-10

    We present an application of the digital image correlation (DIC) method to high-resolution transmission electron microscopy (HRTEM) images for nanoscale deformation analysis. The combination of DIC and HRTEM offers both the ultrahigh spatial resolution and high displacement detection sensitivity that are not possible with other microscope-based DIC techniques. We demonstrate the accuracy and utility of the HRTEM-DIC technique through displacement and strain analysis on amorphous silicon. Two types of error sources resulting from the transmission electron microscopy (TEM) image noise and electromagnetic-lens distortions are quantitatively investigated via rigid-body translation experiments. The local and global DIC approaches are applied for the analysis of diffusion- and reaction-induced deformation fields in electrochemically lithiated amorphous silicon. As a result, the DIC technique coupled with HRTEM provides a new avenue for the deformation analysis of materials at the nanometer length scales.

  18. Clock Scan Protocol for Image Analysis: ImageJ Plugins.

    PubMed

    Dobretsov, Maxim; Petkau, Georg; Hayar, Abdallah; Petkau, Eugen

    2017-06-19

    The clock scan protocol for image analysis is an efficient tool to quantify the average pixel intensity within, at the border of, and outside (background) a closed or segmented convex-shaped region of interest, leading to the generation of an averaged integral radial pixel-intensity profile. This protocol was originally developed in 2006 as a Visual Basic 6 script, but as such, it had limited distribution. To address this problem and to join similar recent efforts by others, we converted the original clock scan protocol code into two Java-based plugins compatible with NIH-sponsored and freely available image analysis programs like ImageJ or Fiji ImageJ. Furthermore, these plugins have several new functions, further expanding the range of capabilities of the original protocol, such as analysis of multiple regions of interest and image stacks. The latter feature of the program is especially useful in applications in which it is important to determine changes related to time and location. Thus, the clock scan analysis of stacks of biological images may potentially be applied to the spreading of Na+ or Ca++ within a single cell, as well as to the analysis of spreading activity (e.g., Ca++ waves) in populations of synaptically connected or gap junction-coupled cells. Here, we describe these new clock scan plugins and show some examples of their applications in image analysis.
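
    The underlying clock-scan idea, an averaged radial intensity profile expressed as a percentage of the ROI radius, can be sketched in NumPy as below. For simplicity the sketch assumes a circular ROI, whereas the actual plugins scale each scan line to the segmented border of an arbitrary convex ROI; the image, centre, and radius are assumptions.

      import numpy as np

      def clock_scan(img, center, roi_radius, n_angles=360, n_steps=150):
          """Averaged integral radial profile out to 1.5x the (circular) ROI radius."""
          rel_r = np.linspace(0, 1.5, n_steps)               # radius relative to the ROI border
          profile = np.zeros(n_steps)
          for a in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
              ys = center[0] + rel_r * roi_radius * np.sin(a)
              xs = center[1] + rel_r * roi_radius * np.cos(a)
              ys = np.clip(np.round(ys).astype(int), 0, img.shape[0] - 1)
              xs = np.clip(np.round(xs).astype(int), 0, img.shape[1] - 1)
              profile += img[ys, xs]
          return rel_r * 100, profile / n_angles             # (% of ROI radius, mean intensity)

      cell = np.random.rand(256, 256)                        # stand-in for one image of a stack
      r_percent, radial_profile = clock_scan(cell, center=(128, 128), roi_radius=60)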

  19. Non-destructive, high-content analysis of wheat grain traits using X-ray micro computed tomography.

    PubMed

    Hughes, Nathan; Askew, Karen; Scotson, Callum P; Williams, Kevin; Sauze, Colin; Corke, Fiona; Doonan, John H; Nibau, Candida

    2017-01-01

    Wheat is one of the most widely grown crops in temperate climates for food and animal feed. In order to meet the demands of the predicted population increase in an ever-changing climate, wheat production needs to increase dramatically. Spike and grain traits are critical determinants of final yield, and grain uniformity is a commercially desired trait, but their analysis is laborious and often requires destructive harvesting. One of the current challenges is to develop an accurate, non-destructive method for spike and grain trait analysis capable of handling large populations. In this study we describe the development of a robust method for the accurate extraction and measurement of spike and grain morphometric parameters from images acquired by X-ray micro-computed tomography (μCT). The image analysis pipeline developed automatically identifies plant material of interest in μCT images, performs image analysis, and extracts morphometric data. As a proof of principle, this integrated methodology was used to analyse the spikes from a population of wheat plants subjected to high temperatures under two different water regimes. Temperature has a negative effect on spike height and grain number, with the middle of the spike being the most affected region. The data also confirmed that increased grain volume was correlated with the decrease in grain number under mild stress. Being able to quickly measure plant phenotypes in a non-destructive manner is crucial to advance our understanding of gene function and the effects of the environment. We report on the development of an image analysis pipeline capable of accurately and reliably extracting spike and grain traits from crops without the loss of positional information. This methodology, applied here to the analysis of wheat spikes, can be readily applied to other economically important crop species.

  20. Malware analysis using visualized image matrices.

    PubMed

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.
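
    The visualization step can be sketched as hashing consecutive opcode triples into RGB pixels that fill a fixed-width image matrix; the MD5-based hashing and the image width are illustrative assumptions, not the paper's exact encoding.

      import hashlib
      import numpy as np

      def opcodes_to_image(opcodes, width=64):
          pixels = []
          for i in range(0, len(opcodes) - 2, 3):
              digest = hashlib.md5("".join(opcodes[i:i + 3]).encode()).digest()
              pixels.append(list(digest[:3]))                # 3 hash bytes -> one RGB pixel
          height = int(np.ceil(len(pixels) / width))
          img = np.zeros((height * width, 3), dtype=np.uint8)
          img[:len(pixels)] = pixels
          return img.reshape(height, width, 3)

      trace = ["mov", "push", "call", "add", "jmp", "cmp", "jne", "pop", "ret"] * 200
      matrix = opcodes_to_image(trace)
      # Similarity between two samples can then be scored by comparing such matrices,
      # e.g. with a normalised pixel-wise distance after resizing to a common shape.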

  1. Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images

    NASA Technical Reports Server (NTRS)

    Sams, Clarence F.

    2016-01-01

    The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.

  2. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    PubMed Central

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, growing volumes of medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, however, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing the irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value. In practice, they are difficult to determine. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  3. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer. There is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade. Many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales. So these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.

  4. Quantitative imaging biomarkers: the application of advanced image processing and analysis to clinical and preclinical decision making.

    PubMed

    Prescott, Jeffrey William

    2013-02-01

    The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharamaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.

  5. 3D FT-IR imaging spectroscopy of phase-separation in a poly(3-hydroxybutyrate)/poly(L-lactic acid) blend

    Treesearch

    Miriam Unger; Julia Sedlmair; Heinz W. Siesler; Carol Hirschmugl; Barbara Illman

    2014-01-01

    In the present study, 3D FT-IR spectroscopic imaging measurements were applied to study the phase separation of a poly(3-hydroxybutyrate) (PHB)/poly(L-lactic acid) (PLA) (50:50 wt.%) polymer blend film. While in 2D projection imaging the z-dependent information is overlapped, thereby complicating the analysis, FT-IR spectro-micro-tomography,...

  6. Image processing of angiograms: A pilot study

    NASA Technical Reports Server (NTRS)

    Larsen, L. E.; Evans, R. A.; Roehm, J. O., Jr.

    1974-01-01

    The technology transfer application this report describes is the result of a pilot study of image-processing methods applied to the image enhancement, coding, and analysis of arteriograms. Angiography is a subspecialty of radiology that employs the introduction of media with high X-ray absorption into arteries in order to study vessel pathology as well as to infer disease of the organs supplied by the vessel in question.

  7. Minimisation of Signal Intensity Differences in Distortion Correction Approaches of Brain Magnetic Resonance Diffusion Tensor Imaging.

    PubMed

    Lee, Dong-Hoon; Lee, Do-Wan; Henry, David; Park, Hae-Jin; Han, Bong-Soo; Woo, Dong-Cheol

    2018-04-12

    To evaluate the effects of signal intensity differences between the b0 image and diffusion tensor imaging (DTI) data in the image registration process. To correct signal intensity differences between the b0 image and DTI data, a simple image intensity compensation (SIMIC) method, which is a b0 image re-calculation process from DTI data, was applied before the image registration. The re-calculated b0 image (b0_ext) from each diffusion direction was registered to the b0 image acquired through the MR scanning (b0_nd) with two types of cost functions, and their transformation matrices were acquired. These transformation matrices were then used to register the DTI data. For quantification, the dice similarity coefficient (DSC) values, diffusion scalar matrix, and quantified fibre numbers and lengths were calculated. The combined SIMIC method with two cost functions showed the highest DSC value (0.802 ± 0.007). Regarding diffusion scalar values and numbers and lengths of fibres from the corpus callosum, superior longitudinal fasciculus, and cortico-spinal tract, only using normalised cross correlation (NCC) showed a specific tendency toward lower values in the brain regions. Image-based distortion correction with SIMIC for DTI data would help in image analysis by accounting for signal intensity differences as one additional option for DTI analysis. • We evaluated the effects of signal intensity differences at DTI registration. • The non-diffusion-weighted image re-calculation process from DTI data was applied. • SIMIC can minimise the signal intensity differences at DTI registration.

  8. Fast algorithm of low power image reformation for OLED display

    NASA Astrophysics Data System (ADS)

    Lee, Myungwoo; Kim, Taewhan

    2014-04-01

    We propose a fast algorithm for low-power image reformation for organic light-emitting diode (OLED) displays. The proposed algorithm scales the image histogram to reduce power consumption in OLED displays by remapping the gray levels of the pixels, based on a fast analysis of the histogram of the input image, while maintaining the contrast of the image. The key idea is that a large number of gray levels are never used in an image, and these gray levels can be effectively exploited to reduce power consumption. To maintain image contrast, the gray-level remapping takes into account the size of the objects in the image to which each gray level is applied; that is, gray levels occurring in large objects are reformed only slightly. Through experiments with 24 Kodak images, it is shown that our proposed algorithm is able to reduce power consumption by 10% even with a 9% contrast enhancement. Our algorithm runs in linear time, so it can be applied to moving pictures with high resolution.
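
    A toy version of the remapping idea is sketched below: gray levels that never occur in the image are dropped from the mapping, and the used levels are redistributed over a slightly darker output range while their ordering (and hence contrast) is preserved; the compression factor and the mean-intensity power proxy are illustrative assumptions.

      import numpy as np

      def remap_gray_levels(img, power_scale=0.9):
          """img: 2-D uint8 image; returns an image using only remapped, darker gray levels."""
          used = np.unique(img)                              # gray levels actually present
          # Rank of each used level, spread over a slightly darker output range.
          targets = np.round(np.linspace(0, 255 * power_scale, used.size)).astype(np.uint8)
          lut = np.zeros(256, dtype=np.uint8)
          lut[used] = targets
          return lut[img]

      img = np.random.randint(30, 200, (256, 256), dtype=np.uint8)   # many gray levels unused
      low_power = remap_gray_levels(img)
      # Emitted OLED power roughly tracks total pixel intensity, so the mean drop is a crude proxy:
      print("approx. power reduction: %.1f %%" % (100 * (1 - low_power.mean() / img.mean())))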

  9. Magnetic particle imaging for in vivo blood flow velocity measurements in mice

    NASA Astrophysics Data System (ADS)

    Kaul, Michael G.; Salamon, Johannes; Knopp, Tobias; Ittrich, Harald; Adam, Gerhard; Weller, Horst; Jung, Caroline

    2018-03-01

    Magnetic particle imaging (MPI) is a new imaging technology. It is a potential candidate to be used for angiographic purposes, to study perfusion and cell migration. The aim of this work was to measure velocities of the flowing blood in the inferior vena cava of mice, using MPI, and to evaluate it in comparison with magnetic resonance imaging (MRI). A phantom mimicking the flow within the inferior vena cava with velocities of up to 21 cm/s was used for the evaluation of the applied analysis techniques. Time–density and distance–density analyses for bolus tracking were performed to calculate flow velocities. These findings were compared with the calibrated velocities set by a flow pump, and it can be concluded that velocities of up to 21 cm/s can be measured by MPI. A time–density analysis using an arrival time estimation algorithm showed the best agreement with the preset velocities. In vivo measurements were performed in healthy FVB mice (n = 10). MRI experiments were performed using phase contrast (PC) for velocity mapping. For MPI measurements, a standardized injection of a superparamagnetic iron oxide tracer was applied. In vivo MPI data were evaluated by a time–density analysis and compared to PC MRI. A Bland–Altman analysis revealed good agreement between the in vivo velocities acquired by MRI of 4.0 ± 1.5 cm/s and those measured by MPI of 4.8 ± 1.1 cm/s. Magnetic particle imaging is a new tool with which to measure and quantify flow velocities. It is fast, radiation-free, and produces 3D images. It therefore offers the potential for vascular imaging.
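
    A minimal sketch of the Bland–Altman agreement statistics used for the MRI/MPI comparison; the per-animal velocity arrays are placeholders to be filled with measured values.

    ```python
    import numpy as np

    def bland_altman(mri_velocity: np.ndarray, mpi_velocity: np.ndarray):
        """Bland-Altman agreement between two velocity measurements:
        returns the mean bias and the 95% limits of agreement (bias +/- 1.96 SD)."""
        diff = mpi_velocity - mri_velocity
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Example with per-animal velocities in cm/s (placeholders, n = 10 mice):
    # bias, (lo, hi) = bland_altman(np.array([...]), np.array([...]))
    ```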

  10. Three-dimensional microCT imaging of murine embryonic development from immediate post-implantation to organogenesis: application for phenotyping analysis of early embryonic lethality in mutant animals.

    PubMed

    Ermakova, Olga; Orsini, Tiziana; Gambadoro, Alessia; Chiani, Francesco; Tocchini-Valentini, Glauco P

    2018-04-01

    In this work, we applied three-dimensional microCT imaging to study murine embryogenesis in the range from the immediate post-implantation period (embryonic day 5.5) to mid-gestation (embryonic day 12.5), with resolution of up to 1.4 µm/voxel. We also introduce an imaging procedure for non-invasive volumetric estimation of an entire litter of embryos within the maternal uterine structures. This method allows for an accurate, detailed and systematic morphometric analysis of both embryonic and extra-embryonic components during embryogenesis. Three-dimensional imaging of unperturbed embryos was performed to visualize the egg cylinder, primitive streak, gastrulation and early organogenesis stages of murine development in the C57Bl6/N mouse reference strain. Further, we applied our microCT imaging protocol to determine the earliest point at which embryonic development is arrested in a mouse line with knockout of the tRNA splicing endonuclease subunit Tsen54 gene. Our analysis determined that embryonic development in Tsen54 null embryos does not proceed beyond implantation. We demonstrated that application of microCT imaging to an entire litter of unperturbed embryos greatly facilitates studies to unravel gene function during early embryogenesis and to determine the precise point at which embryonic development is arrested in mutant animals. The described method is inexpensive, does not require lengthy embryo dissection and is applicable to detailed analysis of mutant mice at laboratory scale as well as to high-throughput projects.

  11. Fusion and quality analysis for remote sensing images using contourlet transform

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

    Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting the spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) the image fusion method and ii) quality analysis of the fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. The fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
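
    Two simple per-band quality metrics of the kind used to assess fusion results spatially and spectrally are sketched below; the paper's specific metric set is not reproduced, and the function names are illustrative.

    ```python
    import numpy as np

    def band_quality(fused_band: np.ndarray, reference_band: np.ndarray):
        """Two common per-band fusion quality measures: Pearson correlation
        (spectral fidelity against the reference band) and RMSE (intensity error)."""
        f = fused_band.astype(float).ravel()
        r = reference_band.astype(float).ravel()
        cc = np.corrcoef(f, r)[0, 1]
        rmse = np.sqrt(np.mean((f - r) ** 2))
        return cc, rmse
    ```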

  12. Smartphone-based multispectral imaging: system development and potential for mobile skin diagnosis.

    PubMed

    Kim, Sewoong; Cho, Dongrae; Kim, Jihun; Kim, Manjae; Youn, Sangyeon; Jang, Jae Eun; Je, Minkyu; Lee, Dong Hun; Lee, Boreom; Farkas, Daniel L; Hwang, Jae Youn

    2016-12-01

    We investigate the potential of mobile smartphone-based multispectral imaging for the quantitative diagnosis and management of skin lesions. Recently, various mobile devices such as a smartphone have emerged as healthcare tools. They have been applied for the early diagnosis of nonmalignant and malignant skin diseases. Particularly, when they are combined with an advanced optical imaging technique such as multispectral imaging and analysis, it would be beneficial for the early diagnosis of such skin diseases and for further quantitative prognosis monitoring after treatment at home. Thus, we demonstrate here the development of a smartphone-based multispectral imaging system with high portability and its potential for mobile skin diagnosis. The results suggest that smartphone-based multispectral imaging and analysis has great potential as a healthcare tool for quantitative mobile skin diagnosis.

  13. Automated microscopy system for detection and genetic characterization of fetal nucleated red blood cells on slides

    NASA Astrophysics Data System (ADS)

    Ravkin, Ilya; Temov, Vladimir

    1998-04-01

    The detection and genetic analysis of fetal cells in maternal blood will permit noninvasive prenatal screening for genetic defects. Applied Imaging has developed and is currently evaluating a system for semiautomatic detection of fetal nucleated red blood cells on slides and acquisition of their DNA probe FISH images. The specimens are blood smears from pregnant women (9 - 16 weeks gestation) enriched for nucleated red blood cells (NRBC). The cells are identified by using labeled monoclonal antibodies directed to different types of hemoglobin chains (gamma, epsilon); the nuclei are stained with DAPI. The Applied Imaging system has been implemented with both Olympus BX and Nikon Eclipse series microscopes which were equipped with transmission and fluorescence optics. The system includes the following motorized components: stage, focus, transmission, and fluorescence filter wheels. A video camera with light integration (COHU 4910) permits low light imaging. The software capabilities include scanning, relocation, autofocusing, feature extraction, facilities for operator review, and data analysis. Detection of fetal NRBCs is achieved by employing a combination of brightfield and fluorescence images of nuclear and cytoplasmic markers. The brightfield and fluorescence images are all obtained with a single multi-bandpass dichroic mirror. A Z-stack of DNA probe FISH images is acquired by moving focus and switching excitation filters. This stack is combined to produce an enhanced image for presentation and spot counting.

  14. Quality Control of Structural MRI Images Applied Using FreeSurfer—A Hands-On Workflow to Rate Motion Artifacts

    PubMed Central

    Backhausen, Lea L.; Herting, Megan M.; Buse, Judith; Roessner, Veit; Smolka, Michael N.; Vetter, Nora C.

    2016-01-01

    In structural magnetic resonance imaging motion artifacts are common, especially when not scanning healthy young adults. It has been shown that motion affects the analysis with automated image-processing techniques (e.g., FreeSurfer). This can bias results. Several developmental and adult studies have found reduced volume and thickness of gray matter due to motion artifacts. Thus, quality control is necessary in order to ensure an acceptable level of quality and to define exclusion criteria of images (i.e., determine participants with most severe artifacts). However, information about the quality control workflow and image exclusion procedure is largely lacking in the current literature and the existing rating systems differ. Here, we propose a stringent workflow of quality control steps during and after acquisition of T1-weighted images, which enables researchers dealing with populations that are typically affected by motion artifacts to enhance data quality and maximize sample sizes. As an underlying aim we established a thorough quality control rating system for T1-weighted images and applied it to the analysis of developmental clinical data using the automated processing pipeline FreeSurfer. This hands-on workflow and quality control rating system will aid researchers in minimizing motion artifacts in the final data set, and therefore enhance the quality of structural magnetic resonance imaging studies. PMID:27999528

  15. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    NASA Astrophysics Data System (ADS)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers cause the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - not to scale, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers several of the basic visualization and analysis functions commonly provided by tools like DS9, on any HTML5-capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open up several images in their web browser, adjust the intensity min/max cutoff or its scaling function and zoom level, apply color-maps, view position and FITS header information, execute typically used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects, etc.

  16. 3D digital image processing for biofilm quantification from confocal laser scanning microscopy: Multidimensional statistical analysis of biofilm modeling

    NASA Astrophysics Data System (ADS)

    Zielinski, Jerzy S.

    The dramatic increase in the number and volume of digital images produced in medical diagnostics, and the escalating demand for rapid access to these relevant medical data, along with the need for interpretation and retrieval, have become of paramount importance to a modern healthcare system. Therefore, there is an ever-growing need for processed, interpreted and saved images of various types. Due to the high cost and unreliability of human-dependent image analysis, it is necessary to develop an automated method for feature extraction, using sophisticated mathematical algorithms and reasoning. This work is focused on digital image signal processing of biological and biomedical data in one-, two- and three-dimensional space. Methods and algorithms presented in this work were used to acquire data from genomic sequences, breast cancer, and biofilm images. One-dimensional analysis was applied to DNA sequences, which were represented as a non-stationary sequence and modeled by a time-dependent autoregressive moving average (TD-ARMA) model. Two-dimensional analyses used a 2D-ARMA model and applied it to detect breast cancer from X-ray mammograms or ultrasound images. Three-dimensional detection and classification techniques were applied to biofilm images acquired using confocal laser scanning microscopy. Modern medical images are geometrically arranged arrays of data. The broadening scope of imaging as a way to organize our observations of the biophysical world has led to a dramatic increase in our ability to apply new processing techniques and to combine multiple channels of data into sophisticated and complex mathematical models of physiological function and dysfunction. With the explosion of the amount of data produced in the field of biomedicine, it is crucial to be able to construct accurate mathematical models of the data at hand. The two main purposes of signal modeling are data size conservation and parameter extraction. Specifically, in biomedical imaging we have four key problems that were addressed in this work: (i) registration, i.e. automated methods of data acquisition and the ability to align multiple data sets with each other; (ii) visualization and reconstruction, i.e. the environment in which registered data sets can be displayed on a plane or in multidimensional space; (iii) segmentation, i.e. automated and semi-automated methods to create models of relevant anatomy from images; (iv) simulation and prediction, i.e. techniques that can be used to simulate growth and evolution of the researched phenomenon. Mathematical models can not only be used to verify experimental findings, but also to make qualitative and quantitative predictions that might serve as guidelines for the future development of technology and/or treatment.

  17. Remote Sensing Soil Moisture Analysis by Unmanned Aerial Vehicles Digital Imaging

    NASA Astrophysics Data System (ADS)

    Yeh, C. Y.; Lin, H. R.; Chen, Y. L.; Huang, S. Y.; Wen, J. C.

    2017-12-01

    In recent years, remote sensing analysis has been applied to research on climate change, environmental monitoring, geology, hydro-meteorology, and so on. However, traditional methods for surveying the spatial distribution of surface soil moisture over wide areas may require substantial resources in addition to their high cost. In the past, remote sensing analysis estimated soil moisture through shortwave, thermal infrared, or infrared satellite data, which requires considerable resources, labor, and money. Therefore, digital image color was used to establish a multiple linear regression model, from which the relationship between surface soil color and soil moisture can be found. In this study, we use an Unmanned Aerial Vehicle (UAV) to take aerial photos of fallow farmland. Simultaneously, we take surface soil samples from the top 0-5 cm. The soil samples are oven-dried at 110 °C for 24 hr. The software ImageJ 1.48 is applied to analyze the digital images and decompose the hue into Red, Green, and Blue (R, G, B) values. Correlation analysis is then performed between the image hue values and the surface soil moisture at each sampling point. After the image and soil moisture analysis, we use the R, G, B values and soil moisture to establish a multiple regression model estimating the spatial distribution of surface soil moisture, and we compare the measured soil moisture with the estimated soil moisture. The coefficient of determination (R2) reaches 0.5-0.7. Uncertainties in the field test, such as sun illumination, sun exposure angle, and even shadows, affect the result; therefore, an R2 of 0.5-0.7 reflects a good outcome for this in-situ test of estimating soil moisture from digital images. Based on the outcomes of the research, using digital images from a UAV to estimate surface soil moisture is acceptable. However, further investigations need to collect more than ten days of data (four times a day) to verify the relation between the image hue and the soil moisture for a reliable moisture estimation model, and it would be better to use a digital single-lens reflex camera to prevent image deformation and obtain better auto exposure. Keywords: soil, moisture, remote sensing
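
    A minimal sketch of the multiple linear regression of soil moisture on R, G, B values follows; all numbers are made-up placeholders, not data from the study.

    ```python
    import numpy as np

    # Hypothetical example values: mean R, G, B values at six sampling points
    # (extracted with ImageJ from the UAV image) and the measured gravimetric
    # soil moisture at the same points. Real data replace these placeholders.
    rgb = np.array([[142, 118, 96], [120, 101, 83], [101, 88, 74],
                    [95, 83, 70], [133, 112, 91], [88, 78, 66]], dtype=float)
    moisture = np.array([0.11, 0.17, 0.22, 0.26, 0.14, 0.29])

    # Multiple linear regression: moisture ~ b0 + bR*R + bG*G + bB*B
    X = np.column_stack([np.ones(len(rgb)), rgb])
    coeffs, *_ = np.linalg.lstsq(X, moisture, rcond=None)

    predicted = X @ coeffs
    ss_res = np.sum((moisture - predicted) ** 2)
    ss_tot = np.sum((moisture - moisture.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot   # the study reports R2 of roughly 0.5-0.7
    print("coefficients:", coeffs, "R2:", r_squared)
    ```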

  18. Identification of important image features for pork and turkey ham classification using colour and wavelet texture features and genetic selection.

    PubMed

    Jackman, Patrick; Sun, Da-Wen; Allen, Paul; Valous, Nektarios A; Mendoza, Fernando; Ward, Paddy

    2010-04-01

    A method to discriminate between various grades of pork and turkey ham was developed using colour and wavelet texture features. Image analysis methods originally developed for predicting the palatability of beef were applied to rapidly identify the ham grade. With high quality digital images of 50-94 slices per ham it was possible to identify the greyscale that best expressed the differences between the various ham grades. The best 10 discriminating image features were then found with a genetic algorithm. Using the best 10 image features, simple linear discriminant analysis models produced 100% correct classifications for both pork and turkey on both calibration and validation sets. 2009 Elsevier Ltd. All rights reserved.
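
    A hedged sketch of the classification stage described above: linear discriminant analysis on the 10 selected features with cross-validation. scikit-learn is an assumed library choice, and the feature matrix and grade labels are placeholders for the real image-derived data.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # X: one row per ham slice with the 10 selected colour/wavelet texture features
    # (the genetic feature selection is assumed to have been done beforehand).
    # y: grade label for each slice. Placeholders below stand in for real data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 10))          # placeholder feature matrix
    y = np.repeat([0, 1, 2], 20)           # placeholder grade labels

    lda = LinearDiscriminantAnalysis()
    scores = cross_val_score(lda, X, y, cv=5)
    print("cross-validated accuracy:", scores.mean())
    ```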

  19. Fractal-Based Image Analysis In Radiological Applications

    NASA Astrophysics Data System (ADS)

    Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.

    1987-10-01

    We present some preliminary results of a study aimed to assess the actual effectiveness of fractal theory and to define its limitations in the area of medical image analysis for texture description, in particular, in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) has been performed on synthetically generated images of known fractal dimensions. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity was related to the D value, i.e., texture behaviour. Results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.
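
    The block-wise fractal-dimension idea can be illustrated with a simple box-counting estimator, sketched below; note that the paper itself uses Pentland's and the "blanket" estimators, which are not reproduced here, and the box sizes are illustrative.

    ```python
    import numpy as np

    def box_counting_dimension(binary_patch: np.ndarray) -> float:
        """Estimate a fractal dimension of a binary texture patch by box counting:
        count occupied s-by-s boxes for several box sizes and take the slope of
        log(count) versus log(1/size)."""
        sizes = [2, 4, 8, 16]
        counts = []
        h, w = binary_patch.shape
        for s in sizes:
            trimmed = binary_patch[: h - h % s, : w - w % s]
            blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
            counts.append(max(int(blocks.any(axis=(1, 3)).sum()), 1))
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope
    ```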

  20. Crack Imaging and Quantification in Aluminum Plates with Guided Wave Wavenumber Analysis Methods

    NASA Technical Reports Server (NTRS)

    Yu, Lingyu; Tian, Zhenhua; Leckey, Cara A. C.

    2015-01-01

    Guided wavefield analysis methods for detection and quantification of crack damage in an aluminum plate are presented in this paper. New wavenumber components created by abrupt wave changes at the structural discontinuity are identified in the frequency-wavenumber spectra. It is shown that the new wavenumbers can be used to detect and characterize the crack dimensions. Two imaging based approaches, filter reconstructed imaging and spatial wavenumber imaging, are used to demonstrate how the cracks can be evaluated with wavenumber analysis. The filter reconstructed imaging is shown to be a rapid method to map the plate and any existing damage, but with less precision in estimating crack dimensions; while the spatial wavenumber imaging provides an intensity image of spatial wavenumber values with enhanced resolution of crack dimensions. These techniques are applied to simulated wavefield data, and the simulation based studies show that spatial wavenumber imaging method is able to distinguish cracks of different severities. Laboratory experimental validation is performed for a single crack case to confirm the methods' capabilities for imaging cracks in plates.

  1. High content image analysis for human H4 neuroglioma cells exposed to CuO nanoparticles.

    PubMed

    Li, Fuhai; Zhou, Xiaobo; Zhu, Jinmin; Ma, Jinwen; Huang, Xudong; Wong, Stephen T C

    2007-10-09

    High content screening (HCS)-based image analysis is becoming an important and widely used research tool. Capitalizing on this technology, ample cellular information can be extracted from high content cellular images. In this study, an automated, reliable and quantitative cellular image analysis system developed in house has been employed to quantify the toxic responses of human H4 neuroglioma cells exposed to metal oxide nanoparticles. This system has proved to be an essential tool in our study. The cellular images of H4 neuroglioma cells exposed to different concentrations of CuO nanoparticles were sampled using an IN Cell Analyzer 1000. A fully automated cellular image analysis system has been developed to perform the image analysis for cell viability. A multiple adaptive thresholding method was used to classify the pixels of the nuclei image into three classes: bright nuclei, dark nuclei, and background. During the development of our image analysis methodology, we achieved the following: (1) Gaussian filtering with a proper scale was applied to the cellular images to generate a local intensity maximum inside each nucleus; (2) a novel local intensity maxima detection method based on the gradient vector field was established; and (3) a statistical model based splitting method was proposed to overcome the under-segmentation problem. Computational results indicate that 95.9% of nuclei can be detected and segmented correctly by the proposed image analysis system. The proposed automated image analysis system can effectively segment the images of human H4 neuroglioma cells exposed to CuO nanoparticles. The computational results confirmed our biological finding that human H4 neuroglioma cells had a dose-dependent toxic response to the insult of CuO nanoparticles.
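
    A simplified sketch of the seed-detection step (Gaussian smoothing at roughly the nucleus scale followed by local-maximum picking), with SciPy and scikit-image as assumed tooling; the published gradient-vector-field detector and the model-based splitting step are not reproduced.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max

    def nuclei_seeds(nuclei_img: np.ndarray, sigma: float = 4.0, min_dist: int = 10):
        """Gaussian smoothing at a scale close to the nucleus radius leaves roughly
        one intensity maximum per nucleus; that maximum is picked up as a seed."""
        smoothed = ndi.gaussian_filter(nuclei_img.astype(float), sigma=sigma)
        coords = peak_local_max(smoothed, min_distance=min_dist,
                                threshold_abs=smoothed.mean())
        return coords  # (row, col) of one candidate seed per detected nucleus
    ```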

  2. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.
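
    A minimal sketch of local entropy-based texture extraction applied slice by slice, with scikit-image assumed as the tooling; the neighbourhood radius is an illustrative parameter.

    ```python
    import numpy as np
    from skimage.filters.rank import entropy
    from skimage.morphology import disk
    from skimage.util import img_as_ubyte

    def entropy_texture(dic_slice: np.ndarray, radius: int = 5) -> np.ndarray:
        """Local entropy map of one DIC slice: each pixel gets the Shannon entropy
        of the gray levels inside a disk-shaped neighbourhood, so textured (cellular)
        regions stand out against smooth background. Applying this slice by slice
        builds a 3-D texture volume."""
        img8 = img_as_ubyte((dic_slice - dic_slice.min()) / (np.ptp(dic_slice) + 1e-12))
        return entropy(img8, disk(radius))
    ```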

  3. Target recognition of ladar range images using even-order Zernike moments.

    PubMed

    Liu, Zheng-Jun; Li, Qi; Xia, Zhi-Wei; Wang, Qi

    2012-11-01

    Ladar range images have attracted considerable attention in automatic target recognition fields. In this paper, Zernike moments (ZMs) are applied to classify the target of the range image from an arbitrary azimuth angle. However, ZMs suffer from high computational costs. To improve the performance of target recognition based on small samples, even-order ZMs with serial-parallel backpropagation neural networks (BPNNs) are applied to recognize the target of the range image. It is found that the rotation invariance and classification performance of the even-order ZMs are both better than those of odd-order moments and of moments compressed by principal component analysis. The experimental results demonstrate that combining the even-order ZMs with serial-parallel BPNNs can significantly improve the recognition rate for small samples.
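
    A sketch of computing Zernike-moment magnitudes for a range image using the mahotas library, which is an assumed tool not named in the paper; the restriction to even orders and the BPNN classifier are omitted.

    ```python
    import mahotas
    import numpy as np

    def zernike_signature(range_img: np.ndarray, radius: int, degree: int = 8):
        """Rotation-invariant Zernike-moment magnitudes of a segmented ladar range
        image, computed up to the given degree. The radius defines the unit disk
        over which the Zernike polynomials are evaluated, typically chosen to
        cover the target's extent."""
        return mahotas.features.zernike_moments(range_img.astype(float), radius,
                                                degree=degree)
    ```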

  4. Calculation of Cardiac Kinetic Energy Index from PET images.

    PubMed

    Sims, John; Oliveira, Marco Antônio; Meneghetti, José Claudio; Gutierrez, Marco Antônio

    2015-01-01

    Cardiac function can be assessed from displacement measurements in nuclear medicine imaging modalities. Using positron emission tomography (PET) image sequences with Rubidium-82, we propose and estimate the total Kinetic Energy Index (KEf) obtained from the velocity field, which was calculated using 3D optical flow (OF) methods applied over the temporal image sequence. However, it was found that the brightness of the image varied unexpectedly between frames, violating the constant brightness assumption of the OF method and causing large errors in estimating the velocity field. Therefore, total brightness was equalized across image frames and the adjusted configuration was tested with rest perfusion images acquired from individuals with normal (n=30) and low (n=33) cardiac function. For these images, KEf was calculated as 0.5731±0.0899 and 0.3812±0.1146 for individuals with normal and low cardiac function, respectively. The ability of KEf to properly classify patients into the two groups was tested with a ROC analysis, with the area under the curve estimated as 0.906. To our knowledge, this is the first time that KEf has been applied to PET images.

  5. Structural scene analysis and content-based image retrieval applied to bone age assessment

    NASA Astrophysics Data System (ADS)

    Fischer, Benedikt; Brosig, André; Deserno, Thomas M.; Ott, Bastian; Günther, Rolf W.

    2009-02-01

    Radiological bone age assessment is based on global or local image regions of interest (ROI), such as epiphyseal regions or the area of carpal bones. Usually, these regions are compared to a standardized reference and a score determining the skeletal maturity is calculated. For computer-assisted diagnosis, automatic ROI extraction has so far been done by heuristic approaches. In this work, we apply a high-level approach of scene analysis for knowledge-based ROI segmentation. Based on a set of 100 reference images from the IRMA database, a so-called structural prototype (SP) is trained. In this graph-based structure, the 14 phalanges and 5 metacarpal bones are represented by nodes, with associated location, shape, and texture parameters modeled by Gaussians. Accordingly, the Gaussians describing the relative positions, relative orientation, and other relative parameters between two nodes are associated with the edges. Thereafter, segmentation of a hand radiograph is done in several steps: (i) a multi-scale region merging scheme is applied to extract visually prominent regions; (ii) a graph/sub-graph matching to the SP robustly identifies a subset of the 19 bones; (iii) the SP is registered to the current image for complete scene reconstruction; (iv) the epiphyseal regions are extracted from the reconstructed scene. The evaluation is based on 137 images of Caucasian males from the USC hand atlas. Overall, an error rate of 32% is achieved; for the 6 middle distal and medial/distal epiphyses, 23% of all extractions need adjustments. On average, 9.58 of the 14 epiphyseal regions were extracted successfully per image. This is promising for further use in content-based image retrieval (CBIR) and CBIR-based automatic bone age assessment.

  6. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    PubMed

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.

  7. Use of local noise power spectrum and wavelet analysis in quantitative image quality assurance for EPIDs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Soyoung

    Purpose: To investigate the use of local noise power spectrum (NPS) to characterize image noise and wavelet analysis to isolate defective pixels and inter-subpanel flat-fielding artifacts for quantitative quality assurance (QA) of electronic portal imaging devices (EPIDs). Methods: A total of 93 image sets including custom-made bar-pattern images and open exposure images were collected from four iViewGT a-Si EPID systems over three years. Global quantitative metrics such as the modulation transfer function (MTF), NPS, and detective quantum efficiency (DQE) were computed for each image set. Local NPS was also calculated for individual subpanels by sampling regions of interest within each subpanel of the EPID. The 1D NPS, obtained by radially averaging the 2D NPS, was fitted to a power-law function. The r-square value of the linear regression analysis was used as a singular metric to characterize the noise properties of individual subpanels of the EPID. The sensitivity of the local NPS was first compared with the global quantitative metrics using historical image sets. It was then compared with two commonly used commercial QA systems with images collected after applying two different EPID calibration methods (single-level gain and multilevel gain). To detect isolated defective pixels and inter-subpanel flat-fielding artifacts, a Haar wavelet transform was applied to the images. Results: Global quantitative metrics including MTF, NPS, and DQE showed little change over the period of data collection. On the contrary, a strong correlation between the local NPS (r-square values) and the variation of the EPID noise condition was observed. The local NPS analysis indicated image quality improvement, with the r-square values increasing from 0.80 ± 0.03 (before calibration) to 0.85 ± 0.03 (after single-level gain calibration) and to 0.96 ± 0.03 (after multilevel gain calibration), while the commercial QA systems failed to distinguish the image quality improvement between the two calibration methods. With wavelet analysis, defective pixels and inter-subpanel flat-fielding artifacts were clearly identified as spikes after thresholding the inversely transformed images. Conclusions: The proposed local NPS (r-square values) showed superior sensitivity to the noise level variations of individual subpanels compared with global quantitative metrics such as MTF, NPS, and DQE. Wavelet analysis was effective in detecting isolated defective pixels and inter-subpanel flat-fielding artifacts. The proposed methods are promising for the early detection of imaging artifacts of EPIDs.
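
    A sketch of the local-NPS metric follows: 2-D NPS of a uniformly exposed ROI, radial averaging to 1-D, a power-law fit on log-log axes, and the resulting r-square. Normalisation details are simplified relative to a full NPS implementation.

    ```python
    import numpy as np

    def radial_nps_rsquare(roi: np.ndarray):
        """Compute a 2-D noise power spectrum of a flat-field ROI, radially average
        it, fit a power law on log-log axes, and return the r-square of the fit."""
        roi = roi.astype(float) - roi.mean()
        nps2d = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2 / roi.size

        cy, cx = roi.shape[0] // 2, roi.shape[1] // 2
        y, x = np.indices(roi.shape)
        r = np.hypot(y - cy, x - cx).astype(int)
        radial = np.bincount(r.ravel(), nps2d.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
        radial = radial[1 : min(roi.shape) // 2]      # skip DC, stay below Nyquist

        logf = np.log(np.arange(1, len(radial) + 1))
        logp = np.log(radial)
        slope, intercept = np.polyfit(logf, logp, 1)
        resid = logp - (slope * logf + intercept)
        r_square = 1.0 - resid.var() / logp.var()
        return r_square, slope
    ```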

  8. Estimation of Bridge Height over Water from Polarimetric SAR Image Data Using Mapping and Projection Algorithm and De-Orientation Theory

    NASA Astrophysics Data System (ADS)

    Wang, Haipeng; Xu, Feng; Jin, Ya-Qiu; Ouchi, Kazuo

    An inversion method for bridge height over water by polarimetric synthetic aperture radar (SAR) is developed. A geometric ray description illustrating the scattering mechanism of a bridge over a water surface is identified by polarimetric image analysis. Using the mapping and projecting algorithm, a polarimetric SAR image of a bridge model is first simulated and shows that scattering from a bridge over water can be identified by three strip lines corresponding to single-, double-, and triple-order scattering, respectively. A set of polarimetric parameters based on the de-orientation theory is applied to the analysis of the three types of scattering, and the thinning-clustering algorithm and Hough transform are then employed to locate the image positions of these strip lines. These lines are used to invert the bridge height. Fully polarimetric image data from the airborne Pi-SAR at X-band are applied to inversion of the height and width of the Naruto Bridge in Japan. Based on the same principle, this approach is also applicable to spaceborne ALOS PALSAR single-polarization data of the Eastern Ocean Bridge in China. The results demonstrate the good feasibility of realizing bridge height inversion.
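
    A sketch of the line-location step with a straight-line Hough transform, with scikit-image as an assumed tool; the thinning-clustering stage and the height inversion itself are not shown.

    ```python
    import numpy as np
    from skimage.transform import hough_line, hough_line_peaks

    def locate_strip_lines(detection_mask: np.ndarray, num_lines: int = 3):
        """Locate the three bright strip lines (single-, double-, and triple-bounce
        returns) in a binarized polarimetric detection mask with a Hough transform;
        their image positions then feed the bridge-height inversion."""
        angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
        h, theta, dist = hough_line(detection_mask, theta=angles)
        _, line_angles, line_dists = hough_line_peaks(h, theta, dist, num_peaks=num_lines)
        return list(zip(line_angles, line_dists))   # (angle, distance) per strip line
    ```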

  9. Image Quality Ranking Method for Microscopy

    PubMed Central

    Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.

    2016-01-01

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow up of events in time. Manually finding the right images to be analyzed, or eliminated from data analysis are common day-to-day problems in microscopy research today, and the constantly growing size of image datasets does not help the matter. We propose a simple method and a software tool for sorting images within a dataset, according to their relative quality. We demonstrate the applicability of our method in finding good quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images, by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703

  10. A method for determining electrophoretic and electroosmotic mobilities using AC and DC electric field particle displacements.

    PubMed

    Oddy, M H; Santiago, J G

    2004-01-01

    We have developed a method for measuring the electrophoretic mobility of submicrometer, fluorescently labeled particles and the electroosmotic mobility of a microchannel. We derive explicit expressions for the unknown electrophoretic and the electroosmotic mobilities as a function of particle displacements resulting from alternating current (AC) and direct current (DC) applied electric fields. Images of particle displacements are captured using an epifluorescent microscope and a CCD camera. A custom image-processing code was developed to determine image streak lengths associated with AC measurements, and a custom particle tracking velocimetry (PTV) code was devised to determine DC particle displacements. Statistical analysis was applied to relate mobility estimates to measured particle displacement distributions.

  11. Mapping brain activity in gradient-echo functional MRI using principal component analysis

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Singh, Manbir; Don, Manuel

    1997-05-01

    The detection of sites of brain activation in functional MRI has been a topic of immense research interest and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noise. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. This technique is well suited to EPI image sequences where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods that are based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique where a single image sequence with alternating on and off stages is subjected to a principal component analysis. The second method is a PCA-based approach called the common spatial factor analysis (CSF) technique. As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained by using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that the PCA and CSF methods have good potential in detecting the true stimulus-correlated changes in the presence of other interfering signals.
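
    A minimal sketch of applying PCA to a gradient-echo fMRI run to obtain candidate activation maps and time courses; scikit-learn is an assumed tool, the array layout is illustrative, and the CSF variant is not shown.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def pca_activation_maps(volumes: np.ndarray, n_components: int = 5):
        """PCA of an fMRI run given as a 4-D array (t, z, y, x). Each voxel's time
        course is a variable; the spatial loadings of the leading components serve
        as candidate activation maps and the scores as their time courses. No
        knowledge of the stimulus timing is required."""
        t = volumes.shape[0]
        data = volumes.reshape(t, -1).astype(float)
        data -= data.mean(axis=0)                  # remove each voxel's mean signal
        pca = PCA(n_components=n_components)
        time_courses = pca.fit_transform(data)     # shape (t, n_components)
        maps = pca.components_.reshape((n_components,) + volumes.shape[1:])
        return maps, time_courses
    ```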

  12. Combined spectral-domain optical coherence tomography and hyperspectral imaging applied for tissue analysis: Preliminary results

    NASA Astrophysics Data System (ADS)

    Dontu, S.; Miclos, S.; Savastru, D.; Tautan, M.

    2017-09-01

    In recent years, many optoelectronic techniques have been developed for the improvement and development of devices for tissue analysis. Spectral-Domain Optical Coherence Tomography (SD-OCT) is a new medical interferometric imaging modality that provides depth-resolved tissue structure information with resolution in the μm range. However, SD-OCT has its own limitations and cannot offer biochemical information about the tissue. These data can be obtained with hyperspectral imaging, a non-invasive, sensitive and real-time technique. In the present study we have combined Spectral-Domain Optical Coherence Tomography (SD-OCT) with Hyperspectral imaging (HSI) for tissue analysis; these are two methods that have demonstrated significant potential in this context. Preliminary results using different tissues have highlighted the capabilities of this combination of techniques.

  13. Detection of changes in semi-natural grasslands by cross correlation analysis with WorldView-2 images and new Landsat 8 data.

    PubMed

    Tarantino, Cristina; Adamo, Maria; Lucas, Richard; Blonda, Palma

    2016-03-15

    Focusing on a Mediterranean Natura 2000 site in Italy, the effectiveness of the cross correlation analysis (CCA) technique for quantifying change in the area of semi-natural grasslands at different spatial resolutions (grain) was evaluated. In a fine scale analysis (2 m), inputs to the CCA were a) a semi-natural grasslands layer extracted from an existing validated land cover/land use (LC/LU) map (1:5000, time T1) and b) a more recent single date very high resolution (VHR) WorldView-2 image (time T2), with T2 > T1. The changes identified through the CCA were compared against those detected by applying a traditional post-classification comparison (PCC) technique to the same reference T1 map and an updated T2 map obtained by a knowledge driven classification of four multi-seasonal WorldView-2 input images. Specific changes observed were those associated with agricultural intensification and fires. The study concluded that prior knowledge (spectral class signatures, awareness of local agricultural practices and pressures) was needed for the selection of the most appropriate image (in terms of seasonality) to be acquired at T2. CCA was also applied to the comparison of the existing T1 map with recent high resolution (HR) Landsat 8 OLI images. The areas of change detected at VHR and HR were broadly similar, with larger error values in the HR change images.

  14. Visual Recognition Software for Binary Classification and Its Application to Spruce Pollen Identification

    PubMed Central

    Tcheng, David K.; Nayak, Ashwin K.; Fowlkes, Charless C.; Punyasena, Surangi W.

    2016-01-01

    Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed an open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, size, spacing, and resolution. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based “pollen spotting” model to segment pollen grains from the slide background. We next tested ARLO’s ability to reconstruct black to white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems. PMID:26867017

  15. Radar Imaging Using The Wigner-Ville Distribution

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; Kenny, Owen P.; Whitehouse, Harper J.

    1989-12-01

    The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. This paper first discusses the radar equation in terms of the time-frequency representation of the signal received from a radar system. It then presents a method of tomographic reconstruction for time-frequency images to estimate the scattering function of the aircraft. An optical architecture is then discussed for the real-time implementation of the analysis method based on the WVD.

  16. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis. In particular, with this method blood cells can be segmented. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet denoising we determine the best wavelet, i.e. the one that yields a segmentation with the largest area in the cell. We study different wavelet families and we conclude that the wavelet db1 is the best and can serve for further work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on a selection of blood cell images.
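
    A hedged sketch combining db1 wavelet soft-thresholding with morphological clean-up, using PyWavelets and scikit-image as assumed tooling (the paper works in MATLAB); thresholds and structuring-element sizes are illustrative, not the paper's values.

    ```python
    import numpy as np
    import pywt
    from skimage.filters import threshold_otsu
    from skimage.morphology import binary_opening, binary_closing, disk

    def segment_cells(gray: np.ndarray, wavelet: str = "db1") -> np.ndarray:
        """Denoise with wavelet soft-thresholding (db1, as preferred in the paper),
        then segment with Otsu's threshold and clean up with morphological
        opening/closing."""
        coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=2)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745       # noise estimate
        thr = sigma * np.sqrt(2 * np.log(gray.size))             # universal threshold
        denoised_coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, thr, mode="soft") for c in level)
            for level in coeffs[1:]
        ]
        denoised = pywt.waverec2(denoised_coeffs, wavelet)[: gray.shape[0], : gray.shape[1]]
        mask = denoised > threshold_otsu(denoised)
        return binary_closing(binary_opening(mask, disk(2)), disk(2))
    ```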

  17. Analysis and Implementation of Methodologies for the Monitoring of Changes in Eye Fundus Images

    NASA Astrophysics Data System (ADS)

    Gelroth, A.; Rodríguez, D.; Salvatelli, A.; Drozdowicz, B.; Bizai, G.

    2011-12-01

    We present a support system for change detection in fundus images of the same patient taken at different time intervals. This process is useful for monitoring pathologies lasting for long periods of time, as ophthalmologic pathologies usually do. We propose a flow of preprocessing, processing and postprocessing applied to a set of images selected from a public database that present pathological progression. A test interface was developed to select the images to be compared, to apply the different methods developed, and to display the results. We measured the system performance in terms of sensitivity, specificity and computation time. We have obtained good results, higher than 84% for the first two parameters and processing times lower than 3 seconds for 512x512 pixel images. For the specific case of detection of changes associated with bleeding, the system responds with sensitivity and specificity over 98%.

  18. Shape-Constrained Segmentation Approach for Arctic Multiyear Sea Ice Floe Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Brucker, Ludovic; Ivanoff, Alvaro; Tilton, James C.

    2013-01-01

    The melting of sea ice is correlated to increases in sea surface temperature and associated climatic changes. Therefore, it is important to investigate how rapidly sea ice floes melt. For this purpose, a new TempoSeg method for multitemporal segmentation of multiyear ice floes is proposed. A microwave radiometer is used to track the position of an ice floe. Then, a time series of MODIS images is created with the ice floe in the image center. The TempoSeg method is performed to segment these images into two regions: Floe and Background. First, morphological feature extraction is applied. Then, the central image pixel is marked as Floe, and shape-constrained best merge region growing is performed. The resulting two-region map is post-filtered by applying morphological operators. We have successfully tested our method on a set of MODIS images and estimated the area of a sea ice floe as a function of time.

  19. An Unsupervised Approach for Extraction of Blood Vessels from Fundus Images.

    PubMed

    Dash, Jyotiprava; Bhoi, Nilamani

    2018-04-26

    Pathological disorders may arise from small changes in retinal blood vessels that may later lead to blindness. Hence, the accurate segmentation of blood vessels is becoming a challenging task for pathological analysis. This paper offers an unsupervised recursive method for the extraction of blood vessels from ophthalmoscope images. First, a vessel-enhanced image is generated with the help of gamma correction and contrast-limited adaptive histogram equalization (CLAHE). Next, the vessels are extracted iteratively by applying an adaptive thresholding technique. At last, a final vessel-segmented image is produced by applying a morphological cleaning operation. Evaluations are conducted on the publicly available digital retinal images for vessel extraction (DRIVE) and Child Heart And Health Study in England (CHASE_DB1) databases using nine different measurements. The proposed method achieves average accuracies of 0.957 and 0.952 on the DRIVE and CHASE_DB1 databases respectively.
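
    A simplified, single-pass sketch of the described pipeline (gamma correction, CLAHE, local thresholding, morphological cleaning) with scikit-image as assumed tooling; the paper's iterative/recursive extraction and its exact parameters are not reproduced.

    ```python
    import numpy as np
    from skimage import exposure, filters, morphology

    def extract_vessels(green_channel: np.ndarray) -> np.ndarray:
        """Single-pass vessel extraction: invert the green channel so vessels are
        bright, enhance with gamma correction and CLAHE, apply a local (adaptive)
        threshold, and remove small spurious objects."""
        img = green_channel.astype(float)
        img = (img - img.min()) / (np.ptp(img) + 1e-12)
        img = 1.0 - img                                    # vessels become bright
        enhanced = exposure.equalize_adapthist(exposure.adjust_gamma(img, gamma=1.5))
        local_thr = filters.threshold_local(enhanced, block_size=41, offset=-0.02)
        vessels = enhanced > local_thr
        return morphology.remove_small_objects(vessels, min_size=50)
    ```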

  20. Chapter 14: Electron Microscopy on Thin Films for Solar Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Manuel; Abou-Ras, Daniel; Nichterwitz, Melanie

    2016-07-22

    This chapter overviews the various techniques applied in scanning electron microscopy (SEM) and transmission electron microscopy (TEM), and highlights their possibilities as well as their limitations. It presents the various imaging and analysis techniques applied on a scanning electron microscope. The chapter shows that imaging is divided into that making use of secondary electrons (SEs) and of backscattered electrons (BSEs), resulting in different contrasts in the images and thus providing information on compositions, microstructures, and surface potentials. Whenever aiming for imaging and analyses at scales down to the angstrom range, TEM and its related techniques are the appropriate tools. In many cases, SEM techniques also provide access to various material properties of the individual layers, without requiring specimen preparation as time-consuming as that of TEM techniques. Finally, the chapter is dedicated to cross-sectional specimen preparation for electron microscopy. The preparation indeed determines the quality of imaging and analyses.

  1. New approach for cognitive analysis and understanding of medical patterns and visualizations

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Tadeusiewicz, Ryszard

    2003-11-01

    This paper presents new opportunities for applying linguistic descriptions of picture merit content and AI methods to the task of automatic understanding of image semantics in intelligent medical information systems. Successfully obtaining the crucial semantic content of a medical image may contribute considerably to the creation of new intelligent multimedia cognitive medical systems. Thanks to the new idea of cognitive resonance between the stream of data extracted from the image using linguistic methods and expectations taken from the representation of medical knowledge, it is possible to understand the merit content of the image even if the form of the image is very different from any known pattern. This article proves that structural techniques of artificial intelligence may be applied to tasks related to automatic classification and machine perception based on semantic pattern content in order to determine the semantic meaning of the patterns. The paper describes examples of applying such techniques in the creation of cognitive vision systems for selected classes of medical images. On the basis of the scientific research described in the paper, we aim to build new systems for collecting, storing, retrieving and intelligently interpreting selected medical images, especially those obtained in radiological and MRI examinations.

  2. Applying Image Matching to Video Analysis

    DTIC Science & Technology

    2010-09-01

    The image groups, classified by the background scene, are the flag, the kitchen, the telephone, the bookshelf, the title screen, and two maps; the group sizes given are Kitchen 136, Telephone 3, Bookshelf 81, Title Screen 10, Map 1 24, and Map 2 16. ... This implementation of a Bloom filter uses two arbitrary ... with the Bookshelf images. This scene is a much closer shot than the Kitchen scene, so the host occupies much of the background. Algorithms for face ...

  3. Analysis of the Radiometric Response of Orange Tree Crown in Hyperspectral Uav Images

    NASA Astrophysics Data System (ADS)

    Imai, N. N.; Moriya, E. A. S.; Honkavaara, E.; Miyoshi, G. T.; de Moraes, M. V. A.; Tommaselli, A. M. G.; Näsi, R.

    2017-10-01

    High spatial resolution remote sensing images acquired by drones are a highly relevant data source in many applications. However, strong variations of radiometric values are difficult to correct in hyperspectral images. Honkavaara et al. (2013) presented a radiometric block adjustment method in which hyperspectral images taken from remotely piloted aerial systems (RPAS) were processed both geometrically and radiometrically to produce a georeferenced mosaic in which the standard reflectance factor for the nadir is represented. The plant crowns in permanent cultivation show complex variations, since the density of shadows and the irradiance of the surface vary due to the geometry of illumination and the geometry of the arrangement of branches and leaves. An evaluation of the radiometric quality of the mosaic of an orange plantation, produced using images captured by a hyperspectral imager based on a tunable Fabry-Pérot interferometer and applying the radiometric block adjustment method, was performed. A high-resolution UAV-based hyperspectral survey was carried out in an orange-producing farm located in Santa Cruz do Rio Pardo, state of São Paulo, Brazil. A set of images in 25 narrow spectral bands with a GSD of 2.5 cm was acquired. Trend analysis was applied to the values of a sample of transects extracted from plants appearing in the mosaic. The results of these trend analyses on the pixels distributed along transects on orange tree crowns showed that the reflectance factor presented a slight trend, but the coefficients of the polynomials are very small, so the quality of the mosaic is good enough for many applications.

  4. Segmentation of fluorescence microscopy images for quantitative analysis of cell nuclear architecture.

    PubMed

    Russell, Richard A; Adams, Niall M; Stephens, David A; Batty, Elizabeth; Jensen, Kirsten; Freemont, Paul S

    2009-04-22

    Considerable advances in microscopy, biophysics, and cell biology have provided a wealth of imaging data describing the functional organization of the cell nucleus. Until recently, cell nuclear architecture has largely been assessed by subjective visual inspection of fluorescently labeled components imaged by the optical microscope. This approach is inadequate to fully quantify spatial associations, especially when the patterns are indistinct, irregular, or highly punctate. Accurate image processing techniques as well as statistical and computational tools are thus necessary to interpret this data if meaningful spatial-function relationships are to be established. Here, we have developed a thresholding algorithm, stable count thresholding (SCT), to segment nuclear compartments in confocal laser scanning microscopy image stacks to facilitate objective and quantitative analysis of the three-dimensional organization of these objects using formal statistical methods. We validate the efficacy and performance of the SCT algorithm using real images of immunofluorescently stained nuclear compartments and fluorescent beads as well as simulated images. In all three cases, the SCT algorithm delivers a segmentation that is far better than standard thresholding methods, and more importantly, is comparable to manual thresholding results. By applying the SCT algorithm and statistical analysis, we quantify the spatial configuration of promyelocytic leukemia nuclear bodies with respect to irregular-shaped SC35 domains. We show that the compartments are closer than expected under a null model for their spatial point distribution, and furthermore that their spatial association varies according to cell state. The methods reported are general and can readily be applied to quantify the spatial interactions of other nuclear compartments.

  5. Segmentation of Fluorescence Microscopy Images for Quantitative Analysis of Cell Nuclear Architecture

    PubMed Central

    Russell, Richard A.; Adams, Niall M.; Stephens, David A.; Batty, Elizabeth; Jensen, Kirsten; Freemont, Paul S.

    2009-01-01

    Considerable advances in microscopy, biophysics, and cell biology have provided a wealth of imaging data describing the functional organization of the cell nucleus. Until recently, cell nuclear architecture has largely been assessed by subjective visual inspection of fluorescently labeled components imaged by the optical microscope. This approach is inadequate to fully quantify spatial associations, especially when the patterns are indistinct, irregular, or highly punctate. Accurate image processing techniques as well as statistical and computational tools are thus necessary to interpret this data if meaningful spatial-function relationships are to be established. Here, we have developed a thresholding algorithm, stable count thresholding (SCT), to segment nuclear compartments in confocal laser scanning microscopy image stacks to facilitate objective and quantitative analysis of the three-dimensional organization of these objects using formal statistical methods. We validate the efficacy and performance of the SCT algorithm using real images of immunofluorescently stained nuclear compartments and fluorescent beads as well as simulated images. In all three cases, the SCT algorithm delivers a segmentation that is far better than standard thresholding methods, and more importantly, is comparable to manual thresholding results. By applying the SCT algorithm and statistical analysis, we quantify the spatial configuration of promyelocytic leukemia nuclear bodies with respect to irregular-shaped SC35 domains. We show that the compartments are closer than expected under a null model for their spatial point distribution, and furthermore that their spatial association varies according to cell state. The methods reported are general and can readily be applied to quantify the spatial interactions of other nuclear compartments. PMID:19383481

  6. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieves competitive performance in both recognition accuracy and computational efficiency.
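
    The low-rank plus sparse decomposition at the core of robust principal component analysis can be written as principal component pursuit. A minimal numpy sketch using an inexact augmented Lagrange multiplier iteration (the parameter defaults and the assumption that vectorised training iris images form the rows of M are illustrative, not details from the paper):

        import numpy as np

        def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
            """Split M into a low-rank matrix L and a sparse error matrix S."""
            M = M.astype(float)
            m, n = M.shape
            lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
            mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
            Y = np.zeros_like(M)
            S = np.zeros_like(M)
            norm_M = np.linalg.norm(M)
            for _ in range(max_iter):
                # Singular value thresholding -> low-rank estimate
                U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
                # Soft thresholding -> sparse error estimate
                R = M - L + Y / mu
                S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
                # Dual update and convergence check
                Y = Y + mu * (M - L - S)
                if np.linalg.norm(M - L - S) / norm_M < tol:
                    break
            return L, S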

  7. Geospatial mapping of Antarctic coastal oasis using geographic object-based image analysis and high resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Jawak, Shridhar D.; Luis, Alvarinho J.

    2016-04-01

    An accurate spatial mapping and characterization of land cover features in cryospheric regions is an essential procedure for many geoscientific studies. A novel semi-automated method was devised by coupling spectral index ratios (SIRs) and geographic object-based image analysis (OBIA) to extract cryospheric geospatial information from very high resolution WorldView-2 (WV-2) satellite imagery. The present study addresses the development of multiple rule sets for OBIA-based classification of WV-2 imagery to accurately extract land cover features in the Larsemann Hills, east Antarctica. A multilevel segmentation process was applied to the WV-2 image to generate geographic image objects of different sizes corresponding to various land cover features, depending on the scale parameter. Several SIRs were applied to the geographic objects at different segmentation levels to classify land mass, man-made features, snow/ice, and water bodies. We focused on the water body class to identify water areas at the image level, considering their uneven appearance on landmass and ice. The results illustrate that the synergetic use of SIRs and OBIA provides an accurate means to identify land cover classes, with an overall classification accuracy of ≈97%. In conclusion, our results suggest that OBIA is a powerful tool for carrying out automatic and semi-automatic analysis for most cryospheric remote-sensing applications, and its synergetic coupling with pixel-based SIRs is found to be a superior method for mining geospatial information.

  8. Ripening of salami: assessment of colour and aspect evolution using image analysis and multivariate image analysis.

    PubMed

    Fongaro, Lorenzo; Alamprese, Cristina; Casiraghi, Ernestina

    2015-03-01

    During the ripening of salami, colour changes occur due to oxidation phenomena involving myoglobin. Moreover, shrinkage due to dehydration results in aspect modifications, mainly ascribable to fat aggregation. The aim of this work was the application of image analysis (IA) and multivariate image analysis (MIA) techniques to the study of colour and aspect changes occurring in salami during ripening. IA results showed that the red, green, blue, and intensity parameters decreased due to the development of a globally darker colour, while heterogeneity increased due to fat aggregation. By applying MIA, different salami slice areas corresponding to fat and to three different degrees of oxidised meat were identified and quantified. It was thus possible to study the trend of these different areas as a function of ripening, making objective an evaluation usually performed by subjective visual inspection.

  9. A Critical Analysis of USAir's Image Repair Discourse.

    ERIC Educational Resources Information Center

    Benoit, William L.; Czerwinski, Anne

    1997-01-01

    Applies the theory of image restoration to a case study of USAir's response to media coverage of a 1994 crash. Argues that introducing such case studies in the classroom helps students to understand the basic tenets of persuasion in the highly charged context of repairing a corporate reputation after an attack. (SR)

  10. Prediction of pH of fresh chicken breast fillets by VNIR hyperspectral imaging

    USDA-ARS?s Scientific Manuscript database

    Visible and near-infrared (VNIR) hyperspectral imaging (400–900 nm) was used to evaluate pH of fresh chicken breast fillets (pectoralis major muscle) from the bone (dorsal) side of individual fillets. After the principal component analysis (PCA), a band threshold method was applied to the first prin...

  11. Modeling ECM fiber formation: structure information extracted by analysis of 2D and 3D image sets

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Voytik-Harbin, Sherry L.; Filmer, David L.; Hoffman, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennis; Robinson, Joseph P.

    2002-05-01

    Recent evidence supports the notion that the biological functions of the extracellular matrix (ECM) are highly correlated to its structure. Understanding this fibrous structure is crucial in tissue engineering for developing the next generation of biomaterials for the restoration of tissues and organs. In this paper, we integrate confocal microscopy imaging and image-processing techniques to analyze the structural properties of ECM. We describe a 2D fiber middle-line tracing algorithm based on Euclidean distance maps (EDM) and apply it to extract accurate fibrous structure information, such as fiber diameter, length, orientation, and density, from single slices. We then extend the tracing to 3D, again via Euclidean distance maps, to extract 3D fibrous structure information. We use computer simulation to construct 3D fibrous structures, which are subsequently used to test our tracing algorithms. After further image processing, these methods are applied to a variety of ECM constructs, from which the results of the 2D and 3D traces are statistically analyzed.
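
    One building block of such EDM-based tracing is that the distance-map value at a middle-line pixel approximates the local fiber radius. A minimal sketch under that assumption (the function name, inputs and pixel-size handling are illustrative):

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def fiber_diameters(binary_fibers, midline_coords, pixel_size_um=1.0):
            """Estimate local fiber diameters from a Euclidean distance map (EDM).

            binary_fibers : 2D boolean array, True on fiber pixels.
            midline_coords : (N, 2) integer array of traced middle-line coordinates.
            """
            edm = distance_transform_edt(binary_fibers)
            radii = edm[midline_coords[:, 0], midline_coords[:, 1]]
            return 2.0 * radii * pixel_size_um  # diameter = 2 x local radius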

  12. Hello World Deep Learning in Medical Imaging.

    PubMed

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    Machine learning, and notably deep learning, has recently become popular in medical imaging, achieving state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries that simplify its use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
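
    As an illustration of the kind of network such a tutorial builds, the sketch below defines a small Keras convolutional classifier for two-class medical images. It is a generic example under assumed inputs (128x128 grayscale images with binary labels), not the code distributed with the tutorial:

        from tensorflow.keras import layers, models

        # Minimal convolutional network for binary medical image classification
        model = models.Sequential([
            layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),  # e.g. normal vs. abnormal
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        # model.fit(train_images, train_labels, validation_split=0.2, epochs=10)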

  13. A preliminary computer pattern analysis of satellite images of mature extratropical cyclones

    NASA Technical Reports Server (NTRS)

    Burfeind, Craig R.; Weinman, James A.; Barkstrom, Bruce R.

    1987-01-01

    This study has applied computerized pattern analysis techniques to the location and classification of features of several mature extratropical cyclones that were depicted in GOES satellite images. These features include the location of the center of the cyclone vortex core and the location of the associated occluded front. The cyclone type was classified in accord with the scheme of Troup and Streten. The present analysis was implemented on a personal computer; results were obtained within approximately one or two minutes without the intervention of an analyst.

  14. 3D Image Analysis of Geomaterials using Confocal Microscopy

    NASA Astrophysics Data System (ADS)

    Mulukutla, G.; Proussevitch, A.; Sahagian, D.

    2009-05-01

    Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in the biological sciences, but its application to geomaterials lingers due to a number of technical problems. Potentially, the technique can perform non-invasive testing on a laser-illuminated sample that fluoresces, using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of features such as void space or material boundaries. However, for many geomaterials this method cannot be used, because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, confocal images of most geomaterials that have not undergone extensive sample preparation are of poor quality and lack the image and edge contrast necessary to apply commonly used segmentation techniques for quantitative study of features such as vesicularity and internal structure. In our present work, we are developing a methodology for quantitative 3D analysis of confocal images of geomaterials with a minimal amount of prior sample preparation and no added fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. The step-by-step image analysis includes image filtration to enhance edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. Preliminary analysis suggests that there is distortion in the shapes of the segmented vesicles, vapor bubbles, and void spaces due to the optical measurements, so corrective actions are being explored. This will establish a practical and reliable framework for an adaptive 3D image processing technique for the analysis of geomaterials using confocal microscopy.
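
    Geodesic active contours of the kind mentioned above are available in common image libraries. A minimal sketch using scikit-image on a single confocal slice (the preprocessing, initial level set and parameter values are illustrative assumptions, not the study's settings):

        import numpy as np
        from skimage.segmentation import (inverse_gaussian_gradient,
                                          morphological_geodesic_active_contour)

        def segment_slice(confocal_slice, iterations=200):
            """Segment low-contrast features with morphological geodesic active contours."""
            # Edge-stopping image: small values near edges, close to 1 in flat regions
            gimage = inverse_gaussian_gradient(confocal_slice.astype(float))
            # Initial level set: a box slightly inside the image border, shrunk inwards
            init = np.zeros(confocal_slice.shape, dtype=np.int8)
            init[10:-10, 10:-10] = 1
            return morphological_geodesic_active_contour(
                gimage, iterations, init_level_set=init, smoothing=2, balloon=-1)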

  15. Improved scatter correction with factor analysis for planar and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

    Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed by including x-ray CT scans, correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. While a number of methods for scatter correction have been proposed in the past, in this work we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisitions of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) were performed using a dual-head gamma camera. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded, and the tomographic data (simulations and measurements) were processed for each angular position, resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA results in significant improvements in image accuracy over the DEW method for both planar and tomographic data sets, and can be used as a user-independent approach for scatter correction in nuclear medicine.
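
    The core of the approach is a factor analysis of the images acquired in the energy sub-windows, yielding one factor image and one factor curve per component. A minimal, unconstrained sketch with scikit-learn (real nuclear-medicine factor analysis typically imposes non-negativity constraints, which this illustration omits; the array layout is an assumption):

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        def fa_scatter_separation(subwindow_images):
            """Separate photo-peak and scatter components from energy sub-window images.

            subwindow_images : array (n_subwindows, height, width), one planar image
            per energy sub-window.
            """
            n_win, h, w = subwindow_images.shape
            X = subwindow_images.reshape(n_win, -1).T          # pixels x sub-windows
            fa = FactorAnalysis(n_components=2)
            scores = fa.fit_transform(X)                       # pixel-wise factor scores
            factor_images = scores.T.reshape(2, h, w)          # photo-peak and scatter images
            factor_curves = fa.components_                     # sub-window (energy) spectra
            return factor_images, factor_curves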

  16. Bilateral filtering using the full noise covariance matrix applied to x-ray phase-contrast computed tomography.

    PubMed

    Allner, S; Koehler, T; Fehringer, A; Birnbacher, L; Willner, M; Pfeiffer, F; Noël, P B

    2016-05-21

    The purpose of this work is to develop an image-based de-noising algorithm that exploits complementary information and noise statistics from multi-modal images, as they emerge in x-ray tomography techniques, for instance grating-based phase-contrast CT and spectral CT. Among the noise reduction methods, image-based de-noising is one popular approach and the so-called bilateral filter is a well known algorithm for edge-preserving filtering. We developed a generalization of the bilateral filter for the case where the imaging system provides two or more perfectly aligned images. The proposed generalization is statistically motivated and takes the full second order noise statistics of these images into account. In particular, it includes a noise correlation between the images and spatial noise correlation within the same image. The novel generalized three-dimensional bilateral filter is applied to the attenuation and phase images created with filtered backprojection reconstructions from grating-based phase-contrast tomography. In comparison to established bilateral filters, we obtain improved noise reduction and at the same time a better preservation of edges in the images on the examples of a simulated soft-tissue phantom, a human cerebellum and a human artery sample. The applied full noise covariance is determined via cross-correlation of the image noise. The filter results yield an improved feature recovery based on enhanced noise suppression and edge preservation as shown here on the example of attenuation and phase images captured with grating-based phase-contrast computed tomography. This is supported by quantitative image analysis. Without being bound to phase-contrast imaging, this generalized filter is applicable to any kind of noise-afflicted image data with or without noise correlation. Therefore, it can be utilized in various imaging applications and fields.
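
    For reference, the conventional single-image bilateral filter that this work generalizes weights each neighbouring pixel by both its spatial distance and its intensity difference. A minimal numpy sketch of that standard filter, not of the proposed covariance-based generalization (the parameter values and the periodic boundary handling via np.roll are simplifications):

        import numpy as np

        def bilateral_filter(image, sigma_spatial=2.0, sigma_range=0.1, radius=4):
            """Edge-preserving smoothing: weighted mean over a neighbourhood, with
            weights decaying with spatial distance and with intensity difference."""
            image = image.astype(float)
            out = np.zeros_like(image)
            norm = np.zeros_like(image)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
                    w = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_spatial ** 2))
                         * np.exp(-(shifted - image) ** 2 / (2 * sigma_range ** 2)))
                    out += w * shifted
                    norm += w
            return out / norm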

  17. Theory and applications of structured light single pixel imaging

    NASA Astrophysics Data System (ADS)

    Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.

    2018-02-01

    Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, and provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and a decrease in acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.

  18. Spatially weighted mutual information image registration for image guided radiation therapy.

    PubMed

    Park, Samuel B; Rhee, Frank C; Monroe, James I; Sohn, Jason W

    2010-09-01

    The aim was to develop a new metric for image registration that incorporates the (sub)pixelwise differential importance along spatial location, and to demonstrate its application for image guided radiation therapy (IGRT). It is well known that rigid-body image registration with mutual information depends on the size and location of the image subset on which the alignment analysis is based [the designated region of interest (ROI)]; therefore, careful review and manual adjustment of the resulting registration are frequently necessary. Although there have been some investigations of weighted mutual information (WMI), these efforts could not apply the differential importance to a particular spatial location, since WMI only applies the weight in the joint histogram space. The authors developed the spatially weighted mutual information (SWMI) metric by incorporating an adaptable weight function with spatial localization into mutual information. SWMI enables the user to apply the selected transform to medically "important" areas such as tumors and critical structures, so SWMI is neither dominated by, nor neglects, the neighboring structures. Since SWMI can be utilized with any weight function form, the authors present two examples of weight functions for IGRT application: a Gaussian-shaped weight function (GW) applied to a user-defined location and a structures-of-interest (SOI) based weight function. An image registration example using a synthesized 2D image is presented to illustrate the efficacy of SWMI. The convergence and feasibility of the registration method as applied to clinical imaging are illustrated by fusing a prostate treatment planning CT with a clinical cone beam CT (CBCT) image set acquired for patient alignment; forty-one trials were run to test the speed of convergence. The authors also applied SWMI registration with both types of weight functions to two head-and-neck cases and a prostate case with clinically acquired CBCT/MVCT image sets. SWMI registration with a Gaussian weight function (SWMI-GW) was also tested between two different imaging modalities, CT and MRI. SWMI-GW converges 10% faster than registration using mutual information with an ROI. Both SWMI-GW and SWMI with an SOI-based weight function (SWMI-SOI) show better compensation of deformation in the target organ and in neighboring critical organs, and SWMI-GW was used to successfully fuse MRI and CT images. Rigid-body image registration using SWMI-GW and SWMI-SOI as cost functions can achieve better registration results in the designated image region(s) as well as faster convergence. With the theoretical foundation established, the authors believe SWMI could be extended to larger clinical testing.
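
    The metric itself can be sketched as mutual information computed from a spatially weighted joint histogram, with a Gaussian weight placed at a user-defined location. A minimal numpy illustration under assumed bin counts and weight parameters (not the authors' implementation):

        import numpy as np

        def gaussian_weight(shape, center, sigma):
            """Gaussian-shaped weight function centred on a user-defined location."""
            yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
            return np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / (2 * sigma ** 2))

        def spatially_weighted_mi(fixed, moving, weights, bins=64):
            """Mutual information in which each pixel contributes according to a weight map."""
            w = weights.ravel() / weights.sum()
            joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins, weights=w)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))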

  19. 3D Structure of Tillage Soils

    NASA Astrophysics Data System (ADS)

    González-Torre, Iván; Losada, Juan Carlos; Falconer, Ruth; Hapca, Simona; Tarquis, Ana M.

    2015-04-01

    Soil structure may be defined as the spatial arrangement of soil particles, aggregates and pores. The geometry of each of these elements, as well as their spatial arrangement, has a great influence on the transport of fluids and solutes through the soil. Fractal/multifractal methods have been increasingly applied to quantify soil structure thanks to advances in computer technology (Tarquis et al., 2003). There is no doubt that computed tomography (CT) has provided an alternative for observing intact soil structure. CT techniques reduce the physical impact of sampling, provide three-dimensional (3D) information, and allow rapid scanning to study sample dynamics in near real time (Houston et al., 2013a). However, several authors have dedicated attention to the appropriate pore-solid CT threshold (Elliot and Heck, 2007; Houston et al., 2013b) and to the best method to estimate the multifractal parameters (Grau et al., 2006; Tarquis et al., 2009). The aim of the present study is to evaluate the effect of the algorithm applied in the multifractal method (box counting and gliding box) and of the cube size on the calculation of generalized fractal dimensions (Dq) in grey images, without applying any threshold. To this end, soil samples were extracted from areas plowed with three different tools (moldboard, chisel and plow). Soil samples for each tillage treatment were packed into polypropylene cylinders of 8 cm diameter and 10 cm height. These were imaged using a mSIMCT at 155 keV and 25 mA. An aluminium filter (0.25 mm) was applied to reduce beam hardening, and several corrections were later applied during reconstruction.
    References:
    Elliot, T.R. and Heck, R.J., 2007. A comparison of 2D and 3D thresholding of CT imagery. Can. J. Soil Sci., 87(4), 405-412.
    González-Torres, Iván, 2014. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM.
    Grau, J., Médez, V., Tarquis, A.M., Saa, A. and Díaz, M.C., 2006. Comparison of gliding box and box-counting methods in soil image analysis. Geoderma, 134, 349-359.
    Houston, A.N., Schmidt, S., Tarquis, A.M., Otten, W., Baveye, P.C. and Hapca, S.M., 2013a. Effect of scanning and image reconstruction settings in X-ray computed tomography on soil image quality and segmentation performance. Geoderma, 207-208, 154-165.
    Houston, A., Otten, W., Baveye, Ph. and Hapca, S., 2013b. Adaptive-window indicator kriging: A thresholding method for computed tomography. Computers & Geosciences, 54, 239-248.
    Tarquis, A.M., Heck, R.J., Andina, D., Alvarez, A. and Antón, J.M., 2009. Multifractal analysis and thresholding of 3D soil images. Ecological Complexity, 6, 230-239.
    Tarquis, A.M., Giménez, D., Saa, A., Díaz, M.C. and Gascó, J.M., 2003. Scaling and multiscaling of soil pore systems determined by image analysis. In: Scaling Methods in Soil Systems, Pachepsky, Radcliffe and Selim (Eds.), 19-33, CRC Press, Boca Ratón, Florida.
    Acknowledgements: The first author acknowledges the financial support obtained from the Soil Imaging Laboratory (University of Guelph, Canada) in 2014.
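
    A minimal sketch of the box-counting estimate of the generalized dimensions Dq for a 2D grey-level image treated as a mass distribution, i.e. without applying any threshold (the study also uses gliding boxes and works on 3D volumes; the box sizes and q values below are illustrative):

        import numpy as np

        def generalized_dimensions(image, qs=(-2.0, 0.0, 2.0), box_sizes=(2, 4, 8, 16, 32)):
            """Box-counting estimate of generalized fractal dimensions D_q of a grey image."""
            image = image.astype(float)
            total = image.sum()
            dq = {}
            for q in qs:
                log_eps, log_chi = [], []
                for s in box_sizes:
                    ny, nx = image.shape[0] // s, image.shape[1] // s
                    boxes = image[:ny * s, :nx * s].reshape(ny, s, nx, s).sum(axis=(1, 3))
                    p = boxes[boxes > 0] / total
                    if q == 1.0:
                        chi = np.sum(p * np.log(p))          # information-dimension limit
                    else:
                        chi = np.log(np.sum(p ** q)) / (q - 1.0)
                    log_eps.append(np.log(s))
                    log_chi.append(chi)
                dq[q] = np.polyfit(log_eps, log_chi, 1)[0]   # slope = D_q
            return dq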

  20. Pet fur color and texture classification

    NASA Astrophysics Data System (ADS)

    Yen, Jonathan; Mukherjee, Debarghar; Lim, SukHwan; Tretter, Daniel

    2007-01-01

    Object segmentation is important in image analysis for imaging tasks such as image rendering and image retrieval. Pet owners have been known to be quite vocal about how important it is to render their pets perfectly. We present here an algorithm for pet (mammal) fur color classification and an algorithm for pet (animal) fur texture classification. Pet fur color classification can be applied as a necessary condition for identifying the regions in an image that may contain pets, much like skin tone classification for human flesh detection. As a result of evolution, the fur coloration of all mammals is produced by a natural organic pigment called melanin, which has only a very limited color range. We have conducted a statistical analysis and concluded that mammal fur colors can only be in levels of gray or in two colors after proper color quantization. This pet fur color classification algorithm has been applied to pet-eye detection. We also present an algorithm for animal fur texture classification using the recently developed multi-resolution directional sub-band contourlet transform. The experimental results are very promising, as these transforms can identify regions of an image that may contain the fur of mammals, the scales of reptiles, the feathers of birds, etc. Combining the color and texture classification, one can obtain a set of strong classifiers for identifying possible animals in an image.

  1. A Hybrid Soft-computing Method for Image Analysis of Digital Plantar Scanners.

    PubMed

    Razjouyan, Javad; Khayat, Omid; Siahi, Mehdi; Mansouri, Ali Alizadeh

    2013-01-01

    Digital foot scanners have been developed in recent years to provide anthropometrists with a digital image of the insole together with pressure distribution and anthropometric information. In this paper, a hybrid algorithm combining the gray level spatial correlation (GLSC) histogram and Shanbhag entropy is presented for the analysis of scanned foot images. An evolutionary algorithm is also employed to find the optimum parameters of the GLSC and the transform function of the membership values. The resulting binary (thresholded) images then undergo anthropometric measurements, taking into account the scale factor from pixel size to metric scale. The proposed method is finally applied to plantar images obtained by scanning the feet of randomly selected subjects with a foot scanner system, the experimental setup described in the paper. The running computation time and the effects of the GLSC parameters are investigated in the simulation results.

  2. Smartphone-based multispectral imaging: system development and potential for mobile skin diagnosis

    PubMed Central

    Kim, Sewoong; Cho, Dongrae; Kim, Jihun; Kim, Manjae; Youn, Sangyeon; Jang, Jae Eun; Je, Minkyu; Lee, Dong Hun; Lee, Boreom; Farkas, Daniel L.; Hwang, Jae Youn

    2016-01-01

    We investigate the potential of mobile smartphone-based multispectral imaging for the quantitative diagnosis and management of skin lesions. Recently, various mobile devices such as smartphones have emerged as healthcare tools and have been applied to the early diagnosis of nonmalignant and malignant skin diseases. In particular, when combined with an advanced optical imaging technique such as multispectral imaging and analysis, they could be beneficial for the early diagnosis of such skin diseases and for further quantitative monitoring of prognosis after treatment at home. Thus, we demonstrate here the development of a highly portable smartphone-based multispectral imaging system and its potential for mobile skin diagnosis. The results suggest that smartphone-based multispectral imaging and analysis has great potential as a healthcare tool for quantitative mobile skin diagnosis. PMID:28018743

  3. Automated boundary segmentation and wound analysis for longitudinal corneal OCT images

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Shi, Fei; Zhu, Weifang; Pan, Lingjiao; Chen, Haoyu; Huang, Haifan; Zheng, Kangkeng; Chen, Xinjian

    2017-03-01

    Optical coherence tomography (OCT) has been widely applied in the examination and diagnosis of corneal diseases, but the information obtained directly from the OCT images by manual inspection is limited. We propose an automatic processing method to assist ophthalmologists in locating the boundaries in corneal OCT images and in analyzing the recovery of corneal wounds after treatment from longitudinal OCT images. It includes the following steps: preprocessing, epithelium and endothelium boundary segmentation and correction, wound detection, corneal boundary fitting, and wound analysis. The method was tested on a data set of longitudinal corneal OCT images from 20 subjects, each with five images acquired over a period of time after corneal surgery. The segmentation and classification accuracy of the proposed algorithm is high, and it can be used for analyzing wound recovery after corneal surgery.

  4. Exploiting Fission Chain Reaction Dynamics to Image Fissile Materials

    NASA Astrophysics Data System (ADS)

    Chapman, Peter Henry

    Radiation imaging is one potential method to verify nuclear weapons dismantlement. The neutron coded aperture imager (NCAI), jointly developed by Oak Ridge National Laboratory (ORNL) and Sandia National Laboratories (SNL), is capable of imaging sources of fast (e.g., fission spectrum) neutrons using an array of organic scintillators. This work presents a method developed to discriminate between non-multiplying (i.e., non-fissile) neutron sources and multiplying (i.e., fissile) neutron sources using the NCAI. This method exploits the dynamics of fission chain reactions; it applies time-correlated pulse-height (TCPH) analysis to identify neutrons in fission chain reactions. TCPH analyzes the neutron energy deposited in the organic scintillator vs. the apparent neutron time-of-flight. Energy deposition is estimated from light output, and time-of-flight is estimated from the time between the neutron interaction and the immediately preceding gamma interaction. Neutrons that deposit more energy than can be accounted for by their apparent time-of-flight are identified as fission chain-reaction neutrons, and the image is reconstructed using only these neutron detection events. This analysis was applied to measurements of weapons-grade plutonium (WGPu) metal and 252Cf performed at the Nevada National Security Site (NNSS) Device Assembly Facility (DAF) in July 2015. The results demonstrate it is possible to eliminate the non-fissile 252Cf source from the image while preserving the fissile WGPu source. TCPH analysis was also applied to additional scenes in which the WGPu and 252Cf sources were measured individually. The results of these separate measurements further demonstrate the ability to remove the non-fissile 252Cf source and retain the fissile WGPu source. Simulations performed using MCNPX-PoliMi indicate that in a one-hour measurement, solid spheres of WGPu are retained at a 1-sigma level for neutron multiplications M ≈ 3.0 and above, while hollow WGPu spheres are retained for M ≈ 2.7 and above.
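
    The kinematic test underlying TCPH analysis can be illustrated as follows: a neutron that covered the source-detector distance within its apparent time-of-flight has a bounded kinetic energy, so events depositing more than that bound are attributed to fission chain reactions. The sketch below only illustrates this idea; the detector geometry, the light-output-to-energy calibration and all names are assumptions, not details of the NCAI analysis:

        import numpy as np

        NEUTRON_MASS_MEV = 939.565        # neutron rest-mass energy, MeV
        SPEED_OF_LIGHT = 299_792_458.0    # m/s

        def max_energy_from_tof(distance_m, tof_ns):
            """Non-relativistic upper bound (MeV) on the kinetic energy of a neutron
            that travelled `distance_m` within the apparent time-of-flight `tof_ns`."""
            beta = distance_m / (tof_ns * 1e-9) / SPEED_OF_LIGHT
            return 0.5 * NEUTRON_MASS_MEV * beta ** 2

        def chain_reaction_events(deposited_mev, tof_ns, distance_m):
            """Flag events whose deposited energy exceeds the time-of-flight bound;
            only such events would be kept for image reconstruction."""
            deposited_mev = np.asarray(deposited_mev, dtype=float)
            tof_ns = np.asarray(tof_ns, dtype=float)
            return deposited_mev > max_energy_from_tof(distance_m, tof_ns)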

  5. High-content analysis of single cells directly assembled on CMOS sensor based on color imaging.

    PubMed

    Tanaka, Tsuyoshi; Saeki, Tatsuya; Sunaga, Yoshihiko; Matsunaga, Tadashi

    2010-12-15

    A complementary metal oxide semiconductor (CMOS) image sensor was applied to the high-content analysis of single cells assembled closely or directly onto the CMOS sensor surface. The direct assembly of cell groups on the CMOS sensor surface allows large-field imaging (6.66 mm × 5.32 mm, the entire active area of the CMOS sensor) within a second. Trypan blue-stained and non-stained cells in the same field area on the CMOS sensor were successfully distinguished as white- and blue-colored images under white LED illumination. Furthermore, the chemiluminescent signals of individual cells were successfully visualized as blue-colored images on the CMOS sensor only when HeLa cells were placed directly on the micro-lens array of the CMOS sensor. Our proposed approach is a promising technique for real-time, high-content analysis of single cells over a large field area based on color imaging.

  6. Principal component analysis for surface reflection components and structure in facial images and synthesis of facial images for various ages

    NASA Astrophysics Data System (ADS)

    Hirose, Misa; Toyota, Saori; Ojima, Nobutoshi; Ogawa-Ochiai, Keiko; Tsumura, Norimichi

    2017-08-01

    In this paper, principal component analysis is applied to the distributions of pigmentation, surface reflectance, and landmarks in whole facial images to obtain feature values. The relationship between the obtained feature vectors and the age of the face is then estimated by multiple regression analysis so that facial images can be modulated for women aged 10-70. In a previous study, we analyzed only the distribution of pigmentation, and the reproduced images appeared younger than the apparent age of the initial images. We believe this happened because we did not modulate the facial structures and detailed surfaces, such as wrinkles. By considering landmarks and surface reflectance over the entire face, we were able to analyze the variation in the distributions of facial structure, fine surface asperity, and pigmentation. As a result, our method is able to appropriately modulate the appearance of a face so that it appears to be the correct age.

  7. Imaging of breast cancer with mid- and long-wave infrared camera.

    PubMed

    Joro, R; Lääperi, A-L; Dastidar, P; Soimakallio, S; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Järvenpää, R

    2008-01-01

    In this novel study, the breasts of 15 women with palpable breast cancer were preoperatively imaged with three technically different infrared (IR) cameras - microbolometer (MB), quantum well (QWIP) and photovoltaic (PV) - to compare their ability to differentiate breast cancer from normal tissue. The IR images were processed; the data for frequency analysis were collected from dynamic IR images by pixel-based analysis, and a selectively windowed regional analysis was carried out for each image, based on the angiogenesis and nitric oxide production of cancer tissue, which cause vasomotor and cardiogenic frequency differences compared to normal tissue. Our results show that the GaAs QWIP camera and the InSb PV camera demonstrate the frequency difference between normal and cancerous breast tissue, the PV camera more clearly. With selected image processing operations, more detailed frequency analyses could be applied to the suspicious area. The MB camera was not suitable for tissue differentiation, as the difference between noise and effective signal was unsatisfactory.

  8. Frequency domain analysis of knock images

    NASA Astrophysics Data System (ADS)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High speed imaging-based knock analysis has mainly focused on time domain information, e.g. the spark triggered flame speed, the time when end gas auto-ignition occurs and the end gas flame speed after auto-ignition. This study presents a frequency domain analysis on the knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass-filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying fast Fourier transform (FFT) to three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for mode shapes of gas vibration in a cylindrical cavity.
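
    A much-simplified sketch of the spectral step: here only the frame-mean luminosity of each colour channel is transformed, whereas the study filters and transforms the images themselves in order to reconstruct the resonant mode shapes (the array layout, frame rate and cutoff frequency are assumptions):

        import numpy as np

        def luminosity_spectra(frames, frame_rate_hz, cutoff_hz=4000.0):
            """Amplitude spectra of the high-pass-filtered mean luminosity per colour channel.

            frames : array (n_frames, height, width, 3) of RGB knock images.
            """
            n = frames.shape[0]
            signal = frames.reshape(n, -1, 3).mean(axis=1)      # mean luminosity per channel
            freqs = np.fft.rfftfreq(n, d=1.0 / frame_rate_hz)
            spectra = np.abs(np.fft.rfft(signal, axis=0))
            spectra[freqs < cutoff_hz, :] = 0.0                 # crude high-pass step
            return freqs, spectra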

  9. Uncooled thermal imaging and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Shiyun; Chang, Benkang; Yu, Chunyu; Zhang, Junju; Sun, Lianjun

    2006-09-01

    A thermal imager converts temperature differences into differences in electrical signal level, so it can be applied in medicine, for example to estimate blood flow speed and vessel location [1], to assess pain [2], and so on. As un-cooled focal plane array (UFPA) technology matures, some simple medical functions can be performed with an un-cooled thermal imager, for example rapid fever screening, as during the SARS outbreak. This requires that the imaging performance be stable and that the spatial and temperature resolution be high enough. Among all performance parameters, the noise equivalent temperature difference (NETD) is often used as the criterion of overall performance. The 320 x 240 α-Si microbolometer UFPA is currently widely applied for its stable performance and sensitive response. In this paper, the NETD of a UFPA and the relation between NETD and temperature are studied; several vital parameters that affect NETD are listed and a universal formula is presented. Finally, images from this kind of thermal imager are analyzed with the aim of detecting persons with fever, and an applied thermal image enhancement method is introduced.

  10. Hyperspectral imaging applied to complex particulate solids systems

    NASA Astrophysics Data System (ADS)

    Bonifazi, Giuseppe; Serranti, Silvia

    2008-04-01

    HyperSpectral Imaging (HSI) is based on the utilization of an integrated hardware and software (HW&SW) platform embedding conventional imaging and spectroscopy to obtain both spatial and spectral information from an object. Although HSI was originally developed for remote sensing, it has recently emerged as a powerful process analytical tool for non-destructive analysis in many research and industrial sectors. The possibility of applying on-line HSI-based techniques to identify and quantify specific characteristics of particulate solid systems is presented and critically evaluated. The HSI-based logic originally developed can be profitably applied to develop fast, reliable and low-cost strategies for: i) quality control of particulate products that must comply with specific chemical, physical and biological constraints, ii) performance evaluation of manufacturing strategies related to processing chains and/or real-time tuning of operative variables, and iii) classification-sorting actions aimed at recognizing and separating different particulate solid products. Case studies related to recent advances in the application of HSI to different industrial sectors, such as agriculture, food, pharmaceuticals, and solid waste handling and recycling, and addressed to specific goals such as contaminant detection, defect identification, constituent analysis and quality evaluation, are described according to the authors' originally developed applications.

  11. Bayesian Analysis Of HMI Solar Image Observables And Comparison To TSI Variations And MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.

    2014-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich et al., 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from May 2010 to June 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations, and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Reference: Ulrich, R.K., Parker, D., Bertello, L. and Boyden, J., 2010, Solar Phys., 261, 11.

  12. Qualitative and quantitative interpretation of SEM image using digital image processing.

    PubMed

    Saladra, Dawid; Kopernik, Magdalena

    2016-10-01

    The aim of this study is the improvement of qualitative and quantitative analysis of scanning electron microscope micrographs through the development of a computer program which enables automatic crack analysis of scanning electron microscopy (SEM) micrographs. Micromechanical tests of pneumatic ventricular assist devices result in a large number of micrographs; therefore, the analysis must be automatic. Tests for athrombogenic titanium nitride/gold coatings deposited on polymeric substrates (Bionate II) are performed. These tests include microshear, microtension and fatigue analysis. Anisotropic surface defects observed in the SEM micrographs require support for qualitative and quantitative interpretation. Improvement of the qualitative analysis of scanning electron microscope images was achieved by a set of computational tools that includes binarization, simplified expanding, expanding, simple image statistic thresholding, the Laplacian 1 and Laplacian 2 filters, Otsu thresholding and reverse binarization. Several modifications of known image processing techniques and combinations of the selected techniques were applied. The introduced quantitative analysis of digital scanning electron microscope images enables the computation of stereological parameters such as area, crack angle, crack length, and total crack length per unit area. This study also compares the functionality of the developed computer program with existing digital image processing applications. The described pre- and postprocessing may be helpful in scanning electron microscopy and transmission electron microscopy surface investigations.
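
    As an illustration of how such stereological crack parameters can be obtained once a micrograph is binarised, the sketch below uses scikit-image (Otsu thresholding, labelling and skeletonisation); the assumption that cracks appear darker than the matrix, and the pixel-size handling, are mine rather than the paper's:

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label, regionprops
        from skimage.morphology import skeletonize

        def crack_statistics(sem_image, pixel_size_um=1.0):
            """Binarise an SEM micrograph and compute simple stereological crack parameters."""
            cracks = sem_image < threshold_otsu(sem_image)      # cracks assumed darker
            props = regionprops(label(cracks))
            lengths = [p.major_axis_length * pixel_size_um for p in props]
            areas = [p.area * pixel_size_um ** 2 for p in props]
            skeleton = skeletonize(cracks)
            field_area = cracks.size * pixel_size_um ** 2
            return {"crack_areas": areas,
                    "crack_lengths": lengths,
                    "total_length_per_unit_area": skeleton.sum() * pixel_size_um / field_area}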

  13. Malware Analysis Using Visualized Image Matrices

    PubMed Central

    Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively. PMID:25133202

  14. Multivariate statistical model for 3D image segmentation with application to medical images.

    PubMed

    John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O

    2003-12-01

    In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a posteriori probability (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced an average of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).

  15. Pore network quantification of sandstones under experimental CO2 injection using image analysis

    NASA Astrophysics Data System (ADS)

    Berrezueta, Edgar; González-Menéndez, Luís; Ordóñez-Casado, Berta; Olaya, Peter

    2015-04-01

    Automated image identification and quantification of minerals, pores and textures, together with petrographic analysis, can be applied to improve pore system characterization in sedimentary rocks. Our case study focuses on the application of these techniques to the evolution of the rock pore network subjected to supercritical CO2 injection. We propose a Digital Image Analysis (DIA) protocol that guarantees measurement reproducibility and reliability. It can be summarized in the following stages: (i) detailed description of mineralogy and texture (before and after CO2 injection) by optical and scanning electron microscopy (SEM) techniques using thin sections; (ii) adjustment and calibration of DIA tools; (iii) a data acquisition protocol based on image capture under different polarization conditions (synchronized movement of the polarizers); and (iv) study and quantification by DIA, allowing (a) identification and isolation of pixels that belong to the same category (minerals vs. pores) in each sample and (b) measurement of changes in the pore network after the samples have been exposed to new conditions (in our case, SC-CO2 injection). Finally, the petrography and the measured data were interpreted with an automated approach. In our applied study, the DIA results highlight the changes observed by SEM and optical microscopy, which consist of a porosity increase after the CO2 treatment. Other changes were minor: variations in the roughness and roundness of pore edges and in the pore aspect ratio, observed in the larger pore population. Additionally, statistical tests on the measured pore parameters were applied to verify that the differences observed between samples before and after CO2 injection were significant.
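
    Once pore pixels have been isolated, the quantification in stage (iv) reduces to simple measurements on the binary pore mask. A minimal scikit-image sketch of such quantification (the particular shape descriptors and the before/after comparison in the comment are illustrative assumptions):

        import numpy as np
        from skimage.measure import label, regionprops

        def pore_metrics(pore_mask):
            """Quantify a segmented pore network: porosity plus per-pore shape descriptors."""
            props = regionprops(label(pore_mask))
            aspect = [p.major_axis_length / p.minor_axis_length
                      for p in props if p.minor_axis_length > 0]
            roundness = [4.0 * np.pi * p.area / p.perimeter ** 2
                         for p in props if p.perimeter > 0]
            return {"porosity": float(pore_mask.mean()),
                    "mean_aspect_ratio": float(np.mean(aspect)) if aspect else None,
                    "mean_roundness": float(np.mean(roundness)) if roundness else None}

        # porosity_change = pore_metrics(mask_after_co2)["porosity"] - pore_metrics(mask_before_co2)["porosity"]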

  16. Polished sample preparation and backscattered electron imaging of fly ash-cement paste

    NASA Astrophysics Data System (ADS)

    Feng, Shuxia; Li, Yanqi

    2018-03-01

    In recent decades, the technology of backscattered electron imaging and image analysis has been applied in more and more studies of mixed cement pastes because of its particular advantages. The accuracy of this technique is affected by polished sample preparation and image acquisition. In our work, the effects of two factors in polished sample preparation and backscattered electron imaging were investigated. The results showed that increasing the smoothing pressure could improve the flatness of the polished surface and thus help to eliminate the interference of morphology on the grey level distribution of backscattered electron images, and that increasing the accelerating voltage was beneficial for increasing the grey level difference among different phases in backscattered electron images.

  17. Principal component analysis of TOF-SIMS spectra, images and depth profiles: an industrial perspective

    NASA Astrophysics Data System (ADS)

    Pacholski, Michaeleen L.

    2004-06-01

    Principal component analysis (PCA) has been successfully applied to time-of-flight secondary ion mass spectrometry (TOF-SIMS) spectra, images and depth profiles. Although SIMS spectral data sets can be small (in comparison to the data sets typically discussed in the literature for other analytical techniques such as gas or liquid chromatography), each spectrum has thousands of ions, which can make comparison of samples difficult. Analysis of industrially derived samples means the identity of most surface species is unknown a priori, and samples must be analyzed rapidly to satisfy customer demands. PCA enables rapid assessment of spectral differences (or lack thereof) between samples and, for images, identification of chemically different areas on sample surfaces. Depth profile analysis helps define interfaces and identify low-level components in the system.

  18. Collagen morphology and texture analysis: from statistics to classification

    PubMed Central

    Mostaço-Guidolin, Leila B.; Ko, Alex C.-T.; Wang, Fei; Xiang, Bo; Hewko, Mark; Tian, Ganghong; Major, Arkady; Shiomi, Masashi; Sowa, Michael G.

    2013-01-01

    In this study we present an image analysis methodology capable of quantifying morphological changes in tissue collagen fibril organization caused by pathological conditions. Texture analysis based on first-order statistics (FOS) and second-order statistics such as gray level co-occurrence matrix (GLCM) was explored to extract second-harmonic generation (SHG) image features that are associated with the structural and biochemical changes of tissue collagen networks. Based on these extracted quantitative parameters, multi-group classification of SHG images was performed. With combined FOS and GLCM texture values, we achieved reliable classification of SHG collagen images acquired from atherosclerosis arteries with >90% accuracy, sensitivity and specificity. The proposed methodology can be applied to a wide range of conditions involving collagen re-modeling, such as in skin disorders, different types of fibrosis and muscular-skeletal diseases affecting ligaments and cartilage. PMID:23846580
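
    GLCM texture descriptors of the kind used here are available in scikit-image. A minimal sketch combining a few first-order statistics with GLCM properties (the quantization to 64 grey levels and the choice of distances and angles are illustrative; the functions are named graycomatrix/graycoprops in recent scikit-image releases and greycomatrix/greycoprops in older ones):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def shg_texture_features(shg_image, levels=64):
            """First-order statistics plus GLCM texture descriptors for an SHG image."""
            img = np.uint8(np.round((levels - 1) * (shg_image - shg_image.min())
                                    / (np.ptp(shg_image) + 1e-12)))
            features = {"mean": float(img.mean()), "std": float(img.std())}
            glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            for prop in ("contrast", "correlation", "energy", "homogeneity"):
                features[prop] = float(graycoprops(glcm, prop).mean())
            return features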

  19. Combining classifiers using their receiver operating characteristics and maximum likelihood estimation.

    PubMed

    Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H

    2005-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining of classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore can not undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging.

  20. Combining Classifiers Using Their Receiver Operating Characteristics and Maximum Likelihood Estimation*

    PubMed Central

    Haker, Steven; Wells, William M.; Warfield, Simon K.; Talos, Ion-Florin; Bhagwat, Jui G.; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H.

    2010-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining of classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore can not undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884

  1. A hybrid correlation analysis with application to imaging genetics

    NASA Astrophysics Data System (ADS)

    Hu, Wenxing; Fang, Jian; Calhoun, Vince D.; Wang, Yu-Ping

    2018-03-01

    Investigating the association between brain regions and genes continues to be a challenging topic in imaging genetics. Current brain region of interest (ROI)-gene association studies normally reduce data dimension by averaging the values of the voxels in each ROI. This averaging may lead to a loss of information due to the existence of functional sub-regions. Pearson correlation is widely used for association analysis; however, it only detects linear correlation, whereas nonlinear correlation may exist among ROIs. In this work, we introduced distance correlation into ROI-gene association analysis, which can detect both linear and nonlinear correlations and overcomes the limitation of the averaging operation by taking advantage of the information at each voxel. Nevertheless, distance correlation usually has a much lower value than Pearson correlation. To address this problem, we proposed a hybrid correlation analysis approach, applying canonical correlation analysis (CCA) to the distance covariance matrix instead of directly computing distance correlation. Incorporating CCA into the distance correlation approach may be more suitable for complex disease studies because it can detect highly associated pairs of ROI and gene groups, and may improve the distance correlation level and statistical power. In addition, we developed a novel nonlinear CCA, called distance kernel CCA, which seeks the optimal combination of features with the most significant dependence. This approach was applied to imaging genetic data from the Philadelphia Neurodevelopmental Cohort (PNC). Experiments showed that our hybrid approach produced more consistent results than conventional CCA across resampling, and that both the correlation and the statistical significance were increased compared to distance correlation analysis. Further gene enrichment analysis and ROI analysis confirmed the associations of the identified genes with brain ROIs. Our approach therefore provides a powerful tool for finding correlations between brain imaging and genomic data.
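
    The sample distance correlation on which the hybrid approach builds can be computed directly from pairwise distance matrices. A minimal numpy/scipy sketch (the reshaping of ROI voxel data and gene data into one row per subject is an assumption about the data layout):

        import numpy as np
        from scipy.spatial.distance import cdist

        def distance_correlation(x, y):
            """Sample distance correlation between two data sets with n matched samples.

            x : (n, p) array, e.g. voxel values of one ROI across subjects.
            y : (n, q) array, e.g. SNP data of one gene for the same subjects.
            Detects both linear and nonlinear dependence.
            """
            x = np.atleast_2d(np.asarray(x, dtype=float)).reshape(len(x), -1)
            y = np.atleast_2d(np.asarray(y, dtype=float)).reshape(len(y), -1)
            a, b = cdist(x, x), cdist(y, y)
            A = a - a.mean(axis=0) - a.mean(axis=1, keepdims=True) + a.mean()
            B = b - b.mean(axis=0) - b.mean(axis=1, keepdims=True) + b.mean()
            dcov2 = (A * B).mean()
            denom = np.sqrt((A * A).mean() * (B * B).mean())
            return float(np.sqrt(max(dcov2, 0.0) / denom)) if denom > 0 else 0.0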

  2. Invited Review Article: Imaging techniques for harmonic and multiphoton absorption fluorescence microscopy

    PubMed Central

    Carriles, Ramón; Schafer, Dawn N.; Sheetz, Kraig E.; Field, Jeffrey J.; Cisek, Richard; Barzda, Virginijus; Sylvester, Anne W.; Squier, Jeffrey A.

    2009-01-01

    We review the current state of multiphoton microscopy. In particular, the requirements and limitations associated with high-speed multiphoton imaging are considered. A description of the different scanning technologies such as line scan, multifoci approaches, multidepth microscopy, and novel detection techniques is given. The main nonlinear optical contrast mechanisms employed in microscopy are reviewed, namely, multiphoton excitation fluorescence, second harmonic generation, and third harmonic generation. Techniques for optimizing these nonlinear mechanisms through a careful measurement of the spatial and temporal characteristics of the focal volume are discussed, and a brief summary of photobleaching effects is provided. Finally, we consider three new applications of multiphoton microscopy: nonlinear imaging in microfluidics as applied to chemical analysis and the use of two-photon absorption and self-phase modulation as contrast mechanisms applied to imaging problems in the medical sciences. PMID:19725639

  3. Application of deep learning to the classification of images from colposcopy.

    PubMed

    Sato, Masakazu; Horie, Koji; Hara, Aki; Miyamoto, Yuichiro; Kurihara, Kazuko; Tomio, Kensuke; Yokota, Harushige

    2018-03-01

    The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network and TensorFlow libraries. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy of the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images.
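
    A hedged sketch of the kind of Keras/TensorFlow model the study describes (three-class softmax output with L1/L2 regularization, dropout and data augmentation); the layer sizes, input resolution and hyperparameters are assumptions, not the published configuration, and a recent TensorFlow 2.x release is assumed.

```python
# A minimal sketch of a three-class CNN with L1/L2 regularization, dropout
# and augmentation layers (all sizes are illustrative assumptions).
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.RandomFlip("horizontal"),                      # data augmentation
    layers.RandomRotation(0.1),
    layers.Conv2D(16, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu",
                  kernel_regularizer=regularizers.l1(1e-5)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),                # severe dysplasia / CIS / IC
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```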

  4. Application of deep learning to the classification of images from colposcopy

    PubMed Central

    Sato, Masakazu; Horie, Koji; Hara, Aki; Miyamoto, Yuichiro; Kurihara, Kazuko; Tomio, Kensuke; Yokota, Harushige

    2018-01-01

    The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network and TensorFlow libraries. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy of the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images. PMID:29456725

  5. Processing and analysis of commercial satellite image data of the nuclear accident near Chernobyl, U.S.S.R.

    USGS Publications Warehouse

    Sadowski, Franklin G.; Covington, Steven J.

    1987-01-01

    Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear power plant emergency at Chernobyl in the Soviet Ukraine. The images demonstrate the unique interpretive capabilities provided by the numerous spectral bands of the Thematic Mapper and the high spatial resolution of the SPOT HRV sensor.

  6. How to Perform a Systematic Review and Meta-analysis of Diagnostic Imaging Studies.

    PubMed

    Cronin, Paul; Kelly, Aine Marie; Altaee, Duaa; Foerster, Bradley; Petrou, Myria; Dwamena, Ben A

    2018-05-01

    A systematic review is a comprehensive search, critical evaluation, and synthesis of all the relevant studies on a specific (clinical) topic that can be applied to the evaluation of diagnostic and screening imaging studies. It can be a qualitative or a quantitative (meta-analysis) review of available literature. A meta-analysis uses statistical methods to combine and summarize the results of several studies. In this review, a 12-step approach to performing a systematic review (and meta-analysis) is outlined under the four domains: (1) Problem Formulation and Data Acquisition, (2) Quality Appraisal of Eligible Studies, (3) Statistical Analysis of Quantitative Data, and (4) Clinical Interpretation of the Evidence. This review is specifically geared toward the performance of a systematic review and meta-analysis of diagnostic test accuracy (imaging) studies. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  7. Feature-space-based FMRI analysis using the optimal linear transformation.

    PubMed

    Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S

    2010-09-01

    The optimal linear transformation (OLT), an image analysis technique of feature space, was first presented in the field of MRI. This paper proposes a method of extending OLT from MRI to functional MRI (fMRI) to improve the activation-detection performance over conventional approaches of fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for different activity patterns of interest by virtue of the ideal hemodynamic responses, OLT was used to extract features of fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed to verify the method and compare it with general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.
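
    The first step above, generating ideal hemodynamic response time series, can be sketched as follows; the canonical double-gamma HRF shape, the TR of 2 s and the block timing are illustrative assumptions.

```python
# A minimal sketch: convolve a hemodynamic response model with stimulus timing
# to obtain an ideal response time series for one stimulus condition.
import numpy as np
from scipy.stats import gamma

TR = 2.0
t = np.arange(0, 32, TR)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)    # canonical double-gamma shape
hrf /= hrf.sum()

n_scans = 120
stimulus = np.zeros(n_scans)
stimulus[10:20] = 1                                 # hypothetical blocked design
stimulus[60:70] = 1

ideal_response = np.convolve(stimulus, hrf)[:n_scans]
print(ideal_response.round(3))
```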

  8. Microvascular Autonomic Composites

    DTIC Science & Technology

    2012-01-06

    thermogravimetric analysis (TGA) was employed. The double wall allowed for increased thermal stability of the microcapsules, which was...fluorescent nanoparticles (Berfield et al. 2006). Digital Image Correlation (DIC) is a data analysis method, which applies a mathematical...Theme IV: Experimental Assessment & Analysis 2.4.1 Optical diagnostics for complex microfluidic systems pg. 50 2.4.2 Fluorescent thermometry

  9. Unsupervised learning toward brain imaging data analysis: cigarette craving and resistance related neuronal activations from functional magnetic resonance imaging data analysis

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Youl; Lee, Jong-Hwan

    2014-05-01

    A data-driven unsupervised learning method such as independent component analysis can be gainfully applied to blood-oxygenation-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) data, compared with a model-based general linear model (GLM). This is due to the ability of this unsupervised learning method to extract meaningful neuronal activity from the BOLD signal, which is a mixture of neuronal signals and confounding non-neuronal artifacts such as head motion and physiological artifacts. In this study, we support this claim by identifying neuronal underpinnings of cigarette craving and cigarette resistance. The fMRI data were acquired from heavy cigarette smokers (n = 14) while they alternately watched images with and without cigarette smoking. During acquisition of two fMRI runs, they were asked to crave when they watched cigarette smoking images or to resist the urge to smoke. Data-driven approaches of the group independent component analysis (GICA) method based on temporal concatenation (TC-GICA) and its extension with iterative dual-regression (TC-GICA-iDR) were applied to the data. From the results, cigarette craving and cigarette resistance related neuronal activations were identified in the visual area and superior frontal areas, respectively, with a greater statistical significance from the TC-GICA-iDR method than the TC-GICA method. On the other hand, with the GLM method the neuronal activity levels in many of these regions were not statistically different between cigarette craving and cigarette resistance, due to potentially aberrant BOLD signals.

  10. The Assessment of Neurological Systems with Functional Imaging

    ERIC Educational Resources Information Center

    Eidelberg, David

    2007-01-01

    In recent years a number of multivariate approaches have been introduced to map neural systems in health and disease. In this review, we focus on spatial covariance methods applied to functional imaging data to identify patterns of regional activity associated with behavior. In the rest state, this form of network analysis can be used to detect…

  11. Correlation between Gray/White Matter Volume and Cognition in Healthy Elderly People

    ERIC Educational Resources Information Center

    Taki, Yasuyuki; Kinomura, Shigeo; Sato, Kazunori; Goto, Ryoi; Wu, Kai; Kawashima, Ryuta; Fukuda, Hiroshi

    2011-01-01

    This study applied volumetric analysis and voxel-based morphometry (VBM) of brain magnetic resonance (MR) images to assess whether correlations exist between global and regional gray/white matter volume and the cognitive functions of semantic memory and short-term memory, which are relatively well preserved with aging, using MR image data from 109…

  12. Cell tracking for cell image analysis

    NASA Astrophysics Data System (ADS)

    Bise, Ryoma; Sato, Yoichi

    2017-04-01

    Cell image analysis is important for research and discovery in biology and medicine. In this paper, we present our cell tracking methods, which are capable of obtaining fine-grained cell behavior metrics. In order to address difficulties under dense culture conditions, where cell detection cannot be done reliably because cells often touch and have blurry intercellular boundaries, we propose two methods: global data association, and jointly solving cell detection and association. We also show the effectiveness of the proposed methods by applying them to biological research.

  13. Processing and analysis of commercial satellite image data of the nuclear accident near Chernobyl, U. S. S. R

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadowski, F.G.; Covington, S.J.

    1987-01-01

    Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear power plant emergency at Chernobyl in the Soviet Ukraine. The results of the data processing and analysis illustrate the spectral and spatial capabilities of the two sensor systems and provide information about the severity and duration of the events occurring at the power plant site.

  14. Correcting for surface topography in X-ray fluorescence imaging

    PubMed Central

    Geil, E. C.; Thorne, R. E.

    2014-01-01

    Samples with non-planar surfaces present challenges for X-ray fluorescence imaging analysis. Here, approximations are derived to describe the modulation of fluorescence signals by surface angles and topography, and suggestions are made for reducing this effect. A correction procedure is developed that is effective for trace element analysis of samples having a uniform matrix, and requires only a fluorescence map from a single detector. This procedure is applied to fluorescence maps from an incised gypsum tablet. PMID:25343805

  15. Processing Cones: A Computational Structure for Image Analysis.

    DTIC Science & Technology

    1981-12-01

    A computational structure for image analysis applications, referred to as a processing cone, is described and sample algorithms are presented. A fundamental characteristic of the structure is its hierarchical organization into two-dimensional arrays of decreasing resolution. In this architecture, a prototypical function is defined on a local window of data and applied uniformly to all windows in a parallel manner. Three basic modes of processing are supported in the cone: reduction operations (upward processing), horizontal operations (processing at a single level) and projection operations (downward processing).
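
    A minimal sketch of the reduction (upward-processing) mode, assuming a simple 2x2 mean as the local window function; this is an illustration of the idea, not the original cone implementation.

```python
# Build levels of decreasing resolution by applying a local function uniformly
# to all 2x2 windows of the previous level.
import numpy as np

def reduce_level(level):
    h, w = level.shape
    trimmed = level[:h - h % 2, :w - w % 2]            # drop odd rows/columns
    return trimmed.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

image = np.random.default_rng(2).random((256, 256))    # hypothetical base level
cone = [image]
while cone[-1].shape[0] > 1:
    cone.append(reduce_level(cone[-1]))

print([level.shape for level in cone])                  # 256x256 down to 1x1
```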

  16. Application of Photoshop and Scion Image analysis to quantification of signals in histochemistry, immunocytochemistry and hybridocytochemistry.

    PubMed

    Tolivia, Jorge; Navarro, Ana; del Valle, Eva; Perez, Cristina; Ordoñez, Cristina; Martínez, Eva

    2006-02-01

    To describe a simple method to achieve the differential selection and subsequent quantification of the signal strength using only one section. Several methods for performing quantitative histochemistry, immunocytochemistry or hybridocytochemistry, without use of specific commercial image analysis systems, rely on pixel-counting algorithms, which do not provide information on the amount of chromogen present in the section. Other techniques use complex algorithms to calculate the cumulative signal strength using two consecutive sections. To separate the chromogen signal we used the "Color range" option of the Adobe Photoshop program, which provides a specific file for a particular chromogen selection that can be applied to similar sections. The measurement of the chromogen signal strength of the specific staining is achieved with the Scion Image software program. The method described in this paper can also be applied to simultaneous detection of different signals on the same section or different parameters (area of particles, number of particles, etc.) when the "Analyze particles" tool of the Scion program is used.

  17. A fully automatic three-step liver segmentation method on LDA-based probability maps for multiple contrast MR images.

    PubMed

    Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf

    2010-07-01

    Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps that are then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied to normal and fat-accumulated liver tissue properties. Copyright 2010 Elsevier Inc. All rights reserved.

  18. Pixel-based skin segmentation in psoriasis images.

    PubMed

    George, Y; Aldeen, M; Garnavi, R

    2016-08-01

    In this paper, we present a detailed comparison study of skin segmentation methods for psoriasis images. Different techniques are modified and then applied to a set of psoriasis images acquired from the Royal Melbourne Hospital, Melbourne, Australia, with the aim of finding the technique best suited for application to psoriasis images. We investigate the effect of different colour transformations on skin detection performance. In this respect, explicit skin thresholding is evaluated with three different decision boundaries (CbCr, HS and rgHSV). A histogram-based Bayesian classifier is applied to extract skin probability maps (SPMs) for different colour channels. This is then followed by using different approaches to find a binary skin map (SM) image from the SPMs. The approaches used include a binary decision tree (DT) and Otsu's thresholding. Finally, a set of morphological operations is implemented to refine the resulting SM image. The paper provides detailed analysis and comparison of the performance of the Bayesian classifier in five different colour spaces (YCbCr, HSV, RGB, XYZ and CIELab). The results show that the histogram-based Bayesian classifier is more effective than explicit thresholding when applied to psoriasis images. It is also found that the CbCr decision boundary outperforms HS and rgHSV. Another finding is that the SPMs of the Cb, Cr, H and B-CIELab colour bands yield the best SMs for psoriasis images. In this study, we used a set of 100 psoriasis images for training and testing the presented methods. True Positive (TP) and True Negative (TN) are used as statistical evaluation measures.
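
    A hedged sketch of a histogram-based Bayesian skin classifier restricted to the CbCr plane, as compared in the study; the bin count, the prior and the training-mask interface are assumptions, and the input images are assumed to be RGB floats in [0, 1].

```python
# Fit skin / non-skin CbCr histograms from labelled images, then produce a
# posterior skin probability map (SPM) for a new image.
import numpy as np
from skimage import color

def fit_histograms(rgb_images, skin_masks, bins=32):
    skin_hist = np.zeros((bins, bins))
    nonskin_hist = np.zeros((bins, bins))
    for rgb, mask in zip(rgb_images, skin_masks):
        cbcr = color.rgb2ycbcr(rgb)[..., 1:]                     # keep Cb, Cr
        idx = np.clip((cbcr / 256 * bins).astype(int), 0, bins - 1)
        for hist, sel in ((skin_hist, mask), (nonskin_hist, ~mask)):
            np.add.at(hist, (idx[..., 0][sel], idx[..., 1][sel]), 1)
    return skin_hist / skin_hist.sum(), nonskin_hist / nonskin_hist.sum()

def skin_probability_map(rgb, skin_hist, nonskin_hist, prior=0.5, bins=32):
    cbcr = color.rgb2ycbcr(rgb)[..., 1:]
    idx = np.clip((cbcr / 256 * bins).astype(int), 0, bins - 1)
    p_skin = skin_hist[idx[..., 0], idx[..., 1]] * prior
    p_bg = nonskin_hist[idx[..., 0], idx[..., 1]] * (1 - prior)
    return p_skin / (p_skin + p_bg + 1e-12)                      # posterior SPM
```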

  19. Automatic content-based analysis of georeferenced image data: Detection of Beggiatoa mats in seafloor video mosaics from the Håkon Mosby Mud Volcano

    NASA Astrophysics Data System (ADS)

    Jerosch, K.; Lüdtke, A.; Schlüter, M.; Ioannidis, G. T.

    2007-02-01

    The combination of new underwater technology such as remotely operated vehicles (ROVs), high-resolution video imagery, and software to compute georeferenced mosaics of the seafloor provides new opportunities for marine geological or biological studies and applications in offshore industry. Even during single surveys by ROVs or towed systems, large numbers of images are compiled. While these underwater techniques are now well-engineered, there is still a lack of methods for the automatic analysis of the acquired image data. During ROV dives more than 4200 georeferenced video mosaics were compiled for the Håkon Mosby Mud Volcano (HMMV). Mud volcanoes such as HMMV are considered significant source locations for methane and are characterised by unique chemoautotrophic communities such as Beggiatoa mats. For the detection and quantification of the spatial distribution of Beggiatoa mats an automated image analysis technique was developed, which applies watershed transformation and relaxation-based labelling of pre-segmented regions. Comparison of the data derived by visual inspection of 2840 video images with the automated image analysis revealed similarities with a precision better than 90%. We consider this a step towards a time-efficient and accurate analysis of seafloor images for computation of geochemical budgets and identification of habitats at the seafloor.
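
    A hedged sketch of watershed segmentation of bright mat-like regions in a greyscale mosaic; the Otsu threshold and the distance-transform marker step are generic choices, not the authors' exact pipeline with relaxation-based labelling.

```python
# Detect bright regions, derive markers from the distance transform, and
# apply watershed segmentation with scikit-image.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def detect_mats(gray):
    mask = gray > threshold_otsu(gray)                   # candidate bright regions
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=5, labels=ndi.label(mask)[0])
    markers = np.zeros_like(gray, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=mask)
    coverage = mask.mean()                               # fraction of mosaic covered
    return labels, coverage
```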

  20. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule-base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
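
    The entropy-reduction argument can be illustrated with a simple horizontal DPCM remapping on synthetic data; the remapping choice and the data are assumptions for demonstration only.

```python
# First-order entropy before and after differential pulse code modulation (DPCM):
# smooth imagery usually has much lower entropy after remapping to differences.
import numpy as np

def entropy_bits(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(3)
row = np.cumsum(rng.integers(-2, 3, 512))                 # smooth synthetic scanline
image = np.tile(row, (256, 1)).astype(np.int32)

dpcm = np.diff(image, axis=1, prepend=image[:, :1])       # horizontal DPCM remapping
print("raw entropy: ", entropy_bits(image), "bits/pixel")
print("DPCM entropy:", entropy_bits(dpcm), "bits/pixel")
```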

  1. Using a normalization 3D model for automatic clinical brain quantitative analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping

    2003-05-01

    Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain, and has been broadly used in diagnosing brain disorders by clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied for realigning functional medical images to standard structural medical images. Then, the standard 3D brain model that shows well-defined brain regions was used, replacing the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method are in agreement with the clinical diagnosis evaluation score, with less than 3% error on average. To sum up, the method automatically obtains precise VOI information from the well-defined standard 3D brain model, sparing the manual drawing of ROIs slice by slice from structural medical images required by the traditional procedure. That is, the method not only provides precise analysis results, but also improves the processing rate for large volumes of medical images in clinical use.
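
    A hedged sketch of the mutual-information similarity measure that drives this kind of functional-to-structural registration, estimated from a joint histogram; the bin count is an assumption.

```python
# Joint-histogram estimate of mutual information between two images; a
# registration optimizer would maximize this over candidate alignments.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(9)
fixed = rng.random((64, 64))
print(mutual_information(fixed, fixed))                 # high: identical images
print(mutual_information(fixed, rng.random((64, 64))))  # near zero: unrelated images
```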

  2. Targeted and untargeted-metabolite profiling to track the compositional integrity of ginger during processing using digitally-enhanced HPTLC pattern recognition analysis.

    PubMed

    Ibrahim, Reham S; Fathy, Hoda

    2018-03-30

    Tracking the impact of commonly applied post-harvesting and industrial processing practices on the compositional integrity of ginger rhizome was implemented in this work. Untargeted metabolite profiling was performed using a digitally-enhanced HPTLC method in which the chromatographic fingerprints were extracted using ImageJ software and then analysed with multivariate Principal Component Analysis (PCA) for pattern recognition. A targeted approach was applied using a new, validated, simple and fast HPTLC image analysis method for simultaneous quantification of the officially recognized markers 6-, 8-, 10-gingerol and 6-shogaol, in conjunction with chemometric Hierarchical Clustering Analysis (HCA). The results of both targeted and untargeted metabolite profiling revealed that the peeling, drying and storage employed during processing have a great influence on the ginger chemo-profile, so the different forms of processed ginger should not be used interchangeably. Moreover, it is deemed necessary to consider the holistic metabolic profile for comprehensive evaluation of ginger during processing. Copyright © 2018. Published by Elsevier B.V.

  3. Amplitude image processing by diffractive optics.

    PubMed

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, like low-pass or high-pass filtering, is carried out using diffractive optics elements (DOE), since this allows operation on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we analyse the performance of amplitude image processing. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also check by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.

  4. Developing the Quantitative Histopathology Image Ontology (QHIO): A case study using the hot spot detection problem.

    PubMed

    Gurcan, Metin N; Tomaszewski, John; Overton, James A; Doyle, Scott; Ruttenberg, Alan; Smith, Barry

    2017-02-01

    Interoperability across data sets is a key challenge for quantitative histopathological imaging. There is a need for an ontology that can support effective merging of pathological image data with associated clinical and demographic data. To foster organized, cross-disciplinary, information-driven collaborations in the pathological imaging field, we propose to develop an ontology to represent imaging data and methods used in pathological imaging and analysis, and call it Quantitative Histopathological Imaging Ontology - QHIO. We apply QHIO to breast cancer hot-spot detection with the goal of enhancing reliability of detection by promoting the sharing of data between image analysts. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Numerical simulation and analysis of accurate blood oxygenation measurement by using optical resolution photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Yu, Tianhao; Li, Qian; Li, Lin; Zhou, Chuanqing

    2016-10-01

    The accuracy of the photoacoustic signal is the crux of measuring oxygen saturation in functional photoacoustic imaging; it is influenced by factors such as defocus of the laser beam, the curved shape of large vessels and the nonlinear saturation effect of optical absorption in biological tissues. We apply a Monte Carlo model to simulate energy deposition in tissues and obtain photoacoustic signals reaching a simulated focused surface detector in order to investigate the corresponding influence of these factors. We also apply compensation to photoacoustic imaging of in vivo cat cerebral cortex blood vessels, in which signals from different lateral positions of vessels are corrected based on the simulation results; this correction improves the smoothness and accuracy of the oxygen saturation results.

  6. An automated wide-field time-gated optically sectioning fluorescence lifetime imaging multiwell plate reader for high-content analysis of protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Alibhai, Dominic; Kumar, Sunil; Kelly, Douglas; Warren, Sean; Alexandrov, Yuriy; Munro, Ian; McGinty, James; Talbot, Clifford; Murray, Edward J.; Stuhmeier, Frank; Neil, Mark A. A.; Dunsby, Chris; French, Paul M. W.

    2011-03-01

    We describe an optically-sectioned FLIM multiwell plate reader that combines Nipkow microscopy with wide-field time-gated FLIM, and its application to high content analysis of FRET. The system acquires sectioned FLIM images in <10 s/well, requiring only ~11 minutes to read a 96 well plate of live cells expressing fluorescent protein. It has been applied to study the formation of immature HIV virus like particles (VLPs) in live cells by monitoring Gag-Gag protein interactions using FLIM FRET of HIV-1 Gag transfected with CFP or YFP. VLP formation results in FRET between closely packed Gag proteins, as confirmed by our FLIM analysis that includes automatic image segmentation.

  7. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    PubMed

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
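
    A hedged, much simplified sketch of the segment-statistics idea behind SFT: tile the image, treat the low-intensity tiles as background, and derive a threshold from their statistics. The tile size and the 3-sigma rule are assumptions, not the published parameters.

```python
# Tile the image, estimate background statistics from low-intensity tiles,
# and threshold the full image relative to that background.
import numpy as np

def sft_like_threshold(image, tile=16):
    h, w = image.shape
    tiles = image[:h - h % tile, :w - w % tile].reshape(
        h // tile, tile, w // tile, tile).swapaxes(1, 2).reshape(-1, tile * tile)
    means = tiles.mean(axis=1)
    background = tiles[means < np.median(means)]          # low-intensity tiles
    thresh = background.mean() + 3 * background.std()     # assumed 3-sigma rule
    return image > thresh, thresh

rng = np.random.default_rng(5)
demo = rng.normal(100, 5, (256, 256))
demo[100:120, 100:120] += 60                              # synthetic bright signal
signal_mask, t = sft_like_threshold(demo)
print("threshold:", round(t, 1), "signal pixels:", int(signal_mask.sum()))
```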

  8. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978

  9. Fractal analysis of INSAR and correlation with graph-cut based image registration for coastline deformation analysis: post seismic hazard assessment of the 2011 Tohoku earthquake region

    NASA Astrophysics Data System (ADS)

    Dutta, P. K.; Mishra, O. P.

    2012-04-01

    Satellite imagery of the 2011 earthquake off the Pacific coast of Tohoku has provided an opportunity to conduct image transformation analyses employing multi-temporal image retrieval techniques. In this study, we used a new image segmentation algorithm to image coastline deformation by adopting a graph-cut energy minimization framework. Comprehensive analysis of the available InSAR images using coastline deformation analysis helped extract disaster information for the affected region of the 2011 Tohoku tsunamigenic earthquake source zone. We attempted to correlate fractal analysis of seismic clustering behavior with image processing analogies, and our observations suggest that an increase in the fractal dimension distribution is associated with clustering of events, which may determine the level of devastation of the region. The graph-cut-based image registration technique helps us detect the devastation along the Tohoku coastline through changes in pixel intensity, providing a regional segmentation of the change in the coastal boundary after the tsunami. The study applies transformation parameters to the remotely sensed images by manually segmenting the images and recovering the translation parameters from two images that differ by rotation. Based on the satellite image analysis through image segmentation, it is found that an area of 0.997 sq km in the Honshu region was the maximum damage zone, localized in the coastal belt of the NE Japan forearc region. Analysis in MATLAB suggests that the proposed graph-cut algorithm is robust and more accurate than other image registration methods. The analysis shows that the method can give a realistic estimate of the recovered deformation fields, in pixels, corresponding to coastline change, which may help formulate post-disaster needs assessment strategies for coastal belts damaged by strong shaking and tsunamis worldwide under disaster risk mitigation programs.
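
    A hedged sketch of a box-counting estimate of fractal dimension for a binary map of event locations or coastline pixels; the input is synthetic and the scale set is an assumption.

```python
# Box-counting fractal dimension: count occupied boxes at several scales and
# fit the slope of log(count) against log(1/scale).
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h, w = binary.shape
        view = binary[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(view.any(axis=(1, 3)).sum())        # occupied boxes at scale s
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(4)
demo = rng.random((256, 256)) > 0.97                       # sparse synthetic event map
print(box_counting_dimension(demo))
```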

  10. Dual-energy x-ray image decomposition by independent component analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Yifeng; Jiang, Dazong; Zhang, Feng; Zhang, Dengfu; Lin, Gang

    2001-09-01

    The spatial distributions of bone and soft tissue in the human body are separated by independent component analysis (ICA) of dual-energy x-ray images. It is because of the dual-energy imaging model's conformity to the ICA model that we can apply this method: (1) the absorption in the body is mainly caused by photoelectric absorption and Compton scattering; (2) they take place simultaneously but are mutually independent; and (3) for monochromatic x-ray sources the total attenuation is a linear combination of these two absorption mechanisms. Compared with the conventional method, the proposed one needs no a priori information about the exact x-ray energies used for imaging, while the results of the separation agree well with the conventional one.
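
    A hedged sketch of the separation idea: treat the two dual-energy images as linear mixtures of two independent sources and unmix them with FastICA. The sources and mixing coefficients are synthetic, and ICA recovers the sources only up to order and scale.

```python
# Unmix two synthetic "dual-energy" images into two independent source maps.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
bone = rng.laplace(size=(128, 128))          # stand-ins for the two source maps
soft = rng.laplace(size=(128, 128))

low_kv = 0.7 * bone + 0.4 * soft             # assumed mixing at two x-ray energies
high_kv = 0.3 * bone + 0.8 * soft

X = np.stack([low_kv.ravel(), high_kv.ravel()], axis=1)
sources = FastICA(n_components=2, random_state=0).fit_transform(X)
est_a, est_b = (s.reshape(128, 128) for s in sources.T)   # order/scale are arbitrary
```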

  11. Automatic CT Brain Image Segmentation Using Two Level Multiresolution Mixture Model of EM

    NASA Astrophysics Data System (ADS)

    Jiji, G. Wiselin; Dehmeshki, Jamshid

    2014-04-01

    Tissue classification in computed tomography (CT) brain images is an important issue in the analysis of several brain dementias. A combination of different approaches for the segmentation of brain images is presented in this paper. A multi-resolution algorithm is proposed, along with scaled versions using a Gaussian filter and wavelet analysis, that extends the expectation-maximization (EM) algorithm. It is found to be less sensitive to noise and to give more accurate image segmentation than traditional EM. Moreover, the algorithm has been applied to 20 CT data sets of the human brain and compared with other works. The segmentation results show the advantages of the proposed work, achieving more promising results, and the results have been validated by doctors.
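
    A hedged sketch of the EM step underlying this kind of tissue classification, fitting a three-component Gaussian mixture to intensity values; the class count and the synthetic intensities are assumptions.

```python
# EM-based tissue classification via a Gaussian mixture over voxel intensities.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
intensities = np.concatenate([rng.normal(20, 5, 4000),    # e.g. CSF-like class
                              rng.normal(35, 4, 5000),    # grey-matter-like class
                              rng.normal(45, 4, 5000)])   # white-matter-like class

gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities.reshape(-1, 1))
labels = gmm.predict(intensities.reshape(-1, 1))          # per-voxel tissue labels
print(np.sort(gmm.means_.ravel()))                        # estimated class means
```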

  12. Accuracy of a remote quantitative image analysis in the whole slide images.

    PubMed

    Słodkowska, Janina; Markiewicz, Tomasz; Grala, Bartłomiej; Kozłowski, Wojciech; Papierz, Wielisław; Pleskacz, Katarzyna; Murawski, Piotr

    2011-03-30

    The rationale for choosing a remote quantitative method supporting a diagnostic decision requires some empirical studies and knowledge of scenarios including valid telepathology standards. The tumours of the central nervous system [CNS] are graded on the basis of morphological features and the Ki-67 labelling index [Ki-67 LI]. Various methods have been applied for Ki-67 LI estimation. Recently we have introduced the Computerized Analysis of Medical Images [CAMI] software for automated Ki-67 LI counting in digital images. The aim of our study was to explore the accuracy and reliability of a remote assessment of Ki-67 LI with CAMI software applied to whole slide images [WSI]. The WSI representing CNS tumours (18 meningiomas and 10 oligodendrogliomas) were stored on the server of the Warsaw University of Technology. The digital copies of entire glass slides were created automatically by the Aperio ScanScope CS with a 20x or 40x objective. Aperio's ImageScope software provided functionality for remote viewing of WSI. The Ki-67 LI assessment was carried out within 2 out of 20 selected fields of view (objective 40x) representing the highest labelling areas in each WSI. The Ki-67 LI counting was performed by 3 different methods: 1) manual reading in the light microscope (LM), 2) automated counting with CAMI software on the digital images (DI), and 3) remote quantitation on the WSIs (the WSI method). The quality of the WSIs and the technical efficiency of the on-line system were analysed. A comparative statistical analysis was performed for the results obtained by the 3 methods of Ki-67 LI counting. The preliminary analysis showed that in 18% of WSIs the results of Ki-67 LI differed from those obtained with the other 2 methods of counting when the quality of the glass slides was below the standard range. The results of our investigations indicate that the remote automated Ki-67 LI analysis performed with the CAMI algorithm on whole slide images of meningiomas and oligodendrogliomas could be successfully used as an alternative to manual reading as well as to digital image quantitation with CAMI software. According to our observations, remote supervision/consultation and training are necessary for the effective use of remote quantitative analysis of WSI.

  13. Reduction and analysis techniques for infrared imaging data

    NASA Technical Reports Server (NTRS)

    Mccaughrean, Mark

    1989-01-01

    Infrared detector arrays are becoming increasingly available to the astronomy community, with a number of array cameras already in use at national observatories, and others under development at many institutions. As the detector technology and imaging instruments grow more sophisticated, more attention is focussed on the business of turning raw data into scientifically significant information. Turning pictures into papers, or equivalently, astronomy into astrophysics, both accurately and efficiently, is discussed. Also discussed are some of the factors that can be considered at each of three major stages; acquisition, reduction, and analysis, concentrating in particular on several of the questions most relevant to the techniques currently applied to near infrared imaging.

  14. Analysis of Orientations of Collagen Fibers by Novel Fiber-Tracking Software

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Rajwa, Bartlomiej; Filmer, David L.; Hoffmann, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennie; Robinson, J. Paul

    2003-12-01

    Recent evidence supports the notion that biological functions of extracellular matrix (ECM) are highly correlated to not only its composition but also its structure. This article integrates confocal microscopy imaging and image-processing techniques to analyze the microstructural properties of ECM. This report describes a two- and three-dimensional fiber middle-line tracing algorithm that may be used to quantify collagen fibril organization. We utilized computer simulation and statistical analysis to validate the developed algorithm. These algorithms were applied to confocal images of collagen gels made with reconstituted bovine collagen type I, to demonstrate the computation of orientations of individual fibers.

  15. Low b-value diffusion-weighted cardiac magnetic resonance imaging: initial results in humans using an optimal time-window imaging approach.

    PubMed

    Rapacchi, Stanislas; Wen, Han; Viallon, Magalie; Grenier, Denis; Kellman, Peter; Croisille, Pierre; Pai, Vinay M

    2011-12-01

    Diffusion-weighted imaging (DWI) using low b-values permits imaging of intravoxel incoherent motion in tissues. However, low b-value DWI of the human heart has been considered too challenging because of additional signal loss due to physiological motion, which reduces both signal intensity and the signal-to-noise ratio (SNR). We address these signal loss concerns by analyzing cardiac motion during a heartbeat to determine the time-window during which cardiac bulk motion is minimal. Using this information to optimize the acquisition of DWI data and combining it with a dedicated image processing approach has enabled us to develop a novel low b-value diffusion-weighted cardiac magnetic resonance imaging approach, which significantly reduces intravoxel incoherent motion measurement bias introduced by motion. Simulations from displacement encoded motion data sets permitted the delineation of an optimal time-window with minimal cardiac motion. A number of single-shot repetitions of low b-value DWI cardiac magnetic resonance imaging data were acquired during this time-window under free-breathing conditions with bulk physiological motion corrected for by using nonrigid registration. Principal component analysis (PCA) was performed on the registered images to improve the SNR, and temporal maximum intensity projection (TMIP) was applied to recover signal intensity from time-fluctuant motion-induced signal loss. This PCATMIP method was validated with experimental data, and its benefits were evaluated in volunteers before being applied to patients. Optimal time-window cardiac DWI in combination with PCATMIP postprocessing yielded significant benefits for signal recovery, contrast-to-noise ratio, and SNR in the presence of bulk motion for both numerical simulations and human volunteer studies. Analysis of mean apparent diffusion coefficient (ADC) maps showed homogeneous values among volunteers and good reproducibility between free-breathing and breath-hold acquisitions. The PCATMIP DWI approach also indicated its potential utility by detecting ADC variations in acute myocardial infarction patients. Studying cardiac motion may provide an appropriate strategy for minimizing the impact of bulk motion on cardiac DWI. Applying PCATMIP image processing improves low b-value DWI and enables reliable analysis of ADC in the myocardium. The use of a limited number of repetitions in a free-breathing mode also enables easier application in clinical conditions.

  16. Feasibility study of a novel miniaturized spectral imaging system architecture in UAV surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Shuyang; Zhou, Tao; Jia, Xiaodong; Cui, Hushan; Huang, Chengjun

    2016-01-01

    Spectral imaging technology is able to analyze the spectral and spatial geometric character of a target at the same time. To break through the limitations imposed by the size, weight and cost of traditional spectral imaging instruments, a novel miniaturized spectral imaging system based on CMOS processing has been introduced to the market. This technology has made it possible to apply spectral imaging on UAV platforms. In this paper, the relevant technology and the related possible applications are presented to implement a quick, flexible and more detailed remote sensing system.

  17. Rock classification based on resistivity patterns in electrical borehole wall images

    NASA Astrophysics Data System (ADS)

    Linek, Margarete; Jungmann, Matthias; Berlage, Thomas; Pechnig, Renate; Clauser, Christoph

    2007-06-01

    Electrical borehole wall images represent grey-level-coded micro-resistivity measurements at the borehole wall. Different scientific methods have been implemented to transform image data into quantitative log curves. We introduce a pattern recognition technique applying texture analysis, which uses second-order statistics based on studying the occurrence of pixel pairs. We calculate so-called Haralick texture features such as contrast, energy, entropy and homogeneity. The supervised classification method is used for assigning characteristic texture features to different rock classes and assessing the discriminative power of these image features. We use classifiers obtained from training intervals to characterize the entire image data set recovered in ODP hole 1203A. This yields a synthetic lithology profile based on computed texture data. We show that Haralick features accurately classify 89.9% of the training intervals. We obtained misclassification for vesicular basaltic rocks. Hence, further image analysis tools are used to improve the classification reliability. We decompose the 2D image signal by the application of wavelet transformation in order to enhance image objects horizontally, diagonally and vertically. The resulting filtered images are used for further texture analysis. This combined classification based on Haralick features and wavelet transformation improved our classification up to a level of 98%. The application of wavelet transformation increases the consistency between standard logging profiles and texture-derived lithology. Texture analysis of borehole wall images offers the potential to facilitate objective analysis of multiple boreholes with the same lithology.
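
    A hedged sketch of the texture-feature step: a grey-level co-occurrence matrix per image patch and Haralick-style features derived from it. The patch size, distances and angles are assumptions, and entropy is computed directly from the co-occurrence matrix since it is not a built-in property.

```python
# Haralick-style texture features from a grey-level co-occurrence matrix (GLCM).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_features(patch_uint8):
    glcm = graycomatrix(patch_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = {prop: graycoprops(glcm, prop).mean()
             for prop in ("contrast", "energy", "homogeneity")}
    p = glcm.mean(axis=3)[..., 0]                  # average over angles
    p = p / p.sum()
    feats["entropy"] = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return feats

patch = (np.random.default_rng(8).random((64, 64)) * 255).astype(np.uint8)
print(haralick_features(patch))
```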

  18. Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.

    PubMed

    Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B

    2015-09-01

    Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image into different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap increased significantly with normalization, compared with 0.72±0.30 and 0.87±0.11 for the two reference methods. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this increased significantly. The third experiment was detection of tuberculosis-related abnormalities in the lung fields. The average area under the Receiver Operating Curve increased significantly with normalization, compared with 0.72±0.14 and 0.79±0.06 using the reference methods. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
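
    A hedged sketch of the band-energy idea, assuming a simple difference-of-Gaussian decomposition and a single reference energy per band rather than the paper's localized, iterative scaling.

```python
# Decompose an image into band-pass components, scale each band's energy to a
# reference value, and reconstruct the normalized image.
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_bands(image, sigmas=(1, 2, 4, 8), ref_energy=1.0):
    bands, residual = [], image.astype(float)
    for s in sigmas:
        smooth = gaussian_filter(residual, s)
        bands.append(residual - smooth)            # band-pass detail at this scale
        residual = smooth                          # pass the low-pass to the next scale
    out = residual
    for band in bands:
        energy = np.sqrt((band ** 2).mean()) + 1e-9
        out = out + band * (ref_energy / energy)   # scale band energy to the reference
    return out
```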

  19. Development of a control algorithm for the ultrasound scanning robot (NCCUSR) using ultrasound image and force feedback.

    PubMed

    Kim, Yeoun Jae; Seo, Jong Hyun; Kim, Hong Rae; Kim, Kwang Gi

    2017-06-01

    Clinicians who frequently perform ultrasound scanning procedures often suffer from musculoskeletal disorders, arthritis, and myalgias. To minimize their occurrence and to assist clinicians, ultrasound scanning robots have been developed worldwide. Although, to date, there is still no commercially available ultrasound scanning robot, many control methods have been suggested and researched. These control algorithms are either image based or force based. If the ultrasound scanning robot control algorithm were a combination of the two, it could benefit from the advantages of each one. However, there are no existing control methods for ultrasound scanning robots that combine force control and image analysis. Therefore, in this work, a control algorithm is developed for an ultrasound scanning robot using force feedback and ultrasound image analysis. A manipulator-type ultrasound scanning robot named 'NCCUSR' is developed and a control algorithm for this robot is suggested and verified. First, conventional hybrid position-force control is implemented for the robot, and the hybrid position-force control algorithm is then combined with ultrasound image analysis to fully control the robot. The control method is verified using a thyroid phantom. It was found that the proposed algorithm can be applied to control the ultrasound scanning robot, and experimental outcomes suggest that the images acquired using the proposed control method can yield a rating score that is equivalent to images acquired directly by the clinicians. The proposed control method can be applied to control the ultrasound scanning robot. However, more work must be completed to verify the proposed control method in order for it to become clinically feasible. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Colour image segmentation using unsupervised clustering technique for acute leukemia images

    NASA Astrophysics Data System (ADS)

    Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.

    2015-05-01

    Colour image segmentation has become more popular in computer vision due to its importance in most medical analysis tasks. This paper proposes a comparison between different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models that will be used in order to segment acute leukemia images. First, partial contrast stretching is applied to the leukemia images to increase the visual aspect of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied to the various colour components of the RGB and HSI colour models for the purpose of segmenting the blast cells from the red blood cells and background regions in the leukemia image. The different colour components of the RGB and HSI colour models have been analyzed in order to identify the colour component that gives the best segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model has proven to be the best at segmenting the nucleus of the blast cells in acute leukemia images as compared to the other colour components of the RGB and HSI colour models.
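
    A hedged sketch using standard k-means (not the paper's moving k-means variant) on the saturation channel of an RGB image; the cluster count of three is an assumption.

```python
# Cluster the HSI/HSV saturation channel into a fixed number of groups
# (e.g. nucleus, cell, background) with k-means.
import numpy as np
from skimage import color
from sklearn.cluster import KMeans

def segment_by_saturation(rgb, n_clusters=3):
    saturation = color.rgb2hsv(rgb)[..., 1]
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        saturation.reshape(-1, 1))
    return labels.reshape(saturation.shape)

demo = np.random.default_rng(7).random((64, 64, 3))   # synthetic RGB stand-in
print(np.unique(segment_by_saturation(demo)))
```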

  1. Digital mammography--DQE versus optimized image quality in clinical environment: an on site study

    NASA Astrophysics Data System (ADS)

    Oberhofer, Nadia; Fracchetti, Alessandro; Springeth, Margareth; Moroder, Ehrenfried

    2010-04-01

    The intrinsic quality of the detection system of 7 different digital mammography units (5 direct radiography, DR; 2 computed radiography, CR), expressed by DQE, has been compared with their image quality/dose performances in clinical use. DQE measurements followed IEC 62220-1-2 using a tungsten test object for MTF determination. For image quality assessment two different methods have been applied: 1) measurement of contrast-to-noise ratio (CNR) according to the European guidelines and 2) contrast-detail (CD) evaluation. The latter was carried out with the phantom CDMAM ver. 3.4 and the commercial software CDMAM Analyser ver. 1.1 (both Artinis) for automated image analysis. The overall image quality index IQFinv proposed by the software has been validated. Correspondence between the two methods has been shown by establishing a linear correlation between CNR and IQFinv. All systems were optimized with respect to image quality and average glandular dose (AGD) within the constraints of automatic exposure control (AEC). For each unit, a good image quality level was defined by means of CD analysis, and the corresponding CNR value was taken as the target value. The goal was to achieve, for different PMMA-phantom thicknesses, constant image quality (the CNR target value) at minimum dose. All DR systems exhibited higher DQE and significantly better image quality compared to CR systems. Generally, switching, where available, to a target/filter combination with an x-ray spectrum of higher mean energy permitted dose savings at equal image quality. However, several systems did not allow modification of the AEC in order to apply the optimal radiographic technique in clinical use. The best image quality/dose ratio was achieved by a unit with an a-Se detector and W anode only recently available on the market.

  2. Image analysis of skin color heterogeneity focusing on skin chromophores and the age-related changes in facial skin.

    PubMed

    Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji

    2015-05-01

    Heterogeneity with respect to skin color tone is one of the key factors in visual perception of facial attractiveness and age. However, there have been few studies on quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop image evaluation methods for skin color heterogeneity. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in Asian and Caucasian skin. Furthermore, it was found that the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution.

    PubMed

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

    This paper presents a new procedure for automatic extraction of the blood vessels and the optic disk (OD) in fundus fluorescein angiograms (FFA). In order to extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and at different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained using a simple region growing algorithm applied iteratively, which merges the centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After extracting the blood vessels from the FFA image, candidate regions for the OD are enhanced by removing the blood vessels from the FFA image using multi-structure-element morphology and modification of the FDCT coefficients. Then, a Canny edge detector and the Hough transform are applied to the reconstructed image to extract the boundary of the candidate regions. In the next step, the information of the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction.
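
    As a rough illustration of the Hessian-eigenvalue step used for detecting vessel-like structures, the sketch below smooths an image with a Gaussian, builds the Hessian from finite differences, and thresholds the larger-magnitude eigenvalue. In the paper this analysis is applied to FDCT directional sub-bands and combined with first-derivative information; here a plain synthetic image and all names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_vessel_response(image, sigma=2.0):
    """Return the larger-magnitude Hessian eigenvalue at every pixel.

    For dark, tube-like structures on a bright background one eigenvalue is
    close to zero and the other is large and positive, so thresholding this
    response highlights candidate centerline pixels.
    """
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    gyy, gyx = np.gradient(gy)     # second derivatives
    gxy, gxx = np.gradient(gx)

    # Eigenvalues of the 2x2 symmetric Hessian [[gxx, gxy], [gxy, gyy]].
    trace = gxx + gyy
    disc = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2)
    lam1 = 0.5 * (trace + disc)
    lam2 = 0.5 * (trace - disc)
    return np.where(np.abs(lam1) > np.abs(lam2), lam1, lam2)

# Usage on a synthetic image containing a dark horizontal "vessel".
img = np.full((128, 128), 200.0)
img[60:64, :] = 120.0
response = hessian_vessel_response(img, sigma=2.0)
candidates = response > 0.5 * response.max()
print("candidate vessel pixels:", int(candidates.sum()))
```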

  4. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution

    PubMed Central

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

    This paper presents a new procedure for automatic extraction of the blood vessels and the optic disk (OD) in fundus fluorescein angiograms (FFA). In order to extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and at different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained using a simple region growing algorithm applied iteratively, which merges the centerline images with the contents of images resulting from a modified top-hat transform followed by bit-plane slicing. After extracting the blood vessels from the FFA image, candidate regions for the OD are enhanced by removing the blood vessels from the FFA image using multi-structure-element morphology and modification of the FDCT coefficients. Then, a Canny edge detector and the Hough transform are applied to the reconstructed image to extract the boundary of the candidate regions. In the next step, the information of the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction. PMID:26284170

  5. Component analysis and synthesis of dark circles under the eyes using a spectral image

    NASA Astrophysics Data System (ADS)

    Akaho, Rina; Hirose, Misa; Ojima, Nobutoshi; Igarashi, Takanori; Tsumura, Norimichi

    2017-02-01

    This paper proposes to apply nonlinear estimation of chromophore concentrations (melanin, oxy-hemoglobin, deoxy-hemoglobin) and shading to real hyperspectral images of skin. Skin reflectance is captured at wavelengths between 400 nm and 700 nm by a hyperspectral scanner, and five-band wavelength data are selected from the skin reflectance. Using a cubic function obtained by Monte Carlo simulation of light transport in multi-layered tissue, the chromophore concentrations and shading are determined by minimizing the residual sum of squares of the reflectance. When dark circles appear under the eyes, the subject looks tired and older; therefore, women apply cosmetic care to reduce dark circles. However, the relationship between color and chromophore distribution in dark circles is not well understood. Here, we applied the four-component separation method to hyperspectral images of dark circles, and the separated components were modulated and synthesized. The synthesized images were evaluated to determine which components contribute to the appearance of dark circles. The result of the evaluation shows that the cause of the dark circles for one subject was mainly melanin pigmentation.
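
    A minimal sketch of the per-pixel non-linear least-squares step: chromophore concentrations and shading are adjusted until a forward model reproduces the measured five-band reflectance. The cubic forward model below uses random placeholder coefficients instead of the Monte-Carlo-derived function of the paper, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical cubic forward model: components -> 5-band reflectance.
# In the paper this function is derived from Monte Carlo simulation of light
# transport in multi-layered tissue; random coefficients stand in for it here.
rng = np.random.default_rng(1)
COEFFS = rng.normal(scale=0.05, size=(5, 4, 3))   # (band, component, power)

def forward_model(components):
    """Predicted reflectance in 5 bands for [melanin, oxy-Hb, deoxy-Hb, shading]."""
    powers = np.stack([components, components ** 2, components ** 3], axis=-1)
    return 0.5 + np.einsum('bcp,cp->b', COEFFS, powers)

def fit_pixel(measured_reflectance):
    """Estimate the four components by minimizing the residual sum of squares."""
    result = least_squares(
        lambda c: forward_model(c) - measured_reflectance,
        x0=np.full(4, 0.1),
        bounds=(0.0, 2.0),
    )
    return result.x

# Usage: fit synthetic reflectance generated from known component values.
true_components = np.array([0.3, 0.2, 0.1, 0.5])
estimate = fit_pixel(forward_model(true_components))
print(np.round(estimate, 3))
```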

  6. Tomographic phase analysis to detect the site of accessory conduction pathway in Wolff-Parkinson-White syndrome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakajima, K.; Bunko, H.; Tada, A.

    1984-01-01

    Phase analysis has been applied to Wolff-Parkinson-White syndrome (WPW) to detect the site of the accessory conduction pathway (ACP); however, planar phase analysis is limited in estimating the precise location of the ACP. In this study, the authors applied phase analysis to gated blood pool tomography. Twelve patients with WPW who underwent epicardial mapping and surgical division of the ACP were studied by both gated emission computed tomography (GECT) and a routine gated blood pool study (GBPS). The GBPS was performed with Tc-99m red blood cells in multiple projections: modified left anterior oblique, right anterior oblique and/or left lateral views. In GECT, short axial, horizontal and vertical long axial blood pool images were reconstructed. Phase analysis was performed using the fundamental frequency of the Fourier transform in both the GECT and GBPS images, and abnormal initial contractions on both the planar and tomographic phase analyses were compared with the location of the surgically confirmed ACPs. In planar phase analysis, an abnormal initial phase was identified in 7 out of 12 (58%) patients, while in tomographic phase analysis, the localization of the ACP was predicted in 11 out of 12 (92%) patients. Tomographic phase analysis was superior to planar phase images in 8 out of 12 patients for estimating the location of the ACP. Phase analysis by GECT avoids overlap of the blood pools of the cardiac chambers and has the advantage of identifying the propagation of phase three-dimensionally. Tomographic phase analysis is a good adjunctive method for estimating the site of the ACP in patients with WPW.
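
    A minimal sketch of the Fourier fundamental-frequency phase analysis applied to a gated image sequence: each pixel's time-activity curve is reduced to the phase and amplitude of its first harmonic. The frame count, array names and the synthetic two-region example are illustrative, not patient data.

```python
import numpy as np

def phase_and_amplitude(frames):
    """Phase (degrees) and amplitude images from gated frames of shape (T, H, W).

    Uses the fundamental frequency (first harmonic) of the discrete Fourier
    transform along the time axis, as in conventional phase analysis.
    """
    spectrum = np.fft.fft(frames, axis=0)
    fundamental = spectrum[1]                       # first harmonic per pixel
    phase = np.degrees(np.angle(fundamental)) % 360.0
    amplitude = 2.0 * np.abs(fundamental) / frames.shape[0]
    return phase, amplitude

# Usage: a 16-frame cycle in which the right half contracts 90 degrees late.
T, H, W = 16, 64, 64
t = np.arange(T) / T
frames = np.zeros((T, H, W))
frames[:, :, :32] = 100 + 10 * np.cos(2 * np.pi * t)[:, None, None]
frames[:, :, 32:] = 100 + 10 * np.cos(2 * np.pi * t - np.pi / 2)[:, None, None]

phase, amp = phase_and_amplitude(frames)
print(phase[0, 0], phase[0, 63])   # the two halves differ by ~90 degrees
```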

  7. Mapping Hydrothermal Alterations in the Muteh Gold Mining Area in Iran by using ASTER satellite Imagery data

    NASA Astrophysics Data System (ADS)

    Asadi Haroni, Hooshang; Hassan Tabatabaei, Seyed

    2016-04-01

    The Muteh gold mining area is located 160 km NW of the city of Isfahan. Gold mineralization is of mesothermal type and is associated with silicic, sericitic and carbonate alterations as well as with hematite and goethite. Image processing and interpretation were applied to ASTER satellite imagery covering about 400 km2 of the Muteh gold mining area to identify hydrothermal alterations and iron oxides associated with gold mineralization. After applying preprocessing methods such as radiometric and geometric corrections, the image processing methods of Principal Components Analysis (PCA), Least Square Fit (Ls-Fit) and Spectral Angle Mapper (SAM) were applied to the ASTER data to identify hydrothermal alterations and iron oxides. In this research, reference spectra of minerals such as chlorite, hematite, clay minerals and phengite, identified from laboratory spectral analysis of collected samples, were used to map the hydrothermal alterations. Finally, the identified hydrothermal alterations and iron oxides were validated by visiting and sampling some of the mapped hydrothermal alterations.
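
    A minimal sketch of the Spectral Angle Mapper (SAM) step: each pixel spectrum is compared with a laboratory reference spectrum through the angle between the two vectors, and small angles are mapped as the target alteration mineral. The toy image cube, reference spectrum and threshold are illustrative assumptions.

```python
import numpy as np

def spectral_angle_map(cube, reference):
    """Angle (radians) between each pixel spectrum and a reference spectrum.

    cube: array of shape (H, W, B); reference: array of shape (B,).
    Small angles indicate pixels spectrally similar to the reference mineral.
    """
    ref = reference / np.linalg.norm(reference)
    norms = np.clip(np.linalg.norm(cube, axis=-1), 1e-12, None)
    cosines = (cube @ ref) / norms
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# Usage with a toy 3-band cube and a "chlorite-like" reference spectrum.
rng = np.random.default_rng(2)
cube = rng.uniform(0.1, 0.9, size=(50, 50, 3))
reference = np.array([0.2, 0.6, 0.3])
angles = spectral_angle_map(cube, reference)
alteration_mask = angles < 0.1    # pixels mapped as the target alteration
print("mapped pixels:", int(alteration_mask.sum()))
```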

  8. Optical imaging using spatial grating effects in ferrofluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dave, Vishakha; Virpura, Hiral; Patel, Rajesh, E-mail: rjp@mkbhavuni.edu.in

    2015-06-24

    Under the effect of a magnetic field, the magnetic nanoparticles of a ferrofluid tend to align in the direction of the field. This alignment of the magnetic nanoparticles behaves as a spatial grating and diffracts light when the light propagates perpendicular to the direction of the applied magnetic field. The chains of magnetic nanoparticles produce a linear series of fringes like those observed with a grating/wire. Under an applied magnetic field, the circular beam of light transforms into a prominent diffraction line in the direction perpendicular to the applied magnetic field. This diffracted light illuminates a larger area on the screen. This behavior can be used for magneto-controlled illumination of the object and image analysis.

  9. Analysis of the Growth Process of Neural Cells in Culture Environment Using Image Processing Techniques

    NASA Astrophysics Data System (ADS)

    Mirsafianf, Atefeh S.; Isfahani, Shirin N.; Kasaei, Shohreh; Mobasheri, Hamid

    Here we present an approach for processing neural cell images to analyze their growth process in a culture environment. We applied several image processing techniques for: 1- environmental noise reduction, 2- neural cell segmentation, 3- neural cell classification based on their dendrites' growth conditions, and 4- extraction and measurement of neuron features (e.g., cell body area, number of dendrites, axon length, and so on). Due to the large amount of noise in the images, we used feed-forward artificial neural networks to detect edges more precisely.

  10. Velocity landscape correlation resolves multiple flowing protein populations from fluorescence image time series.

    PubMed

    Pandžić, Elvis; Abu-Arish, Asmahan; Whan, Renee M; Hanrahan, John W; Wiseman, Paul W

    2018-02-16

    Molecular, vesicular and organellar flows are of fundamental importance for the delivery of nutrients and essential components used in cellular functions such as motility and division. With recent advances in fluorescence/super-resolution microscopy modalities we can resolve the movements of these objects at higher spatio-temporal resolutions and with better sensitivity. Previously, spatio-temporal image correlation spectroscopy has been applied to map molecular flows by correlation analysis of fluorescence fluctuations in image series. However, an underlying assumption of this approach is that the sampled time windows contain one dominant flowing component. Although this was true for most of the cases analyzed earlier, in some situations two or more different flowing populations can be present in the same spatio-temporal window. We introduce an approach, termed velocity landscape correlation (VLC), which detects and extracts multiple flow components present in a sampled image region via an extension of the correlation analysis of fluorescence intensity fluctuations. First we demonstrate theoretically how this approach works and test the performance of the method with a range of computer-simulated image series with varying flow dynamics. Finally we apply VLC to study variable fluxing of STIM1 proteins on microtubules connected to the plasma membrane of Cystic Fibrosis Bronchial Epithelial (CFBE) cells. Copyright © 2018 Elsevier Inc. All rights reserved.
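
    As a simplified illustration of the underlying idea (not the full VLC analysis), the sketch below estimates a single flow vector by cross-correlating intensity fluctuations of two frames separated by a time lag and reading off the displacement of the correlation peak; the frame data, pixel size and frame time are illustrative.

```python
import numpy as np

def flow_from_lag_correlation(frame_a, frame_b, lag_frames, pixel_size, frame_time):
    """Estimate one flow vector, in units of pixel_size per unit of frame_time.

    Cross-correlates the mean-subtracted intensities of two frames separated
    by `lag_frames`; the correlation peak gives the mean displacement of the
    flowing population over that lag.
    """
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array(peak) - np.array(corr.shape) // 2   # (dy, dx) in pixels
    return shift * pixel_size / (lag_frames * frame_time)

# Usage: a bright particle that moves +6 pixels in x between the two frames.
rng = np.random.default_rng(3)
f0 = rng.normal(0, 0.1, size=(64, 64))
f1 = rng.normal(0, 0.1, size=(64, 64))
f0[30, 20] += 5.0
f1[30, 26] += 5.0
v = flow_from_lag_correlation(f0, f1, lag_frames=2, pixel_size=0.1, frame_time=0.05)
print(v)   # approximately [0., 6.] in pixel_size / frame_time units
```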

  11. Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.

    2012-03-01

    Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg, Germany. The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods, at competitively faster speed, on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. Its efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in automated analysis of retinal images.

  12. Using SAR Interferograms and Coherence Images for Object-Based Delineation of Unstable Slopes

    NASA Astrophysics Data System (ADS)

    Friedl, Barbara; Holbling, Daniel

    2015-05-01

    This study uses synthetic aperture radar (SAR) interferometric products for the semi-automated identification and delineation of unstable slopes and active landslides. Single-pair interferograms and coherence images are therefore segmented and classified in an object-based image analysis (OBIA) framework. The rule-based classification approach has been applied to landslide-prone areas located in Taiwan and Southern Germany. The semi-automatically obtained results were validated against landslide polygons derived from manual interpretation.

  13. Nuclear Forensics Applications of Principal Component Analysis on Micro X-ray Fluorescence Images

    DTIC Science & Technology

    analysis on quantified micro x-ray fluorescence intensity values. This method is then applied to address goals of nuclear forensics. The first ... researchers in the development and validation of nuclear forensics methods. A method for determining material homogeneity is developed and demonstrated

  14. Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets

    PubMed Central

    Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda

    2013-01-01

    Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626
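
    A minimal sketch of the variable-projection idea behind such global fitting: the shared lifetimes are the only non-linear parameters, and for any trial lifetimes the per-pixel amplitudes follow from one linear least-squares solve. This toy version ignores the instrument response, repetitive excitation and background handling described in the paper, and all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def decay_basis(t, lifetimes):
    """Design matrix of exponential decays, one column per lifetime."""
    return np.exp(-t[:, None] / np.asarray(lifetimes)[None, :])

def projected_residuals(log_lifetimes, t, data):
    """Residuals after projecting out the per-pixel linear amplitudes.

    data has shape (n_time, n_pixels). For the current trial lifetimes the
    optimal amplitudes of every pixel are obtained in a single linear
    least-squares call, so only the few lifetimes remain non-linear unknowns.
    """
    basis = decay_basis(t, np.exp(log_lifetimes))
    amplitudes, *_ = np.linalg.lstsq(basis, data, rcond=None)
    return (data - basis @ amplitudes).ravel()

# Synthetic TCSPC-like data: two global lifetimes, random amplitudes per pixel.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 64)                 # ns
true_tau = np.array([0.8, 3.0])
amps = rng.uniform(0.2, 1.0, size=(2, 500))    # 500 pixels
data = decay_basis(t, true_tau) @ amps
data += rng.normal(0, 0.01, size=data.shape)

fit = least_squares(projected_residuals, x0=np.log([0.5, 5.0]), args=(t, data))
print("recovered lifetimes (ns):", np.round(np.exp(fit.x), 2))
```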

  15. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
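
    A minimal sketch of wavelet-based texture features of the kind such a system could use, assuming the PyWavelets package (pywt); the wavelet choice, decomposition level and synthetic patches are illustrative, and a maximum-likelihood (Gaussian) classifier trained on labelled samples would then assign each feature vector to a texture category.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_energy_features(image, wavelet="db2", level=2):
    """Normalized sub-band energies from a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet=wavelet, level=level)
    energies = [np.mean(coeffs[0] ** 2)]
    for (ch, cv, cd) in coeffs[1:]:
        energies += [np.mean(ch ** 2), np.mean(cv ** 2), np.mean(cd ** 2)]
    feats = np.array(energies)
    return feats / feats.sum()

# Usage with synthetic "soil surface" patches of different roughness: the
# coarse patch concentrates its energy at low frequencies, the fine one does not.
rng = np.random.default_rng(5)
coarse = rng.normal(size=(16, 16)).repeat(4, axis=0).repeat(4, axis=1)
fine = rng.normal(size=(64, 64))
print(wavelet_energy_features(coarse).round(3))
print(wavelet_energy_features(fine).round(3))
```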

  16. Matrix Approach of Seismic Wave Imaging: Application to Erebus Volcano

    NASA Astrophysics Data System (ADS)

    Blondel, T.; Chaput, J.; Derode, A.; Campillo, M.; Aubry, A.

    2017-12-01

    This work aims at extending to seismic imaging a matrix approach to wave propagation in heterogeneous media, previously developed in acoustics and optics. More specifically, we will apply this approach to the imaging of the Erebus volcano in Antarctica. Volcanoes are among the most challenging media to explore seismically, in light of highly localized and abrupt variations in density and wave velocity, extreme topography, extensive fractures, and the presence of magma. In this strongly scattering regime, conventional imaging methods suffer from the multiple scattering of waves. Our approach experimentally relies on the measurement of a reflection matrix associated with an array of geophones located at the surface of the volcano. Although these sensors are purely passive, a set of Green's functions can be measured between all pairs of geophones from ice-quake coda cross-correlations (1-10 Hz) and forms the reflection matrix. A set of matrix operations can then be applied for imaging purposes. First, the reflection matrix is projected, at each time of flight, into the ballistic focal plane by applying adaptive focusing at emission and reception. This yields a response matrix associated with an array of virtual geophones located at the ballistic depth. This basis allows us to remove most of the multiple scattering contribution by applying a confocal filter to the seismic data. Iterative time reversal is then applied to detect and image the strongest scatterers. Mathematically, this consists of performing a singular value decomposition of the reflection matrix. The presence of a potential target is assessed from a statistical analysis of the singular values, while the associated eigenvectors yield the corresponding target images. When stacked, the results obtained at each depth give a three-dimensional image of the volcano. While conventional imaging methods lead to a speckle image with no connection to the actual medium's reflectivity, our method enables us to highlight a chimney-shaped structure inside Erebus volcano with true positive rates ranging from 80% to 95%. Although computed independently, the results at each depth are spatially consistent, substantiating their physical reliability. The identified structure is therefore likely to accurately describe the internal structure of the Erebus volcano.
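
    A minimal sketch of the singular-value-decomposition step that iterative time reversal amounts to: on a synthetic response matrix containing weak random multiple scattering plus one strong scatterer, the dominant singular value flags the target and its singular vector serves as the target image. The matrix construction, detection rule and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_receivers, n_sources = 40, 40

# Synthetic response matrix at one depth: diffuse multiple-scattering "noise"
# plus a rank-one contribution outer(receive, transmit) from one scatterer.
noise = (rng.normal(size=(n_receivers, n_sources))
         + 1j * rng.normal(size=(n_receivers, n_sources))) / np.sqrt(n_sources)
receive = np.exp(1j * rng.uniform(0, 2 * np.pi, n_receivers))
transmit = np.exp(1j * rng.uniform(0, 2 * np.pi, n_sources))
reflection = 0.3 * noise + 4.0 * np.outer(receive, transmit) / n_sources

u, s, vh = np.linalg.svd(reflection)

# Simple statistical test: is the largest singular value far above the
# random-matrix background formed by the remaining ones?
is_target = s[0] > s[1:].mean() + 5 * s[1:].std()
print("target detected:", bool(is_target))
target_image = np.abs(u[:, 0])     # first singular vector ~ image of the target
print(target_image[:5].round(2))
```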

  17. Integrating legacy tools and data sources

    DOT National Transportation Integrated Search

    1999-01-01

    Under DARPA and internal funding, Lockheed Martin has been researching information needs profiling to manage information dissemination as applied to logistics, image analysis and exploitation, and battlefield information management. We have demonstra...

  18. Statistical analysis of gravity waves characteristics observed by airglow imaging at Syowa Station (69S, 39E), Antarctica

    NASA Astrophysics Data System (ADS)

    Matsuda, Takashi S.; Nakamura, Takuji; Shiokawa, Kazuo; Tsutsumi, Masaki; Suzuki, Hidehiko; Ejiri, Mitsumu K.; Taguchi, Makoto

    Atmospheric gravity waves (AGWs), which are generated in the lower atmosphere, transport a significant amount of energy and momentum into the mesosphere and lower thermosphere and cause mean wind accelerations in the mesosphere. This momentum deposit drives the general circulation and affects the temperature structure. Among the many parameters that characterize AGWs, the horizontal phase velocity is very important for discussing vertical propagation. Airglow imaging is a useful technique for investigating the horizontal structures of AGWs at around 90 km altitude. Recently, there have been many reports on the statistical characteristics of AGWs observed by airglow imaging. However, comparison of these results obtained at various locations is difficult because each research group uses its own method for extracting and analyzing AGW events. We have developed a new statistical analysis method for obtaining the power spectrum in the horizontal phase velocity domain from airglow image data, so as to deal with huge amounts of imaging data obtained in different years and at various observation sites, without bias caused by observer-dependent event extraction criteria. This method was applied to the data obtained at Syowa Station, Antarctica, in 2011 and compared with a conventional event analysis in which the phase fronts were traced manually in order to estimate horizontal characteristics. This comparison shows that our new method is adequate for deriving the horizontal phase velocity characteristics of AGWs observed by the airglow imaging technique. We plan to apply this method to airglow imaging data observed at Syowa Station in 2002 and between 2008 and 2013, and also to the data observed at other stations in Antarctica (e.g. Rothera Station (67S, 68W) and Halley Station (75S, 26W)), in order to investigate the behavior of AGW propagation direction and source distribution in the MLT region over Antarctica. In this presentation, we will report interim analysis results of the data from Syowa Station.

  19. Measurement of two-dimensional thickness of micro-patterned thin film based on image restoration in a spectroscopic imaging reflectometer.

    PubMed

    Kim, Min-Gab; Kim, Jin-Yong

    2018-05-01

    In this paper, we introduce a method to overcome the limitations of thickness measurement of a micro-patterned thin film. A spectroscopic imaging reflectometer system consisting of an acousto-optic tunable filter, a charge-coupled-device camera, and a high-magnification objective lens was proposed, and a stack of multispectral images was generated. To secure improved accuracy and lateral resolution in the reconstruction of the two-dimensional thin-film thickness, image restoration based on an iterative deconvolution algorithm was applied to compensate for image degradation caused by blurring, prior to the analysis of the spectral reflectance profiles from each pixel of the multispectral images.
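
    A minimal sketch of an iterative deconvolution of one multispectral frame, using a Richardson-Lucy style update with a Gaussian blur kernel standing in for the measured point spread function; the kernel width, iteration count and test pattern are assumptions, not the algorithm or data of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(blurred, sigma=1.5, n_iter=30, eps=1e-12):
    """Iterative deconvolution assuming a Gaussian PSF of width `sigma`."""
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(n_iter):
        reblurred = gaussian_filter(estimate, sigma)
        ratio = blurred / np.clip(reblurred, eps, None)
        # The Gaussian PSF is symmetric, so correlating with the flipped PSF
        # is again a Gaussian filtering step.
        estimate *= gaussian_filter(ratio, sigma)
    return estimate

# Usage: restore a blurred stripe pattern (proxy for a micro-patterned film).
truth = np.zeros((128, 128))
truth[:, ::8] = 1.0
blurred = gaussian_filter(truth, 1.5) + 1e-3
restored = richardson_lucy(blurred, sigma=1.5, n_iter=30)
print("contrast before/after:", round(blurred.std(), 3), round(restored.std(), 3))
```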

  20. A new method of cardiographic image segmentation based on grammar

    NASA Astrophysics Data System (ADS)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed H.; Alimi, Adel M.

    2011-10-01

    The measurement of the most common ultrasound parameters, such as aortic area, mitral area and left ventricle (LV) volume, requires the delineation of the organ in order to estimate the area. In terms of medical image processing this translates into the need to segment the image and define the contours as accurately as possible. The aim of this work is to segment an image and make an automated area estimation based on grammar. The entity "language" will be projected to the entity "image" to perform structural analysis and parsing of the image. We will show how the idea of segmentation and grammar-based area estimation is applied to real problems of cardio-graphic image processing.

  1. Fast Image Texture Classification Using Decision Trees

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
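
    A minimal sketch of the integral-image ("summed-area table") trick that makes per-pixel box features cheap, with a small scikit-learn decision tree standing in for the k-means-criterion tree learner described above; the feature choice, toy texture image and labels are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def integral_image(image):
    """Summed-area table: ii[r, c] = sum of image[:r, :c]."""
    return np.pad(image, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_mean(ii, r, c, half):
    """Mean over a (2*half+1)^2 box using four lookups in the integral image."""
    r0, r1 = r - half, r + half + 1
    c0, c1 = c - half, c + half + 1
    s = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
    return s / (2 * half + 1) ** 2

# Toy "texture" image: smooth left half, noisy right half.
rng = np.random.default_rng(7)
img = np.hstack([np.full((64, 32), 0.5), rng.uniform(size=(64, 32))])
ii = integral_image(img)
ii2 = integral_image(img ** 2)

feats, labels = [], []
for r in range(8, 56):
    for c in range(8, 56):
        m = box_mean(ii, r, c, 4)
        var = box_mean(ii2, r, c, 4) - m ** 2   # local variance from two tables
        feats.append([m, var])
        labels.append(0 if c < 32 else 1)

tree = DecisionTreeClassifier(max_depth=3).fit(feats, labels)
print("training accuracy:", round(tree.score(feats, labels), 3))
```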

  2. Ocean wavenumber estimation from wave-resolving time series imagery

    USGS Publications Warehouse

    Plant, N.G.; Holland, K.T.; Haller, M.C.

    2008-01-01

    We review several approaches that have been used to estimate ocean surface gravity wavenumbers from wave-resolving remotely sensed image sequences. Two fundamentally different approaches that utilize these data exist. A power spectral density approach identifies wavenumbers where image intensity variance is maximized. Alternatively, a cross-spectral correlation approach identifies wavenumbers where intensity coherence is maximized. We develop a solution to the latter approach based on a tomographic analysis that utilizes a nonlinear inverse method. The solution is tolerant to noise and other forms of sampling deficiency and can be applied to arbitrary sampling patterns, as well as to full-frame imagery. The solution includes error predictions that can be used for data retrieval quality control and for evaluating sample designs. A quantitative analysis of the intrinsic resolution of the method indicates that the cross-spectral correlation fitting improves resolution by a factor of about ten times as compared to the power spectral density fitting approach. The resolution analysis also provides a rule of thumb for nearshore bathymetry retrievals-short-scale cross-shore patterns may be resolved if they are about ten times longer than the average water depth over the pattern. This guidance can be applied to sample design to constrain both the sensor array (image resolution) and the analysis array (tomographic resolution). ?? 2008 IEEE.

  3. Polarization sensitive spectroscopic optical coherence tomography for multimodal imaging

    NASA Astrophysics Data System (ADS)

    Strąkowski, Marcin R.; Kraszewski, Maciej; Strąkowska, Paulina; Trojanowski, Michał

    2015-03-01

    Optical coherence tomography (OCT) is a non-invasive method for 3D and cross-sectional imaging of biological and non-biological objects. OCT measurements are performed in a non-contact way that is absolutely safe for the tested sample. Nowadays, OCT is widely applied in medical diagnosis, especially in ophthalmology, as well as in dermatology, oncology and many other fields. Despite great progress in OCT measurements, there is still a vast number of issues, like tissue recognition or imaging contrast enhancement, that have not been solved yet. Here we present the polarization sensitive spectroscopic OCT system (PS-SOCT). The PS-SOCT combines polarization sensitive analysis with time-frequency analysis. Unlike standard polarization sensitive OCT, the PS-SOCT delivers spectral information about the measured quantities, e.g. changes of the tested object's birefringence over the light spectrum. This solution overcomes the limits of the polarization sensitive analysis applied in standard PS-OCT. Based on the spectral data obtained from PS-SOCT, the exact value of birefringence can be calculated even for objects that exhibit higher orders of retardation. In this contribution the benefits of using the combination of time-frequency and polarization sensitive analysis are demonstrated. Moreover, the PS-SOCT system features, as well as OCT measurement examples, are presented.

  4. Protection performance evaluation regarding imaging sensors hardened against laser dazzling

    NASA Astrophysics Data System (ADS)

    Ritt, Gunnar; Koerber, Michael; Forster, Daniel; Eberle, Bernd

    2015-05-01

    Electro-optical imaging sensors are widely distributed and used for many different purposes, including civil security and military operations. However, laser irradiation can easily disturb their operational capability. Thus, an adequate protection mechanism for electro-optical sensors against dazzling and damaging is highly desirable. Different protection technologies exist now, but none of them satisfies the operational requirements without any constraints. In order to evaluate the performance of various laser protection measures, we present two different approaches based on triangle orientation discrimination on the one hand and structural similarity on the other hand. For both approaches, image analysis algorithms are applied to images taken of a standard test scene with triangular test patterns which is superimposed by dazzling laser light of various irradiance levels. The evaluation methods are applied to three different sensors: a standard complementary metal oxide semiconductor camera, a high dynamic range camera with a nonlinear response curve, and a sensor hardened against laser dazzling.
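
    A minimal sketch of the structural-similarity comparison between an undazzled reference image of the test scene and a dazzled one, using the common windowed SSIM formulation; the window size, constants and synthetic laser spot are assumptions rather than the exact evaluation protocol of the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim(reference, test, data_range=1.0, win=7):
    """Mean structural similarity between two images scaled to [0, data_range]."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x = uniform_filter(reference, win)
    mu_y = uniform_filter(test, win)
    var_x = uniform_filter(reference ** 2, win) - mu_x ** 2
    var_y = uniform_filter(test ** 2, win) - mu_y ** 2
    cov_xy = uniform_filter(reference * test, win) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return ssim_map.mean()

# Usage: compare the clean scene with a version overlaid by a laser-dazzle spot.
rng = np.random.default_rng(8)
scene = rng.uniform(size=(128, 128))
yy, xx = np.mgrid[:128, :128]
spot = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 20.0 ** 2))
dazzled = np.clip(scene + 0.8 * spot, 0.0, 1.0)
print("SSIM clean vs dazzled:", round(ssim(scene, dazzled), 3))
```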

  5. Phase Diversity Applied to Sunspot Observations

    NASA Astrophysics Data System (ADS)

    Tritschler, A.; Schmidt, W.; Knolker, M.

    We present preliminary results of a multi-colour phase diversity experiment carried out with the Multichannel Filter System of the Vacuum Tower Telescope at the Observatorio del Teide on Tenerife. We apply phase-diversity imaging to a time sequence of sunspot filtergrams taken in three continuum bands and correct the seeing influence for each image. A newly developed phase diversity device allowing for the projection of both the focused and the defocused image onto a single CCD chip was used in one of the wavelength channels. With the information about the wavefront obtained by the image reconstruction algorithm the restoration of the other two bands can be performed as well. The processed and restored data set will then be used to derive the temperature and proper motion of the umbral dots. Data analysis is still under way, and final results will be given in a forthcoming article.

  6. Imaging and elemental mapping of biological specimens with a dual-EDS dedicated scanning transmission electron microscope

    PubMed Central

    Wu, J.S.; Kim, A. M.; Bleher, R.; Myers, B.D.; Marvin, R. G.; Inada, H.; Nakamura, K.; Zhang, X.F.; Roth, E.; Li, S.Y.; Woodruff, T. K.; O'Halloran, T. V.; Dravid, Vinayak P.

    2013-01-01

    A dedicated analytical scanning transmission electron microscope (STEM) with dual energy dispersive spectroscopy (EDS) detectors has been designed for complementary high performance imaging as well as high sensitivity elemental analysis and mapping of biological structures. The performance of this new design, based on a Hitachi HD-2300A model, was evaluated using a variety of biological specimens. With three imaging detectors, both the surface and internal structure of cells can be examined simultaneously. The whole-cell elemental mapping, especially of heavier metal species that have low cross-section for electron energy loss spectroscopy (EELS), can be faithfully obtained. Optimization of STEM imaging conditions is applied to thick sections as well as thin sections of biological cells under low-dose conditions at room- and cryogenic temperatures. Such multimodal capabilities applied to soft/biological structures usher a new era for analytical studies in biological systems. PMID:23500508

  7. Single quantum dot tracking reveals the impact of nanoparticle surface on intracellular state.

    PubMed

    Zahid, Mohammad U; Ma, Liang; Lim, Sung Jun; Smith, Andrew M

    2018-05-08

    Inefficient delivery of macromolecules and nanoparticles to intracellular targets is a major bottleneck in drug delivery, genetic engineering, and molecular imaging. Here we apply live-cell single-quantum-dot imaging and tracking to analyze and classify nanoparticle states after intracellular delivery. By merging trajectory diffusion parameters with brightness measurements, multidimensional analysis reveals distinct and heterogeneous populations that are indistinguishable using single parameters alone. We derive new quantitative metrics of particle loading, cluster distribution, and vesicular release in single cells, and evaluate intracellular nanoparticles with diverse surfaces following osmotic delivery. Surface properties have a major impact on cell uptake, but little impact on the absolute cytoplasmic numbers. A key outcome is that stable zwitterionic surfaces yield uniform cytosolic behavior, ideal for imaging agents. We anticipate that this combination of quantum dots and single-particle tracking can be widely applied to design and optimize next-generation imaging probes, nanoparticle therapeutics, and biologics.

  8. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is the study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and Pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  9. On-line prediction of yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score using the MARC beef carcass image analysis system.

    PubMed

    Shackelford, S D; Wheeler, T L; Koohmaraie, M

    2003-01-01

    The present experiment was conducted to evaluate the ability of the U.S. Meat Animal Research Center's beef carcass image analysis system to predict calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score under commercial beef processing conditions. In two commercial beef-processing facilities, image analysis was conducted on 800 carcasses on the beef-grading chain immediately after the conventional USDA beef quality and yield grades were applied. Carcasses were blocked by plant and observed calculated yield grade. The carcasses were then separated, with 400 carcasses assigned to a calibration data set that was used to develop regression equations, and the remaining 400 carcasses assigned to a prediction data set used to validate the regression equations. Prediction equations, which included image analysis variables and hot carcass weight, accounted for 90, 88, 90, 88, and 76% of the variation in calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score, respectively, in the prediction data set. In comparison, the official USDA yield grade as applied by online graders accounted for 73% of the variation in calculated yield grade. The technology described herein could be used by the beef industry to more accurately determine beef yield grades; however, this system does not provide an accurate enough prediction of marbling score to be used without USDA grader interaction for USDA quality grading.

  10. Police witness identification images: a geometric morphometric analysis.

    PubMed

    Hayes, Susan; Tullberg, Cameron

    2012-11-01

    Research into witness identification images typically occurs within the laboratory and involves subjective likeness and recognizability judgments. This study analyzed whether actual witness identification images systematically alter the facial shapes of the suspects described. The shape analysis tool, geometric morphometrics, was applied to 46 homologous facial landmarks displayed on 50 witness identification images and their corresponding arrest photographs, using principal component analysis and multivariate regressions. The results indicate that compared with arrest photographs, witness identification images systematically depict suspects with lowered and medially located eyebrows (p < 0.000001). This was found to occur independently of the Police Artist, and did not occur with composites produced under laboratory conditions. There are several possible explanations for this finding, including any, or all, of the following: The suspect was frowning at the time of the incident, the witness had negative feelings toward the suspect, this is an effect of unfamiliar face processing, the suspect displayed fear at the time of their arrest photograph. © 2012 American Academy of Forensic Sciences.

  11. Interpolation of longitudinal shape and image data via optimal mass transport

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Zhu, Liang-Jia; Bouix, Sylvain; Tannenbaum, Allen

    2014-03-01

    Longitudinal analysis of medical imaging data has become central to the study of many disorders. Unfortunately, various constraints (study design, patient availability, technological limitations) restrict the acquisition of data to only a few time points, limiting the study of continuous disease/treatment progression. Having the ability to produce a sensible time interpolation of the data can lead to improved analysis, such as intuitive visualizations of anatomical changes, or the creation of more samples to improve statistical analysis. In this work, we model interpolation of medical image data, in particular shape data, using the theory of optimal mass transport (OMT), which can construct a continuous transition from two time points while preserving "mass" (e.g., image intensity, shape volume) during the transition. The theory even allows a short extrapolation in time and may help predict short-term treatment impact or disease progression on anatomical structure. We apply the proposed method to the hippocampus-amygdala complex in schizophrenia, the heart in atrial fibrillation, and full head MR images in traumatic brain injury.

  12. A method for rapid quantitative assessment of biofilms with biomolecular staining and image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larimer, Curtis J.; Winder, Eric M.; Jeters, Robert T.

    Here, the accumulation of bacteria in surface attached biofilms, or biofouling, can be detrimental to human health, dental hygiene, and many industrial processes. A critical need in identifying and preventing the deleterious effects of biofilms is the ability to observe and quantify their development. Analytical methods capable of assessing early stage fouling are cumbersome or lab-confined, subjective, and qualitative. Herein, a novel photographic method is described that uses biomolecular staining and image analysis to enhance contrast of early stage biofouling. A robust algorithm was developed to objectively and quantitatively measure surface accumulation of Pseudomonas putida from photographs, and results were compared to independent measurements of cell density. Results from image analysis quantified biofilm growth intensity accurately and with approximately the same precision as the more laborious cell counting method. This simple method for early stage biofilm detection enables quantifiable measurement of surface fouling and is flexible enough to be applied from the laboratory to the field. Broad spectrum staining highlights fouling biomass, photography quickly captures a large area of interest, and image analysis rapidly quantifies fouling in the image.
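
    A minimal sketch of the kind of image-analysis step involved: isolate the stained-biomass signal, threshold it automatically, and report the fouled area fraction. The channel choice, Otsu threshold and synthetic photo are assumptions, not the published algorithm.

```python
import numpy as np
from skimage.filters import threshold_otsu

def fouled_area_fraction(rgb_photo):
    """Fraction of the field of view covered by stained biofilm.

    Assumes the stain makes fouled regions darker in the green channel of an
    RGB photograph scaled to [0, 1]; a real pipeline would add background
    correction and colour calibration.
    """
    stain_signal = 1.0 - rgb_photo[..., 1]     # darker green -> more stain
    mask = stain_signal > threshold_otsu(stain_signal)
    return float(mask.mean())

# Usage on a synthetic photo with a stained patch in one corner.
rng = np.random.default_rng(9)
photo = rng.uniform(0.7, 0.9, size=(100, 100, 3))
photo[:40, :40, 1] -= 0.4                      # stained region
print("fouled area fraction:", round(fouled_area_fraction(photo), 3))
```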

  13. A method for rapid quantitative assessment of biofilms with biomolecular staining and image analysis

    DOE PAGES

    Larimer, Curtis J.; Winder, Eric M.; Jeters, Robert T.; ...

    2015-12-07

    Here, the accumulation of bacteria in surface attached biofilms, or biofouling, can be detrimental to human health, dental hygiene, and many industrial processes. A critical need in identifying and preventing the deleterious effects of biofilms is the ability to observe and quantify their development. Analytical methods capable of assessing early stage fouling are cumbersome or lab-confined, subjective, and qualitative. Herein, a novel photographic method is described that uses biomolecular staining and image analysis to enhance contrast of early stage biofouling. A robust algorithm was developed to objectively and quantitatively measure surface accumulation of Pseudomonas putida from photographs, and results were compared to independent measurements of cell density. Results from image analysis quantified biofilm growth intensity accurately and with approximately the same precision as the more laborious cell counting method. This simple method for early stage biofilm detection enables quantifiable measurement of surface fouling and is flexible enough to be applied from the laboratory to the field. Broad spectrum staining highlights fouling biomass, photography quickly captures a large area of interest, and image analysis rapidly quantifies fouling in the image.

  14. A method based on IHS cylindrical transform model for quality assessment of image fusion

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaokun; Jia, Yonghong

    2005-10-01

    Image fusion techniques have been widely applied to remote sensing image analysis and processing, and methods for quality assessment of image fusion in remote sensing have become a research issue worldwide. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects: on the one hand, most indexes lack the theoretical support to compare different fusion methods; on the other hand, there is no uniform preference for most of the quantitative assessment indexes when they are applied to estimate fusion effects. That is, the spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method to unify the spatial and spectral feature assessment. In this paper, on the basis of the approximate general model of four traditional fusion methods, including Intensity Hue Saturation (IHS) triangle transform fusion, High Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, a correlation coefficient assessment method based on the IHS cylindrical transform is proposed. Experiments show that this method can not only obtain evaluation results for spatial and spectral features on the basis of a uniform preference, but can also compare fused images with the fusion image sources and reveal differences among fusion methods. Compared with the traditional assessment methods, the new method is more intuitive and more in accord with subjective estimation.
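
    A minimal sketch of the assessment idea: transform the images into intensity/hue/saturation-like components and compare them by correlation coefficients, e.g. intensity of the fused image against the panchromatic band (spatial fidelity) and saturation against the original multispectral image (spectral fidelity). The simple component formulas and toy arrays below are generic stand-ins, not the paper's IHS cylindrical transform.

```python
import numpy as np

def ihs_components(rgb):
    """Generic cylindrical-style I, H, S components from an RGB image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    v1 = (2.0 * r - g - b) / np.sqrt(6.0)      # chromatic axes
    v2 = (g - b) / np.sqrt(2.0)
    saturation = np.hypot(v1, v2)
    hue = np.arctan2(v2, v1)
    return intensity, hue, saturation

def correlation(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Toy data standing in for the original multispectral, panchromatic and fused images.
rng = np.random.default_rng(10)
ms = rng.uniform(size=(64, 64, 3))
pan = ms.mean(axis=-1) + rng.normal(0, 0.02, size=(64, 64))
fused = 0.8 * ms + 0.2 * pan[..., None]

i_f, h_f, s_f = ihs_components(fused)
i_m, h_m, s_m = ihs_components(ms)
print("spatial  (I_fused vs pan):", round(correlation(i_f, pan), 3))
print("spectral (S_fused vs S_ms):", round(correlation(s_f, s_m), 3))
```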

  15. Integrated thermal disturbance analysis of optical system of astronomical telescope

    NASA Astrophysics Data System (ADS)

    Yang, Dehua; Jiang, Zibo; Li, Xinnan

    2008-07-01

    During operation, an astronomical telescope will undergo thermal disturbance, which is especially serious in a solar telescope and may cause degradation of image quality. This drives careful thermal load investigation and the measures applied to assess its effect on final image quality during the design phase. Integrated modeling analysis is boosting the process of finding a comprehensive optimum design scheme through software simulation. In this paper, we focus on the Finite Element Analysis (FEA) software ANSYS for thermal disturbance analysis and the optical design software ZEMAX for optical system design. The integrated model based on ANSYS and ZEMAX is first briefly described from an overview point of view. Afterwards, we discuss the establishment of the thermal model. A complete power series polynomial in the spatial coordinates is introduced to represent the temperature field analytically. We also borrow the linear interpolation technique derived from the shape functions of finite element theory to interface the thermal model with the structural model and, further, to apply the temperatures onto the structural model nodes. Thereby, the thermal loads are transferred with as high fidelity as possible. The data interface and communication between the two software packages are discussed mainly for mirror surfaces and hence for the representation and transformation of the optical figure. We compare and comment on two different methods, Zernike polynomials and power series expansion, for representing and transferring the deformed optical surface to ZEMAX. Additionally, the application of these methods to surfaces with non-circular apertures is discussed. At the end, an optical telescope with a parabolic primary mirror of 900 mm in diameter is analyzed to illustrate the above discussion. A finite element model of the parts of the telescope of most interest is generated in ANSYS with the necessary structural simplification and equivalence. Thermal analysis is performed, and the resulting positions and figures of the optics are retrieved and transferred to ZEMAX; thus the final image quality under thermal disturbance is evaluated.

  16. Application of the angle measure technique as image texture analysis method for the identification of uranium ore concentrate samples: New perspective in nuclear forensics.

    PubMed

    Fongaro, Lorenzo; Ho, Doris Mer Lin; Kvaal, Knut; Mayer, Klaus; Rondinella, Vincenzo V

    2016-05-15

    The identification of interdicted nuclear or radioactive materials requires the application of dedicated techniques. In this work, a new approach for characterizing uranium ore concentrate (UOC) powders is presented. It is based on image texture analysis and multivariate data modelling. 26 different UOC samples were evaluated by applying the Angle Measure Technique (AMT) algorithm to extract textural features from sample images acquired at 250× and 1000× magnification by Scanning Electron Microscope (SEM). At both magnifications, this method proved effective in classifying the different types of UOC powder based on surface characteristics that depend on particle size, homogeneity, and graininess and are related to the composition and processes used in the production facilities. Using the outcome data from the application of the AMT algorithm, the total explained variance was higher than 90% with Principal Component Analysis (PCA), while partial least squares discriminant analysis (PLS-DA), applied only to the 14 black-colour UOC powder samples, allowed their classification on the basis of their surface texture features alone (sensitivity > 0.6; specificity > 0.6). This preliminary study shows that the method was able to distinguish samples with similar composition but obtained from different facilities. The mean angle spectral data obtained by image texture analysis using the AMT algorithm can be considered a specific fingerprint or signature of UOCs and could be used for nuclear forensic investigation. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  17. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope.

    PubMed

    Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai

    2016-04-01

    We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in micro-gripping system with stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, initial vision model and residual compensation model. First, the method of image distortion correction is proposed. Image data required by image distortion correction comes from stereo images of calibration sample. The geometric features of image distortions can be predicted though the shape deformation of lines constructed by grid points in stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, shape deformation features of disparity distribution are discussed. The method of disparity distortion correction is proposed. Polynomial fitting method is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two models, i.e., initial vision model and residual compensation model. We derive initial vision model by the analysis of direct mapping relationship between object and image points. Residual compensation model is derived based on the residual analysis of initial vision model. The results show that with maximum reconstruction distance of 4.1mm in X direction, 2.9mm in Y direction and 2.25mm in Z direction, our model achieves a precision of 0.01mm in X and Y directions and 0.015mm in Z direction. Comparison of our model with traditional pinhole camera model shows that two kinds of models have a similar reconstruction precision of X coordinates. However, traditional pinhole camera model has a lower precision of Y and Z coordinates than our model. The method proposed in this paper is very helpful for the micro-gripping system based on SLM microscopic vision. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Morphometric information to reduce the semantic gap in the characterization of microscopic images of thyroid nodules.

    PubMed

    Macedo, Alessandra A; Pessotti, Hugo C; Almansa, Luciana F; Felipe, Joaquim C; Kimura, Edna T

    2016-07-01

    Several systems for medical image processing support the extraction of image attributes but do not include some of the information that characterizes the images. For example, morphometry can be applied to find new information about the visual content of an image. The extension of information may result in knowledge. Subsequently, the results of such mappings can be applied to recognize exam patterns, thus improving the accuracy of image retrieval and allowing a better interpretation of exam results. Although successfully applied to breast lesion images, the morphometric approach is still poorly explored in thyroid lesions due to the high subjectivity of thyroid examinations. This paper presents a theoretical-practical study, considering Computer Aided Diagnosis (CAD) and Morphometry, to reduce the semantic discontinuity between medical image features and human interpretation of image content. The proposed method aggregates the content of microscopic images characterized by morphometric information and other image attributes extracted by traditional object extraction algorithms. This method carries out segmentation, feature extraction, image labeling and classification. Morphometric analysis was included as an object extraction method in order to verify the improvement in accuracy for automatic classification of microscopic images. To validate this proposal and verify the utility of morphometric information to characterize thyroid images, a CAD system was created to classify real thyroid image exams into Papillary Cancer, Goiter and Non-Cancer. Results showed that morphometric information can improve the accuracy and precision of image retrieval and the interpretation of results in computer-aided diagnosis. For example, in the scenario where all the extractors are combined with the morphometric information, the CAD system had its best performance (70% precision in Papillary cases). The results indicate a positive role of morphometric information from images in reducing the semantic discontinuity between human interpretation and image characterization. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. The Influence of Organizational Image on College Selection: What Students Seek in Institutions of Higher Education

    ERIC Educational Resources Information Center

    Pampaloni, Andrea M.

    2010-01-01

    Colleges and universities rely on their image to attract new members. This study focuses on the decision-making process of students preparing to apply to college. High school students were surveyed at college open houses to identify the factors most influential to their college application decision-making. A multi-methods analysis found that…

  20. Image denoising and deblurring using multispectral data

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.

    2017-05-01

    Decision-making systems are currently becoming widespread. These systems are based on the analysis of video sequences and additional data, such as volume, change in size, the behavior of one object or a group of objects, temperature gradient, the presence of local areas with strong differences, and others. Security and control systems are the main areas of application. Noise in the images strongly influences subsequent processing and decision making. This paper considers the problem of primary signal processing for the tasks of image denoising and deblurring of multispectral data. The additional information from multispectral channels can improve the efficiency of object classification. In this paper we use a method of combining information about the objects obtained by cameras in different frequency bands. We apply a method based on the simultaneous minimization of an L2 term and the first-order squared differences of the sequence of estimates to denoise and restore blurred edges. In case of loss of information, an approach is applied based on the interpolation of data taken from the analysis of objects located in other areas and on information obtained from the multispectral camera. The effectiveness of the proposed approach is shown on a set of test images.

  1. Exploiting spectral content for image segmentation in GPR data

    NASA Astrophysics Data System (ADS)

    Wang, Patrick K.; Morton, Kenneth D., Jr.; Collins, Leslie M.; Torrione, Peter A.

    2011-06-01

    Ground-penetrating radar (GPR) sensors provide an effective means for detecting changes in the sub-surface electrical properties of soils, such as changes indicative of landmines or other buried threats. However, most GPR-based pre-screening algorithms only localize target responses along the surface of the earth, and do not provide information regarding an object's position in depth. As a result, feature extraction algorithms are forced to process data from entire cubes of data around pre-screener alarms, which can reduce feature fidelity and hamper performance. In this work, spectral analysis is investigated as a method for locating subsurface anomalies in GPR data. In particular, a 2-D spatial/frequency decomposition is applied to pre-screener flagged GPR B-scans. Analysis of these spatial/frequency regions suggests that aspects (e.g. moments, maxima, mode) of the frequency distribution of GPR energy can be indicative of the presence of target responses. After translating a GPR image to a function of the spatial/frequency distributions at each pixel, several image segmentation approaches can be applied to perform segmentation in this new transformed feature space. To illustrate the efficacy of the approach, a performance comparison between feature processing with and without the image segmentation algorithm is provided.

  2. Application of image recognition algorithms for statistical description of nano- and microstructured surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mărăscu, V.; Dinescu, G.; Faculty of Physics, University of Bucharest, 405 Atomistilor Street, Bucharest-Magurele

    In this paper we propose a statistical approach for describing the self-assembly of sub-micron polystyrene beads on silicon surfaces, as well as the evolution of surface topography due to plasma treatments. Algorithms for image recognition are used in conjunction with Scanning Electron Microscopy (SEM) imaging of the surfaces. In a first step, greyscale images of the surface covered by the polystyrene beads are obtained. An adaptive thresholding method is then applied to obtain binary images. The next step consists in the automatic identification of the polystyrene bead dimensions, using the Hough transform algorithm, according to the bead radius. In order to analyze the uniformity of the self-assembled polystyrene beads, the squared modulus of the 2-D Fast Fourier Transform (2-D FFT) is applied. By combining these algorithms we obtain a powerful and fast statistical tool for the analysis of micro- and nanomaterials with features regularly distributed on the surface upon SEM examination.
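
    A minimal sketch of a similar processing chain (adaptive thresholding, circle detection with the Hough transform, and the squared modulus of the 2-D FFT) using scikit-image is given below; the bead radius range, block size and peak count are illustrative assumptions rather than values from the paper.

        import numpy as np
        from skimage import filters, feature, transform

        def analyze_bead_image(gray, radii=np.arange(8, 20)):
            """Threshold an SEM-like grayscale image, locate circular beads with the
            Hough transform, and compute the squared modulus of its 2-D FFT."""
            # adaptive (local) thresholding -> binary image
            thresh = filters.threshold_local(gray, block_size=51)
            binary = gray > thresh
            # circular Hough transform on an edge map, one accumulator per radius
            edges = feature.canny(gray.astype(float), sigma=2)
            hough = transform.hough_circle(edges, radii)
            _, cx, cy, found_r = transform.hough_circle_peaks(hough, radii,
                                                              total_num_peaks=50)
            # squared modulus of the 2-D FFT as a uniformity measure
            power = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(float)))) ** 2
            return binary, list(zip(cy, cx, found_r)), power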

  3. Image analysis of the blood cells for cytomorphodiagnostics and control of the effectiveness treatment

    NASA Astrophysics Data System (ADS)

    Zhukotsky, Alexander V.; Kogan, Emmanuil M.; Kopylov, Victor F.; Marchenko, Oleg V.; Lomakin, O. A.

    1994-07-01

    A new method for morphodensitometric analysis of blood cells was applied to the medical screening of ecological influences and infectious pathologies. A complex computational image-processing algorithm was created for research on the supramolecular restructuring of the interphase chromatin of lymphocytes. It includes specific staining methods and unifies different quantitative analysis methods. Our experience with a television image analyzer in cytological and immunological studies made it possible to carry out morphometric analysis of chromatin structure in interphase lymphocyte nuclei in genetic and viral pathologies. To characterize lymphocytes as an image-forming system by a rigorous mathematical description, we used an approach based on concomitant evaluation of the topography of the chromatin network in intact and affected lymphocytes. The digitized data revealed significant distinctions between control and experimental samples. The method allows us to observe minute structural changes in chromatin, especially eu- and heterochromatin, which were previously studied by genetics only in chromosomes.

  4. Image analysis technique as a tool to identify morphological changes in Trametes versicolor pellets according to exopolysaccharide or laccase production.

    PubMed

    Tavares, Ana P M; Silva, Rui P; Amaral, António L; Ferreira, Eugénio C; Xavier, Ana M R B

    2014-02-01

    An image analysis technique was applied to identify morphological changes of pellets from the white-rot fungus Trametes versicolor in agitated submerged cultures during the production of exopolysaccharide (EPS) or ligninolytic enzymes. Batch tests with four different experimental conditions were carried out. Two different culture media were used, namely yeast medium or Trametes defined medium, and the addition of ligninolytic inducers such as xylidine or pulp and paper industrial effluent was evaluated. Laccase activity, EPS production, and final biomass contents were determined for the batch assays, and the pellet morphology was assessed by image analysis techniques. The data obtained made it possible to establish the choice of metabolic pathway according to the experimental conditions, either laccase production in the Trametes defined medium or EPS production in the rich yeast medium experiments. Furthermore, the image processing and analysis methodology allowed a better comprehension of the physiological phenomena with respect to the corresponding morphological stages of the pellets.

  5. Axial segmentation of lungs CT scan images using canny method and morphological operation

    NASA Astrophysics Data System (ADS)

    Noviana, Rina; Febriani, Rasal, Isram; Lubis, Eva Utari Cintamurni

    2017-08-01

    Segmentation is a very important topic in digital image processing. It is found in various fields of image analysis, particularly in medical imaging. Axial segmentation of lung CT scans is useful for diagnosing abnormalities and for surgery planning, since it allows every section within the lungs to be examined; the results of the segmentation can be used to detect the presence of nodules. The methods used in this analysis are image cropping, image binarization, Canny edge detection and morphological operations. Image cropping is performed to separate the lung areas, which form the region of interest. Binarization generates a binary image with two grey-level values, black and white, separating the region of interest from the rest of the CT scan. The Canny method is used for edge detection, and a morphological operation is applied to smooth the lung edges. The segmentation method shows good results and produces a very smooth edge. Moreover, the image background can be removed so that the lungs remain the main focus.
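
    A minimal sketch of the described chain (cropping, binarization, Canny edge detection and morphological smoothing) with scikit-image is shown below; the crop window, size thresholds and structuring-element radius are illustrative assumptions.

        import numpy as np
        from skimage import feature, filters, morphology

        def segment_lung_slice(ct_slice, crop=(slice(60, 450), slice(30, 480))):
            """Rough axial lung segmentation: crop to the thorax, binarize with Otsu,
            detect edges with Canny, and smooth the lung boundary morphologically."""
            roi = ct_slice[crop]
            # binarization: air-filled lungs are darker than the surrounding tissue
            binary = roi < filters.threshold_otsu(roi)
            # keep only sizeable air regions and fill small holes
            binary = morphology.remove_small_objects(binary, min_size=500)
            binary = morphology.remove_small_holes(binary, area_threshold=500)
            # Canny edge detection on the cropped slice
            edges = feature.canny(roi.astype(float), sigma=2)
            # morphological closing smooths the lung edge
            smooth = morphology.binary_closing(binary, morphology.disk(5))
            return smooth, edges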

  6. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.

  7. Identifying image preferences based on demographic attributes

    NASA Astrophysics Data System (ADS)

    Fedorovskaya, Elena A.; Lawrence, Daniel R.

    2014-02-01

    The intent of this study is to determine what sorts of images are considered more interesting by which demographic groups. Specifically, we attempt to identify images whose interestingness ratings are influenced by the demographic attribute of the viewer's gender. To that end, we use the data from an experiment where 18 participants (9 women and 9 men) rated several hundred images based on "visual interest" or preferences in viewing images. The images were selected to represent the consumer "photo-space" - typical categories of subject matter found in consumer photo collections. They were annotated using perceptual and semantic descriptors. In analyzing the image interestingness ratings, we apply a multivariate procedure known as forced classification, a feature of dual scaling, a discrete analogue of principal components analysis (similar to correspondence analysis). This particular analysis of ratings (i.e., ordered-choice or Likert) data enables the investigator to emphasize the effect of a specific item or collection of items. We focus on the influence of the demographic item of gender on the analysis, so that the solutions are essentially confined to subspaces spanned by the emphasized item. Using this technique, we can know definitively which images' ratings have been influenced by the demographic item of choice. Subsequently, images can be evaluated and linked, on one hand, to their perceptual and semantic descriptors, and, on the other hand, to the preferences associated with viewers' demographic attributes.

  8. Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Zhang, Kai; Liu, Xiyang; Long, Erping; Jiang, Jiewei; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Li, Wangting; Lin, Haotian

    2017-01-01

    There are many image classification methods, but it remains unclear which are most helpful for analyzing and intelligently identifying ophthalmic images. We select representative slit-lamp images, which show the complexity of ocular images, as research material to compare image classification algorithms for diagnosing ophthalmic diseases. To facilitate this study, several feature extraction algorithms and classifiers are combined to automatically diagnose pediatric cataract on the same dataset, and their performance is then compared using multiple criteria. This comparative study reveals the general characteristics of the existing methods for automatic identification of ophthalmic images and provides new insights into the strengths and shortcomings of these methods. The most relevant methods (local binary pattern + SVMs, wavelet transformation + SVMs) achieve an average accuracy of 87% and can be adopted in specific situations to aid doctors in preliminary disease screening. Furthermore, methods requiring fewer computational resources and less time could be applied in remote places or on mobile devices to assist individuals in understanding the condition of their body. This study may also help accelerate the development of innovative approaches and the application of these methods to assist doctors in diagnosing ophthalmic disease.
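
    One of the better-performing combinations reported (local binary pattern features with an SVM) can be sketched with scikit-image and scikit-learn as follows; the LBP parameters and the SVM kernel are illustrative assumptions, not the study's settings.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def lbp_histogram(gray, P=8, R=1.0):
            """Uniform LBP codes summarized as a normalized histogram feature vector."""
            codes = local_binary_pattern(gray, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
            return hist

        def train_lbp_svm(images, labels):
            """Fit an SVM classifier on LBP histograms of slit-lamp style images."""
            X = np.array([lbp_histogram(img) for img in images])
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
            clf.fit(X, labels)
            return clf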

  9. Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Ramapriyan, H. K.

    1989-01-01

    A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, although when only a cloud-free section of the image is considered the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using the Bayes' classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.

  10. Non-contrast-enhanced perfusion and ventilation assessment of the human lung by means of fourier decomposition in proton MRI.

    PubMed

    Bauman, Grzegorz; Puderbach, Michael; Deimling, Michael; Jellus, Vladimir; Chefd'hotel, Christophe; Dinkel, Julien; Hintze, Christian; Kauczor, Hans-Ulrich; Schad, Lothar R

    2009-09-01

    Assessment of regional lung perfusion and ventilation has significant clinical value for the diagnosis and follow-up of pulmonary diseases. In this work a new method of non-contrast-enhanced functional lung MRI (not dependent on intravenous or inhalative contrast agents) is proposed. A two-dimensional (2D) true fast imaging with steady precession (TrueFISP) pulse sequence (TR/TE = 1.9 ms/0.8 ms, acquisition time [TA] = 112 ms/image) was implemented on a 1.5T whole-body MR scanner. The imaging protocol comprised sets of 198 lung images acquired with an imaging rate of 3.33 images/s in coronal and sagittal view. No electrocardiogram (ECG) or respiratory triggering was used. A nonrigid image registration algorithm was applied to compensate for respiratory motion. Rapid data acquisition allowed observing intensity changes in corresponding lung areas with respect to the cardiac and respiratory frequencies. After a Fourier analysis along the time domain, two spectral lines corresponding to both frequencies were used to calculate the perfusion- and ventilation-weighted images. The described method was applied in preliminary studies on volunteers and patients showing clinical relevance to obtain non-contrast-enhanced perfusion and ventilation data.
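
    The Fourier decomposition step can be sketched as follows: after registration, each pixel's intensity-time course is Fourier transformed and the peak magnitudes within the respiratory and cardiac frequency bands are taken as ventilation- and perfusion-weighted values. The band limits below are illustrative assumptions, not the values used in the study.

        import numpy as np

        def fourier_decomposition(stack, frame_rate=3.33,
                                  resp_band=(0.1, 0.5), card_band=(0.8, 1.5)):
            """stack: registered time series of lung images, shape (n_frames, ny, nx).
            Returns ventilation- and perfusion-weighted maps from the spectral lines
            closest to the respiratory and cardiac frequencies."""
            n = stack.shape[0]
            freqs = np.fft.rfftfreq(n, d=1.0 / frame_rate)
            # magnitude spectrum of the mean-subtracted time course of every pixel
            spectrum = np.abs(np.fft.rfft(stack - stack.mean(axis=0), axis=0))

            def band_peak(lo, hi):
                sel = (freqs >= lo) & (freqs <= hi)
                return spectrum[sel].max(axis=0)

            ventilation = band_peak(*resp_band)   # respiratory spectral line
            perfusion = band_peak(*card_band)     # cardiac spectral line
            return ventilation, perfusion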

  11. Invariant correlation to position and rotation using a binary mask applied to binary and gray images

    NASA Astrophysics Data System (ADS)

    Álvarez-Borrego, Josué; Solorza, Selene; Bueno-Ibarra, Mario A.

    2013-05-01

    In this paper, further alternative ways to generate binary ring masks are studied, and a new methodology is presented for cases where the analyzed image is distorted by rotation. The new algorithm has a low computational cost. Signature vectors of the target, as well as of the object to be recognized in the problem image, are obtained using a binary ring mask constructed from either the real or the imaginary part of their Fourier transform, analyzing two different conditions for each. In this manner, each target or problem image has four unique binary ring masks; the four variants are analyzed and the best one is chosen. In addition, because any rotated image includes some distortion, the best transect in the Fourier plane is chosen in order to obtain the best signature among the different ways of building the binary mask. The methodology is applied to two cases: identifying different alphabetic letters in Arial font and identifying different fossil diatom images. Considering the great similarity between diatom images, the results obtained are excellent.

  12. Integrated analyses in plastics forming

    NASA Astrophysics Data System (ADS)

    Bo, Wang

    This thesis describes progress made in the analysis, simulation and testing of plastics forming, progress that can be applied to injection and compression mould design. Three activities of plastics forming have been investigated, namely filling analysis, cooling analysis and ejection analysis. The filling stage of plastics forming has been analysed and calculated using MOLDFLOW and FILLCALC V software, and a comparison of high-speed compression moulding and injection moulding has been made. The cooling stage has been analysed using MOLDFLOW software and a finite difference computer program; the latter can be used as a sample program to calculate the feasibility of cooling different materials to required target temperatures under controlled cooling conditions. Thermal imaging has also been introduced to determine the actual process temperatures, and it can be used as a powerful tool to analyse mould surface temperatures and to verify the mathematical model. A buckling problem in the ejection stage has been modelled and calculated with PATRAN/ABAQUS finite element analysis software and tested. These calculations and analyses are applied to a special case but can be used as an example for general analysis and calculation in the ejection stage of plastics forming.

  13. An Automated Method of Scanning Probe Microscopy (SPM) Data Analysis and Reactive Site Tracking for Mineral-Water Interface Reactions Observed at the Nanometer Scale

    NASA Astrophysics Data System (ADS)

    Campbell, B. D.; Higgins, S. R.

    2008-12-01

    Developing a method for bridging the gap between macroscopic and microscopic measurements of reaction kinetics at the mineral-water interface has important implications in geological and chemical fields. Investigating these reactions on the nanometer scale with SPM is often limited by image analysis and data extraction due to the large quantity of data usually obtained in SPM experiments. Here we present a computer algorithm for automated analysis of mineral-water interface reactions. This algorithm automates the analysis of sequential SPM images by identifying the kinetically active surface sites (i.e., step edges), and by tracking the displacement of these sites from image to image. The step edge positions in each image are readily identified and tracked through time by a standard edge detection algorithm followed by statistical analysis on the Hough Transform of the edge-mapped image. By quantifying this displacement as a function of time, the rate of step edge displacement is determined. Furthermore, the total edge length, also determined from analysis of the Hough Transform, combined with the computed step speed, yields the surface area normalized rate of the reaction. The algorithm was applied to a study of the spiral growth of the calcite(104) surface from supersaturated solutions, yielding results almost 20 times faster than performing this analysis by hand, with results being statistically similar for both analysis methods. This advance in analysis of kinetic data from SPM images will facilitate the building of experimental databases on the microscopic kinetics of mineral-water interface reactions.

  14. Ethnicity identification from face images

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.

    2004-08-01

    Human facial images provide demographic information such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition: used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
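
    A minimal sketch of a multiscale LDA ensemble combined with the product rule, using scikit-learn and scikit-image, is given below; the scale factors and the raw-pixel features are illustrative simplifications of the paper's scheme, and all face images are assumed to share the same size.

        import numpy as np
        from skimage.transform import rescale
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        SCALES = (1.0, 0.5, 0.25)  # illustrative multiscale pyramid

        def features(face, scale):
            """Raw pixels of the face image resampled to the given scale."""
            return rescale(face, scale, anti_aliasing=True).ravel()

        def fit_lda_ensemble(faces, labels):
            """One LDA per scale, each trained on the same labeled face images."""
            return [LinearDiscriminantAnalysis().fit(
                        np.array([features(f, s) for f in faces]), labels)
                    for s in SCALES]

        def predict_product_rule(models, face):
            """Combine per-scale posteriors with the product rule (two classes)."""
            probs = np.ones(2)
            for s, m in zip(SCALES, models):
                probs *= m.predict_proba(features(face, s).reshape(1, -1))[0]
            return models[0].classes_[int(np.argmax(probs))]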

  15. Benchmarking the performance of fixed-image receptor digital radiographic systems part 1: a novel method for image quality analysis.

    PubMed

    Lee, Kam L; Ireland, Timothy A; Bernardo, Michael

    2016-06-01

    This is the first part of a two-part study benchmarking the performance of fixed digital radiographic general X-ray systems. This paper concentrates on reporting findings related to quantitative analysis techniques used to establish comparative image quality metrics. A systematic technical comparison of the evaluated systems is presented in part two of this study. A novel quantitative image quality analysis method is presented with technical considerations addressed for peer review. The novel method was applied to seven general radiographic systems with four different makes of radiographic image receptor (12 image receptors in total). For the System Modulation Transfer Function (sMTF), the use of a grid was found to reduce veiling glare and decrease roll-off. The major contributor to sMTF degradation was found to be focal spot blurring. For the System Normalised Noise Power Spectrum (sNNPS), it was found that all systems examined had similar sNNPS responses. A mathematical model is presented to explain how the use of a stationary grid may cause a difference between horizontal and vertical sNNPS responses.

  16. Application of RNAMlet to surface defect identification of steels

    NASA Astrophysics Data System (ADS)

    Xu, Ke; Xu, Yang; Zhou, Peng; Wang, Lei

    2018-06-01

    As the three main production lines for steel, continuous casting slabs, hot rolled steel plates and cold rolled steel strips have different surface appearances and are produced at different line speeds. Therefore, the algorithms for surface defect identification of the three steel products have different requirements for real-time performance and robustness to interference, and the existing algorithms cannot be adaptively applied to all three. A new method of adaptive multi-scale geometric analysis named RNAMlet is proposed. The idea of RNAMlet comes from the non-symmetry anti-packing pattern representation model (NAM). The image is decomposed asymmetrically into a set of rectangular blocks according to gray-value changes of the image pixels, and a two-dimensional Haar wavelet transform is then applied to all blocks. If the image background is complex, the number of blocks is large and more details of the image are utilized; if the background is simple, the number of blocks is small and less computation time is needed. RNAMlet was tested with image samples of the three steel products and compared with three classical methods of multi-scale geometric analysis, namely Contourlet, Shearlet and Tetrolet. For image samples with complicated backgrounds, such as continuous casting slabs and hot rolled steel plates, the defect identification rate obtained by RNAMlet was 1% higher than that of the other three methods. For image samples with simple backgrounds, such as cold rolled steel strips, the computation time of RNAMlet was one-tenth that of the other three MGA methods, while the defect identification rates obtained by RNAMlet remained higher than those of the other three methods.

  17. ToF-SIMS measurements with topographic information in combined images.

    PubMed

    Koch, Sabrina; Ziegler, Georg; Hutter, Herbert

    2013-09-01

    In 2D and 3D time-of-flight secondary ion mass spectrometric (ToF-SIMS) analysis, accentuated structures on the sample surface induce distorted element distributions in the measurement. The origin of this effect is the 45° incidence angle of the analysis beam, recording planar images with distortion of the sample surface. For the generation of correct element distributions, these artifacts associated with the sample surface need to be eliminated by measuring the sample surface topography and applying suitable algorithms. For this purpose, the next generation of ToF-SIMS instruments will feature a scanning probe microscope directly implemented in the sample chamber which allows the performance of topography measurements in situ. This work presents the combination of 2D and 3D ToF-SIMS analysis with topographic measurements by ex situ techniques such as atomic force microscopy (AFM), confocal microscopy (CM), and digital holographic microscopy (DHM). The concept of the combination of topographic and ToF-SIMS measurements in a single representation was applied to organic and inorganic samples featuring surface structures in the nanometer and micrometer ranges. The correct representation of planar and distorted ToF-SIMS images was achieved by the combination of topographic data with images of 2D as well as 3D ToF-SIMS measurements, using either AFM, CM, or DHM for the recording of topographic data.

  18. Optic disc segmentation for glaucoma screening system using fundus images.

    PubMed

    Almazroa, Ahmed; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2017-01-01

    Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head pathologies such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of optic nerve head abnormalities. The main contribution of this paper is a novel OD segmentation algorithm based on applying a level set method to a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique was applied. A further important contribution was to incorporate the variation in opinion among ophthalmologists in detecting the disc boundaries and diagnosing glaucoma; most previous studies were trained and tested on only one opinion, which can be assumed to be biased toward that ophthalmologist. In addition, the accuracy was calculated based on the number of images that coincided with the ophthalmologists' agreed-upon images, and not only on the overlapping images as in previous studies. The ultimate goal of this project is to develop an automated image processing system for glaucoma screening. The disc algorithm is evaluated using a new retinal fundus image dataset called RIGA (retinal images for glaucoma analysis). In the case of low-quality images, a double level set was applied, in which the first level set served as a localization step for the OD. Five hundred and fifty images were used to test the algorithm's accuracy as well as the agreement among the manual markings of six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid was 83.9%, and the best agreement between the results of the algorithm and the manual markings was observed in 379 images.

  19. Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Abd-Elrahman, Amr

    2018-05-01

    Deep convolutional neural networks (DCNNs) require massive training datasets to realize their image classification power, while collecting training samples for remote sensing applications is usually an expensive process. When a DCNN is simply implemented with traditional object-based image analysis (OBIA) for classification of Unmanned Aerial Systems (UAS) orthoimagery, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that can take advantage of DCNNs by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-View Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on multi-view UAS images instead of directly on the orthoimage, and obtains the final results via a voting procedure. 10-fold cross-validation results show the mean overall classification accuracy increasing substantially, from 65.32% when the DCNN was applied to the orthoimage to 82.08% when MODe was implemented. This study also compared the performance of support vector machine (SVM) and random forest (RF) classifiers with the DCNN under the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the advantage of the DCNN over traditional classifiers in terms of accuracy is more obvious when the classifiers are applied within the proposed multi-view OBIA framework than within the traditional OBIA framework.
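
    The voting step of the multi-view approach can be sketched as follows: each ground object is classified independently in every UAS view in which it appears, and the final label is the majority vote. The per-view classifier is treated as a black box, and all names below are illustrative.

        import numpy as np
        from collections import Counter

        def multiview_vote(view_predictions):
            """view_predictions: class labels predicted for the same ground object
            from different overlapping UAS views. Returns the majority label."""
            return Counter(view_predictions).most_common(1)[0][0]

        def classify_objects(objects_views, classifier):
            """objects_views: {object_id: [view_patch, ...]} extracted per segment.
            classifier: any model exposing predict() on a batch of image patches."""
            labels = {}
            for obj_id, patches in objects_views.items():
                preds = classifier.predict(np.stack(patches))
                labels[obj_id] = multiview_vote(list(preds))
            return labels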

  20. Disparity modifications and the emotional effects of stereoscopic images

    NASA Astrophysics Data System (ADS)

    Kawai, Takashi; Atsuta, Daiki; Tomiyama, Yuya; Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Häkkinen, Jukka

    2014-03-01

    This paper describes a study that focuses on disparity changes in emotional scenes of stereoscopic (3D) images, in which the effects on pleasantness and arousal were examined by adding binocular disparity to 2D images that evoke specific emotions and applying a disparity modification based on the disparity analysis of well-known 3D movies. The experimental results show that, for pleasantness, a significant difference was found only for the main effect of the emotions. For arousal, on the other hand, there was a trend of increasing evaluation values in the order of the 2D condition, the 3D condition, and the 3D condition with the disparity modification applied, for happiness, surprise, and fear. This suggests that binocular disparity and its modification can affect arousal.

  1. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research.

    PubMed

    Fedorov, Andriy; Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM(®)) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard.

  2. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI).

    PubMed

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-07-07

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.
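
    A generic wavelet soft-thresholding sketch with PyWavelets is given below; it illustrates the type of operation involved, but not the paper's spatially varying noise map or its treatment of edges in parallel MRI. The wavelet, decomposition level and threshold rule are illustrative assumptions.

        import numpy as np
        import pywt

        def wavelet_denoise(image, wavelet="db4", level=3):
            """Soft-threshold detail coefficients with a universal threshold estimated
            from the finest-scale diagonal coefficients (median absolute deviation)."""
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
            # noise estimate from the finest diagonal detail sub-band
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            thresh = sigma * np.sqrt(2.0 * np.log(image.size))
            denoised = [coeffs[0]]
            for cH, cV, cD in coeffs[1:]:
                denoised.append(tuple(pywt.threshold(c, thresh, mode="soft")
                                      for c in (cH, cV, cD)))
            return pywt.waverec2(denoised, wavelet)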

  3. Processing Satellite Images on Tertiary Storage: A Study of the Impact of Tile Size on Performance

    NASA Technical Reports Server (NTRS)

    Yu, JieBing; DeWitt, David J.

    1996-01-01

    Before raw data from a satellite can be used by an Earth scientist, it must first undergo a number of processing steps including basic processing, cleansing, and geo-registration. Processing actually expands the volume of data collected by a factor of 2 or 3, and the original data is never deleted. Thus processing and storage requirements can exceed 2 terabytes/day. Once processed data is ready for analysis, a series of algorithms (typically developed by the Earth scientists) is applied to a large number of images in a data set. The focus of this paper is how best to handle such images stored on tape using the following assumptions: (1) all images of interest to a scientist are stored on a single tape, (2) images are accessed and processed in the order that they are stored on tape, and (3) the analysis requires access to only a portion of each image and not the entire image.

  4. Near-infrared imaging spectroscopy for counterfeit drug detection

    NASA Astrophysics Data System (ADS)

    Arnold, Thomas; De Biasio, Martin; Leitner, Raimund

    2011-06-01

    Pharmaceutical counterfeiting is a significant issue in the healthcare community as well as for the pharmaceutical industry worldwide. The use of counterfeit medicines can result in treatment failure or even death. A rapid screening technique such as near infrared (NIR) spectroscopy could aid in the search for and identification of counterfeit drugs. This work presents a comparison of two laboratory NIR imaging systems and the chemometric analysis of the acquired spectroscopic image data. The first imaging system utilizes a NIR liquid crystal tuneable filter and is designed for the investigation of stationary objects. The second imaging system utilizes a NIR imaging spectrograph and is designed for the fast analysis of moving objects on a conveyor belt. Several drugs in the form of tablets and capsules were analyzed. Spectral unmixing techniques were applied to the mixed reflectance spectra to identify the constituent parts of the investigated drugs. The results show that NIR spectroscopic imaging can be used for contact-less detection and identification of a variety of counterfeit drugs.
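
    Spectral unmixing of this kind is often implemented as a non-negative least-squares fit of each pixel spectrum against a library of reference spectra; the SciPy sketch below follows that generic recipe and is not the authors' specific algorithm.

        import numpy as np
        from scipy.optimize import nnls

        def unmix_cube(cube, endmembers):
            """cube: hyperspectral image, shape (ny, nx, n_bands).
            endmembers: reference spectra of the expected constituents,
            shape (n_components, n_bands).
            Returns abundance maps of shape (ny, nx, n_components)."""
            ny, nx, nb = cube.shape
            A = endmembers.T                      # (n_bands, n_components)
            abundances = np.zeros((ny, nx, endmembers.shape[0]))
            for i in range(ny):
                for j in range(nx):
                    # non-negative least-squares fit of this pixel's spectrum
                    abundances[i, j], _ = nnls(A, cube[i, j].astype(float))
            return abundances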

  5. Near-infrared hyperspectral imaging for quality analysis of agricultural and food products

    NASA Astrophysics Data System (ADS)

    Singh, C. B.; Jayas, D. S.; Paliwal, J.; White, N. D. G.

    2010-04-01

    Agricultural and food processing industries are always looking to implement real-time quality monitoring techniques as a part of good manufacturing practices (GMPs) to ensure high-quality and safety of their products. Near-infrared (NIR) hyperspectral imaging is gaining popularity as a powerful non-destructive tool for quality analysis of several agricultural and food products. This technique has the ability to analyse spectral data in a spatially resolved manner (i.e., each pixel in the image has its own spectrum) by applying both conventional image processing and chemometric tools used in spectral analyses. Hyperspectral imaging technique has demonstrated potential in detecting defects and contaminants in meats, fruits, cereals, and processed food products. This paper discusses the methodology of hyperspectral imaging in terms of hardware, software, calibration, data acquisition and compression, and development of prediction and classification algorithms and it presents a thorough review of the current applications of hyperspectral imaging in the analyses of agricultural and food products.

  6. Implementation of the Pan-STARRS Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Fang, Julia; Aspin, C.

    2007-12-01

    Pan-STARRS, or Panoramic Survey Telescope and Rapid Response System, is a wide-field imaging facility that combines small mirrors with gigapixel cameras. It surveys the entire available sky several times a month, which ultimately requires large amounts of data to be processed and stored right away. Accordingly, the Image Processing Pipeline (IPP) is a collection of software tools responsible for the primary image analysis for Pan-STARRS. It includes data registration, basic image analysis such as obtaining master images and detrending the exposures, mosaic calibration when applicable, and lastly, image summing and differencing. In this paper I present my work on installing IPP 2.1 and 2.2 on a Linux machine, running the Simtest (simulated data used to verify the installation), and finally applying the IPP to two different sets of UH 2.2m Tek data. This work was conducted through a Research Experience for Undergraduates (REU) position at the University of Hawaii's Institute for Astronomy and funded by the NSF.

  7. Digital imaging of autoradiographs from paintings by Georges de La Tour (1593-1652)

    NASA Astrophysics Data System (ADS)

    Fischer, C.-O.; Gallagher, M.; Laurenze, C.; Schmidt, Ch; Slusallek, K.

    1999-11-01

    The artistic work of the painter Georges de La Tour has been studied very intensively in the last few years, mainly by French and US-American art historians and natural scientists. To support the in-depth analysis of two paintings from the Kimbell Art Museum in Fort Worth, Texas, USA, two similar paintings from the Gemäldegalerie Berlin have been investigated. The method of neutron activation autoradiography has been applied using imaging plates with digital image processing.

  8. Image Analysis Based on Soft Computing and Applied on Space Shuttle During the Liftoff Process

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve J.

    2007-01-01

    Imaging techniques based on Soft Computing (SC) and developed at Kennedy Space Center (KSC) have been implemented on a variety of prototype applications related to the safe operation of the Space Shuttle during the liftoff process. These SC-based prototype applications include detection and tracking of moving Foreign Object Debris (FOD) during the Space Shuttle liftoff, visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad, and visual detection of distant birds approaching the Space Shuttle launch pad. This SC-based image analysis capability developed at KSC was also used to analyze images acquired during the accident of the Space Shuttle Columbia and to estimate the trajectory and velocity of the foam that caused the accident.

  9. Effect of scene illumination conditions on digital enhancement techniques of multispectral scanner LANDSAT images

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J.; Novo, E. M. L. M.

    1983-01-01

    Two sets of MSS/LANDSAT data with solar elevation ranging from 22 deg to 41 deg were used at the Image-100 System to implement the Eliason et al. technique for extracting the topographic modulation component. An unsupervised cluster analysis was used to obtain an average brightness image for each channel. Analysis of the enhanced images shows that the technique for extracting the topographic modulation component is more appropriate for MSS data obtained under high sun elevation angles. Low sun elevation increases the variance of each cluster, so that the average brightness does not represent its albedo properties. The topographic modulation component applied to low sun elevation angles degrades rather than enhances topographic information. Better results were produced for channels 4 and 5 than for channels 6 and 7.

  10. A New Method for Non-destructive Measurement of Biomass, Growth Rates, Vertical Biomass Distribution and Dry Matter Content Based on Digital Image Analysis

    PubMed Central

    Tackenberg, Oliver

    2007-01-01

    Background and Aims Biomass is an important trait in functional ecology and growth analysis. The typical methods for measuring biomass are destructive. Thus, they do not allow the development of individual plants to be followed and they require many individuals to be cultivated for repeated measurements. Non-destructive methods do not have these limitations. Here, a non-destructive method based on digital image analysis is presented, addressing not only above-ground fresh biomass (FBM) and oven-dried biomass (DBM), but also vertical biomass distribution as well as dry matter content (DMC) and growth rates. Methods Scaled digital images of the plants' silhouettes were taken for 582 individuals of 27 grass species (Poaceae). Above-ground biomass and DMC were measured using destructive methods. Using the image analysis software Zeiss KS 300, the projected area and the proportion of greenish pixels were calculated, and generalized linear models (GLMs) were developed with destructively measured parameters as dependent variables and parameters derived from image analysis as independent variables. A bootstrap analysis was performed to assess the number of individuals required for re-calibration of the models. Key Results The results of the developed models showed no systematic errors compared with traditionally measured values and explained most of their variance (R2 ≥ 0·85 for all models). The presented models can be directly applied to herbaceous grasses without further calibration. Applying the models to other growth forms might require a re-calibration, which can be based on only 10–20 individuals for FBM or DMC and on 40–50 individuals for DBM. Conclusions The methods presented are time and cost effective compared with traditional methods, especially if development or growth rates are to be measured repeatedly. Hence, they offer an alternative way of determining biomass, especially as they are non-destructive and address not only FBM and DBM, but also vertical biomass distribution and DMC. PMID:17353204
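
    The image-derived predictors described (projected silhouette area and the proportion of greenish pixels) can feed a regression model along the following lines; the green-pixel rule and the plain linear model are illustrative stand-ins for the calibrated GLMs developed in the paper.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def silhouette_features(rgb, mm_per_px):
            """Projected area (mm^2) and proportion of greenish pixels in a scaled
            image of a plant silhouette on a contrasting background."""
            r = rgb[..., 0].astype(float)
            g = rgb[..., 1].astype(float)
            b = rgb[..., 2].astype(float)
            plant = (g > 60) & (g > r) & (g > b)            # crude silhouette mask
            greenish = plant & (g > 1.2 * r) & (g > 1.2 * b)
            area_mm2 = plant.sum() * mm_per_px ** 2
            green_frac = greenish.sum() / max(plant.sum(), 1)
            return area_mm2, green_frac

        def fit_biomass_model(images, scales_mm_per_px, fresh_biomass):
            """Regress destructively measured fresh biomass on the two image features."""
            X = np.array([silhouette_features(img, s)
                          for img, s in zip(images, scales_mm_per_px)])
            return LinearRegression().fit(X, fresh_biomass)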

  11. An error analysis perspective for patient alignment systems.

    PubMed

    Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann

    2013-09-01

    This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
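
    When the individual error sources along the transformation chain are treated as independent and zero-mean, the overall error is commonly summarized as the root-sum-of-squares of the components, as in this simplified sketch; the component values are placeholders, not figures from the paper.

        import numpy as np

        def combined_error(component_errors_mm):
            """Root-sum-of-squares combination of independent error contributions
            (e.g., calibration, image-to-image registration, tracking)."""
            e = np.asarray(component_errors_mm, dtype=float)
            return float(np.sqrt(np.sum(e ** 2)))

        # placeholder example: calibration, US-to-CT registration, tracking jitter
        print(combined_error([0.8, 1.2, 0.5]))   # -> ~1.53 mm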

  12. Human eye-inspired soft optoelectronic device using high-density MoS2-graphene curved image sensor array.

    PubMed

    Choi, Changsoon; Choi, Moon Kee; Liu, Siyi; Kim, Min Sung; Park, Ok Kyu; Im, Changkyun; Kim, Jaemin; Qin, Xiaoliang; Lee, Gil Ju; Cho, Kyoung Won; Kim, Myungbin; Joh, Eehyung; Lee, Jongha; Son, Donghee; Kwon, Seung-Hae; Jeon, Noo Li; Song, Young Min; Lu, Nanshu; Kim, Dae-Hyeong

    2017-11-21

    Soft bioelectronic devices provide new opportunities for next-generation implantable devices owing to their soft mechanical nature that leads to minimal tissue damages and immune responses. However, a soft form of the implantable optoelectronic device for optical sensing and retinal stimulation has not been developed yet because of the bulkiness and rigidity of conventional imaging modules and their composing materials. Here, we describe a high-density and hemispherically curved image sensor array that leverages the atomically thin MoS 2 -graphene heterostructure and strain-releasing device designs. The hemispherically curved image sensor array exhibits infrared blindness and successfully acquires pixelated optical signals. We corroborate the validity of the proposed soft materials and ultrathin device designs through theoretical modeling and finite element analysis. Then, we propose the ultrathin hemispherically curved image sensor array as a promising imaging element in the soft retinal implant. The CurvIS array is applied as a human eye-inspired soft implantable optoelectronic device that can detect optical signals and apply programmed electrical stimulation to optic nerves with minimum mechanical side effects to the retina.

  13. Ship Detection in Optical Satellite Image Based on RX Method and PCAnet

    NASA Astrophysics Data System (ADS)

    Shao, Xiu; Li, Huali; Lin, Hui; Kang, Xudong; Lu, Ting

    2017-12-01

    In this paper, we present a novel method for ship detection in optical satellite images based on the Reed-Xiaoli (RX) method and the principal component analysis network (PCAnet). The proposed method consists of the following three steps. First, the spatially adjacent pixels in the optical image are arranged into a vector, transforming the optical image into a 3D cube image. Through this process, the contextual information of the spatially adjacent pixels can be integrated to magnify the discrimination between ships and background. Second, the RX anomaly detection method is adopted to preliminarily extract ship candidates from the produced 3D cube image. Finally, real ships are confirmed among the ship candidates by applying the PCAnet and a support vector machine (SVM). Specifically, the PCAnet is a simple deep learning network that is exploited to perform feature extraction, while the SVM is applied for feature pooling and decision making. Experimental results demonstrate that our approach is effective in discriminating between ships and false alarms, and achieves good ship detection performance.
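
    The RX step amounts to a per-pixel Mahalanobis distance from the global background statistics of the constructed cube; a minimal NumPy sketch is given below, with an illustrative quantile threshold for selecting ship candidates.

        import numpy as np

        def rx_detector(cube):
            """cube: image arranged as (ny, nx, n_features), where each pixel's
            feature vector stacks the spatially adjacent pixels as described.
            Returns the RX score (Mahalanobis distance) per pixel."""
            ny, nx, nf = cube.shape
            X = cube.reshape(-1, nf).astype(float)
            mu = X.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
            d = X - mu
            # d_i^T * C^-1 * d_i for every pixel i
            scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)
            return scores.reshape(ny, nx)

        def ship_candidates(cube, quantile=0.999):
            """Flag the highest-scoring pixels as candidate ship locations."""
            scores = rx_detector(cube)
            return scores > np.quantile(scores, quantile)   # illustrative threshold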

  14. AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES.

    PubMed

    Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J

    2015-04-01

    A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method on three dimensional (3D) fluorescence microscopic images for quantitative analysis of cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells in 3D fluorescence microscopic images. Motivated by the characteristics of fluorescence imaging, we regulated the image gradient field by gradient vector flow (GVF) with interpolated and smoothed data volumes, and grouped voxels based on gradient modes identified by tracking the GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode, where voxel intensities were enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells, obtaining (1) low false-detection and miss rates for individual cells; and (2) few over- and under-segmentation incidences for clustered cells. Additionally, the concordance of cell morphometry between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.

  15. Imaging bio-distribution of a topically applied dermatological cream on minipig skin using fluorescence lifetime imaging microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Alex, Aneesh; Chaney, Eric J.; Criley, Jennifer M.; Spillman, Darold R.; Hutchison, Phaedra B.; Li, Joanne; Marjanovic, Marina; Frey, Steve; Cook, Steven; Boppart, Stephen A.; Arp, Zane A.

    2017-02-01

    Currently there is a lack of in vivo techniques to evaluate the spatial bio-distribution of dermal drugs over time without the need to take multiple serial biopsies. To address this gap, we investigated the use of multi-photon optical imaging methods to non-invasively track drug distribution on miniature pig (Species: Sus scrofa, Strain: Göttingen) skin in vivo. Minipig skin is the standard comparative research model to human skin, and is anatomically and functionally similar. We employed fluorescence lifetime imaging microscopy (FLIM) to visualize the spatial distribution and residency time of a topically applied experimental dermatological cream. This was made possible by the endogenous fluorescent optical properties of the experimental drug (fluorescence lifetime > 3000 ps). Two different drug formulations were applied on 2 minipigs for 7 consecutive days, with the control creams applied on the contralateral side, followed by 7 days of post-application monitoring using a multi-modal optical imaging system (MPTflex-CARS, JenLab, Germany). FLIM images were obtained from the treated regions 24 hr post-application from day 1 to day 14 that allowed visualization of cellular and sub-cellular features associated with different dermal layers non-invasively to a depth of 200 µm. Five punch biopsies per animal were obtained from the corresponding treated regions between days 8 and 14 for bioanalytical analysis and comparison with results obtained using FLIM. In conclusion, utilization of non-invasive optical biopsy methods for dermal drug evaluation can provide true longitudinal monitoring of drug spatial distribution, remove sampling limitations, and be more time-efficient compared to traditional methods.

  16. Movement Correction Method for Human Brain PET Images: Application to Quantitative Analysis of Dynamic [18F]-FDDNP Scans

    PubMed Central

    Wardak, Mirwais; Wong, Koon-Pong; Shao, Weber; Dahlbom, Magnus; Kepe, Vladimir; Satyamurthy, Nagichettiar; Small, Gary W.; Barrio, Jorge R.; Huang, Sung-Cheng

    2010-01-01

    Head movement during a PET scan (especially, dynamic scan) can affect both the qualitative and quantitative aspects of an image, making it difficult to accurately interpret the results. The primary objective of this study was to develop a retrospective image-based movement correction (MC) method and evaluate its implementation on dynamic [18F]-FDDNP PET images of cognitively intact controls and patients with Alzheimer’s disease (AD). Methods Dynamic [18F]-FDDNP PET images, used for in vivo imaging of beta-amyloid plaques and neurofibrillary tangles, were obtained from 12 AD and 9 age-matched controls. For each study, a transmission scan was first acquired for attenuation correction. An accurate retrospective MC method that corrected for transmission-emission misalignment as well as emission-emission misalignment was applied to all studies. No restriction was assumed for zero movement between the transmission scan and first emission scan. Logan analysis with cerebellum as the reference region was used to estimate various regional distribution volume ratio (DVR) values in the brain before and after MC. Discriminant analysis was used to build a predictive model for group membership, using data with and without MC. Results MC improved the image quality and quantitative values in [18F]-FDDNP PET images. In this subject population, medial temporal (MTL) did not show a significant difference between controls and AD before MC. However, after MC, significant differences in DVR values were seen in frontal, parietal, posterior cingulate (PCG), MTL, lateral temporal (LTL), and global between the two groups (P < 0.05). In controls and AD, the variability of regional DVR values (as measured by the coefficient of variation) decreased on average by >18% after MC. Mean DVR separation between controls and ADs was higher in frontal, MTL, LTL and global after MC. Group classification by discriminant analysis based on [18F]-FDDNP DVR values was markedly improved after MC. Conclusion The streamlined and easy to use MC method presented in this work significantly improves the image quality and the measured tracer kinetics of [18F]-FDDNP PET images. The proposed MC method has the potential to be applied to PET studies on patients having other disorders (e.g., Down syndrome and Parkinson’s disease) and to brain PET scans with other molecular imaging probes. PMID:20080894

  17. Semi-simultaneous application of neutron and X-ray radiography in revealing the defects in an Al casting.

    PubMed

    Balaskó, M; Korösi, F; Szalay, Zs

    2004-10-01

    Neutron and X-ray radiography (NR and XR, respectively) were applied semi-simultaneously to an Al casting. The experiments were performed at the 10 MW VVR-SM research reactor in Budapest (Hungary). The aim was to reveal, identify and parameterize the hidden defects in the Al casting. The joint application of NR and XR revealed hidden defects located in the Al casting. Image analysis of the NR and XR images unveiled the cone-like geometry of the defects. The spectral density analysis of the images showed a distinctly different character for the hidden defect region of the Al casting in comparison with that of the defect-free region.

  18. Study of ionospheric anomalies due to impact of typhoon using Principal Component Analysis and image processing

    NASA Astrophysics Data System (ADS)

    LIN, JYH-WOEI

    2012-08-01

    Principal Component Analysis (PCA) and image processing are used to determine Total Electron Content (TEC) anomalies in the F-layer of the ionosphere relating to Typhoon Nakri for 29 May, 2008 (UTC). PCA and image processing are applied to the global ionospheric map (GIM) with transforms conducted for the time period 12:00-14:00 UT on 29 May, 2008 when the wind was most intense. Results show that at a height of approximately 150-200 km the TEC anomaly is highly localized; however, it becomes more intense and widespread with height. Potential causes of these results are discussed with emphasis given to acoustic gravity waves caused by wind force.
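
    A minimal sketch of how PCA can be applied to a time series of global ionospheric TEC maps is given below, assuming the maps are already available as a NumPy array. This illustrates the general technique only, not the author's processing chain; the array name tec_maps is hypothetical.

        import numpy as np

        def tec_principal_components(tec_maps):
            """PCA of a stack of TEC maps with shape (n_times, n_lat, n_lon).

            Each map is flattened into one observation; the leading spatial
            patterns (EOFs) and their temporal scores indicate where and when
            anomalous TEC structure is concentrated.
            """
            n_times = tec_maps.shape[0]
            data = tec_maps.reshape(n_times, -1).astype(float)
            data -= data.mean(axis=0)                    # remove the mean map
            u, s, vt = np.linalg.svd(data, full_matrices=False)
            eofs = vt.reshape(-1, *tec_maps.shape[1:])   # spatial patterns
            scores = u * s                               # temporal amplitudes
            explained = s ** 2 / np.sum(s ** 2)          # variance fraction per component
            return eofs, scores, explained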

  19. 2D hybrid analysis: Approach for building three-dimensional atomic model by electron microscopy image matching.

    PubMed

    Matsumoto, Atsushi; Miyazaki, Naoyuki; Takagi, Junichi; Iwasaki, Kenji

    2017-03-23

    In this study, we develop an approach termed "2D hybrid analysis" for building atomic models by image matching from electron microscopy (EM) images of biological molecules. The key advantage is that it is applicable to flexible molecules, which are difficult to analyze by the 3D EM approach. In the proposed approach, first, a large number of atomic models with different conformations are built by computer simulation. Then, simulated EM images are generated from each atomic model. Finally, they are compared with the experimental EM image. Two kinds of models are used as simulated EM images: the negative stain model and the simple projection model. Although the former is more realistic, the latter is adopted to perform faster computations. The use of the negative stain model enables decomposition of the averaged EM images into multiple projection images, each of which originates from a different conformation or orientation. We apply this approach to the EM images of integrin to obtain the distribution of the conformations, from which the pathway of the conformational change of the protein is deduced.

  20. Machine learning for medical images analysis.

    PubMed

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning to the analysis of medical images. Specifically: (i) we show how a special type of learning model can be thought of as an automatically optimized, hierarchically-structured, rule-based algorithm, and (ii) we discuss how the issue of collecting large labelled datasets applies to conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  1. SkICAT: A cataloging and analysis tool for wide field imaging surveys

    NASA Technical Reports Server (NTRS)

    Weir, N.; Fayyad, U. M.; Djorgovski, S. G.; Roden, J.

    1992-01-01

    We describe an integrated system, SkICAT (Sky Image Cataloging and Analysis Tool), for the automated reduction and analysis of the Palomar Observatory-ST ScI Digitized Sky Survey. The Survey will consist of the complete digitization of the photographic Second Palomar Observatory Sky Survey (POSS-II) in three bands, comprising nearly three Terabytes of pixel data. SkICAT applies a combination of existing packages, including FOCAS for basic image detection and measurement and SAS for database management, as well as custom software, to the task of managing this wealth of data. One of the most novel aspects of the system is its method of object classification. Using state-of-the-art machine learning classification techniques (GID3* and O-BTree), we have developed a powerful method for automatically distinguishing point sources from non-point sources and artifacts, achieving comparably accurate discrimination a full magnitude fainter than in previous Schmidt plate surveys. The learning algorithms produce decision trees for classification by examining instances of objects classified by eye on both plate and higher quality CCD data. The same techniques will be applied to perform higher-level object classification (e.g., of galaxy morphology) in the near future. Another key feature of the system is the facility to integrate the catalogs from multiple plates (and portions thereof) to construct a single catalog of uniform calibration and quality down to the faintest limits of the survey. SkICAT also provides a variety of data analysis and exploration tools for the scientific utilization of the resulting catalogs. We include initial results of applying this system to measure the counts and distribution of galaxies in two bands down to Bj ≈ 21 mag over an approximately 70 square degree multi-plate field from POSS-II. SkICAT is constructed in a modular and general fashion and should be readily adaptable to other large-scale imaging surveys.
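
    The GID3* and O-BTree learners used by SkICAT are not generally available; as a stand-in illustration of the same idea (learning a decision tree for point-source versus non-point-source discrimination from catalog attributes), the following scikit-learn sketch uses synthetic object features and labels. All names and data here are hypothetical.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split

        # Hypothetical per-object attributes (e.g. magnitude, area, ellipticity,
        # central surface brightness) with eyeball labels: 0 = point source, 1 = galaxy.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 4))
        y = (X[:, 1] + 0.5 * X[:, 3] > 0).astype(int)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        tree = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
        print("hold-out accuracy:", tree.score(X_test, y_test))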

  2. Abstracting GIS Layers from Hyperspectral Imagery

    DTIC Science & Technology

    2009-03-01

    [No abstract available; the indexed excerpt consists of table-of-contents and figure-list fragments referring to the Normalized Difference Vegetation Index (NDVI) applied to a hyperspectral image, separating trees from grass, applying NDVI to a self-organizing map (SOM), and a near-infrared (NIR) scatter tree-identification algorithm.]
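
    Since the recoverable topic centres on NDVI, a one-line definition may be useful. The following Python sketch computes NDVI from co-registered near-infrared and red bands; the band array names are hypothetical.

        import numpy as np

        def ndvi(nir, red, eps=1e-6):
            """Normalized Difference Vegetation Index for co-registered bands."""
            nir = nir.astype(float)
            red = red.astype(float)
            # Values near +1 indicate dense vegetation; bare soil and water are lower.
            return (nir - red) / (nir + red + eps)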

  3. Computer vision applications for coronagraphic optical alignment and image processing.

    PubMed

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.

  4. Adaptive histogram equalization in digital radiography of destructive skeletal lesions.

    PubMed

    Braunstein, E M; Capek, P; Buckwalter, K; Bland, P; Meyer, C R

    1988-03-01

    Adaptive histogram equalization, an image-processing technique that distributes pixel values of an image uniformly throughout the gray scale, was applied to 28 plain radiographs of bone lesions, after they had been digitized. The non-equalized and equalized digital images were compared by two skeletal radiologists with respect to lesion margins, internal matrix, soft-tissue mass, cortical breakthrough, and periosteal reaction. Receiver operating characteristic (ROC) curves were constructed on the basis of the responses. Equalized images were superior to nonequalized images in determination of cortical breakthrough and presence or absence of periosteal reaction. ROC analysis showed no significant difference in determination of margins, matrix, or soft-tissue masses.
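
    The 1988 study used its own adaptive histogram equalization implementation on digitized film; as a hedged modern counterpart, scikit-image provides a contrast-limited variant (CLAHE). The file name and parameter values below are illustrative only.

        from skimage import io, exposure, img_as_float

        # Hypothetical digitized radiograph file.
        radiograph = img_as_float(io.imread("digitized_radiograph.png", as_gray=True))
        # Contrast-limited adaptive histogram equalization: the histogram is
        # equalized within local tiles so both dense and lucent regions gain contrast.
        equalized = exposure.equalize_adapthist(radiograph, kernel_size=64, clip_limit=0.02)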

  5. Quantitative Analysis of Rat Dorsal Root Ganglion Neurons Cultured on Microelectrode Arrays Based on Fluorescence Microscopy Image Processing.

    PubMed

    Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo

    2015-12-01

    Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activity in in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs, by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of segmentation by thresholding, watershed transform, and object classification. The positioning of microelectrodes is obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated culture of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEA consider only simple qualitative aspects. The proposed framework also aims to standardize the image processing and to compute useful quantitative measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant to assist the integrated signal-image study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the image processing parameter estimation.
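
    A rough Python sketch of the two image-processing steps named above, using scikit-image: threshold-plus-watershed segmentation of fluorescently labelled somata and a circular Hough transform for microelectrode localization. This is not the authors' implementation; the radius and distance parameters are placeholders.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import filters, segmentation, feature, transform

        def segment_somata(fluorescence):
            """Otsu threshold plus distance-transform watershed of a fluorescence image."""
            binary = fluorescence > filters.threshold_otsu(fluorescence)
            distance = ndi.distance_transform_edt(binary)
            peaks = feature.peak_local_max(distance, min_distance=10, labels=binary)
            markers = np.zeros_like(distance, dtype=int)
            markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
            return segmentation.watershed(-distance, markers, mask=binary)

        def locate_electrodes(brightfield, radius=15, n_electrodes=60):
            """Circular Hough transform to find microelectrode centres (row, col)."""
            edges = feature.canny(brightfield)
            hough = transform.hough_circle(edges, [radius])
            _, cx, cy, _ = transform.hough_circle_peaks(hough, [radius],
                                                        total_num_peaks=n_electrodes)
            return np.column_stack([cy, cx])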

  6. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    NASA Astrophysics Data System (ADS)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2009-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. The endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development of fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images. The digitally enhanced images of the corneal endothelium were transformed using the fast Fourier transform (FFT). Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier transformed images. The data obtained from each Fourier transformed image were used to calculate the mean cell density of its corresponding corneal endothelium. The calculation was based on well-known diffraction theory. Results in the form of estimated cell densities of the corneal endothelium were obtained using the fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to the cell density obtained from classical, semi-automated analysis, and a strong correlation was found.
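
    As a hedged sketch of the diffraction-theory idea described above, the following Python function estimates cell density from the dominant ring of the image power spectrum. It assumes a roughly hexagonal cell mosaic, an approximately square image, and a known pixel size in millimetres; it illustrates the principle only and is not the authors' Matlab software.

        import numpy as np

        def cell_density_from_fft(endothelium, pixel_size_mm):
            """Approximate endothelial cell density (cells/mm^2) from the power spectrum."""
            img = endothelium - endothelium.mean()
            power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
            n = min(power.shape)
            cy, cx = np.array(power.shape) // 2
            yy, xx = np.indices(power.shape)
            r = np.hypot(yy - cy, xx - cx).astype(int)
            counts = np.bincount(r.ravel())
            radial = np.bincount(r.ravel(), power.ravel()) / np.maximum(counts, 1)
            ring = 2 + np.argmax(radial[2:n // 2])          # skip the DC peak
            # Ring radius (cycles per field of view) -> mean cell spacing d.
            spacing_mm = power.shape[1] * pixel_size_mm / ring
            # Hexagonal-packing density estimate: 2 / (sqrt(3) * d^2).
            return 2.0 / (np.sqrt(3.0) * spacing_mm ** 2)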

  7. Medical imaging and computers in the diagnosis of breast cancer

    NASA Astrophysics Data System (ADS)

    Giger, Maryellen L.

    2014-09-01

    Computer-aided diagnosis (CAD) and quantitative image analysis (QIA) methods (i.e., computerized methods of analyzing digital breast images: mammograms, ultrasound, and magnetic resonance images) can yield novel image-based tumor and parenchyma characteristics (i.e., signatures that may ultimately contribute to the design of patient-specific breast cancer management plans). The role of QIA/CAD has been expanding beyond screening programs towards applications in risk assessment, diagnosis, prognosis, and response to therapy, as well as in data mining to discover relationships of image-based lesion characteristics with genomics and other phenotypes as they apply to disease states. These various computer-based applications are demonstrated through research examples from the Giger Lab.

  8. Coloring the FITS Universe

    NASA Astrophysics Data System (ADS)

    Levay, Z. G.

    2004-12-01

    A new, freely-available accessory for Adobe's widely-used Photoshop image editing software makes it much more convenient to produce presentable images directly from FITS data. It merges a fully-functional FITS reader with an intuitive user interface and includes fully interactive flexibility in scaling data. Techniques for producing attractive images from astronomy data using the FITS plugin will be presented, including the assembly of full-color images. These techniques have been successfully applied to producing colorful images for public outreach with data from the Hubble Space Telescope and other major observatories. Now it is much less cumbersome for students or anyone not experienced with specialized astronomical analysis software, but reasonably familiar with digital photography, to produce useful and attractive images.
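
    For readers without Photoshop, a minimal Python alternative is to load three FITS exposures with astropy, apply a simple percentile-plus-asinh stretch, and stack them into an RGB composite. The file names and stretch parameters below are hypothetical.

        import numpy as np
        from astropy.io import fits
        import matplotlib.pyplot as plt

        def stretch(data, lo=0.25, hi=99.75):
            """Percentile clip followed by an asinh stretch, scaled to [0, 1]."""
            vmin, vmax = np.nanpercentile(data, [lo, hi])
            scaled = np.clip((data - vmin) / (vmax - vmin), 0, 1)
            return np.arcsinh(10 * scaled) / np.arcsinh(10)

        # Hypothetical FITS exposures through red, green and blue filters.
        r = stretch(fits.getdata("f814w.fits"))
        g = stretch(fits.getdata("f555w.fits"))
        b = stretch(fits.getdata("f435w.fits"))
        plt.imsave("color_composite.png", np.dstack([r, g, b]), origin="lower")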

  9. Noise correlation in PET, CT, SPECT and PET/CT data evaluated using autocorrelation function: a phantom study on data, reconstructed using FBP and OSEM.

    PubMed

    Razifar, Pasha; Sandström, Mattias; Schnieder, Harald; Långström, Bengt; Maripuu, Enn; Bengtsson, Ewert; Bergström, Mats

    2005-08-25

    Positron Emission Tomography (PET), Computed Tomography (CT), PET/CT and Single Photon Emission Computed Tomography (SPECT) are non-invasive imaging tools used for creating two-dimensional (2D) cross-section images of three-dimensional (3D) objects. PET and SPECT have the potential of providing functional or biochemical information by measuring distribution and kinetics of radiolabelled molecules, whereas CT visualizes X-ray density in tissues in the body. PET/CT provides fused images representing both functional and anatomical information with better precision in localization than PET alone. Images generated by these types of techniques are generally noisy, thereby impairing the imaging potential and affecting the precision in quantitative values derived from the images. It is crucial to explore and understand the properties of noise in these imaging techniques. Here we used the autocorrelation function (ACF) specifically to describe noise correlation and its non-isotropic behaviour in experimentally generated images of PET, CT, PET/CT and SPECT. Experiments were performed using phantoms with different shapes. In PET and PET/CT studies, data were acquired in 2D acquisition mode and reconstructed by both analytical filtered back projection (FBP) and iterative, ordered subsets expectation maximisation (OSEM) methods. In the PET/CT studies, different magnitudes of X-ray dose in the transmission were employed by using different mA settings for the X-ray tube. In the CT studies, data were acquired using different slice thicknesses with and without an applied dose reduction function, and the images were reconstructed by FBP. SPECT studies were performed in 2D, reconstructed using FBP and OSEM, using post-3D filtering. ACF images were generated from the primary images, and profiles across the ACF images were used to describe the noise correlation in different directions. The variance of noise across the images was visualised as images and with profiles across these images. The most important finding was that the pattern of noise correlation is rotation symmetric or isotropic, independent of object shape, in PET and PET/CT images reconstructed using the iterative method. This is, however, not the case in FBP images when the shape of the phantom is not circular. Also, CT images reconstructed using FBP show the same non-isotropic pattern independent of slice thickness and utilization of the care dose function. SPECT images show an isotropic correlation of the noise independent of object shape or applied reconstruction algorithm. Noise in PET/CT images was identical independent of the applied X-ray dose in the transmission part (CT), indicating that the noise from transmission with the applied doses does not propagate into the PET images and showing that the noise from the emission part is dominant. The results indicate that in human studies it is possible to utilize a low dose in the transmission part while maintaining the noise behaviour and the quality of the images. The combined effect of noise correlation for asymmetric objects and a varying noise variance across the image field significantly complicates the interpretation of the images when statistical methods are used, such as with statistical estimates of precision in average values, use of statistical parametric mapping methods and principal component analysis. Hence it is recommended that iterative reconstruction methods are used for such applications. However, it is possible to calculate the noise analytically in images reconstructed by FBP, while it is not possible to do the same calculation in images reconstructed by iterative methods. Therefore, for statistical methods of analysis that depend on knowing the noise, FBP would be preferred.
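
    A minimal sketch of computing the autocorrelation function of a reconstructed image region via the Wiener-Khinchin theorem, from which directional noise-correlation profiles such as those described above can be read off; this is a generic illustration, not the authors' exact procedure.

        import numpy as np

        def autocorrelation_image(img):
            """Normalized 2-D autocorrelation function of an image region.

            ACF = inverse FFT of the power spectrum (Wiener-Khinchin); profiles
            through the centre of the returned array describe how noise is
            correlated along different directions.
            """
            centred = img - img.mean()
            power = np.abs(np.fft.fft2(centred)) ** 2
            acf = np.real(np.fft.ifft2(power))
            acf = np.fft.fftshift(acf)          # put zero lag in the middle
            return acf / acf.max()              # unity at zero lag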

  10. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format.

    PubMed

    Ahmed, Zeeshan; Dandekar, Thomas

    2015-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g. PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medical imaging like electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in the scientific and medical communities, as they play a vital role in providing major original data and experimental and computational results in concise form. One major challenge in implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present a product-line-architecture-based bioinformatics tool, 'Mining Scientific Literature (MSL)', which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures, and extraction of embedded text from all kinds of biological and biomedical figures using applied Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system's output in different formats including text, PDF, XML and image files. Hence, MSL is an easy-to-install and easy-to-use analysis tool to interpret published scientific literature in PDF format.

  11. Deformable image registration as a tool to improve survival prediction after neoadjuvant chemotherapy for breast cancer: results from the ACRIN 6657/I-SPY-1 trial

    NASA Astrophysics Data System (ADS)

    Jahani, Nariman; Cohen, Eric; Hsieh, Meng-Kang; Weinstein, Susan P.; Pantalone, Lauren; Davatzikos, Christos; Kontos, Despina

    2018-02-01

    We examined the ability of DCE-MRI longitudinal features to give early prediction of recurrence-free survival (RFS) in women undergoing neoadjuvant chemotherapy for breast cancer, in a retrospective analysis of 106 women from the I-SPY 1 cohort. These features were based on the voxel-wise changes seen in registered images taken before treatment and after the first round of chemotherapy. We computed the transformation field using a robust deformable image registration technique to match breast images from these two visits. Using the deformation field, parametric response maps (PRM), a voxel-based feature analysis of longitudinal changes in images between visits, were computed for maps of four kinetic features (signal enhancement ratio, peak enhancement, and wash-in/wash-out slopes). A two-level discrete wavelet transform was applied to these PRMs to extract heterogeneity information about tumor change between visits. To estimate survival, a Cox proportional hazard model was applied with the C statistic as the measure of success in predicting RFS. The best PRM feature (as determined by the C statistic in univariable analysis) was selected for each of the four kinetic features. The baseline model, incorporating functional tumor volume, age, race, and hormone response status, had a C statistic of 0.70 in predicting RFS. The model augmented with the four PRM features had a C statistic of 0.76. Thus, our results suggest that adding information on the texture of voxel-level changes in tumor kinetic response between registered images of the first and second visits could improve early RFS prediction in breast cancer after neoadjuvant chemotherapy.
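
    As a sketch of the wavelet step described above, the following Python fragment applies a two-level 2-D discrete wavelet transform (PyWavelets) to a parametric response map and summarizes each detail sub-band by its energy. The wavelet choice and feature names are assumptions, not the authors' exact settings.

        import numpy as np
        import pywt

        def prm_wavelet_features(prm_map):
            """Two-level 2-D DWT of a parametric response map; sub-band energies
            serve as simple heterogeneity descriptors of voxel-wise kinetic change."""
            coeffs = pywt.wavedec2(prm_map, wavelet="haar", level=2)
            features = {}
            # coeffs[0] is the coarse approximation; the remaining tuples hold
            # (horizontal, vertical, diagonal) detail bands per decomposition level.
            for lvl, details in enumerate(coeffs[1:], start=1):
                for name, band in zip(("horizontal", "vertical", "diagonal"), details):
                    features[f"level{lvl}_{name}_energy"] = float(np.sum(band ** 2))
            return features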

  12. Automatic classification of retinal three-dimensional optical coherence tomography images using principal component analysis network with composite kernels

    NASA Astrophysics Data System (ADS)

    Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein

    2017-11-01

    We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with macular edema and age-related macular degeneration), which demonstrated its effectiveness.

  13. Web-accessible cervigram automatic segmentation tool

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2010-03-01

    Uterine cervix image analysis is of great importance to the study of uterine cervix cancer, which is among the leading cancers affecting women worldwide. In this paper, we describe our proof-of-concept, Web-accessible system for automated segmentation of significant tissue regions in uterine cervix images, which also demonstrates our research efforts toward promoting collaboration between engineers and physicians for medical image analysis projects. Our design and implementation unify the merits of two commonly used languages, MATLAB and Java. The system circumvents the heavy workload of recoding the sophisticated segmentation algorithms originally developed in MATLAB into Java, while allowing remote users who are not experienced programmers or algorithm developers to apply those processing methods to their own cervicographic images and evaluate the algorithms. Several other practical issues of the system are also discussed, such as the compression of images and the format of the segmentation results.

  14. Quantitative analysis of ultrasonic images of fibrotic liver using co-occurrence matrix based on multi-Rayleigh model

    NASA Astrophysics Data System (ADS)

    Isono, Hiroshi; Hirata, Shinnosuke; Hachiya, Hiroyuki

    2015-07-01

    In medical ultrasonic images of liver disease, a texture with a speckle pattern indicates a microscopic structure such as nodules surrounded by fibrous tissues in hepatitis or cirrhosis. We have been applying texture analysis based on a co-occurrence matrix to ultrasonic images of fibrotic liver for quantitative tissue characterization. A co-occurrence matrix consists of the probability distribution of brightness of pixel pairs specified with spatial parameters and gives new information on liver disease. Ultrasonic images of different types of fibrotic liver were simulated and the texture-feature contrast was calculated to quantify the co-occurrence matrices generated from the images. The results show that the contrast converges with a value that can be theoretically estimated using a multi-Rayleigh model of echo signal amplitude distribution. We also found that the contrast value increases as liver fibrosis progresses and fluctuates depending on the size of fibrotic structure.
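
    A minimal illustration of computing the texture-feature contrast from a grey-level co-occurrence matrix with scikit-image; the distance and angle settings are placeholders rather than the parameters used in the study.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops  # greycomatrix/greycoprops in older releases

        def texture_contrast(roi_8bit, distance=2):
            """Contrast of a grey-level co-occurrence matrix, averaged over four
            directions, for an 8-bit (uint8) ultrasound region of interest."""
            glcm = graycomatrix(roi_8bit, distances=[distance],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=256, symmetric=True, normed=True)
            return float(graycoprops(glcm, "contrast").mean())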

  15. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and of optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constructive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As examples, we experimentally verify a method of directionally hiding distant objects and creating illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.

  16. Quantifying the Onset and Progression of Plant Senescence by Color Image Analysis for High Throughput Applications

    PubMed Central

    Cai, Jinhai; Okamoto, Mamoru; Atieno, Judith; Sutton, Tim; Li, Yongle; Miklavcic, Stanley J.

    2016-01-01

    Leaf senescence, an indicator of plant age and ill health, is an important phenotypic trait for the assessment of a plant’s response to stress. Manual inspection of senescence, however, is time consuming, inaccurate and subjective. In this paper we propose an objective evaluation of plant senescence by color image analysis for use in a high throughput plant phenotyping pipeline. As high throughput phenotyping platforms are designed to capture whole-of-plant features, camera lenses and camera settings are inappropriate for the capture of fine detail. Specifically, plant colors in images may not represent true plant colors, leading to errors in senescence estimation. Our algorithm features a color distortion correction and image restoration step prior to a senescence analysis. We apply our algorithm to two time series of images of wheat and chickpea plants to quantify the onset and progression of senescence. We compare our results with senescence scores resulting from manual inspection. We demonstrate that our procedure is able to process images in an automated way for an accurate estimation of plant senescence even from color distorted and blurred images obtained under high throughput conditions. PMID:27348807
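
    As a rough illustration of colour-based senescence scoring (not the authors' algorithm, which also includes colour-distortion correction and image restoration), the following Python sketch estimates the fraction of plant pixels whose hue has shifted from green towards yellow. The hue thresholds and mask name are hypothetical and would need calibration.

        import numpy as np
        from skimage import color

        def senescent_fraction(rgb_image, plant_mask):
            """Fraction of plant pixels with yellow/brown hue on a colour-corrected image.

            plant_mask is a boolean array marking plant (non-background) pixels.
            """
            hsv = color.rgb2hsv(rgb_image)
            hue = hsv[..., 0][plant_mask]              # hue in [0, 1]; ~0.33 is green
            green = (hue > 0.22) & (hue < 0.45)
            yellow = (hue >= 0.05) & (hue <= 0.22)
            total = green.sum() + yellow.sum()
            return float(yellow.sum()) / total if total else 0.0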

  17. Optimization of oncological 18F-FDG PET/CT imaging based on a multiparameter analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.

    2016-02-15

    Purpose: This paper describes a method to achieve consistent clinical image quality in 18F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in their analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.

  18. Dose reduction in abdominal computed tomography: intraindividual comparison of image quality of full-dose standard and half-dose iterative reconstructions with dual-source computed tomography.

    PubMed

    May, Matthias S; Wüst, Wolfgang; Brand, Michael; Stahl, Christian; Allmendinger, Thomas; Schmidt, Bernhard; Uder, Michael; Lell, Michael M

    2011-07-01

    We sought to evaluate the image quality of iterative reconstruction in image space (IRIS) in half-dose (HD) datasets compared with full-dose (FD) and HD filtered back projection (FBP) reconstruction in abdominal computed tomography (CT). To acquire data with FD and HD simultaneously, contrast-enhanced abdominal CT was performed with a dual-source CT system, both tubes operating at 120 kV, 100 ref.mAs, and pitch 0.8. Three different image datasets were reconstructed from the raw data: standard FD images applying FBP, which served as the reference; HD images applying FBP; and HD images applying IRIS. For the HD datasets, only data from one tube-detector system were used. Quantitative image quality analysis was performed by measuring image noise in tissue and air. Qualitative image quality was evaluated according to the European Guidelines on Quality Criteria for CT. Additional assessment of artifacts, lesion conspicuity, and edge sharpness was performed. Image noise in soft tissue was substantially decreased in HD-IRIS (-3.4 HU, -22%) and increased in HD-FBP (+6.2 HU, +39%) images when compared with the reference (mean noise, 15.9 HU). No significant differences between the FD-FBP and HD-IRIS images were found for visually sharp anatomic reproduction, overall diagnostic acceptability (P = 0.923), lesion conspicuity (P = 0.592), and edge sharpness (P = 0.589), while HD-FBP was rated inferior. Streak artifacts and beam hardening were significantly more prominent in HD-FBP, while HD-IRIS images exhibited a slightly different noise pattern. Direct intrapatient comparison of standard FD body protocols and HD-IRIS reconstruction suggests that the latest iterative reconstruction algorithms allow for approximately 50% dose reduction without deterioration of the high image quality necessary for confident diagnosis.

  19. Review on Microstructure Analysis of Metals and Alloys Using Image Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Rekha, Suganthini; Bupesh Raja, V. K.

    2017-05-01

    Metals and alloys find wide application in the engineering and domestic sectors. The mechanical properties of metals and alloys are influenced by their microstructure, so microstructural investigation is critical. Traditionally, the microstructure is studied using an optical microscope after suitable metallurgical preparation. Over the past few decades, computers have been applied to the capture and analysis of optical micrographs. The advent of computer software such as digital image processing and computer vision technologies is a boon to the analysis of microstructure. In this paper, a literature survey of the various developments in microstructural analysis is presented. The conventional optical microscope is complemented by the use of the Scanning Electron Microscope (SEM) and other high-end equipment.

  20. A CAD Approach to Integrating NDE With Finite Element

    NASA Technical Reports Server (NTRS)

    Abdul-Aziz, Ali; Downey, James; Ghosn, Louis J.; Baaklini, George Y.

    2004-01-01

    Nondestructive evaluation (NDE) is one of several technologies applied at NASA Glenn Research Center to determine atypical deformities, cracks, and other anomalies experienced by structural components. NDE consists of applying high-quality imaging techniques (such as x-ray imaging and computed tomography (CT)) to discover hidden manufactured flaws in a structure. Efforts are in progress to integrate NDE with the finite element (FE) computational method to perform detailed structural analysis of a given component. This report presents the core outlines for an in-house technical procedure that incorporates this combined NDE-FE interrelation. An example is presented to demonstrate the applicability of this analytical procedure. FE analysis of a test specimen is performed, and the resulting von Mises stresses and the stress concentrations near the anomalies are observed, which indicates the fidelity of the procedure. Additional information elaborating on the steps needed to perform such an analysis is clearly presented in the form of mini step-by-step guidelines.

  1. Image-Based 3d Reconstruction and Analysis for Orthodontia

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2012-08-01

    Among the main tasks of orthodontia are the analysis of dental arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measurement of tooth parameters and on designing the ideal arch curve that the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets placed on the teeth and a wire of given shape clamped by these brackets to produce the forces needed to move each tooth in the required direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and the difficulty of applying a standard approach to the wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation, aimed at overcoming these disadvantages, is proposed. The proposed approach provides accurate measurement of the tooth parameters needed for adequate planning, design of the correct tooth positions, and monitoring of the treatment process. The developed technique applies photogrammetric methods for 3D model generation of the dental arch, determination of bracket positions, and analysis of tooth movement.

  2. Room acoustics analysis using circular arrays: an experimental study based on sound field plane-wave decomposition.

    PubMed

    Torres, Ana M; Lopez, Jose J; Pueo, Basilio; Cobos, Maximo

    2013-04-01

    Plane-wave decomposition (PWD) methods using microphone arrays have been shown to be a very useful tool within the applied acoustics community for their multiple applications in room acoustics analysis and synthesis. While many theoretical aspects of PWD have been previously addressed in the literature, the practical advantages of the PWD method to assess the acoustic behavior of real rooms have been barely explored so far. In this paper, the PWD method is employed to analyze the sound field inside a selected set of real rooms having a well-defined purpose. To this end, a circular microphone array is used to capture and process a number of impulse responses at different spatial positions, providing angle-dependent data for both direct and reflected wavefronts. The detection of reflected plane waves is performed by means of image processing techniques applied over the raw array response data and over the PWD data, showing the usefulness of image-processing-based methods for room acoustics analysis.

  3. A study and evaluation of image analysis techniques applied to remotely sensed data

    NASA Technical Reports Server (NTRS)

    Atkinson, R. J.; Dasarathy, B. V.; Lybanon, M.; Ramapriyan, H. K.

    1976-01-01

    An analysis of phenomena causing nonlinearities in the transformation from Landsat multispectral scanner coordinates to ground coordinates is presented. Experimental results comparing rms errors at ground control points indicated a slight improvement when a nonlinear (8-parameter) transformation was used instead of an affine (6-parameter) transformation. Using a preliminary ground truth map of a test site in Alabama covering the Mobile Bay area and six Landsat images of the same scene, several classification methods were assessed. A methodology was developed for automatic change detection using classification/cluster maps. A coding scheme was employed for generation of change depiction maps indicating specific types of changes. Inter- and intraseasonal data of the Mobile Bay test area were compared to illustrate the method. A beginning was made in the study of data compression by applying a Karhunen-Loeve transform technique to a small section of the test data set. The second part of the report provides a formal documentation of the several programs developed for the analysis and assessments presented.
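
    A small sketch of the coordinate-transformation comparison described above: fitting a 6-parameter affine and an 8-parameter mapping to ground control points by least squares and reporting the RMS residual. The 8-parameter form is assumed here to be bilinear (adding an xy term per coordinate), which is one plausible reading of the report's description; the array names are hypothetical.

        import numpy as np

        def fit_mapping(scanner_xy, ground_xy, nonlinear=True):
            """Least-squares fit of scanner-to-ground coordinates at control points.

            scanner_xy, ground_xy: (N, 2) arrays of corresponding coordinates.
            Returns the fitted coefficients and the RMS residual at the GCPs.
            """
            scanner_xy = np.asarray(scanner_xy, float)
            ground_xy = np.asarray(ground_xy, float)
            x, y = scanner_xy[:, 0], scanner_xy[:, 1]
            if nonlinear:
                # 8 parameters in total: x, y, xy and constant terms per output coordinate.
                A = np.column_stack([x, y, x * y, np.ones_like(x)])
            else:
                # 6-parameter affine mapping.
                A = np.column_stack([x, y, np.ones_like(x)])
            coeffs, *_ = np.linalg.lstsq(A, ground_xy, rcond=None)
            residuals = ground_xy - A @ coeffs
            rms = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
            return coeffs, rms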

  4. Magnetic Levitation Coupled with Portable Imaging and Analysis for Disease Diagnostics.

    PubMed

    Knowlton, Stephanie M; Yenilmez, Bekir; Amin, Reza; Tasoglu, Savas

    2017-02-19

    Currently, many clinical diagnostic procedures are complex, costly, inefficient, and inaccessible to a large part of the world's population. The need for specialized equipment and trained personnel means that many diagnostic tests must be performed at remote, centralized clinical laboratories. Magnetic levitation is a simple yet powerful technique that can be applied to levitate cells, which are suspended in a paramagnetic solution and placed in a magnetic field, at a position determined by the equilibrium between a magnetic force and a buoyancy force. Here, we present a versatile platform technology designed for point-of-care diagnostics which uses magnetic levitation coupled to microscopic imaging and automated analysis to determine the density distribution of a patient's cells as a useful diagnostic indicator. We present two platforms operating on this principle: (i) a smartphone-compatible version of the technology, where the built-in smartphone camera is used to image cells in the magnetic field and a smartphone application processes the images and measures the density distribution of the cells, and (ii) a self-contained version where a camera board is used to capture images and an embedded processing unit with an attached thin-film-transistor (TFT) screen measures and displays the results. Demonstrated applications include: (i) measuring the altered distribution of a cell population with a disease phenotype compared to a healthy phenotype, which is applied to sickle cell disease diagnosis, and (ii) separation of different cell types based on their characteristic densities, which is applied to separate white blood cells from red blood cells for white blood cell cytometry. These applications, as well as future extensions of the essential density-based measurements enabled by this portable, user-friendly platform technology, will significantly enhance disease diagnostic capabilities at the point of care.

  5. Quantitative analysis of in vivo mucosal bacterial biofilms.

    PubMed

    Singhal, Deepti; Boase, Sam; Field, John; Jardeleza, Camille; Foreman, Andrew; Wormald, Peter-John

    2012-01-01

    Quantitative assays of mucosal biofilms on ex vivo samples are challenging using the currently applied specialized microscopic techniques to identify them. The COMSTAT2 computer program has been applied to in vitro biofilm models for quantifying biofilm structures seen on confocal scanning laser microscopy (CSLM). The aim of this study was to quantify Staphylococcus aureus (S. aureus) biofilms seen via CSLM on ex situ samples of sinonasal mucosa, using the COMSTAT2 program. S. aureus biofilms were grown in frontal sinuses of 4 merino sheep as per a previously standardized sheep sinusitis model for biofilms. Two sinonasal mucosal samples, 10 mm × 10 mm in size, from each of the 2 sinuses of the 4 sheep were analyzed for biofilm presence with Baclight stain and CSLM. Two random image stacks of mucosa with S. aureus biofilm were recorded from each sample, and analyzed using COMSTAT2 software that translates image stacks into a simplified 3-dimensional matrix of biofilm mass by eliminating surrounding host tissue. Three independent observers analyzed images using COMSTAT2 and 3 repeated rounds of analyses were done to calculate biofilm biomass. The COMSTAT2 application uses an observer-dependent threshold setting to translate CSLM biofilm images into a simplified 3-dimensional output for quantitative analysis. Intraclass correlation coefficient (ICC) between thresholds set by the 3 observers for each image stacks was 0.59 (p = 0.0003). Threshold values set at different points of time by a single observer also showed significant correlation as seen by ICC of 0.80 (p < 0.001). COMSTAT2 can be applied to quantify and study the complex 3-dimensional biofilm structures that are recorded via CSLM on mucosal tissue like the sinonasal mucosa. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.

  6. Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2010-01-01

    Signal- and image-processing methods are commonly needed to extract information from the waves, improve resolution of, and highlight defects in an image. Since some similarity exists for all waveform-based nondestructive evaluation (NDE) methods, it would seem that a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as from Raman spectroscopy. An extensive joint-time frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquake data.

  7. Performance comparison of ISAR imaging method based on time frequency transforms

    NASA Astrophysics Data System (ADS)

    Xie, Chunjian; Guo, Chenjiang; Xu, Jiadong

    2013-03-01

    Inverse synthetic aperture radar (ISAR) can image moving targets, especially airborne targets, so it is important in air defence and missile defence systems. Time-frequency transforms have been widely applied in the ISAR imaging process. Several time-frequency transforms are introduced. Noise jamming methods are analysed; when such noise jamming is added to the echo at the ISAR receiver, the image can become blurred or even impossible to identify, and the effect differs among the different time-frequency analyses. Simulation results show the performance comparison of the methods.

  8. Topographic profiling and refractive-index analysis by use of differential interference contrast with bright-field intensity and atomic force imaging.

    PubMed

    Axelrod, Noel; Radko, Anna; Lewis, Aaron; Ben-Yosef, Nissim

    2004-04-10

    A methodology is described for phase restoration of an object function from differential interference contrast (DIC) images. The methodology involves collecting a set of DIC images in the same plane with different bias retardation between the two illuminating light components produced by a Wollaston prism. These images, together with one conventional bright-field image, allow for reduction of the phase deconvolution restoration problem from a highly complex nonlinear mathematical formulation to a set of linear equations that can be applied to resolve the phase for images with a relatively large number of pixels. Additionally, under certain conditions, an on-line atomic force imaging system that does not interfere with the standard DIC illumination modes resolves uncertainties in large topographical variations that generally lead to a basic problem in DIC imaging, i.e., phase unwrapping. Furthermore, the availability of confocal detection allows for a high-accuracy three-dimensional reconstruction of the refractive index of the object to be imaged. This has been applied to reconstruction of the refractive index of an arrayed waveguide in a region in which a defect in the sample is present. The results of this paper highlight the synergism of far-field microscopies integrated with scanned probe microscopies and restoration algorithms for phase reconstruction.

  9. Automatic co-segmentation of lung tumor based on random forest in PET-CT images

    NASA Astrophysics Data System (ADS)

    Jiang, Xueqing; Xiang, Dehui; Zhang, Bin; Zhu, Weifang; Shi, Fei; Chen, Xinjian

    2016-03-01

    In this paper, a fully automatic method is proposed to segment the lung tumor in clinical 3D PET-CT images. The proposed method effectively combines PET and CT information to make full use of the high contrast of PET images and the superior spatial resolution of CT images. Our approach consists of three main parts: (1) initial segmentation, in which spines are removed in CT images and initial connected regions are obtained by thresholding-based segmentation in PET images; (2) coarse segmentation, in which a monotonic downhill function is applied to rule out structures which have standardized uptake values (SUV) similar to the lung tumor but do not satisfy a monotonic property in PET images; (3) fine segmentation, in which a random forest method is applied to accurately segment the lung tumor by extracting effective features from PET and CT images simultaneously. We validated our algorithm on a dataset of 24 3D PET-CT images from different patients with non-small cell lung cancer (NSCLC). The average TPVF, FPVF and accuracy rate (ACC) were 83.65%, 0.05% and 99.93%, respectively. The correlation analysis shows that our segmented lung tumor volumes have a strong correlation (average 0.985) with ground truth 1 and ground truth 2 labeled by a clinical expert.

  10. Improved wheal detection from skin prick test images

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan

    2014-03-01

    Skin prick test is a commonly used method for diagnosis of allergic diseases (e.g., pollen allergy, food allergy, etc.) in allergy clinics. The results of this test are erythema and wheal provoked on the skin where the test is applied. The sensitivity of the patient against a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. We finally, perform morphological operations on the contrast enhanced image to detect the wheal on the image plane. Our experiments performed on images acquired from 36 different patients show the efficiency of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
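
    A hedged Python sketch of the contrast-enhancement chain described above: convert to YCbCr, discard luminance, project the chroma channels onto their first principal component, then threshold and clean up morphologically. The threshold direction and structuring-element sizes are assumptions, not the paper's settings.

        import numpy as np
        from skimage import color, filters, morphology

        def enhance_wheal_contrast(rgb_image):
            """Project the Cb/Cr chroma channels onto their first principal component."""
            ycbcr = color.rgb2ycbcr(rgb_image)
            chroma = ycbcr[..., 1:].reshape(-1, 2).astype(float)   # keep Cb and Cr only
            chroma -= chroma.mean(axis=0)
            _, _, vt = np.linalg.svd(chroma, full_matrices=False)
            return (chroma @ vt[0]).reshape(rgb_image.shape[:2])

        def detect_wheal(rgb_image):
            """Threshold the enhanced image and remove small spurious regions."""
            enhanced = enhance_wheal_contrast(rgb_image)
            mask = enhanced > filters.threshold_otsu(enhanced)
            mask = morphology.binary_opening(mask, morphology.disk(3))
            return morphology.remove_small_objects(mask, min_size=200)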

  11. Directly imaging steeply-dipping fault zones in geothermal fields with multicomponent seismic data

    DOE PAGES

    Chen, Ting; Huang, Lianjie

    2015-07-30

    For characterizing geothermal systems, it is important to have clear images of steeply-dipping fault zones because they may confine the boundaries of geothermal reservoirs and influence hydrothermal flow. Elastic reverse-time migration (ERTM) is the most promising tool for subsurface imaging with multicomponent seismic data. However, conventional ERTM usually generates significant artifacts caused by the cross correlation of undesired wavefields and the polarity reversal of shear waves. In addition, it is difficult for conventional ERTM to directly image steeply-dipping fault zones. We develop a new ERTM imaging method in this paper to reduce these artifacts and directly image steeply-dipping fault zones. In our new ERTM method, forward-propagated source wavefields and backward-propagated receiver wavefields are decomposed into compressional (P) and shear (S) components. Furthermore, each component of these wavefields is separated into left- and right-going, or downgoing and upgoing waves. The cross correlation imaging condition is applied to the separated wavefields along opposite propagation directions. For converted waves (P-to-S or S-to-P), the polarity correction is applied to the separated wavefields based on the analysis of Poynting vectors. Numerical imaging examples of synthetic seismic data demonstrate that our new ERTM method produces high-resolution images of steeply-dipping fault zones.

  12. Fast and automatic algorithm for optic disc extraction in retinal images using principal-component-analysis-based preprocessing and curvelet transform.

    PubMed

    Shahbeig, Saleh; Pourghassem, Hossein

    2013-01-01

    Optic disc or optic nerve (ON) head extraction in retinal images has widespread applications in retinal disease diagnosis and human identification in biometric systems. This paper introduces a fast and automatic algorithm for detecting and extracting the ON region accurately from the retinal images without the use of the blood-vessel information. In this algorithm, to compensate for the destructive changes of the illumination and also enhance the contrast of the retinal images, we estimate the illumination of background and apply an adaptive correction function on the curvelet transform coefficients of retinal images. In other words, we eliminate the fault factors and pave the way to extract the ON region exactly. Then, we detect the ON region from retinal images using the morphology operators based on geodesic conversions, by applying a proper adaptive correction function on the reconstructed image's curvelet transform coefficients and a novel powerful criterion. Finally, using a local thresholding on the detected area of the retinal images, we extract the ON region. The proposed algorithm is evaluated on available images of DRIVE and STARE databases. The experimental results indicate that the proposed algorithm obtains an accuracy rate of 100% and 97.53% for the ON extractions on DRIVE and STARE databases, respectively.

  13. Segmentation of epidermal tissue with histopathological damage in images of haematoxylin and eosin stained human skin

    PubMed Central

    2014-01-01

    Background Digital image analysis has the potential to address issues surrounding traditional histological techniques including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
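
    A much-simplified Python sketch in the spirit of the described pipeline (colourspace transform, thresholding, morphological clean-up) is given below; it omits the colour normalisation and object-classification rules of the actual method, and its thresholds and structuring-element sizes are illustrative only.

        import numpy as np
        from skimage import color, filters, morphology

        def segment_epidermis(rgb_tile):
            """Rough epidermis mask from an H&E-stained image tile.

            Combines the a* chroma channel of an L*a*b* conversion with overall
            intensity, then thresholds and cleans up the binary mask.
            """
            lab = color.rgb2lab(rgb_tile)
            chroma = lab[..., 1]                          # a* channel: red-green axis
            intensity = color.rgb2gray(rgb_tile)
            combined = (chroma - chroma.min()) / np.ptp(chroma) + (1.0 - intensity)
            mask = combined > filters.threshold_otsu(combined)
            mask = morphology.binary_closing(mask, morphology.disk(5))
            return morphology.remove_small_objects(mask, min_size=500)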

  14. Blood pulsation measurement using cameras operating in visible light: limitations.

    PubMed

    Koprowski, Robert

    2016-10-03

    The paper presents an automatic method for analysis and processing of images from a camera operating in visible light. This analysis applies to images containing the human facial area (body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). The proposed new method of image analysis and processing consists of three stages: (1) image pre-processing, allowing for image filtration and stabilization (object location tracking); (2) main image processing, allowing for segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis: filtration, fast Fourier transform (FFT) analysis, and pulse calculation. The presented algorithm and method for measuring the pulse rate have the following advantages: (1) they allow for non-contact and non-invasive measurement; (2) the measurement can be carried out using almost any camera, including webcams; (3) the method can track the object on the stage, which allows for measurement of the heart rate when the patient is moving; (4) for a minimum of 40,000 pixels, it provides a measurement error of less than ±2 beats per minute for p < 0.01 and sunlight, or a slightly larger error (±3 beats per minute) for artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
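
    As a rough Python sketch of stages (2) and (3), the following function averages the green channel over a skin mask, band-passes the trace to the physiological range, and reads the pulse rate from the dominant FFT peak. The filter design and mask are assumptions, not the paper's Matlab implementation.

        import numpy as np
        from scipy import signal

        def pulse_rate_bpm(frames, fps, skin_mask):
            """Heart rate estimate from a stack of RGB video frames (t, h, w, 3).

            skin_mask is a boolean array selecting skin pixels in every frame.
            """
            trace = np.array([f[..., 1][skin_mask].mean() for f in frames])  # mean green signal
            trace = signal.detrend(trace)
            b, a = signal.butter(3, [0.7, 4.0], btype="bandpass", fs=fps)    # 42-240 bpm
            trace = signal.filtfilt(b, a, trace)
            spectrum = np.abs(np.fft.rfft(trace))
            freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
            return 60.0 * freqs[np.argmax(spectrum)]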

  15. Confocal Raman imaging and chemometrics applied to solve forensic document examination involving crossed lines and obliteration cases by a depth profiling study.

    PubMed

    Borba, Flávia de Souza Lins; Jawhari, Tariq; Saldanha Honorato, Ricardo; de Juan, Anna

    2017-03-27

    This article describes a non-destructive analytical method developed to solve forensic document examination problems involving crossed lines and obliteration. Different strategies combining confocal Raman imaging and multivariate curve resolution-alternating least squares (MCR-ALS) are presented. Multilayer images were acquired at subsequent depth layers into the samples. It is the first time that MCR-ALS has been applied to multilayer images for forensic purposes. In this context, this method provides a single set of pure spectral ink signatures and related distribution maps for all layers examined from the sole information in the raw measurement. Four cases were investigated, namely, two concerning crossed lines with different degrees of ink similarity and two related to obliteration, where previous or no knowledge about the identity of the obliterated ink was available. In the crossed-lines scenario, MCR-ALS analysis revealed the ink nature and the chronological order in which strokes were drawn. For obliteration cases, results making active use of information about the identity of the obliterated ink in the chemometric analysis were of similar quality to those where the identity of the obliterated ink was unknown. In all obliteration scenarios, the identity of inks and the obliterated text were satisfactorily recovered. The analytical methodology proposed is of general use for analytical forensic document examination problems, and considers different degrees of complexity and prior available information. In addition, the proposed data analysis strategies are applicable to any other kind of problem in which multilayer Raman images from multicomponent systems have to be interpreted.
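
    A bare-bones sketch of the alternating least squares core of MCR-ALS with a simple non-negativity constraint, written in NumPy; the initial spectral estimates, component count and constraints used in the actual study are not reproduced here.

```python
# Bare-bones MCR-ALS sketch (assumption: D is a (pixels x wavenumbers) matrix obtained
# by unfolding the multilayer Raman image; S0 is an initial guess of pure ink spectra).
import numpy as np

def mcr_als(D, S0, n_iter=50):
    S = S0.copy()                       # (n_components x wavenumbers) pure spectra
    for _ in range(n_iter):
        # Concentration (distribution-map) update with a non-negativity constraint.
        C = np.clip(D @ np.linalg.pinv(S), 0, None)
        # Spectral update, also constrained to be non-negative.
        S = np.clip(np.linalg.pinv(C) @ D, 0, None)
    return C, S          # C reshapes back to distribution maps, one per ink component

# Example with synthetic data: two "inks" mixed over 100 pixels and 200 spectral channels.
rng = np.random.default_rng(0)
true_S = np.abs(rng.normal(size=(2, 200)))
true_C = np.abs(rng.normal(size=(100, 2)))
D = true_C @ true_S + 0.01 * rng.normal(size=(100, 200))
C, S = mcr_als(D, S0=np.abs(rng.normal(size=(2, 200))))
```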

  16. Image classification of unlabeled malaria parasites in red blood cells.

    PubMed

    Zheng Zhang; Ong, L L Sharon; Kong Fang; Matthew, Athul; Dauwels, Justin; Ming Dao; Asada, Harry

    2016-08-01

    This paper presents a method to detect unlabeled malaria parasites in red blood cells. The current "gold standard" for malaria diagnosis is microscopic examination of thick blood smears, a time-consuming process requiring extensive training. Our goal is to develop an automated process to identify malaria-infected red blood cells. Major issues in automated analysis of microscopy images of unstained blood smears include overlapping cells and oddly shaped cells. Our approach creates robust templates to detect infected and uninfected red cells. Histogram of Oriented Gradients (HOG) features are extracted from templates and used to train a classifier offline. Next, the Viola-Jones object detection framework is applied to detect infected and uninfected red cells and the image background. Results show our approach outperforms classification approaches with PCA features by 50% and cell detection algorithms applying Hough transforms by 24%. The majority of related work is designed to automatically detect stained parasites in blood smears where the cells are fixed. Although it is more challenging to design algorithms for unstained parasites, our methods will allow analysis of parasite progression in live cells under different drug treatments.
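
    A hedged sketch of the offline training step: HOG features computed from cell templates and fed to a classifier. The linear SVM is a stand-in (the paper does not specify the classifier implementation), and the Viola-Jones detection stage that generates candidate windows at run time is not reproduced.

```python
# Sketch of the offline training step (assumptions: `templates` is a list of equally
# sized grayscale patches and `labels` marks them as infected (1) or uninfected (0);
# the linear SVM is a stand-in, not necessarily the classifier used in the paper).
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def train_cell_classifier(templates, labels):
    # One HOG descriptor per template patch.
    features = [hog(patch, orientations=9, pixels_per_cell=(8, 8),
                    cells_per_block=(2, 2)) for patch in templates]
    clf = LinearSVC(C=1.0)
    clf.fit(np.asarray(features), np.asarray(labels))
    return clf

# At detection time, the same HOG descriptor would be computed for each candidate
# window and passed to clf.predict(); the paper uses the Viola-Jones framework to
# generate those candidate windows.
```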

  17. Finger crease pattern recognition using Legendre moments and principal component analysis

    NASA Astrophysics Data System (ADS)

    Luo, Rongfang; Lin, Tusheng

    2007-03-01

    The finger joint lines, defined as finger creases, and their distribution can identify a person. In this paper, we propose a new finger crease pattern recognition method based on Legendre moments and principal component analysis (PCA). After obtaining the region of interest (ROI) for each finger image in the pre-processing stage, Legendre moments under the Radon transform are applied to construct a moment feature matrix from the ROI, which greatly decreases the dimensionality of the ROI and can represent principal components of the finger creases quite well. Then, an approach to finger crease pattern recognition is designed based on the Karhunen-Loeve (K-L) transform. The method applies PCA to the moment feature matrix rather than the original image matrix to obtain the feature vector. The proposed method has been tested on a database of 824 images from 103 individuals using the nearest neighbor classifier. An accuracy of up to 98.584% was obtained when using 4 samples per class for training. The experimental results demonstrate that our proposed approach is feasible and effective in biometrics.
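
    A minimal sketch of the recognition back end, assuming the Legendre-moment feature matrices have already been computed and flattened into rows of X: PCA (the K-L transform) followed by a nearest-neighbour classifier, here with scikit-learn; the component count is illustrative only.

```python
# Minimal sketch of the recognition back end (assumption: X holds one flattened
# Legendre-moment feature matrix per image, y the person identities).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def build_recognizer(X_train, y_train, n_components=40):
    # K-L transform (PCA) of the moment features, then 1-nearest-neighbour matching.
    model = make_pipeline(PCA(n_components=n_components),
                          KNeighborsClassifier(n_neighbors=1))
    model.fit(X_train, y_train)
    return model

# model.predict(X_test) then returns the claimed identity for each probe image.
```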

  18. A spectral approach for the quantitative description of cardiac collagen network from nonlinear optical imaging.

    PubMed

    Masè, Michela; Cristoforetti, Alessandro; Avogaro, Laura; Tessarolo, Francesco; Piccoli, Federico; Caola, Iole; Pederzolli, Carlo; Graffigna, Angelo; Ravelli, Flavia

    2015-01-01

    The assessment of collagen structure in cardiac pathology, such as atrial fibrillation (AF), is essential for a complete understanding of the disease. This paper introduces a novel methodology for the quantitative description of collagen network properties, based on the combination of nonlinear optical microscopy with a spectral approach of image processing and analysis. Second-harmonic generation (SHG) microscopy was applied to atrial tissue samples from cardiac surgery patients, providing label-free, selective visualization of the collagen structure. The spectral analysis framework, based on 2D-FFT, was applied to the SHG images, yielding a multiparametric description of collagen fiber orientation (angle and anisotropy indexes) and texture scale (dominant wavelength and peak dispersion indexes). The proof-of-concept application of the methodology showed the capability of our approach to detect and quantify differences in the structural properties of the collagen network in AF versus sinus rhythm patients. These results suggest the potential of our approach in the assessment of collagen properties in cardiac pathologies related to a fibrotic structural component.
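
    A rough sketch of how orientation statistics can be read from the 2D power spectrum of a fibre image; the specific angle, anisotropy, dominant-wavelength and peak-dispersion indexes defined in the study are not reproduced, only the general 2D-FFT idea.

```python
# Sketch of a 2D-FFT orientation analysis (assumption: `img` is a 2D numpy array
# holding one SHG image; only the general idea of reading a dominant orientation
# from the spectrum is shown, not the paper's exact index definitions).
import numpy as np

def dominant_orientation(img):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    h, w = spectrum.shape
    y, x = np.indices(spectrum.shape)
    angles = np.arctan2(y - h // 2, x - w // 2)       # angle of each frequency sample

    # Accumulate spectral power into angular bins; the peak bin indicates the dominant
    # orientation (rotated by 90 degrees relative to the spatial fibres).
    bins = np.linspace(-np.pi, np.pi, 181)
    power, _ = np.histogram(angles, bins=bins, weights=spectrum)
    peak = np.argmax(power)
    dominant = np.degrees(0.5 * (bins[peak] + bins[peak + 1]))
    anisotropy = power.max() / (power.mean() + 1e-12)  # crude peakedness measure
    return dominant, anisotropy
```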

  19. A Molecular Iodine Spectral Data Set for Rovibronic Analysis

    ERIC Educational Resources Information Center

    Williamson, J. Charles; Kuntzleman, Thomas S.; Kafader, Rachael A.

    2013-01-01

    A data set of 7,381 molecular iodine vapor rovibronic transitions between the X and B electronic states has been prepared for an advanced undergraduate spectroscopic analysis project. Students apply standard theoretical techniques to these data and determine the values of three X-state constants (image omitted) and four B-state constants (image…

  20. Mining hidden data to predict patient prognosis: texture feature extraction and machine learning in mammography

    NASA Astrophysics Data System (ADS)

    Leighs, J. A.; Halling-Brown, M. D.; Patel, M. N.

    2018-03-01

    The UK currently has a national breast cancer-screening program and images are routinely collected from a number of screening sites, representing a wealth of invaluable data that is currently under-used. Radiologists evaluate screening images manually and recall suspicious cases for further analysis such as biopsy. Histological testing of biopsy samples confirms the malignancy of the tumour, along with other diagnostic and prognostic characteristics such as disease grade. Machine learning is becoming increasingly popular for clinical image classification problems, as it is capable of discovering patterns in data that would otherwise remain invisible. This is particularly true when applied to medical imaging features; however, clinical datasets are often relatively small. A texture feature extraction toolkit has been developed to mine a wide range of features from medical images such as mammograms. This study analysed a dataset of 1,366 radiologist-marked, biopsy-proven malignant lesions obtained from the OPTIMAM Medical Image Database (OMI-DB). Exploratory data analysis methods were employed to better understand the extracted features. Machine learning techniques including Classification and Regression Trees (CART), ensemble methods (e.g. random forests), and logistic regression were applied to the data to predict the disease grade of the analysed lesions. Prediction scores of up to 83% were achieved; the sensitivity and specificity of the trained models are discussed to put the results into a clinical context. The results show promise in the ability to predict prognostic indicators from the extracted texture features and thus enable prioritisation of care for patients at greatest risk.
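
    A short sketch of the prediction step, assuming the texture features have already been extracted into a feature table X with grades y; a random forest is shown as one of the model families mentioned above, evaluated with cross-validation because the dataset is relatively small.

```python
# Sketch of the prediction step (assumptions: `X` is the table of texture features
# produced by the extraction toolkit, `y` a binary disease grade; the random forest
# settings are illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def grade_prediction_score(X, y):
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    # Cross-validation gives an honest performance estimate on a small clinical dataset.
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```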

  1. Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis

    PubMed Central

    Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina

    2015-01-01

    AIM To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. METHODS This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. RESULTS It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using fractal geometry. The average of the fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of the fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values for moderate NPDR images (segmented and skeletonized versions). The lowest values were found for severe NPDR images (segmented and skeletonized versions). CONCLUSION The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantified by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals. PMID:26309878
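
    A minimal box-counting sketch of how a fractal dimension D can be estimated from a binary, skeletonized vessel image; the study itself used ImageJ, so this NumPy version only illustrates the underlying computation.

```python
# Minimal box-counting sketch (assumption: `mask` is a boolean numpy array holding
# the skeletonized vessel network; the box sizes are illustrative).
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for s in sizes:
        # Trim the image so it tiles exactly, then count boxes containing any vessel pixel.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    # D is the negative slope of log(count) versus log(box size).
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```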

  2. Multispectral UV imaging for fast and non-destructive quality control of chemical and physical tablet attributes.

    PubMed

    Klukkert, Marten; Wu, Jian X; Rantanen, Jukka; Carstensen, Jens M; Rades, Thomas; Leopold, Claudia S

    2016-07-30

    Monitoring of tablet quality attributes in the direct vicinity of the production process requires analytical techniques that allow fast, non-destructive, and accurate tablet characterization. The overall objective of this study was to investigate the applicability of multispectral UV imaging as a reliable, rapid technique for estimation of the tablet API content and tablet hardness, as well as determination of tablet intactness and the tablet surface density profile. One of the aims was to establish an image analysis approach based on multivariate image analysis and pattern recognition to evaluate the potential of UV imaging for automated quality control of tablets with respect to their intactness and surface density profile. Various tablets of different composition and different quality regarding their API content, radial tensile strength, intactness, and surface density profile were prepared using an eccentric as well as a rotary tablet press at compression pressures from 20 MPa up to 410 MPa. It was found that UV imaging can provide relevant information on both chemical and physical tablet attributes. The tablet API content and radial tensile strength could be estimated by UV imaging combined with partial least squares analysis. Furthermore, an image analysis routine was developed and successfully applied to the UV images that provided qualitative information on physical tablet surface properties such as intactness and surface density profiles, as well as quantitative information on variations in the surface density. In conclusion, this study demonstrates that UV imaging combined with image analysis is an effective and non-destructive method to determine chemical and physical quality attributes of tablets and is a promising approach for (near) real-time monitoring of the tablet compaction process and formulation optimization purposes. Copyright © 2015 Elsevier B.V. All rights reserved.
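
    A small sketch of the partial least squares calibration step, assuming one averaged UV reflectance spectrum per tablet and reference API assay values; the component count and validation scheme are illustrative, not those of the study.

```python
# Sketch of the PLS step for estimating tablet API content from multispectral UV data
# (assumptions: `spectra` holds one averaged UV reflectance spectrum per tablet and
# `api_content` the reference assay values; 4 components are illustrative only).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def calibrate_api_content(spectra, api_content, n_components=4):
    """Cross-validated PLS calibration; returns the RMSECV of the model."""
    pls = PLSRegression(n_components=n_components)
    predicted = cross_val_predict(pls, spectra, api_content, cv=10)
    return np.sqrt(np.mean((predicted.ravel() - np.asarray(api_content)) ** 2))
```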

  3. Characterisation of human non-proliferative diabetic retinopathy using the fractal analysis.

    PubMed

    Ţălu, Ştefan; Călugăru, Dan Mihai; Lupaşcu, Carmen Alina

    2015-01-01

    To investigate and quantify changes in the branching patterns of the retina vascular network in diabetes using the fractal analysis method. This was a clinic-based prospective study of 172 participants managed at the Ophthalmological Clinic of Cluj-Napoca, Romania, between January 2012 and December 2013. A set of 172 segmented and skeletonized human retinal images, corresponding to both normal (24 images) and pathological (148 images) states of the retina, were examined. An automatic unsupervised method for retinal vessel segmentation was applied before fractal analysis. The fractal analyses of the retinal digital images were performed using the fractal analysis software ImageJ. Statistical analyses were performed for these groups using Microsoft Office Excel 2003 and GraphPad InStat software. It was found that subtle changes in the vascular network geometry of the human retina are influenced by diabetic retinopathy (DR) and can be estimated using fractal geometry. The average of the fractal dimensions D for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for mild non-proliferative DR (NPDR) images (segmented and skeletonized versions). The average of the fractal dimensions D for the normal images (segmented and skeletonized versions) is higher than the corresponding values for moderate NPDR images (segmented and skeletonized versions). The lowest values were found for severe NPDR images (segmented and skeletonized versions). The fractal analysis of fundus photographs may be used for a more complete understanding of the early and basic pathophysiological mechanisms of diabetes. The architecture of the retinal microvasculature in diabetes can be quantified by means of the fractal dimension. Microvascular abnormalities on retinal imaging may elucidate early mechanistic pathways for microvascular complications and distinguish patients with DR from healthy individuals.

  4. Quantitative analysis of rib movement based on dynamic chest bone images: preliminary results

    NASA Astrophysics Data System (ADS)

    Tanaka, R.; Sanada, S.; Oda, M.; Mitsutaka, M.; Suzuki, K.; Sakuta, K.; Kawashima, H.

    2014-03-01

    Rib movement during respiration is one of the diagnostic criteria in pulmonary impairments. In general, rib movement is assessed in fluoroscopy. However, the shadows of lung vessels and bronchi overlapping the ribs prevent accurate quantitative analysis of rib movement. Recently, an image-processing technique for separating bones from soft tissue in static chest radiographs, called the "bone suppression technique", has been developed. Our purpose in this study was to evaluate the usefulness of dynamic bone images created by the bone suppression technique in quantitative analysis of rib movement. Dynamic chest radiographs of 10 patients were obtained using a dynamic flat-panel detector (FPD). A bone suppression technique based on a massive-training artificial neural network (MTANN) was applied to the dynamic chest images to create bone images. Velocity vectors were measured in local areas on the dynamic bone images, forming a velocity map. The velocity maps obtained with bone and original images for scoliosis and normal cases were compared to assess the advantages of bone images. With dynamic bone images, we were able to quantify and distinguish movements of ribs from those of other lung structures accurately. Limited rib movements of scoliosis patients appeared as reduced rib velocity vectors. Vector maps in all normal cases exhibited left-right symmetric distributions, whereas those in abnormal cases showed nonuniform distributions. In conclusion, dynamic bone images were useful for accurate quantitative analysis of rib movements: limited rib movements appeared as reduced rib velocity vectors and a left-right asymmetric distribution on the vector maps. Thus, dynamic bone images can be a new diagnostic tool for quantitative analysis of rib movements without additional radiation dose.

  5. Left ventricular endocardial surface detection based on real-time 3D echocardiographic data

    NASA Technical Reports Server (NTRS)

    Corsi, C.; Borsari, M.; Consegnati, F.; Sarti, A.; Lamberti, C.; Travaglini, A.; Shiota, T.; Thomas, J. D.

    2001-01-01

    OBJECTIVE: A new computerized semi-automatic method for left ventricular (LV) chamber segmentation is presented. METHODS: The LV is imaged by real-time three-dimensional echocardiography (RT3DE). The surface detection model, based on level set techniques, is applied to RT3DE data for image analysis. The modified level set partial differential equation we use is solved by applying numerical methods for conservation laws. The initial conditions are manually established on some slices of the entire volume. The solution obtained for each slice is a contour line corresponding to the boundary between the LV cavity and the LV endocardium. RESULTS: The mathematical model has been applied to sequences of frames of human hearts (volume range: 34-109 ml) imaged by 2D echocardiography and reconstructed off-line, as well as to RT3DE data. Volume estimates obtained by this new semi-automatic method show an excellent correlation with those obtained by manual tracing (r = 0.992). The dynamic change of LV volume during the cardiac cycle is also obtained. CONCLUSION: The volume estimation method is accurate; edge-based segmentation, image completion, and volume reconstruction can be accomplished. The visualization technique also allows navigation into the reconstructed volume and display of any section of the volume.

  6. Automatic archaeological feature extraction from satellite VHR images

    NASA Astrophysics Data System (ADS)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach on a variable scale, able to satisfy both intra-site (excavation) and inter-site (survey, environmental research) needs. The increased availability of high resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High resolution remote sensing data, especially panchromatic, is an important input for the analysis of various types of image characteristics; it plays an important role in visual systems for recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature extraction tools, eCognition and the ENVI SW module, were used in order to compare the results. These techniques were applied to different archaeological sites in Turkmenistan (Nisa) and in Iraq (Babylon); a further change detection analysis was applied to the Babylon site using two HR images acquired before and after the second Gulf War. The outputs differed between techniques, since the operative scale of the sensed data determines the final result of the elaboration and the quality of the extracted information, and each technique was sensitive to specific shapes in the input images. We mapped linear and nonlinear objects, updated the archaeological cartography, and performed an automatic change detection analysis for the Babylon site. The discussion of these techniques has the objective of providing the archaeological team with new instruments for the orientation and planning of a remote sensing application.
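
    A brief sketch of the morphological probing described above, using scikit-image: an opening with a linear structuring element retains only bright structures at least as long as the element, and the top-hat residue highlights the rest. The file name and element size are assumptions for illustration.

```python
# Sketch of morphological probing with a structuring element (assumptions: scikit-image
# available; "pan.tif" is a hypothetical panchromatic image; element size is illustrative).
import numpy as np
from skimage import io, morphology

pan = io.imread("pan.tif").astype(float)

# Structuring element chosen according to the shape of the searched structures:
# here a horizontal line of 15 pixels, e.g. for wall or road fragments.
selem = np.ones((1, 15), dtype=bool)
opened = morphology.opening(pan, selem)

# The "top-hat" residue highlights features narrower or shorter than the element.
top_hat = pan - opened
io.imsave("top_hat.tif", top_hat.astype(np.float32))
```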

  7. Dermatological Feasibility of Multimodal Facial Color Imaging Modality for Cross-Evaluation of Facial Actinic Keratosis

    PubMed Central

    Bae, Youngwoo; Son, Taeyoon; Nelson, J. Stuart; Kim, Jae-Hong; Choi, Eung Ho; Jung, Byungjo

    2010-01-01

    Background/Purpose Digital color image analysis is currently considered as a routine procedure in dermatology. In our previous study, a multimodal facial color imaging modality (MFCIM), which provides a conventional, parallel- and cross-polarization, and fluorescent color image, was introduced for objective evaluation of various facial skin lesions. This study introduces a commercial version of MFCIM, DermaVision-PRO, for routine clinical use in dermatology and demonstrates its dermatological feasibility for cross-evaluation of skin lesions. Methods/Results Sample images of subjects with actinic keratosis or non-melanoma skin cancers were obtained at four different imaging modes. Various image analysis methods were applied to cross-evaluate the skin lesion and, finally, extract valuable diagnostic information. DermaVision-PRO is potentially a useful tool as an objective macroscopic imaging modality for quick prescreening and cross-evaluation of facial skin lesions. Conclusion DermaVision-PRO may be utilized as a useful tool for cross-evaluation of widely distributed facial skin lesions and an efficient database management of patient information. PMID:20923462

  8. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  9. Image-based quantification and mathematical modeling of spatial heterogeneity in ESC colonies.

    PubMed

    Herberg, Maria; Zerjatke, Thomas; de Back, Walter; Glauche, Ingmar; Roeder, Ingo

    2015-06-01

    Pluripotent embryonic stem cells (ESCs) have the potential to differentiate into cells of all three germ layers. This unique property has been extensively studied on the intracellular, transcriptional level. However, ESCs typically form clusters of cells with distinct size and shape, and establish spatial structures that are vital for the maintenance of pluripotency. Even though it is recognized that the cells' arrangement and local interactions play a role in fate decision processes, the relations between transcriptional and spatial patterns have not yet been studied. We present a systems biology approach which combines live-cell imaging, quantitative image analysis, and multiscale, mathematical modeling of ESC growth. In particular, we develop quantitative measures of the morphology and of the spatial clustering of ESCs with different expression levels and apply them to images of both in vitro and in silico cultures. Using the same measures, we are able to compare model scenarios with different assumptions on cell-cell adhesions and intercellular feedback mechanisms directly with experimental data. Applying our methodology to microscopy images of cultured ESCs, we demonstrate that the emerging colonies are highly variable regarding both morphological and spatial fluorescence patterns. Moreover, we can show that most ESC colonies contain only one cluster of cells with high self-renewing capacity. These cells are preferentially located in the interior of a colony structure. The integrated approach combining image analysis with mathematical modeling allows us to reveal potential transcription factor related cellular and intercellular mechanisms behind the emergence of observed patterns that cannot be derived from images directly. © 2015 International Society for Advancement of Cytometry.

  10. FLIM data analysis of NADH and Tryptophan autofluorescence in prostate cancer cells

    NASA Astrophysics Data System (ADS)

    O'Melia, Meghan J.; Wallrabe, Horst; Svindrych, Zdenek; Rehman, Shagufta; Periasamy, Ammasi

    2016-03-01

    Fluorescence lifetime imaging microscopy (FLIM) is one of the most sensitive techniques to measure metabolic activity in living cells, tissues and whole animals. We used two- and three-photon fluorescence excitation together with time-correlated single photon counting (TCSPC) to acquire FLIM signals from normal and prostate cancer cell lines. FLIM requires complex data fitting and analysis; we explored different ways to analyze the data to match diverse cellular morphologies. After non-linear least-squares fitting of the multi-photon TCSPC images by the SPCImage software (Becker & Hickl), all image data are exported and further processed in ImageJ. Photon images provide morphological, NAD(P)H signal-based autofluorescent features, for which regions of interest (ROIs) are created. Applying these ROIs to all image data parameters with a custom ImageJ macro generates a discrete, ROI-specific database. A custom Excel (Microsoft) macro further analyzes the data with charts and statistics. Applying this highly automated assay, we compared normal and cancer prostate cell lines with respect to their glycolytic activity by analyzing the NAD(P)H-bound fraction (a2%), the NADPH/NADH ratio and the efficiency of energy transfer (E%) for Tryptophan (Trp). Our results show that this assay is able to differentiate the effects of glucose stimulation and Doxorubicin in these prostate cell lines by tracking the changes in a2% of NAD(P)H, the NADPH/NADH ratio and the changes in Trp E%. The ability to isolate a large, ROI-based data set, reflecting the heterogeneous cellular environment and highlighting even subtle changes rather than whole-cell averages, makes this assay particularly valuable.

  11. Image analysis applied to luminescence microscopy

    NASA Astrophysics Data System (ADS)

    Maire, Eric; Lelievre-Berna, Eddy; Fafeur, Veronique; Vandenbunder, Bernard

    1998-04-01

    We have developed a novel approach to study luminescent light emission during migration of living cells by low-light imaging techniques. The equipment consists of an anti-vibration table with a hole for a direct output under the frame of an inverted microscope. The image is directly captured by an ultra-low-light-level photon-counting camera equipped with an image intensifier coupled by an optical fiber to a CCD sensor. This installation is dedicated to measuring, in a dynamic manner, the effect of SF/HGF (Scatter Factor/Hepatocyte Growth Factor) both on activation of gene promoter elements and on cell motility. Epithelial cells were stably transfected with promoter elements containing Ets transcription factor-binding sites driving a luciferase reporter gene. Luminescent light emitted by individual cells was measured by image analysis. Images of luminescent spots were acquired with a high-aperture objective and exposure times of 10-30 min in photon-counting mode. The sensitivity of the camera was adjusted to a high value, which required the use of a segmentation algorithm dedicated to eliminating the background noise. Hence, image segmentation and treatments by mathematical morphology were particularly indicated in these experimental conditions. In order to estimate the orientation of cells during their migration, we used a dedicated skeleton algorithm applied to the oblong spots of variable intensities emitted by the cells. Kinetic changes of luminescent sources, distance and speed of migration were recorded and then correlated with cellular morphological changes for each spot. Our results highlight the usefulness of mathematical morphology to quantify kinetic changes in luminescence microscopy.

  12. Developing an ANSI standard for image quality tools for the testing of active millimeter wave imaging systems

    NASA Astrophysics Data System (ADS)

    Barber, Jeffrey; Greca, Joseph; Yam, Kevin; Weatherall, James C.; Smith, Peter R.; Smith, Barry T.

    2017-05-01

    In 2016, the millimeter wave (MMW) imaging community initiated the formation of a standard for millimeter wave image quality metrics. This new standard, American National Standards Institute (ANSI) N42.59, will apply to active MMW systems for security screening of humans. The Electromagnetic Signatures of Explosives Laboratory at the Transportation Security Laboratory is supporting the ANSI standards process via the creation of initial prototypes for round-robin testing with MMW imaging system manufacturers and experts. Results obtained for these prototypes will be used to inform the community and lead to consensus objective standards amongst stakeholders. Images collected with laboratory systems are presented along with results of preliminary image analysis. Future directions for object design, data collection and image processing are discussed.

  13. One Size Fits All: Evaluation of the Transferability of a New "Learning" Histologic Image Analysis Application.

    PubMed

    Arlt, Janine; Homeyer, André; Sänger, Constanze; Dahmen, Uta; Dirsch, Olaf

    2016-01-01

    Quantitative analysis of histologic slides is of importance for pathology and also to address surgical questions. Recently, a novel application was developed for the automated quantification of whole-slide images. The aim of this study was to test and validate the underlying image analysis algorithm with respect to user friendliness, accuracy, and transferability to different histologic scenarios. The algorithm splits the images into tiles of a predetermined size and identifies the tissue class of each tile. In the training procedure, the user specifies example tiles of the different tissue classes. In the subsequent analysis procedure, the algorithm classifies each tile into the previously specified classes. User friendliness was evaluated by recording training time and testing reproducibility of the training procedure of users with different background. Accuracy was determined with respect to single and batch analysis. Transferability was demonstrated by analyzing tissue of different organs (rat liver, kidney, small bowel, and spleen) and with different stainings (glutamine synthetase and hematoxylin-eosin). Users of different educational background could apply the program efficiently after a short introduction. When analyzing images with similar properties, accuracy of >90% was reached in single images as well as in batch mode. We demonstrated that the novel application is user friendly and very accurate. With the "training" procedure the application can be adapted to novel image characteristics simply by giving examples of relevant tissue structures. Therefore, it is suitable for the fast and efficient analysis of high numbers of fully digitalized histologic sections, potentially allowing "high-throughput" quantitative "histomic" analysis.

  14. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    NASA Technical Reports Server (NTRS)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performances for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.

  15. Histopathology mapping of biochemical changes in myocardial infarction by Fourier transform infrared spectral imaging.

    PubMed

    Yang, Tian T; Weng, Shi F; Zheng, Na; Pan, Qing H; Cao, Hong L; Liu, Liang; Zhang, Hai D; Mu, Da W

    2011-04-15

    Fourier transform infrared (FTIR) imaging and microspectroscopy have been extensively applied in the identification and investigation of both healthy and diseased tissues. FTIR imaging can be used to determine the biodistribution of several molecules of interest (carbohydrates, lipids, proteins) for tissue analysis, without the need for prior staining of these tissues. Molecular structure data, such as protein secondary structure and collagen triple helix exhibits, can also be obtained from the same analysis. Thus, several histopathological lesions, for example myocardial infarction, can be identified from FTIR-analyzed tissue images, the latter which can allow for more accurate discrimination between healthy tissues and pathological lesions. Accordingly, we propose FTIR imaging as a new tool integrating both molecular and histopathological assessment to investigate the degree of pathological changes in tissues. In this study, myocardial infarction is presented as an illustrative example of the wide potential of FTIR imaging for biomedical applications. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  16. Reducing noise component on medical images

    NASA Astrophysics Data System (ADS)

    Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana

    2018-04-01

    Medical visualization and the analysis of medical data are active research directions. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, etc. Initial data processing is a major step towards obtaining a good diagnostic result. The paper considers an approach that allows image filtering with preservation of object borders. The algorithm proposed in this paper is based on sequential data processing. At the first stage, local areas are determined; for this purpose thresholding, as well as the classical ICI algorithm, is applied. The second stage uses a method based on two criteria, namely the L2 norm and the first-order squared difference. To preserve the boundaries of objects, the transition boundary and its local neighborhood are processed with a fixed-coefficient filtering algorithm. As examples, reconstructed images from CT, x-ray, and microbiological studies are shown. The test images show the effectiveness of the proposed algorithm and its applicability to many medical imaging applications.
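
    The two-stage fixed-coefficient scheme itself is not reproduced here; as a hedged illustration of edge-preserving filtering, the sketch below applies a bilateral filter from scikit-image, which likewise smooths within regions while respecting object borders. The input file name is hypothetical.

```python
# Hedged sketch of edge-preserving denoising on a medical image; the bilateral filter
# is a generic stand-in for the two-stage fixed-coefficient scheme described above
# (assumption: "ct_slice.png" is a hypothetical grayscale input).
from skimage import io, img_as_float
from skimage.restoration import denoise_bilateral

img = img_as_float(io.imread("ct_slice.png", as_gray=True))
# sigma_color controls how strongly intensity differences (edges) block smoothing;
# sigma_spatial controls the size of the local neighbourhood being averaged.
denoised = denoise_bilateral(img, sigma_color=0.05, sigma_spatial=3)
io.imsave("ct_slice_denoised.png", (denoised * 255).astype("uint8"))
```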

  17. Optical coherence tomography imaging based on non-harmonic analysis

    NASA Astrophysics Data System (ADS)

    Cao, Xu; Hirobayashi, Shigeki; Chong, Changho; Morosawa, Atsushi; Totsuka, Koki; Suzuki, Takuya

    2009-11-01

    A new processing technique called Non-Harmonic Analysis (NHA) is proposed for OCT imaging. Conventional Fourier-domain OCT relies on the FFT, whose result depends on the window function and window length. Axial resolution is inversely proportional to the FFT frame length, which is limited by the swept range of the source in SS-OCT or by the pixel count of the CCD in SD-OCT, and is therefore degraded in FD-OCT. The NHA process is intrinsically free from these trade-offs; NHA can resolve high frequencies without being influenced by the window function or the frame length of the sampled data. In this study, the NHA process is explained, applied to OCT imaging, and compared with FFT-based OCT images. To validate the benefit of NHA in OCT, we carried out NHA-based OCT imaging on three different samples: onion skin, human skin, and pig eye. The results show that the NHA process can achieve a practical image resolution equivalent to that of a 100 nm swept range while using less than half of that wavelength range.

  18. Distortion correction and cross-talk compensation algorithm for use with an imaging spectrometer based spatially resolved diffuse reflectance system

    NASA Astrophysics Data System (ADS)

    Cappon, Derek J.; Farrell, Thomas J.; Fang, Qiyin; Hayward, Joseph E.

    2016-12-01

    Optical spectroscopy of human tissue has been widely applied within the field of biomedical optics to allow rapid, in vivo characterization and analysis of the tissue. When designing an instrument of this type, an imaging spectrometer is often employed to allow for simultaneous analysis of distinct signals. This is especially important when performing spatially resolved diffuse reflectance spectroscopy. In this article, an algorithm is presented that allows for the automated processing of 2-dimensional images acquired from an imaging spectrometer. The algorithm automatically defines distinct spectrometer tracks and adaptively compensates for distortion introduced by optical components in the imaging chain. Crosstalk resulting from the overlap of adjacent spectrometer tracks in the image is detected and subtracted from each signal. The algorithm's performance is demonstrated in the processing of spatially resolved diffuse reflectance spectra recovered from an Intralipid and ink liquid phantom and is shown to increase the range of wavelengths over which usable data can be recovered.

  19. Mineralogical Mapping of Asteroid Itokawa using Calibrated Hayabusa AMICA images and NIRS Spectrometer Data

    NASA Astrophysics Data System (ADS)

    Le Corre, Lucille; Becker, Kris J.; Reddy, Vishnu; Li, Jian-Yang; Bhatt, Megha

    2016-10-01

    The goal of our work is to restore data from the Hayabusa spacecraft that is available in the Planetary Data System (PDS) Small Bodies Node. More specifically, our objectives are to radiometrically calibrate and photometrically correct AMICA (Asteroid Multi-Band Imaging Camera) images of Itokawa. The existing images archived in the PDS are not in reflectance and are not corrected for the effect of viewing geometry. AMICA images are processed with the Integrated Software for Imagers and Spectrometers (ISIS) system from USGS, widely used for planetary image analysis. The processing consists of the ingestion of the images in ISIS (amica2isis), updates to AMICA start time (sumspice), radiometric calibration (amicacal) including smear correction, applying SPICE ephemeris, adjusting control using Gaskell SUMFILEs (sumspice), projecting individual images (cam2map) and creating global or local mosaics. The application amicacal also has an option to remove pixels corresponding to the polarizing filters on the left side of the image frame. The amicacal application will include a correction for the Point Spread Function. The latest version of the PSF, published by Ishiguro et al. in 2014, includes correction for the effect of scattered light. This effect is important to correct because it can add errors at the 10% level and mostly affects the longer-wavelength filters such as zs and p. The Hayabusa team decided to use the color data for six of the filters for scientific analysis after correcting for the scattered light. We will present calibrated data in I/F for all seven AMICA color filters. All newly implemented ISIS applications and map projections from this work have been or will be distributed to the community via ISIS public releases. We also processed the NIRS spectrometer data, and we will perform photometric modeling, then apply photometric corrections, and finally extract mineralogical parameters. The end results will be the creation of pyroxene chemistry and olivine/pyroxene ratio maps of Itokawa using NIRS and AMICA map products. All the products from this work will be archived on the PDS website. This work was supported by NASA Planetary Missions Data Analysis Program grant NNX13AP27G.

  20. Mathematical analysis of the 1D model and reconstruction schemes for magnetic particle imaging

    NASA Astrophysics Data System (ADS)

    Erb, W.; Weinmann, A.; Ahlborg, M.; Brandt, C.; Bringout, G.; Buzug, T. M.; Frikel, J.; Kaethner, C.; Knopp, T.; März, T.; Möddel, M.; Storath, M.; Weber, A.

    2018-05-01

    Magnetic particle imaging (MPI) is a promising new in vivo medical imaging modality in which distributions of super-paramagnetic nanoparticles are tracked based on their response in an applied magnetic field. In this paper we provide a mathematical analysis of the modeled MPI operator in the univariate situation. We provide a Hilbert space setup, in which the MPI operator is decomposed into simple building blocks and in which these building blocks are analyzed with respect to their mathematical properties. In turn, we obtain an analysis of the MPI forward operator and, in particular, of its ill-posedness properties. We further show that the singular values of the MPI core operator decrease exponentially. We complement our analytic results with some numerical studies which, in particular, suggest a rapid decay of the singular values of the MPI operator.

  1. Micro-computed tomography imaging and analysis in developmental biology and toxicology.

    PubMed

    Wise, L David; Winkelmann, Christopher T; Dogdas, Belma; Bagchi, Ansuman

    2013-06-01

    Micro-computed tomography (micro-CT) is a high resolution imaging technique that has expanded and strengthened in use since it was last reviewed in this journal in 2004. The technology has expanded to include more detailed analysis of bone, as well as soft tissues, by use of various contrast agents. It is increasingly applied to questions in developmental biology and developmental toxicology. Relatively high-throughput protocols now provide a powerful and efficient means to evaluate embryos and fetuses subjected to genetic manipulations or chemical exposures. This review provides an overview of the technology, including scanning, reconstruction, visualization, segmentation, and analysis of micro-CT generated images. This is followed by a review of more recent applications of the technology in some common laboratory species that highlight the diverse issues that can be addressed. Copyright © 2013 Wiley Periodicals, Inc.

  2. Towards an Analysis of Visual Images in School Science Textbooks and Press Articles about Science and Technology

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Kostas; Koulaidis, Vasilis; Sklaveniti, Spyridoula

    2003-04-01

    This paper aims at presenting the application of a grid for the analysis of the pedagogic functions of visual images included in school science textbooks and daily press articles about science and technology. The analysis is made using the dimensions of content specialisation (classification) and social-pedagogic relationships (framing) promoted by the images as well as the elaboration and abstraction of the corresponding visual code (formality), thus combining pedagogical and socio-semiotic perspectives. The grid is applied to the analysis of 2819 visual images collected from school science textbooks and another 1630 visual images additionally collected from the press. The results show that the science textbooks in comparison to the press material: a) use ten times more images, b) use more images so as to familiarise their readers with the specialised techno-scientific content and codes, and c) tend to create a sense of higher empowerment for their readers by using the visual mode. Furthermore, as the educational level of the school science textbooks (i.e., from primary to lower secondary level) rises, the content specialisation projected by the visual images and the elaboration and abstraction of the corresponding visual code also increases. The above results have implications for the terms and conditions for the effective exploitation of visual material as the educational level rises as well as for the effective incorporation of visual images from press material into science classes.

  3. Constraint factor graph cut-based active contour method for automated cellular image segmentation in RNAi screening.

    PubMed

    Chen, C; Li, H; Zhou, X; Wong, S T C

    2008-05-01

    Image-based, high-throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the image in order to extract long and thin protrusions on the spiky cells. Then, the constraint factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using seeded watershed and with the ground truth (manual labelling by experts on RNAi screening data), our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. The positive results indicate that the proposed method can be applied in automatic image analysis of multi-channel image screening data.
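
    A generic sketch of the nuclei-extraction step used to initialize cytoplasm segmentation, shown with a distance-transform-plus-watershed baseline rather than the GCBAC method of the paper; the thresholds and the nuclear-channel array are assumptions for illustration.

```python
# Sketch of nuclei extraction to seed cytoplasm segmentation (assumption: `dapi` is a
# 2D numpy array of the nuclear channel; parameters are illustrative, and watershed is
# a generic baseline, not the GCBAC method of the paper).
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, morphology, segmentation

def segment_nuclei(dapi):
    mask = dapi > filters.threshold_otsu(dapi)
    mask = morphology.remove_small_objects(mask, min_size=100)
    distance = ndi.distance_transform_edt(mask)
    # Local maxima of the distance map seed one label per nucleus.
    seeds, _ = ndi.label(morphology.h_maxima(distance, h=2))
    labels = segmentation.watershed(-distance, seeds, mask=mask)
    return labels  # labelled nuclei, later used to assign cytoplasm regions
```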

  4. Comparative study on fluorescence spectra of Chinese medicine north and south isatis root granules

    NASA Astrophysics Data System (ADS)

    Liang, Lan; He, Qing; Chen, Zhenqiang; Zhu, Siqi

    2016-03-01

    Since spectral imaging technology emerged, it has found many applications in the military field, precision agriculture and biomedical science. When fluorescence spectral imaging was first applied to the detection of characteristic resources of Chinese herbal medicine, its holistic yet ambiguous character made it a new approach to traditional Chinese medicine testing. In this paper, we applied this method to study the Chinese medicine north and south isatis root granules by comparing their fluorescence spectra. Cluster analysis showed that north and south Banlangen could not be separated by origin. These results indicate that there is a large difference in the quality of Banlangen granules on the market, and that the fluorescence spectral imaging method can be used to monitor the quality of radix isatidis granules.

  5. Forensic Science Research and Development at the National Institute of Justice: Opportunities in Applied Physics

    NASA Astrophysics Data System (ADS)

    Dutton, Gregory

    Forensic science is a collection of applied disciplines that draws from all branches of science. A key question in forensic analysis is: to what degree do a piece of evidence and a known reference sample share characteristics? Quantification of similarity, estimation of uncertainty, and determination of relevant population statistics are of current concern. A 2016 PCAST report questioned the foundational validity and the validity in practice of several forensic disciplines, including latent fingerprints, firearms comparisons and DNA mixture interpretation. One recommendation was the advancement of objective, automated comparison methods based on image analysis and machine learning. These concerns parallel the National Institute of Justice's ongoing R&D investments in applied chemistry, biology and physics. NIJ maintains a funding program spanning fundamental research with potential for forensic application to the validation of novel instruments and methods. Since 2009, NIJ has funded over $179M in external research to support the advancement of accuracy, validity and efficiency in the forensic sciences. An overview of NIJ's programs will be presented, with examples of relevant projects from fluid dynamics, 3D imaging, acoustics, and materials science.

  6. TH-CD-202-01: BEST IN PHYSICS (JOINT IMAGING-THERAPY): Evaluation of the Use of Direct Electron Density CT Images in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Sun, B; Li, H

    Purpose: The current standard for calculation of photon and electron dose requires conversion of Hounsfield Units (HU) to Electron Density (ED) by applying a calibration curve specifically constructed for the corresponding CT tube voltage. This practice limits the use of the CT scanner to a single tube voltage and hinders the freedom in the selection of optimal tube voltage for better image quality. The objective of this study is to report a prototype CT reconstruction algorithm that provides direct ED images from the raw CT data independently of the tube voltages used during acquisition. Methods: A tissue substitute phantom was scanned for stoichiometric CT calibrations at tube voltages of 70kV, 80kV, 100kV, 120kV and 140kV respectively. HU images and direct ED images were acquired sequentially on a thoracic anthropomorphic phantom at the same tube voltages. Electron densities converted from the HU images were compared to EDs obtained from the direct ED images. A 7-field treatment plan was made on all HU and ED images. Gamma analysis was performed to quantitatively demonstrate the dosimetric differences between the two schemes for acquiring ED. Results: The average deviation of EDs obtained from the direct ED images was −1.5%±2.1% from the EDs from HU images with the corresponding CT calibration curves applied. Gamma analysis on dose calculated on the direct ED images and the HU images acquired at the same tube voltage indicated negligible differences, with the lowest passing rate at 99.9%. Conclusion: Direct ED images require no CT calibration while demonstrating equivalent dosimetry compared to that obtained from standard HU images. The ability to acquire direct ED images simplifies the current practice and makes it safer by eliminating CT calibration and HU conversion from commissioning and treatment planning, respectively. Furthermore, it unlocks a wider range of tube voltages in the CT scanner for better imaging quality while maintaining similar dosimetric accuracy.

  7. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performances for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.

  8. Are patient specific meshes required for EIT head imaging?

    PubMed

    Jehl, Markus; Aristovich, Kirill; Faulkner, Mayo; Holder, David

    2016-06-01

    Head imaging with electrical impedance tomography (EIT) is usually done with time-differential measurements, to reduce time-invariant modelling errors. Previous research suggested that more accurate head models improved image quality, but no thorough analysis has been done on the required accuracy. We propose a novel pipeline for creation of precise head meshes from magnetic resonance imaging and computed tomography scans, which was applied to four different heads. Voltages were simulated on all four heads for perturbations of different magnitude, haemorrhage and ischaemia, in five different positions and for three levels of instrumentation noise. Statistical analysis showed that reconstructions on the correct mesh were on average 25% better than on the other meshes. However, the stroke detection rates were not improved. We conclude that a generic head mesh is sufficient for monitoring patients for secondary strokes following head trauma.

  9. Elemental X-ray Imaging Using the Maia Detector Array: The Benefits and Challenges of Large Solid-Angle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, C.G.; De Geronimo, G.; Kirkham, R.

    2009-11-13

    The fundamental parameter method for quantitative SXRF and PIXE analysis and imaging using the dynamic analysis method is extended to model the changing X-ray yields and detector sensitivity with angle across large detector arrays. The method is implemented in the GeoPIXE software and applied to cope with the large solid-angle of the new Maia 384 detector array and its 96 detector prototype developed by CSIRO and BNL for SXRF imaging applications at the Australian and NSLS synchrotrons. Peak-to-background is controlled by mitigating charge-sharing between detectors through careful optimization of a patterned molybdenum absorber mask. A geological application demonstrates the capability of the method to produce high definition elemental images up to ~100 M pixels in size.

  10. Spatiotemporal Change Detection Using Landsat Imagery: the Case Study of Karacabey Flooded Forest, Bursa, Turkey

    NASA Astrophysics Data System (ADS)

    Akay, A. E.; Gencal, B.; Taş, İ.

    2017-11-01

    This short paper aims to detect spatiotemporal land use/land cover change within the Karacabey Flooded Forest region. Change detection analysis was applied to a Landsat 5 TM image from July 2000 and a Landsat 8 OLI image from June 2017. Various image processing tools were implemented using ERDAS 9.2, ArcGIS 10.4.1, and ENVI to conduct spatiotemporal change detection over these two images, including band selection, corrections, subsetting, classification, recoding, accuracy assessment, and change detection analysis. Image classification revealed five significant land use/land cover types: forest, flooded forest, swamp, water, and other lands (i.e. agriculture, sand, roads, settlement, and open areas). The results indicated an increase in flooded forest, water, and other lands, while forest and swamp cover decreased.
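
    Post-classification change detection of this kind is commonly summarised with a change (cross-tabulation) matrix between the two classified maps. The following is a minimal, generic numpy sketch on randomly generated toy maps; it is not the ERDAS/ENVI workflow used in the study, and the class list merely mirrors the categories named above.

```python
import numpy as np

classes = ["forest", "flooded_forest", "swamp", "water", "other"]

# Toy classification maps for two dates (values are indices into `classes`).
rng = np.random.default_rng(0)
map_2000 = rng.integers(0, len(classes), size=(100, 100))
map_2017 = rng.integers(0, len(classes), size=(100, 100))

# Change matrix: rows = class in 2000, columns = class in 2017 (pixel counts).
change_matrix = np.zeros((len(classes), len(classes)), dtype=int)
np.add.at(change_matrix, (map_2000.ravel(), map_2017.ravel()), 1)

for i, name in enumerate(classes):
    net = change_matrix[:, i].sum() - change_matrix[i, :].sum()
    print(f"{name:>15}: net change {net:+d} pixels")
```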

  11. Generating land cover boundaries from remotely sensed data using object-based image analysis: overview and epidemiological application

    PubMed Central

    Maxwell, Susan K.

    2010-01-01

    Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high resolution aerial photograph (1 m) and a medium resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application. PMID:21135917

  12. Close-range hyperspectral image analysis for the early detection of stress responses in individual plants in a high-throughput phenotyping platform

    NASA Astrophysics Data System (ADS)

    Mohd Asaari, Mohd Shahrimie; Mishra, Puneet; Mertens, Stien; Dhondt, Stijn; Inzé, Dirk; Wuyts, Nathalie; Scheunders, Paul

    2018-04-01

    The potential of close-range hyperspectral imaging (HSI) as a tool for detecting early drought stress responses in plants grown in a high-throughput plant phenotyping platform (HTPPP) was explored. Reflectance spectra from leaves in close-range imaging are highly influenced by plant geometry and its specific alignment towards the imaging system. This induces high uninformative variability in the recorded signals, whereas the spectral signature informing on plant biological traits remains undisclosed. A linear reflectance model that describes the effect of the distance and orientation of each pixel of a plant with respect to the imaging system was applied. By solving this model for the linear coefficients, the spectra were corrected for the uninformative illumination effects. This approach, however, was constrained by the requirement of a reference spectrum, which was difficult to obtain. As an alternative, the standard normal variate (SNV) normalisation method was applied to reduce this uninformative variability. Once the envisioned illumination effects were eliminated, the remaining differences in plant spectra were assumed to be related to changes in plant traits. To distinguish the stress-related phenomena from regular growth dynamics, a spectral analysis procedure was developed based on clustering, a supervised band selection, and a direct calculation of a spectral similarity measure against a reference. To test the significance of the discrimination between healthy and stressed plants, a statistical test was conducted using a one-way analysis of variance (ANOVA) technique. The proposed analysis technique was validated with HSI data of maize plants (Zea mays L.) acquired in a HTPPP for the early detection of drought stress. Results showed that pre-processing of the reflectance spectra with SNV effectively reduces the variability due to the expected illumination effects. The proposed spectral analysis method on the normalized spectra successfully detected drought stress from the third day of drought induction, confirming the potential of HSI for drought stress detection studies and further supporting its adoption in HTPPPs.
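
    The SNV normalisation referred to above is simply a per-spectrum centring and scaling. A minimal sketch (synthetic spectra, not the maize HSI data) showing that it removes additive offsets and multiplicative gains:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row) individually."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Synthetic reflectance spectra with different offsets/gains (illumination effects).
wavelengths = np.linspace(400, 1000, 200)
base = 0.3 + 0.2 * np.exp(-((wavelengths - 680) / 40.0) ** 2)
spectra = np.vstack([0.8 * base + 0.05, 1.3 * base - 0.02])

# After SNV, both spectra collapse onto the same normalised shape.
print(np.allclose(snv(spectra)[0], snv(spectra)[1], atol=1e-6))  # True
```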

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cullen, David A; Koestner, Roland; Kukreja, Ratan

    Improved conditions for imaging and spectroscopic mapping of thin perfluorosulfonic acid (PFSA) ionomer layers in fuel cell electrodes by scanning transmission electron microscopy (STEM) have been investigated. These conditions are first identified on model systems of Nafion ionomer-coated nanostructured thin films and nanoporous Si. The optimized conditions are then applied in a quantitative study of the ionomer through-layer loading for two typical electrode catalyst coatings using electron energy loss and energy dispersive X-ray spectroscopy in the transmission electron microscope. The e-beam induced damage to the perfluorosulfonic acid (PFSA) ionomer is quantified by following the fluorine mass loss with electron exposure and is then mitigated by a few orders of magnitude using cryogenic specimen cooling and a higher incident electron voltage. Multivariate statistical analysis is also applied to the analysis of spectrum images for data denoising and unbiased separation of independent components related to the catalyst, ionomer, and support.

  14. From plastic to gold: a unified classification scheme for reference standards in medical image processing

    NASA Astrophysics Data System (ADS)

    Lehmann, Thomas M.

    2002-05-01

    Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopted from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if any reference standard relies on a different procedure than that to be evaluated, or on other images or image modalities than those used routinely. This criterion bans the simultaneous use of one image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible. If random parameters or optimization strategies are applied, the reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard satisfy Criteria 1 to 3. Any standard satisfying only two criteria, i.e., Criterion 1 and Criterion 2 or Criterion 1 and Criterion 3, is referred to as a silver standard. Other standards are termed plastic. Before exhaustive evaluation based on gold or silver standards is performed, its relevance must be shown (Criterion 4) and sufficient tests must be carried out to support statistically founded analysis (Criterion 5). In this paper, examples are given for each class of reference standards.

  15. Color image analysis technique for measuring of fat in meat: an application for the meat industry

    NASA Astrophysics Data System (ADS)

    Ballerini, Lucia; Hogberg, Anders; Lundstrom, Kerstin; Borgefors, Gunilla

    2001-04-01

    Intramuscular fat content in meat influences some important meat quality characteristics. The aim of the present study was to develop and apply image processing techniques to quantify intramuscular fat content in beef together with the visual appearance of fat in meat (marbling). Color images of M. longissimus dorsi meat samples with varying intramuscular fat content and marbling were captured. Image analysis software was specially developed for the interpretation of these images. In particular, a segmentation algorithm (i.e. classification of the different substances: fat, muscle and connective tissue) was optimized in order to obtain a proper classification and perform the subsequent analysis. Segmentation of muscle from fat was achieved based on their characteristics in the 3D color space and on the intrinsic fuzzy nature of these structures. The method is fully automatic and combines a fuzzy clustering algorithm, the Fuzzy c-Means Algorithm, with a Genetic Algorithm. The percentages of the various colors (i.e. substances) within the sample are then determined; the number, size distribution, and spatial distribution of the extracted fat flecks are measured. Measurements are correlated with chemical and sensory properties. Results so far show that advanced image analysis is useful for quantifying the visual appearance of meat.

  16. Effect of slice thickness on brain magnetic resonance image texture analysis

    PubMed Central

    2010-01-01

    Background The accuracy of texture analysis in clinical evaluation of magnetic resonance images depends considerably on imaging arrangements and various image quality parameters. In this paper, we study the effect of slice thickness on brain tissue texture analysis using a statistical approach and classification of T1-weighted images of clinically confirmed multiple sclerosis patients. Methods We averaged the intensities of three consecutive 1-mm slices to simulate 3-mm slices. Two hundred sixty-four texture parameters were calculated for both the original and the averaged slices. Wilcoxon's signed ranks test was used to find differences between the regions of interest representing white matter and multiple sclerosis plaques. Linear and nonlinear discriminant analyses were applied with several separate training and test sets to determine the actual classification accuracy. Results Only moderate differences in distributions of the texture parameter value for 1-mm and simulated 3-mm-thick slices were found. Our study also showed that white matter areas are well separable from multiple sclerosis plaques even if the slice thickness differs between training and test sets. Conclusions Three-millimeter-thick magnetic resonance image slices acquired with a 1.5 T clinical magnetic resonance scanner seem to be sufficient for texture analysis of multiple sclerosis plaques and white matter tissue. PMID:20955567
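
    The slice-thickness simulation described above amounts to averaging groups of three consecutive 1-mm slices. A minimal sketch with a synthetic volume (the array sizes are arbitrary, not those of the clinical data):

```python
import numpy as np

# Synthetic T1-weighted volume: 30 slices of 1 mm thickness, 64x64 pixels each.
rng = np.random.default_rng(1)
volume_1mm = rng.normal(loc=100.0, scale=10.0, size=(30, 64, 64))

# Simulate 3-mm slices by averaging the intensities of three consecutive 1-mm slices.
n_groups = volume_1mm.shape[0] // 3
volume_3mm = volume_1mm[: n_groups * 3].reshape(n_groups, 3, 64, 64).mean(axis=1)

print(volume_1mm.shape, "->", volume_3mm.shape)  # (30, 64, 64) -> (10, 64, 64)
```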

  17. A forensic science perspective on the role of images in crime investigation and reconstruction.

    PubMed

    Milliet, Quentin; Delémont, Olivier; Margot, Pierre

    2014-12-01

    This article presents a global vision of images in forensic science. The proliferation of perspectives on the use of images throughout criminal investigations and the increasing demand for research on this topic seem to demand a forensic science-based analysis. In this study, the definitions of and concepts related to material traces are revisited and applied to images, and a structured approach is used to persuade the scientific community to extend and improve the use of images as traces in criminal investigations. Current research efforts focus on technical issues and evidence assessment. This article provides a sound foundation for rationalising and explaining the processes involved in the production of clues from trace images. For example, the mechanisms through which these visual traces become clues of presence or action are described. An extensive literature review of forensic image analysis emphasises the existing guidelines and knowledge available for answering investigative questions (who, what, where, when and how). However, complementary developments are still necessary to demystify many aspects of image analysis in forensic science, including how to review and select images or use them to reconstruct an event or assist intelligence efforts. The hypothetico-deductive reasoning pathway used to discover unknown elements of an event or crime can also help scientists understand the underlying processes involved in their decision making. An analysis of a single image in an investigative or probative context is used to demonstrate the highly informative potential of images as traces and/or clues. Research efforts should be directed toward formalising the extraction and combination of clues from images. An appropriate methodology is key to expanding the use of images in forensic science. Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  18. Quantitative analysis of image quality for acceptance and commissioning of an MRI simulator with a semiautomatic method.

    PubMed

    Chen, Xinyuan; Dai, Jianrong

    2018-05-01

    Magnetic Resonance Imaging (MRI) simulation differs from diagnostic MRI in purpose, technical requirements, and implementation. We propose a semiautomatic method for image acceptance and commissioning for the scanner, the radiofrequency (RF) coils, and pulse sequences for an MRI simulator. The ACR MRI accreditation large phantom was used for image quality analysis with seven parameters. Standard ACR sequences with a split head coil were adopted to examine the scanner's basic performance. The performance of simulation RF coils were measured and compared using the standard sequence with different clinical diagnostic coils. We used simulation sequences with simulation coils to test the quality of image and advanced performance of the scanner. Codes and procedures were developed for semiautomatic image quality analysis. When using standard ACR sequences with a split head coil, image quality passed all ACR recommended criteria. The image intensity uniformity with a simulation RF coil decreased about 34% compared with the eight-channel diagnostic head coil, while the other six image quality parameters were acceptable. Those two image quality parameters could be improved to more than 85% by built-in intensity calibration methods. In the simulation sequences test, the contrast resolution was sensitive to the FOV and matrix settings. The geometric distortion of simulation sequences such as T1-weighted and T2-weighted images was well-controlled in the isocenter and 10 cm off-center within a range of ±1% (2 mm). We developed a semiautomatic image quality analysis method for quantitative evaluation of images and commissioning of an MRI simulator. The baseline performances of simulation RF coils and pulse sequences have been established for routine QA. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
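
    One of the image-quality parameters mentioned, image intensity uniformity, is commonly reported as percent integral uniformity (PIU). The sketch below is a simplified, hedged illustration on a synthetic phantom image: the ACR procedure uses small search ROIs for the maximum and minimum rather than raw pixel extrema, and this is not the authors' code.

```python
import numpy as np

def percent_integral_uniformity(image, mask):
    """Simplified PIU: 100 * (1 - (max - min) / (max + min)) over the masked ROI."""
    vals = image[mask]
    high, low = vals.max(), vals.min()
    return 100.0 * (1.0 - (high - low) / (high + low))

# Synthetic phantom image with a mild intensity roll-off towards the edges.
y, x = np.mgrid[-64:64, -64:64]
image = 1000.0 - 0.02 * (x ** 2 + y ** 2)
roi = x ** 2 + y ** 2 < 50 ** 2  # central circular ROI

print(f"PIU = {percent_integral_uniformity(image, roi):.1f}%")
```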

  19. Hyperspectral imaging coupled with chemometric analysis for non-invasive differentiation of black pens

    NASA Astrophysics Data System (ADS)

    Chlebda, Damian K.; Majda, Alicja; Łojewski, Tomasz; Łojewska, Joanna

    2016-11-01

    Differentiation of the written text can be performed with a non-invasive and non-contact tool that connects conventional imaging methods with spectroscopy. Hyperspectral imaging (HSI) is a relatively new and rapid analytical technique that can be applied in forensic science disciplines. It allows an image of the sample to be acquired, with full spectral information within every pixel. For this paper, HSI and three statistical methods (hierarchical cluster analysis, principal component analysis, and spectral angle mapper) were used to distinguish between traces of modern black gel pen inks. Non-invasiveness and high efficiency are among the unquestionable advantages of ink differentiation using HSI. It is also less time-consuming than traditional methods such as chromatography. In this study, a set of 45 modern gel pen ink marks deposited on a paper sheet were registered. The spectral characteristics embodied in every pixel were extracted from an image and analysed using statistical methods, externally and directly on the hypercube. As a result, different black gel inks deposited on paper can be distinguished and classified into several groups, in a non-invasive manner.
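
    Of the three statistical methods mentioned, the spectral angle mapper (SAM) has the simplest closed form: the angle between a pixel spectrum and a reference spectrum. A minimal sketch with invented ink spectra (the band values are illustrative only):

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Spectral angle mapper: angle (radians) between a pixel spectrum and a reference."""
    s = np.asarray(spectrum, dtype=float)
    r = np.asarray(reference, dtype=float)
    cos_theta = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Toy ink spectra (reflectance over a few bands); a smaller angle means a more similar ink.
ink_a = np.array([0.12, 0.10, 0.09, 0.20, 0.35])
ink_b = np.array([0.24, 0.20, 0.18, 0.40, 0.70])   # same shape as ink_a, different intensity
ink_c = np.array([0.30, 0.28, 0.25, 0.22, 0.20])   # different spectral shape

print(spectral_angle(ink_a, ink_b))  # ~0: same spectral shape, grouped together
print(spectral_angle(ink_a, ink_c))  # larger angle: likely a different ink
```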

  20. Efficient analysis of three dimensional EUV mask induced imaging artifacts using the waveguide decomposition method

    NASA Astrophysics Data System (ADS)

    Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas

    2009-10-01

    This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three dimensional mask-induced imaging artifacts in EUV lithography. The major mask diffraction induced imaging artifacts are first identified by applying the Zernike analysis of the mask nearfield spectrum of 2D lines/spaces. Three dimensional mask features like 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature orientation dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the reasons of EUV-specific imaging artifacts and to devise illumination and feature dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. At last, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.

  1. Multiwavelet grading of prostate pathological images

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Jafari-Khouzani, Kourosh

    2002-05-01

    We have developed image analysis methods to automatically grade pathological images of prostate. The proposed method assigns each image a Gleason grade between 1 and 5. This is done using features extracted from multiwavelet transformations. We extract energy and entropy features from the submatrices obtained in the decomposition. Next, we apply a k-NN classifier to grade the image. To find the optimal multiwavelet basis, preprocessing, and classifier, we use features extracted by different multiwavelets with either critically sampled preprocessing or repeated row preprocessing and different k-NN classifiers, and compare their performances, evaluated by total misclassification rate (TMR). To evaluate sensitivity to noise, we add white Gaussian noise to the images and compare the results (TMRs). We applied the proposed methods to 100 images. We evaluated the first and second levels of decomposition using the Geronimo, Hardin, and Massopust (GHM), Chui and Lian (CL), and Shen (SA4) multiwavelets. We also evaluated the k-NN classifier for k=1,2,3,4,5. Experimental results illustrate that the first level of decomposition is quite noisy. They also show that critically sampled preprocessing outperforms repeated row preprocessing and is less sensitive to noise. Finally, comparison studies indicate that the SA4 multiwavelet and the k-NN classifier with k=1 generate optimal results (with the smallest TMR of 3%).
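
    The feature pipeline described (sub-band energy and entropy followed by a k-NN classifier) can be sketched as below. Note the caveat: PyWavelets does not provide the GHM, CL or SA4 multiwavelets used in the study, so a standard scalar wavelet (db4) and synthetic textures stand in purely to illustrate the pipeline.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def texture_features(image, wavelet="db4", level=2):
    """Energy and Shannon entropy of each wavelet sub-band, concatenated into a feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    bands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    feats = []
    for band in bands:
        energy = np.mean(band ** 2)
        p = band.ravel() ** 2
        p = p / (p.sum() + 1e-12)
        entropy = -np.sum(p * np.log2(p + 1e-12))
        feats.extend([energy, entropy])
    return np.array(feats)

# Toy textures standing in for grades: increasing noise level mimics coarser texture.
rng = np.random.default_rng(0)
X, y = [], []
for grade, noise in enumerate([1.0, 5.0, 10.0], start=1):
    for _ in range(10):
        img = rng.normal(scale=noise, size=(64, 64)) + np.linspace(0, 50, 64)
        X.append(texture_features(img))
        y.append(grade)

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
test = rng.normal(scale=5.0, size=(64, 64)) + np.linspace(0, 50, 64)
print(clf.predict([texture_features(test)]))  # most likely grade 2 on this toy data
```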

  2. The Mathematics of Four or More N-Localizers for Stereotactic Neurosurgery.

    PubMed

    Brown, Russell A

    2015-10-13

    The mathematics that were originally developed for the N-localizer apply to three N-localizers that produce three sets of fiducials in a tomographic image. Some applications of the N-localizer use four N-localizers that produce four sets of fiducials; however, the mathematics that apply to three sets of fiducials do not apply to four sets of fiducials. This article presents mathematics that apply to four or more sets of fiducials that all lie within one planar tomographic image. In addition, these mathematics are extended to apply to four or more fiducials that do not all lie within one planar tomographic image, as may be the case with magnetic resonance (MR) imaging where a volume is imaged instead of a series of planar tomographic images. Whether applied to a planar image or a volume image, the mathematics of four or more N-localizers provide a statistical measure of the quality of the image data that may be influenced by factors, such as the nonlinear distortion of MR images.

  3. The Graduate Training Programme "Molecular Imaging for the Analysis of Gene and Protein Expression": A Case Study with an Insight into the Participation of Universities of Applied Sciences

    ERIC Educational Resources Information Center

    Hafner, Mathias

    2008-01-01

    Cell biology and molecular imaging technologies have made enormous progress in basic research. However, the transfer of this knowledge to the pharmaceutical drug discovery process, or even therapeutic improvements for disorders such as neuronal diseases, is still in its infancy. This transfer needs scientists who can integrate basic research with…

  4. Image-Subtraction Photometry of Variable Stars in the Globular Clusters NGC 6388 and NGC 6441

    NASA Technical Reports Server (NTRS)

    Corwin, Michael T.; Sumerel, Andrew N.; Pritzl, Barton J.; Smith, Horace A.; Catelan, M.; Sweigart, Allen V.; Stetson, Peter B.

    2006-01-01

    We have applied Alard's image subtraction method (ISIS v2.1) to the observations of the globular clusters NGC 6388 and NGC 6441 previously analyzed using standard photometric techniques (DAOPHOT, ALLFRAME). In this reanalysis of observations obtained at CTIO, besides recovering the variables previously detected on the basis of our ground-based images, we have also been able to recover most of the RR Lyrae variables previously detected only in the analysis of Hubble Space Telescope WFPC2 observations of the inner region of NGC 6441. In addition, we report five possible new variables not found in the analysis of the HST observations of NGC 6441. This dramatically illustrates the capabilities of image subtraction techniques applied to ground-based data to recover variables in extremely crowded fields. We have also detected twelve new variables and six possible variables in NGC 6388 not found in our previous ground-based studies. Revised mean periods for RRab stars in NGC 6388 and NGC 6441 are 0.676 day and 0.756 day, respectively. These values are among the largest known for any galactic globular cluster. Additional probable type II Cepheids were identified in NGC 6388, confirming its status as a metal-rich globular cluster rich in Cepheids.

  5. Texture feature extraction based on wavelet transform and gray-level co-occurrence matrices applied to osteosarcoma diagnosis.

    PubMed

    Hu, Shan; Xu, Chao; Guan, Weiqiao; Tang, Yong; Liu, Yana

    2014-01-01

    Osteosarcoma is the most common malignant bone tumor among children and adolescents. In this study, image texture analysis was performed to extract texture features from bone CR images to evaluate the recognition rate of osteosarcoma. To obtain the optimal set of features, Sym4 and Db4 wavelet transforms and gray-level co-occurrence matrices were applied to the images, with statistical methods used to maximize the feature selection. To evaluate the performance of these methods, a support vector machine algorithm was used. The experimental results demonstrated that the Sym4 wavelet had a higher classification accuracy (93.44%) than the Db4 wavelet with respect to osteosarcoma occurrence in the epiphysis, whereas the Db4 wavelet had a higher classification accuracy (96.25%) for osteosarcoma occurrence in the diaphysis. Accuracy, sensitivity, specificity and ROC-curve results obtained using the wavelets were all higher than those obtained using the features derived from the GLCM method. It is concluded that a set of texture features can be extracted from the wavelets and used in computer-aided osteosarcoma diagnosis systems. In addition, this study confirms that multi-resolution analysis is a useful tool for texture feature extraction during bone CR image processing.
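
    A hedged sketch of the GLCM half of the feature pipeline, using scikit-image's graycomatrix/graycoprops (named greycomatrix/greycoprops in older releases) together with an SVM classifier on synthetic textures; it is illustrative only, not the authors' implementation or data.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(image):
    """Contrast, correlation, energy and homogeneity from a gray-level co-occurrence matrix."""
    img = (image * 255).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic "normal" vs "lesion" textures standing in for bone CR patches.
rng = np.random.default_rng(0)
X = [glcm_features(rng.random((32, 32)) * s) for s in [0.3] * 10 + [1.0] * 10]
y = [0] * 10 + [1] * 10

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([glcm_features(rng.random((32, 32)) * 1.0)]))  # expected class 1 on this toy data
```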

  6. Surface inspection of flat products by means of texture analysis: on-line implementation using neural networks

    NASA Astrophysics Data System (ADS)

    Fernandez, Carlos; Platero, Carlos; Campoy, Pascual; Aracil, Rafael

    1994-11-01

    This paper describes some texture-based techniques that can be applied to quality assessment of continuously produced flat products (metal strips, wooden surfaces, cork, textile products, ...). Since the most difficult task is that of inspecting for product appearance, human-like inspection ability is required. A feature common to all these products is the presence of non-deterministic texture on their surfaces. Two main subjects are discussed: statistical techniques for both surface finishing determination and surface defect analysis, as well as real-time implementation for on-line inspection in high-speed applications. For surface finishing determination, a Gray Level Difference technique is presented that operates over low resolution images, that is, non-zoomed images. Defect analysis is performed by means of statistical texture analysis over defective portions of the surface. On-line implementation is accomplished by means of neural networks. When a defect arises, textural analysis is applied, which results in a data vector acting as the input of a neural net previously trained in a supervised way. This approach aims to reach on-line performance in automated visual inspection applications when texture is present on flat product surfaces.

  7. Near-Surface Flow Fields Deduced Using Correlation Tracking and Time-Distance Analysis

    NASA Technical Reports Server (NTRS)

    DeRosa, Marc; Duvall, T. L., Jr.; Toomre, Juri

    1999-01-01

    Near-photospheric flow fields on the Sun are deduced using two independent methods applied to the same time series of velocity images observed by SOI-MDI on SOHO. Differences in travel times between f modes entering and leaving each pixel, measured using time-distance helioseismology, are used to determine sites of supergranular outflows. Alternatively, correlation tracking analysis of mesogranular scales of motion applied to the same time series is used to deduce the near-surface flow field. These two approaches provide the means to assess the patterns and evolution of horizontal flows on supergranular scales even near disk center, which is not feasible with direct line-of-sight Doppler measurements. We find that the locations of the supergranular outflows seen in flow fields generated from correlation tracking coincide well with the locations of the outflows determined from the time-distance analysis, with a mean correlation coefficient after smoothing of r̄_s = 0.840. Near-surface velocity field measurements can be used to study the evolution of the supergranular network, as merging and splitting events are observed to occur in these images. The data consist of one 2048-minute time series of high-resolution (0.6" pixels) line-of-sight velocity images taken by MDI on 1997 January 16-18 at a cadence of one minute.

  8. Seismic zonation of Port-Au-Prince using pixel- and object-based imaging analysis methods on ASTER GDEM

    USGS Publications Warehouse

    Yong, Alan; Hough, Susan E.; Cox, Brady R.; Rathje, Ellen M.; Bachhuber, Jeff; Dulberg, Ranon; Hulslander, David; Christiansen, Lisa; and Abrams, Michael J.

    2011-01-01

    We report on a preliminary study to evaluate the use of semi-automated imaging analysis of remotely-sensed DEM and field geophysical measurements to develop a seismic-zonation map of Port-au-Prince, Haiti. For in situ data, VS30 values are derived from the MASW technique deployed in and around the city. For satellite imagery, we use an ASTER GDEM of Hispaniola. We apply both pixel- and object-based imaging methods on the ASTER GDEM to explore local topography (absolute elevation values) and classify terrain types such as mountains, alluvial fans and basins/near-shore regions. We assign NEHRP seismic site class ranges based on available VS30 values. A comparison of results from imagery-based methods to results from traditional geology-based approaches reveals good overall correspondence. We conclude that image analysis of RS data provides reliable first-order site characterization results in the absence of local data and can be useful for refining detailed site maps with sparse local data.
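
    The VS30-to-site-class assignment mentioned above follows the standard NEHRP boundaries. A minimal sketch (the terrain labels and VS30 values are hypothetical examples, not measurements from the study):

```python
def nehrp_site_class(vs30_m_per_s):
    """Map a VS30 measurement (m/s) to its NEHRP site class (A-E)."""
    if vs30_m_per_s > 1500:
        return "A"
    if vs30_m_per_s > 760:
        return "B"
    if vs30_m_per_s > 360:
        return "C"
    if vs30_m_per_s >= 180:
        return "D"
    return "E"

# Hypothetical MASW-derived VS30 values for three terrain types.
for terrain, vs30 in [("mountain", 820.0), ("alluvial fan", 410.0), ("near-shore basin", 210.0)]:
    print(f"{terrain:>17}: VS30 = {vs30:6.1f} m/s -> site class {nehrp_site_class(vs30)}")
```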

  9. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work consists in estimating a DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  10. Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique

    NASA Technical Reports Server (NTRS)

    Wargo, M. J.; Witt, A. F.

    1992-01-01

    A real time thermal imaging system with temperature resolution better than +/- 0.5 C and spatial resolution of better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid encapsulated Czochralski growth configurations. The sensor can provide single/multiple point thermal information; a multi-pixel averaging algorithm has been developed which permits localized, low noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.

  11. Autonomous stress imaging cores: from concept to reality

    NASA Astrophysics Data System (ADS)

    van der Velden, Stephen; Rajic, Nik; Brooks, Chris; Galea, Steve

    2016-04-01

    The historical reliance of thermoelastic stress analysis on cooled infrared detection has created significant cost and practical impediments to the widespread use of this powerful full-field stress measurement technique. The emergence of low-cost microbolometers as a practical alternative has allowed for an expansion of the traditional role of thermoelastic stress analysis, and raises the possibility that it may in future become a viable structural health monitoring modality. Experimental results are shown to confirm that high resolution stress imagery can be obtained from an uncooled thermal camera core significantly smaller than any infrared imaging device previously applied to TSA. The paper provides a summary of progress toward the development of an autonomous stress-imaging capability based on this core.

  12. Quantitative image analysis for investigating cell-matrix interactions

    NASA Astrophysics Data System (ADS)

    Burkel, Brian; Notbohm, Jacob

    2017-07-01

    The extracellular matrix provides both chemical and physical cues that control cellular processes such as migration, division, differentiation, and cancer progression. Cells can mechanically alter the matrix by applying forces that result in matrix displacements, which in turn may localize to form dense bands along which cells may migrate. To quantify the displacements, we use confocal microscopy and fluorescent labeling to acquire high-contrast images of the fibrous material. Using a technique for quantitative image analysis called digital volume correlation, we then compute the matrix displacements. Our experimental technology offers a means to quantify matrix mechanics and cell-matrix interactions. We are now using these experimental tools to modulate mechanical properties of the matrix to study cell contraction and migration.

  13. A 3D THz image processing methodology for a fully integrated, semi-automatic and near real-time operational system

    NASA Astrophysics Data System (ADS)

    Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.

    2012-05-01

    The present study proposes a fully integrated, semi-automatic and near real-time mode-operated image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with the center frequencies around: 100 GHz and 300 GHz. The quality control of aeronautics composite multi-layered materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied on the 3-D images to extract useful information. The data is processed by extracting areas of interest. The detected areas are subjected to image analysis for more particular investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.

  14. Segmentation of medical images using explicit anatomical knowledge

    NASA Astrophysics Data System (ADS)

    Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee

    1999-07-01

    Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures and representation of knowledge. Such architecture is particularly suitable for medical image segmentation, because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the modal specification. This method has been applied to three separate problems, 3D thoracic CT, chest X-rays and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating representation of anatomical knowledge.

  15. Automated vessel segmentation using cross-correlation and pooled covariance matrix analysis.

    PubMed

    Du, Jiang; Karimi, Afshin; Wu, Yijing; Korosec, Frank R; Grist, Thomas M; Mistretta, Charles A

    2011-04-01

    Time-resolved contrast-enhanced magnetic resonance angiography (CE-MRA) provides contrast dynamics in the vasculature and allows vessel segmentation based on temporal correlation analysis. Here we present an automated vessel segmentation algorithm including automated generation of regions of interest (ROIs), cross-correlation and pooled sample covariance matrix analysis. The dynamic images are divided into multiple equal-sized regions. In each region, ROIs for artery, vein and background are generated using an iterative thresholding algorithm based on the contrast arrival time map and contrast enhancement map. Region-specific multi-feature cross-correlation analysis and pooled covariance matrix analysis are performed to calculate the Mahalanobis distances (MDs), which are used to automatically separate arteries from veins. This segmentation algorithm is applied to a dual-phase dynamic imaging acquisition scheme where low-resolution time-resolved images are acquired during the dynamic phase followed by high-frequency data acquisition at the steady-state phase. The segmented low-resolution arterial and venous images are then combined with the high-frequency data in k-space and inverse Fourier transformed to form the final segmented arterial and venous images. Results from volunteer and patient studies demonstrate the advantages of this automated vessel segmentation and dual phase data acquisition technique. Copyright © 2011 Elsevier Inc. All rights reserved.
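
    The core of the described artery/vein separation is the Mahalanobis distance computed with a pooled sample covariance matrix. A minimal sketch on toy two-feature pixels (the feature names and numbers are invented for illustration):

```python
import numpy as np

def pooled_covariance(class_a, class_b):
    """Pooled sample covariance of two feature matrices (rows = samples)."""
    na, nb = len(class_a), len(class_b)
    ca, cb = np.cov(class_a, rowvar=False), np.cov(class_b, rowvar=False)
    return ((na - 1) * ca + (nb - 1) * cb) / (na + nb - 2)

def mahalanobis(x, mean, cov):
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Toy 2-feature vectors per pixel (e.g. contrast arrival time, peak enhancement).
rng = np.random.default_rng(0)
artery = rng.normal([5.0, 200.0], [0.5, 15.0], size=(50, 2))
vein = rng.normal([12.0, 150.0], [0.8, 20.0], size=(50, 2))

cov = pooled_covariance(artery, vein)
pixel = np.array([6.0, 190.0])
print(mahalanobis(pixel, artery.mean(axis=0), cov),
      mahalanobis(pixel, vein.mean(axis=0), cov))  # smaller distance -> labelled arterial
```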

  16. Systems-level analysis of microbial community organization through combinatorial labeling and spectral imaging.

    PubMed

    Valm, Alex M; Mark Welch, Jessica L; Rieken, Christopher W; Hasegawa, Yuko; Sogin, Mitchell L; Oldenbourg, Rudolf; Dewhirst, Floyd E; Borisy, Gary G

    2011-03-08

    Microbes in nature frequently function as members of complex multitaxon communities, but the structural organization of these communities at the micrometer level is poorly understood because of limitations in labeling and imaging technology. We report here a combinatorial labeling strategy coupled with spectral image acquisition and analysis that greatly expands the number of fluorescent signatures distinguishable in a single image. As an imaging proof of principle, we first demonstrated visualization of Escherichia coli labeled by fluorescence in situ hybridization (FISH) with 28 different binary combinations of eight fluorophores. As a biological proof of principle, we then applied this Combinatorial Labeling and Spectral Imaging FISH (CLASI-FISH) strategy using genus- and family-specific probes to visualize simultaneously and differentiate 15 different phylotypes in an artificial mixture of laboratory-grown microbes. We then illustrated the utility of our method for the structural analysis of a natural microbial community, namely, human dental plaque, a microbial biofilm. We demonstrate that 15 taxa in the plaque community can be imaged simultaneously and analyzed and that this community was dominated by early colonizers, including species of Streptococcus, Prevotella, Actinomyces, and Veillonella. Proximity analysis was used to determine the frequency of inter- and intrataxon cell-to-cell associations which revealed statistically significant intertaxon pairings. Cells of the genera Prevotella and Actinomyces showed the most interspecies associations, suggesting a central role for these genera in establishing and maintaining biofilm complexity. The results provide an initial systems-level structural analysis of biofilm organization.
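
    The 28 binary combinations follow directly from choosing 2 fluorophores out of 8. A tiny sketch of that count (the fluorophore names are generic placeholders):

```python
from itertools import combinations
from math import comb

fluorophores = ["F1", "F2", "F3", "F4", "F5", "F6", "F7", "F8"]  # generic placeholder names
pairs = list(combinations(fluorophores, 2))

print(len(pairs), comb(8, 2))  # 28 28 -> 28 distinguishable binary labels from 8 fluorophores
```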

  17. Observation of a Large Landslide on La Reunion Island Using Differential Sar Interferometry (JERS and Radarsat) and Correlation of Optical (Spot5 and Aerial) Images

    PubMed Central

    Delacourt, Christophe; Raucoules, Daniel; Le Mouélic, Stéphane; Carnec, Claudie; Feurer, Denis; Allemand, Pascal; Cruchet, Marc

    2009-01-01

    Slope instabilities are one of the most important geo-hazards in terms of socio-economic costs. The island of La Réunion (Indian Ocean) is affected by constant slope movements and huge landslides due to a combination of rough topography, wet tropical climate and its specific geological context. We show that remote sensing techniques (Differential SAR Interferometry and correlation of optical images) provide complementary means to characterize landslides on a regional scale. The vegetation cover generally hampers the analysis of C–band interferograms. We used JERS-1 images to show that the L-band can be used to overcome the loss of coherence observed in Radarsat C-band interferograms. Image correlation was applied to optical airborne and SPOT 5 sensors images. The two techniques were applied to a landslide near the town of Hellbourg in order to assess their performance for detecting and quantifying the ground motion associated to this landslide. They allowed the mapping of the unstable areas. Ground displacement of about 0.5 m yr-1 was measured. PMID:22389620

  18. Ship Speed Retrieval From Single Channel TerraSAR-X Data

    NASA Astrophysics Data System (ADS)

    Soccorsi, Matteo; Lehner, Susanne

    2010-04-01

    A method to estimate the speed of a moving ship is presented. The technique, introduced in Kirscht (1998), is extended to marine application and validated on TerraSAR-X High-Resolution (HR) data. The generation of a sequence of single-look SAR images from a single- channel image corresponds to an image time series with reduced resolution. This allows applying change detection techniques on the time series to evaluate the velocity components in range and azimuth of the ship. The evaluation of the displacement vector of a moving target in consecutive images of the sequence allows the estimation of the azimuth velocity component. The range velocity component is estimated by evaluating the variation of the signal amplitude during the sequence. In order to apply the technique on TerraSAR-X Spot Light (SL) data a further processing step is needed. The phase has to be corrected as presented in Eineder et al. (2009) due to the SL acquisition mode; otherwise the image sequence cannot be generated. The analysis, when possible validated by the Automatic Identification System (AIS), was performed in the framework of the ESA project MARISS.

  20. Analysis and correction of Landsat 4 and 5 Thematic Mapper Sensor Data

    NASA Technical Reports Server (NTRS)

    Bernstein, R.; Hanson, W. A.

    1985-01-01

    Procedures for the correction and registration of Landsat TM image data are examined. The registration of Landsat-4 TM images of San Francisco to Landsat-5 TM images of San Francisco using the interactive geometric correction program and the cross-correlation technique is described. The geometric correction program and cross-correlation results are presented. The corrections of the TM data to a map reference and to a cartographic database are discussed; geometric and cartographic analyses are applied to the registration results.
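
    Cross-correlation-based registration of two overlapping scenes can be sketched with the phase-correlation routine in scikit-image. The example below uses a synthetically shifted image and illustrates the general technique only, not the interactive program described in the abstract.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

# Synthetic "reference" scene and a copy shifted by a known offset (in pixels).
rng = np.random.default_rng(0)
reference = rng.random((256, 256))
moving = nd_shift(reference, shift=(4.0, -7.0), order=1, mode="nearest")

offset, error, _ = phase_cross_correlation(reference, moving)
print(offset)  # approximately [-4.  7.]: the shift that registers `moving` onto `reference`
```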

  1. Using the Optical Mouse Sensor as a Two-Euro Counterfeit Coin Detector

    PubMed Central

    Tresanchez, Marcel; Pallejà, Tomàs; Teixidó, Mercè; Palacín, Jordi

    2009-01-01

    In this paper, the sensor of an optical mouse is presented as a counterfeit coin detector applied to the two-Euro case. The detection process is based on the short distance image acquisition capabilities of the optical mouse sensor where partial images of the coin under analysis are compared with some partial reference coin images for matching. Results show that, using only the vision sense, the counterfeit acceptance and rejection rates are very similar to those of a trained user and better than those of an untrained user. PMID:22399987

  2. High-speed railway signal trackside equipment patrol inspection system

    NASA Astrophysics Data System (ADS)

    Wu, Nan

    2018-03-01

    The high-speed railway signal trackside equipment patrol inspection system comprehensively applies TDI (time delay integration), a high-speed and highly responsive CMOS architecture, low-illumination photosensitive techniques, image data compression, machine vision, and so on. Installed on a high-speed railway inspection train, it achieves the collection, management and analysis of images of signal trackside equipment appearance while the train is running. The system automatically filters the signal trackside equipment images out of a large volume of background imagery and identifies equipment changes by comparison against the original image data. Combining ledger data and train location information, the system accurately locates the trackside equipment, effectively guiding maintenance.

  3. Realization of the FPGA-based reconfigurable computing environment by the example of morphological processing of a grayscale image

    NASA Astrophysics Data System (ADS)

    Shatravin, V.; Shashev, D. V.

    2018-05-01

    Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Results of research worldwide demonstrate the effectiveness of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available in the robotic device. The paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using morphological operations over grayscale images as an example. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
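
    As a software reference for the morphological operations realized on the reconfigurable environment, the sketch below applies standard grayscale erosion, dilation and opening with scipy.ndimage to a synthetic frame; it only illustrates the operations themselves, not the FPGA implementation.

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale frame standing in for a camera image on the robotic platform.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(120, 160)).astype(np.uint8)

# Basic grayscale morphological operations over a 3x3 neighbourhood.
eroded = ndimage.grey_erosion(frame, size=(3, 3))
dilated = ndimage.grey_dilation(frame, size=(3, 3))
opened = ndimage.grey_opening(frame, size=(3, 3))
gradient = dilated.astype(int) - eroded.astype(int)  # simple morphological edge image

print(frame.shape, gradient.min(), gradient.max())
```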

  4. Digital imaging biomarkers feed machine learning for melanoma screening.

    PubMed

    Gareau, Daniel S; Correa da Rosa, Joel; Yagerman, Sarah; Carucci, John A; Gulati, Nicholas; Hueto, Ferran; DeFazio, Jennifer L; Suárez-Fariñas, Mayte; Marghoob, Ashfaq; Krueger, James G

    2017-07-01

    We developed an automated approach for generating quantitative image analysis metrics (imaging biomarkers) that are then analysed with a set of 13 machine learning algorithms to generate an overall risk score that is called a Q-score. These methods were applied to a set of 120 "difficult" dermoscopy images of dysplastic nevi and melanomas that were subsequently excised/classified. This approach yielded 98% sensitivity and 36% specificity for melanoma detection, approaching sensitivity/specificity of expert lesion evaluation. Importantly, we found strong spectral dependence of many imaging biomarkers in blue or red colour channels, suggesting the need to optimize spectral evaluation of pigmented lesions. © 2016 The Authors. Experimental Dermatology Published by John Wiley & Sons Ltd.

  5. Proceedings of the Airborne Imaging Spectrometer Data Analysis Workshop

    NASA Technical Reports Server (NTRS)

    Vane, G. (Editor); Goetz, A. F. H. (Editor)

    1985-01-01

    The Airborne Imaging Spectrometer (AIS) Data Analysis Workshop was held at the Jet Propulsion Laboratory on April 8 to 10, 1985. It was attended by 92 people who heard reports on 30 investigations currently under way using AIS data that have been collected over the past two years. Written summaries of 27 of the presentations are in these Proceedings. Many of the results presented at the Workshop are preliminary because most investigators have been working with this fundamentally new type of data for only a relatively short time. Nevertheless, several conclusions can be drawn from the Workshop presentations concerning the value of imaging spectrometry to Earth remote sensing. First, work with AIS has shown that direct identification of minerals through high spectral resolution imaging is a reality for a wide range of materials and geological settings. Second, there are strong indications that high spectral resolution remote sensing will enhance the ability to map vegetation species. There are also good indications that imaging spectrometry will be useful for biochemical studies of vegetation. Finally, there are a number of new data analysis techniques under development which should lead to more efficient and complete information extraction from imaging spectrometer data. The results of the Workshop indicate that as experience is gained with this new class of data, and as new analysis methodologies are developed and applied, the value of imaging spectrometry should increase.

  6. Dual FIB-SEM 3D imaging and lattice boltzmann modeling of porosimetry and multiphase flow in chalk.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rinehart, Alex; Petrusak, Robin; Heath, Jason E.

    2010-12-01

    Mercury intrusion porosimetry (MIP) is an often-applied technique for determining pore throat distributions and seal analysis of fine-grained rocks. Due to closure effects, potential pore collapse, and complex pore network topologies, MIP data interpretation can be ambiguous, and often biased toward smaller pores in the distribution. We apply 3D imaging techniques and lattice-Boltzmann modeling in interpreting MIP data for samples of the Cretaceous Selma Group Chalk. In the Mississippi Interior Salt Basin, the Selma Chalk is the apparent seal for oil and gas fields in the underlying Eutaw Fm., and, where unfractured, the Selma Chalk is one of the regional-scale seals identified by the Southeast Regional Carbon Sequestration Partnership for CO2 injection sites. Dual focused ion beam-scanning electron microscopy and laser scanning confocal microscopy methods are used for 3D imaging of nanometer-to-micron scale microcrack and pore distributions in the Selma Chalk. A combination of image analysis software is used to obtain geometric pore body and throat distributions and other topological properties, which are compared to MIP results. 3D data sets of pore-microfracture networks are used in lattice Boltzmann simulations of drainage (wetting fluid displaced by non-wetting fluid via the Shan-Chen algorithm), which in turn are used to model MIP procedures. Results are used in interpreting MIP results, understanding microfracture-matrix interaction during multiphase flow, and seal analysis for underground CO2 storage.
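
    MIP pressures are conventionally converted to pore-throat diameters with the Washburn equation, d = -4·γ·cos(θ)/P. A minimal sketch using commonly assumed mercury parameters (γ ≈ 0.485 N/m, θ ≈ 140°; these constants are assumptions, not values reported in the study):

```python
import numpy as np

def washburn_throat_diameter(pressure_pa, surface_tension=0.485, contact_angle_deg=140.0):
    """Pore-throat diameter (m) from mercury intrusion pressure via the Washburn equation."""
    theta = np.radians(contact_angle_deg)
    return -4.0 * surface_tension * np.cos(theta) / pressure_pa

# Example: intrusion pressures from 0.1 MPa to 400 MPa.
for p_mpa in [0.1, 10.0, 400.0]:
    d = washburn_throat_diameter(p_mpa * 1e6)
    print(f"{p_mpa:7.1f} MPa -> throat diameter ~ {d * 1e9:9.1f} nm")
```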

  7. Mapping and spatiotemporal characterization of degraded forests in the Brazilian Amazon through remote sensing

    NASA Astrophysics Data System (ADS)

    de Souza, Carlos Moreira, Jr.

    Large forested areas have recently been impoverished by degradation caused by selective logging, forest fires and fragmentation in the Amazon region, causing partial change of the original forest structure and composition. As opposed to deforestation that has been monitored with Landsat images since the late 70's, degraded forests have not been monitored in the Amazon region. In this dissertation, remote sensing techniques for identifying and mapping unambiguously degraded forests with Landsat images are proposed. The test area was the region of Sinop, located in the state of Mato Grosso, Brazil. This region was selected because a gradient of degraded forest environments exist and a robust time-series of Landsat images and forest transect data were available. First, statistical analyses were applied to identify the best set of spectral information extracted from Landsat images to detect several types of degraded forest environments. Fraction images derived from Spectral Mixture Analysis (SMA) were the best type of information for that purpose. A new spectral index based on fraction images---Normalized Difference Fraction Index (NDFI)---was proposed to enhance the detection of canopy damaged areas in degraded forests. Second, a contextual classification algorithm was implemented to separate unambiguously forest degradation caused by anthropogenic activities from natural forest disturbances. These techniques were validated using forest transects and high resolution aerial videography images and proved to be highly accurate. Next, these techniques were applied to a time-series data set of Landsat images, encompassing 20 years, to evaluate the relationship between forest degradation and deforestation. The most important finding of the forest change detection analysis was that forest degradation and deforestation are independent events in the study area, making worse the current forest impacts in the Amazon region. Finally, the techniques developed and tested in the Sinop region were successfully applied to forty Landsat images covering other regions of the Brazilian Amazon. Standard fractions and NDFI images were computed for these other regions and both physically and spatially consistent results were obtained. An automated decision tree classification using genetic algorithm was implemented successfully to classify land cover types and sub-classes of degraded forests. The remote sensing techniques proposed in this dissertation are fully automated and have the potential to be used in tropical forest monitoring programs.

  8. Monitoring the distribution of prompt gamma rays in boron neutron capture therapy using a multiple-scattering Compton camera: A Monte Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Lee, Taewoong; Lee, Hyounggun; Lee, Wonho

    2015-10-01

    This study evaluated the use of Compton imaging technology to monitor prompt gamma rays emitted by 10B in boron neutron capture therapy (BNCT) applied to a computerized human phantom. The Monte Carlo method, including particle-tracking techniques, was used for simulation. The distribution of prompt gamma rays emitted by the phantom during irradiation with neutron beams is closely associated with the distribution of the boron in the phantom. Maximum likelihood expectation maximization (MLEM) method was applied to the information obtained from the detected prompt gamma rays to reconstruct the distribution of the tumor including the boron uptake regions (BURs). The reconstructed Compton images of the prompt gamma rays were combined with the cross-sectional images of the human phantom. Quantitative analysis of the intensity curves showed that all combined images matched the predetermined conditions of the simulation. The tumors including the BURs were distinguishable if they were more than 2 cm apart.
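
    MLEM itself has a compact multiplicative update, x_{k+1} = x_k · Aᵀ(y / (A x_k)) / Aᵀ1. The sketch below applies it to a toy 1-D emission problem with a random system matrix; it illustrates only the algorithm, not the Compton-camera geometry or Monte Carlo data of the study.

```python
import numpy as np

def mlem(system_matrix, measurements, n_iter=50):
    """Generic MLEM iteration: x <- x / (A^T 1) * A^T (y / (A x))."""
    A, y = system_matrix, measurements
    sensitivity = A.sum(axis=0)  # A^T 1
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        projection = A @ x
        ratio = y / np.maximum(projection, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x

# Toy 1-D emission problem: true source with two hot spots, random system matrix, Poisson data.
rng = np.random.default_rng(0)
x_true = np.zeros(20); x_true[5] = 10.0; x_true[14] = 6.0
A = rng.random((40, 20))
y = rng.poisson(A @ x_true)

print(np.round(mlem(A, y), 1))  # reconstruction concentrates activity near indices 5 and 14
```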

  9. End-to-end performance analysis using engineering confidence models and a ground processor prototype

    NASA Astrophysics Data System (ADS)

    Kruse, Klaus-Werner; Sauer, Maximilian; Jäger, Thomas; Herzog, Alexandra; Schmitt, Michael; Huchler, Markus; Wallace, Kotska; Eisinger, Michael; Heliere, Arnaud; Lefebvre, Alain; Maher, Mat; Chang, Mark; Phillips, Tracy; Knight, Steve; de Goeij, Bryan T. G.; van der Knaap, Frits; Van't Hof, Adriaan

    2015-10-01

    The European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA) are co-operating to develop the EarthCARE satellite mission with the fundamental objective of improving the understanding of the processes involving clouds, aerosols and radiation in the Earth's atmosphere. The EarthCARE Multispectral Imager (MSI) is relatively compact for a space borne imager. As a consequence, the immediate point-spread function (PSF) of the instrument will be mainly determined by the diffraction caused by the relatively small optical aperture. In order to still achieve a high contrast image, de-convolution processing is applied to remove the impact of diffraction on the PSF. A Lucy-Richardson algorithm has been chosen for this purpose. This paper will describe the system setup and the necessary data pre-processing and post-processing steps applied in order to compare the end-to-end image quality with the L1b performance required by the science community.
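
    The Lucy-Richardson step can be illustrated with scikit-image's richardson_lucy routine (the iteration-count argument is num_iter in recent releases, iterations in older ones). The example below deconvolves a synthetic scene blurred by an assumed Gaussian PSF; it is a generic sketch, not the EarthCARE MSI ground-processor code.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

# Synthetic scene blurred by a small Gaussian PSF standing in for diffraction blur.
rng = np.random.default_rng(0)
scene = np.zeros((64, 64)); scene[20:30, 20:30] = 1.0
x = np.arange(-3, 4)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 4.0)
psf /= psf.sum()
blurred = convolve2d(scene, psf, mode="same") + 0.001 * rng.random(scene.shape)

restored = richardson_lucy(blurred, psf, num_iter=30)
print(float(np.abs(scene - blurred).mean()), float(np.abs(scene - restored).mean()))
```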

  10. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometer-based wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). These specialized computing architectures require numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication problems and other computationally complex scientific problems.
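
    The all-to-all communication arises because a 2-D FFT decomposes into 1-D FFTs along one axis, a global transpose, and 1-D FFTs along the other axis; when rows are distributed across nodes, the transpose is the all-to-all exchange. A single-process numpy sketch of that decomposition:

```python
import numpy as np

def fft2_by_rows_and_transpose(x):
    """2-D FFT built from 1-D row FFTs and a transpose (the step that forces an
    all-to-all exchange when rows are distributed across compute nodes)."""
    step1 = np.fft.fft(x, axis=1)      # 1-D FFTs along rows (local to each node)
    step2 = step1.T                    # transpose: on a cluster, the all-to-all exchange
    step3 = np.fft.fft(step2, axis=1)  # 1-D FFTs along the new rows (former columns)
    return step3.T                     # transpose back to the original layout

rng = np.random.default_rng(0)
image = rng.random((128, 128))
print(np.allclose(fft2_by_rows_and_transpose(image), np.fft.fft2(image)))  # True
```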

  11. MDI: integrity index of cytoskeletal fibers observed by AFM

    NASA Astrophysics Data System (ADS)

    Manghi, Massimo; Bruni, Luca; Croci, Simonetta

    2016-06-01

    The Modified Directional Index (MDI) is a form factor of the angular spectrum computed from the 2D Fourier transform of an image, marking the prevalence of rectilinear features throughout the picture. We study some properties of the index and apply it to AFM images of cell cytoskeleton regions featuring patterns of rectilinear, nearly parallel actin filaments, as in the case of microfilaments grouped in bundles. The analysis of AFM images through MDI calculation quantifies changes in fiber directionality, which could be related to fiber damage. This parameter is applied to images of the Hs 578Bst cell line, a non-tumoral, non-immortalized human epithelial cell line, irradiated with X-rays at doses equivalent to typical radiotherapy treatment fractions. In the reported samples, we could conclude that the damage is mainly borne by the membrane and not by the cytoskeleton. It could be interesting to test the parameter with other kinds of chemical or physical agents as well.
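
    A hedged sketch of a directionality measure built from the angular spectrum of the 2-D Fourier transform; this is a generic illustration of the idea, not the authors' exact MDI definition:

      import numpy as np

      def angular_spectrum(img, n_bins=180):
          # Power of the centred 2-D FFT, binned by orientation angle (0-180 degrees).
          F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
          power = np.abs(F) ** 2
          ny, nx = img.shape
          y, x = np.indices(img.shape)
          angles = np.degrees(np.arctan2(y - ny // 2, x - nx // 2)) % 180.0
          hist, _ = np.histogram(angles, bins=n_bins, range=(0, 180), weights=power)
          return hist / hist.sum()

      img = np.zeros((128, 128))
      img[:, ::8] = 1.0                           # parallel "filaments" as a test pattern
      spec = angular_spectrum(img)
      directionality = spec.max() / spec.mean()   # much greater than 1 for oriented patterns
      print(round(float(directionality), 1))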

  12. Chip-based generation of carbon nanodots via electrochemical oxidation of screen printed carbon electrodes and the applications for efficient cell imaging and electrochemiluminescence enhancement

    NASA Astrophysics Data System (ADS)

    Xu, Yuanhong; Liu, Jingquan; Zhang, Jizhen; Zong, Xidan; Jia, Xiaofang; Li, Dan; Wang, Erkang

    2015-05-01

    A portable lab-on-a-chip methodology to generate ionic liquid-functionalized carbon nanodots (CNDs) was developed via electrochemical oxidation of screen printed carbon electrodes. The CNDs can be successfully applied for efficient cell imaging and solid-state electrochemiluminescence sensor fabrication on the paper-based chips.

  13. Imaging and elemental mapping of biological specimens with a dual-EDS dedicated scanning transmission electron microscope.

    PubMed

    Wu, J S; Kim, A M; Bleher, R; Myers, B D; Marvin, R G; Inada, H; Nakamura, K; Zhang, X F; Roth, E; Li, S Y; Woodruff, T K; O'Halloran, T V; Dravid, Vinayak P

    2013-05-01

    A dedicated analytical scanning transmission electron microscope (STEM) with dual energy dispersive spectroscopy (EDS) detectors has been designed for complementary high performance imaging as well as high sensitivity elemental analysis and mapping of biological structures. The performance of this new design, based on a Hitachi HD-2300A model, was evaluated using a variety of biological specimens. With three imaging detectors, both the surface and internal structure of cells can be examined simultaneously. Whole-cell elemental mapping, especially of heavier metal species that have a low cross-section for electron energy loss spectroscopy (EELS), can be faithfully obtained. Optimization of STEM imaging conditions is applied to thick as well as thin sections of biological cells under low-dose conditions at room and cryogenic temperatures. Such multimodal capabilities applied to soft/biological structures usher in a new era for analytical studies in biological systems.

  14. Direct tissue analysis by matrix-assisted laser desorption ionization mass spectrometry: application to kidney biology.

    PubMed

    Herring, Kristen D; Oppenheimer, Stacey R; Caprioli, Richard M

    2007-11-01

    Direct tissue analysis using matrix-assisted laser desorption ionization mass spectrometry (MALDI MS) provides in situ molecular analysis of a wide variety of biological molecules including xenobiotics. This technology allows measurement of these species in their native biological environment without the use of target-specific reagents such as antibodies. It can be used to profile discrete cellular regions and obtain region-specific images, providing information on the relative abundance and spatial distribution of proteins, peptides, lipids, and drugs. In this article, we report the sample preparation, MS data acquisition and analysis, and protein identification methodologies used in our laboratory for profiling/imaging MS and how this has been applied to kidney disease and toxicity.

  15. Texture analysis applied to second harmonic generation image data for ovarian cancer classification

    NASA Astrophysics Data System (ADS)

    Wen, Bruce L.; Brewer, Molly A.; Nadiarnykh, Oleg; Hocker, James; Singh, Vikas; Mackie, Thomas R.; Campagnola, Paul J.

    2014-09-01

    Remodeling of the extracellular matrix has been implicated in ovarian cancer. To quantitate the remodeling, we implement a form of texture analysis to delineate the collagen fibrillar morphology observed in second harmonic generation microscopy images of human normal and high grade malignant ovarian tissues. In the learning stage, a dictionary of "textons" (frequently occurring texture features, identified by measuring the image response to a filter bank of various shapes, sizes, and orientations) is created. By calculating a representative model based on the texton distribution for each tissue type using a training set of respective second harmonic generation images, we then perform classification between images of normal and high grade malignant ovarian tissues. By optimizing the number of textons and nearest neighbors, we achieved classification accuracy of up to 97% based on the area under receiver operating characteristic curves (true positives versus false positives). The local analysis algorithm is a more general method for probing rapidly changing fibrillar morphologies than global analyses such as the FFT. It is also more versatile than other texture approaches, as the filter bank can be highly tailored to specific applications (e.g., different disease states) by creating customized libraries based on common image features.
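
    A hedged sketch of a texton pipeline of this general kind (filter-bank responses, a k-means texton dictionary, per-image texton histograms, nearest-neighbour classification); the filter bank, image data and parameters below are placeholders, not the authors':

      import numpy as np
      from scipy import ndimage as ndi
      from sklearn.cluster import KMeans
      from sklearn.neighbors import KNeighborsClassifier

      def filter_responses(img):
          # Small hypothetical filter bank: Gaussians and first derivatives at two scales.
          feats = [ndi.gaussian_filter(img, s) for s in (1, 2)]
          feats += [ndi.gaussian_filter(img, s, order=(0, 1)) for s in (1, 2)]
          feats += [ndi.gaussian_filter(img, s, order=(1, 0)) for s in (1, 2)]
          return np.stack([f.ravel() for f in feats], axis=1)   # (n_pixels, n_filters)

      def texton_histogram(img, kmeans):
          labels = kmeans.predict(filter_responses(img))
          return np.bincount(labels, minlength=kmeans.n_clusters) / labels.size

      rng = np.random.default_rng(0)
      train_imgs = [rng.random((64, 64)) for _ in range(6)]     # stand-ins for SHG images
      train_lbls = [0, 0, 0, 1, 1, 1]                           # normal vs malignant

      kmeans = KMeans(n_clusters=16, n_init=4, random_state=0)
      kmeans.fit(np.vstack([filter_responses(im) for im in train_imgs]))

      X = [texton_histogram(im, kmeans) for im in train_imgs]
      clf = KNeighborsClassifier(n_neighbors=3).fit(X, train_lbls)
      print(clf.predict([texton_histogram(rng.random((64, 64)), kmeans)]))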

  16. Detection of Glaucoma Using Image Processing Techniques: A Critique.

    PubMed

    Kumar, B Naveen; Chauhan, R P; Dahiya, Nidhi

    2018-01-01

    The primary objective of this article is to present a summary of different types of image processing methods employed for the detection of glaucoma, a serious eye disease. Glaucoma damages the optic nerve: retinal ganglion cells die, which leads to loss of vision. The principal cause is the increase in intraocular pressure, which occurs in open-angle and angle-closure glaucoma, the two major types affecting the optic nerve. In the early stages of glaucoma, no perceptible symptoms appear. As the disease progresses, vision starts to become hazy, leading to blindness. Therefore, early detection of glaucoma is needed for prevention. Manual analysis of ophthalmic images is fairly time-consuming, and accuracy depends on the expertise of the professional. Automatic analysis of retinal images is therefore an important tool, aiding in the detection, diagnosis, and prevention of risks associated with the disease. Fundus images obtained from a fundus camera have been used for the analysis. Requisite pre-processing techniques have been applied to the images and, depending upon the technique, various classifiers have been used to detect glaucoma. The techniques reviewed here have certain advantages and disadvantages; based on this study, one can determine which technique provides an optimum result.

  17. A new scheme for velocity analysis and imaging of diffractions

    NASA Astrophysics Data System (ADS)

    Lin, Peng; Peng, Suping; Zhao, Jingtao; Cui, Xiaoqin; Du, Wenfeng

    2018-06-01

    Seismic diffractions are the responses of small-scale inhomogeneities or discontinuous geological features, which play a vital role in the exploitation and development of oil and gas reservoirs. However, diffractions are generally ignored and treated as interference noise in conventional data processing. In this paper, a new scheme for velocity analysis and imaging of seismic diffractions is proposed. The scheme comprises two steps. First, the plane-wave destruction method is used to separate diffractions from specular reflections in the prestack domain. Second, in order to accurately estimate the migration velocity of the diffractions, time-domain dip-angle gathers are derived from a Kirchhoff-based angle prestack time migration using the separated diffractions. Diffraction events appear flat in the dip-angle gathers when imaged above the diffraction point with an accurate migration velocity, and the selected velocity then produces the desired prestack imaging of diffractions. Synthetic and field examples are used to test the validity of the new scheme. The diffraction imaging results indicate that the proposed scheme can provide more detailed information about small-scale geologic features for seismic interpretation.

  18. Micro-polarimetry for pre-clinical diagnostics of pathological changes in human tissues

    NASA Astrophysics Data System (ADS)

    Golnik, Andrzej; Golnik, Natalia; Pałko, Tadeusz; Sołtysiński, Tomasz

    2008-05-01

    The paper presents a practical study of several methods of image analysis applied to polarimetric images of normal and malignant human tissues. Images of physiological and pathologically changed tissues from the body and cervix of the uterus, intestine, kidneys and breast were recorded in transmitted light of different polarization states. A conventional optical microscope with a CCD camera and rotating polarizers was used to analyze the polarization state of the light transmitted through the tissue slice for each pixel of the camera image. The set of images corresponding to the coefficients of the Stokes vectors and a 3×3 subset of the Mueller matrix, as well as maps of the magnitude and in-plane direction of the birefringent components in the sample, were calculated. Statistical analysis, the Fourier transform and autocorrelation methods were then used to analyze the spatial distribution of birefringent elements in the tissue samples. For better recognition of the tissue state, we propose a novel method that takes advantage of multiscale image data decomposition. The results were used to select optical characteristics with significantly different values for normal and malignant tissues.
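
    A minimal sketch of the per-pixel linear Stokes reduction from four polarizer orientations (0°, 45°, 90°, 135°); the full Mueller-matrix recovery used in the paper requires additional polarization states, and the input images below are synthetic stand-ins:

      import numpy as np

      rng = np.random.default_rng(0)
      I0, I45, I90, I135 = (rng.random((256, 256)) for _ in range(4))  # stand-in images

      S0 = 0.5 * (I0 + I45 + I90 + I135)     # total intensity
      S1 = I0 - I90                          # horizontal vs vertical preference
      S2 = I45 - I135                        # +45 vs -45 preference

      dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)   # degree of linear polarization
      aop = 0.5 * np.degrees(np.arctan2(S2, S1))               # in-plane orientation map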

  19. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies.

    PubMed

    Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M

    2017-10-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping.
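
    A hedged sketch of the strategy described above: fit a Random Forest on a small annotated subset mapping image descriptors to a trait, then apply it to the remainder (the descriptor and trait values below are synthetic placeholders):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      descriptors = rng.random((500, 20))        # e.g. shape/area/skeleton descriptors per image
      trait = descriptors[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(500)  # e.g. root length

      X_train, X_rest, y_train, y_rest = train_test_split(
          descriptors, trait, train_size=0.2, random_state=0)   # small annotated subset
      model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
      print(f"R^2 on the unannotated remainder: {model.score(X_rest, y_rest):.2f}")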

  20. Predicting the amount of coke deposition on catalyst pellets through image analysis and soft computing

    NASA Astrophysics Data System (ADS)

    Zhang, Jingqiong; Zhang, Wenbiao; He, Yuting; Yan, Yong

    2016-11-01

    The amount of coke deposition on catalyst pellets is one of the most important indexes of catalytic property and service life. As a result, it is essential to measure it and to analyze the active state of the catalysts during a continuous production process. This paper proposes a new method to predict the amount of coke deposition on catalyst pellets based on image analysis and soft computing. An image acquisition system consisting of a flatbed scanner and an opaque cover is used to obtain catalyst images. After image processing and feature extraction, twelve effective features are selected and the two best feature sets are determined by prediction tests. A neural network optimized by a particle swarm optimization algorithm is used to establish the prediction model of the coke amount based on various datasets. The root mean square errors of the predictions are all below 0.021 and the coefficients of determination (R2) of the models are all above 78.71%. Therefore, a feasible, effective and precise method is demonstrated, which may be applied to realize real-time measurement of coke deposition based on on-line sampling and fast image analysis.
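
    A hedged sketch of the prediction step: a small neural network maps image features to coke amount. For brevity, standard gradient training stands in for the paper's particle swarm optimization, and all feature and coke values are synthetic placeholders:

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.metrics import mean_squared_error, r2_score

      rng = np.random.default_rng(0)
      features = rng.random((200, 12))            # 12 image features per pellet sample (invented)
      coke = 0.02 + 0.015 * features[:, :4].mean(axis=1) + 0.001 * rng.standard_normal(200)

      model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
      model.fit(features[:150], coke[:150])       # train on part of the dataset
      pred = model.predict(features[150:])        # predict on held-out samples

      rmse = mean_squared_error(coke[150:], pred) ** 0.5
      print(f"RMSE = {rmse:.4f}, R^2 = {r2_score(coke[150:], pred):.2f}")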

  1. Spectral Properties and Dynamics of Gold Nanorods Revealed by EMCCD Based Spectral-Phasor Method

    PubMed Central

    Chen, Hongtao; Digman, Michelle A.

    2015-01-01

    Gold nanorods (NRs) with tunable plasmon-resonant absorption in the near-infrared region have considerable advantages over organic fluorophores as imaging agents. However, the luminescence spectral properties of NRs have not been fully explored at the single-particle level in bulk due to the lack of proper analytic tools. Here we present a global spectral phasor analysis method which allows investigation of NR spectra at the single-particle level, together with their statistical behavior and spatial information during imaging. The wide phasor distribution obtained by the spectral phasor analysis indicates that the spectra of NRs differ from particle to particle. NRs with different spectra can be identified graphically in the corresponding spatial images with high spectral resolution. Furthermore, the spectral behavior of NRs under different imaging conditions, e.g. different excitation powers and wavelengths, was carefully examined with our laser-scanning multiphoton microscope with spectral imaging capability. Our results show that the spectral phasor method is an easy and efficient tool in hyperspectral imaging analysis for unravelling subtle changes of the emission spectrum. Moreover, we applied this method to study the spectral dynamics of NRs during direct optical trapping and optothermal trapping. Interestingly, spectral shifts were observed in both trapping phenomena. PMID:25684346
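
    A minimal sketch of the spectral-phasor transform: each pixel's emission spectrum is reduced to two coordinates (G, S) from its first Fourier harmonic, so spectrally distinct particles separate in the phasor plot. The channel range and spectra below are invented stand-ins, not NR data:

      import numpy as np

      def spectral_phasor(stack):
          """stack: (channels, height, width) spectral image; returns per-pixel G and S."""
          n = stack.shape[0]
          k = np.arange(n)
          cos = np.cos(2 * np.pi * k / n)[:, None, None]
          sin = np.sin(2 * np.pi * k / n)[:, None, None]
          total = np.maximum(stack.sum(axis=0), 1e-12)
          return (stack * cos).sum(axis=0) / total, (stack * sin).sum(axis=0) / total

      rng = np.random.default_rng(0)
      channels = np.linspace(550, 750, 32)                 # hypothetical emission channels (nm)
      peak = rng.choice([620.0, 680.0], size=(64, 64))     # two particle populations
      stack = np.exp(-((channels[:, None, None] - peak) ** 2) / (2 * 20.0 ** 2))

      G, S = spectral_phasor(stack)                        # pixels cluster by emission peak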

  2. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies

    PubMed Central

    Atkinson, Jonathan A.; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E.; Griffiths, Marcus

    2017-01-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. PMID:29020748

  3. CBCT-based bone quality assessment: are Hounsfield units applicable?

    PubMed Central

    Jacobs, R; Singer, S R; Mupparapu, M

    2015-01-01

    CBCT is a widely applied imaging modality in dentistry. It enables the visualization of high-contrast structures of the oral region (bone, teeth, air cavities) at a high resolution. CBCT is now commonly used for the assessment of bone quality, primarily for pre-operative implant planning. Traditionally, bone quality parameters and classifications were primarily based on bone density, which could be estimated through the use of Hounsfield units derived from multidetector CT (MDCT) data sets. However, there are crucial differences between MDCT and CBCT, which complicates the use of quantitative gray values (GVs) for the latter. From experimental as well as clinical research, it can be seen that great variability of GVs can exist on CBCT images owing to various reasons that are inherently associated with this technique (i.e. the limited field size, relatively high amount of scattered radiation and limitations of currently applied reconstruction algorithms). Although attempts have been made to correct for GV variability, it can be postulated that the quantitative use of GVs in CBCT should be generally avoided at this time. In addition, recent research and clinical findings have shifted the paradigm of bone quality from a density-based analysis to a structural evaluation of the bone. The ever-improving image quality of CBCT allows it to display trabecular bone patterns, indicating that it may be possible to apply structural analysis methods that are commonly used in micro-CT and histology. PMID:25315442

  4. A generic FPGA-based detector readout and real-time image processing board

    NASA Astrophysics Data System (ADS)

    Sarpotdar, Mayuresh; Mathew, Joice; Safonova, Margarita; Murthy, Jayant

    2016-07-01

    For space-based astronomical observations, it is important to have a mechanism to capture the digital output from the standard detector for further on-board analysis and storage. We have developed a generic (application-wise) field-programmable gate array (FPGA) board to interface with an image sensor, a method to generate the clocks required to read the image data from the sensor, and a real-time image processor system (on-chip) which can be used for various image processing tasks. The FPGA board is used as the image processor board in the Lunar Ultraviolet Cosmic Imager (LUCI) and a star sensor (StarSense) - instruments developed by our group. In this paper, we discuss the various design considerations for this board and its applications in future balloon flights and possible space flights.

  5. A gradient method for the quantitative analysis of cell movement and tissue flow and its application to the analysis of multicellular Dictyostelium development.

    PubMed

    Siegert, F; Weijer, C J; Nomura, A; Miike, H

    1994-01-01

    We describe the application of a novel image processing method, which allows quantitative analysis of cell and tissue movement in a series of digitized video images. The result is a vector velocity field showing average direction and velocity of movement for every pixel in the frame. We apply this method to the analysis of cell movement during different stages of the Dictyostelium developmental cycle. We analysed time-lapse video recordings of cell movement in single cells, mounds and slugs. The program can correctly assess the speed and direction of movement of either unlabelled or labelled cells in a time series of video images depending on the illumination conditions. Our analysis of cell movement during multicellular development shows that the entire morphogenesis of Dictyostelium is characterized by rotational cell movement. The analysis of cell and tissue movement by the velocity field method should be applicable to the analysis of morphogenetic processes in other systems such as gastrulation and neurulation in vertebrate embryos.
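
    An analogous per-pixel velocity field can be obtained with dense optical flow; the sketch below uses OpenCV's Farneback method on two synthetic frames and is offered as a related illustration, not the authors' gradient algorithm:

      import numpy as np
      import cv2

      rng = np.random.default_rng(0)
      frame1 = (rng.random((128, 128)) * 255).astype(np.uint8)
      frame2 = np.roll(frame1, shift=2, axis=1)        # simulate uniform rightward movement

      # Positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
      flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
      speed = np.linalg.norm(flow, axis=2)             # pixels/frame at every pixel
      direction = np.degrees(np.arctan2(flow[..., 1], flow[..., 0]))
      print(f"mean speed: {speed.mean():.2f} px/frame")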

  6. Time-frequency analysis of backscattered signals from diffuse radar targets

    NASA Astrophysics Data System (ADS)

    Kenny, O. P.; Boashash, B.

    1993-06-01

    The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. The authors discuss time-frequency representation of the backscattered signal from a diffuse radar target. It is then shown that for point scatterers which are statistically dependent or for which the reflectivity coefficient has a nonzero mean value, reconstruction using time of flight positron emission tomography on time-frequency images is effective for estimating the scattering function of the target.

  7. Colour measurements of pigmented rice grain using flatbed scanning and image analysis

    NASA Astrophysics Data System (ADS)

    Kaisaat, Khotchakorn; Keawdonree, Nuttapong; Chomkokard, Sakchai; Jinuntuya, Noparit; Pattanasiri, Busara

    2017-09-01

    Recently, the National Bureau of Agricultural Commodity and Food Standards (ACFS) have drafted a manual of Thai colour rice standards. However, there are no quantitative descriptions of rice colour and its measurement method. These drawbacks might lead to misunderstanding for people who use the manual. In this work, we proposed an inexpensive method, using flatbed scanning together with image analysis, to quantitatively measure rice colour and colour uniformity. To demonstrate its general applicability for colour differentiation of rice, we applied it to different kinds of pigmented rice, including Riceberry rice with and without uniform colour and Chinese black rice.
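
    A hedged sketch of one way to quantify grain colour and colour uniformity from a flatbed scan: convert RGB to CIELAB and summarize the grain pixels (the file name, threshold and mask rule below are assumptions for illustration):

      import numpy as np
      from skimage import io, color

      scan = io.imread("rice_scan.png")[..., :3] / 255.0   # hypothetical scanned image
      lab = color.rgb2lab(scan)

      grain_mask = lab[..., 0] < 60      # assumed: dark pigmented grains on a bright background
      L, a, b = (lab[..., i][grain_mask] for i in range(3))

      mean_colour = (L.mean(), a.mean(), b.mean())           # average CIELAB grain colour
      uniformity = np.sqrt(L.var() + a.var() + b.var())      # smaller value = more uniform colour
      print(mean_colour, uniformity)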

  8. Robust 2DPCA with non-greedy l1-norm maximization for image analysis.

    PubMed

    Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli

    2015-05-01

    2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied because the l1-norm maximization problem is difficult to solve directly; however, this strategy easily gets stuck in a local solution. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.

  9. Detection of irradiated quail meat by using DNA comet assay and evaluation of comets by image analysis

    NASA Astrophysics Data System (ADS)

    Erel, Yakup; Yazici, Nizamettin; Özvatan, Sumer; Ercin, Demet; Cetinkaya, Nurcan

    2009-09-01

    A simple technique of microgel electrophoresis of single cells (DNA comet assay) was used to detect DNA comets in irradiated quail meat samples. The resulting DNA comets were evaluated by both photomicrographic and image analysis. Quail meat samples were exposed to radiation doses of 0.52, 1.05, 1.45, 2.00, 2.92 and 4.00 kGy in a gamma cell (Gammacell 60Co, dose rate 1.31 kGy/h), covering the permissible limits for enzymatic decay, and stored at 2 °C. The cells isolated from muscle (chest, thorax) in cold PBS were analyzed using the DNA comet assay on days 1, 2, 3, 4, 7, 8 and 11 post irradiation. The cells were lysed in 2.5% SDS for 2, 5 and 9 min, and electrophoresis was carried out at 2 V/cm for 2 min. After propidium iodide staining, the slides were evaluated under a fluorescence microscope. In all irradiated samples, fragmented DNA stretched towards the anode and damaged cells appeared as comets. All measurement data were analyzed using BS 200 ProP image analysis software (BAB Imaging System, Ankara, Turkey). The density of DNA in the tails increased with increasing radiation dose. In non-irradiated samples, however, the large molecules of DNA remained relatively intact and there was only minor or no migration of DNA; the cells were round or had very short tails. The values of tail DNA%, tail length and tail moment were significantly different between 0.9 and 4.0 kGy dose exposures, and also among storage times on days 1, 4 and 8. In conclusion, the DNA comet assay (EN 13784 standard method) may be used not only as a screening method for the detection of irradiated quail meat, depending on storage time and conditions, but also for quantification of the applied dose if combined with image analysis. Image analysis may provide a powerful tool for evaluating comet head and tail intensity in relation to the applied dose.
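
    A hedged sketch of the comet descriptors mentioned above, computed from a 1-D intensity profile of a single comet (head on the left, tail stretching right); the profile, the head/tail boundary and the tail-moment definition used here are illustrative assumptions, not the BS 200 ProP algorithm:

      import numpy as np

      x = np.arange(200)                                    # position along the comet (pixels)
      head = 100 * np.exp(-((x - 30) ** 2) / 50.0)          # synthetic head intensity
      tail = 20 * np.exp(-np.maximum(x - 30, 0) / 60.0)     # synthetic tail intensity
      profile = head + tail

      head_end = 45                                         # assumed head/tail boundary (pixels)
      tail_profile = profile[head_end:]

      tail_dna_percent = 100.0 * tail_profile.sum() / profile.sum()
      detectable = np.nonzero(tail_profile > 1.0)[0]        # pixels with detectable DNA
      tail_length = int(detectable[-1]) + 1 if detectable.size else 0
      tail_moment = tail_length * tail_dna_percent / 100.0  # one common tail-moment definition

      print(round(tail_dna_percent, 1), tail_length, round(tail_moment, 1))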

  10. Multifractal and Singularity Maps of soil surface moisture distribution derived from 2D image analysis.

    NASA Astrophysics Data System (ADS)

    Cumbrera, Ramiro; Millán, Humberto; Martín-Sotoca, Juan Jose; Pérez Soto, Luis; Sanchez, Maria Elena; Tarquis, Ana Maria

    2016-04-01

    Soil moisture distribution usually presents extreme variation at multiple spatial scales. Image analysis could be a useful tool for investigating these spatial patterns of apparent soil moisture at multiple resolutions. The objectives of the present work were (i) to describe the local scaling of the apparent soil moisture distribution and (ii) to define apparent soil moisture patterns from vertical planes of Vertisol pit images. Two soil pits (0.70 m long × 0.60 m wide × 0.30 m deep) were excavated on a bare Mazic Pellic Vertisol. One was excavated in April 2011 and the other in May 2011, after 3 days of a moderate rainfall event. Digital photographs were taken of each Vertisol pit using a Kodak™ digital camera. The mean image size was 1600 × 945 pixels, with one physical pixel ≈373 μm of the photographed soil pit. For more details see Cumbrera et al. (2012). Geochemical exploration has found increasing interest in, and benefit from, fractal (power-law) models to characterize geochemical distributions, such as the concentration-area (C-A) model (Cheng et al., 1994; Cheng, 2012). This method is based on singularity maps of a measure that, at each point, define areas with self-similar properties, revealed as power-law relationships in concentration-area plots (the C-A method). The C-A method together with the singularity map (the "Singularity-CA" method) defines thresholds that can be applied to segment the map. We have applied it to each soil image. The results show that, in spite of some computational and practical limitations, image analysis of apparent soil moisture patterns could be used to study the dynamical change of soil moisture, in agreement with previous results (Millán et al., 2016). REFERENCES: Cheng, Q., Agterberg, F. P. and Ballantyne, S. B. (1994). The separation of geochemical anomalies from background by fractal methods. Journal of Geochemical Exploration, 51, 109-130. Cheng, Q. (2012). Singularity theory and methods for mapping geochemical anomalies caused by buried sources and for predicting undiscovered mineral deposits in covered areas. Journal of Geochemical Exploration, 122, 55-70. Cumbrera, R., Tarquis, A. M., Gascó, G. and Millán, H. (2012). Fractal scaling of apparent soil moisture estimated from vertical planes of Vertisol pit images. Journal of Hydrology, 452-453, 205-212. Martín-Sotoca, J. J., Saa-Requejo, A., Grau, J. and Tarquis, A. M. (2016). Segmentation of singularity maps in the context of soil porosity. Geophysical Research Abstracts, 18, EGU2016-11402. Millán, H., Cumbrera, R. and Tarquis, A. M. (2016). Multifractal and Levy-stable statistics of soil surface moisture distribution derived from 2D image analysis. Applied Mathematical Modelling, 40(3), 2384-2395.

  11. Soft X-ray astronomy using grazing incidence optics

    NASA Technical Reports Server (NTRS)

    Davis, John M.

    1989-01-01

    The instrumental background of X-ray astronomy with an emphasis on high resolution imagery is outlined. Optical and system performance, in terms of resolution, are compared and methods for improving the latter in finite length instruments described. The method of analysis of broadband images to obtain diagnostic information is described and is applied to the analysis of coronal structures.

  12. An augmented parametric response map with consideration of image registration error: towards guidance of locally adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Lausch, Anthony; Chen, Jeff; Ward, Aaron D.; Gaede, Stewart; Lee, Ting-Yim; Wong, Eugene

    2014-11-01

    Parametric response map (PRM) analysis is a voxel-wise technique for predicting overall treatment outcome, which shows promise as a tool for guiding personalized locally adaptive radiotherapy (RT). However, image registration error (IRE) introduces uncertainty into this analysis which may limit its use for guiding RT. Here we extend the PRM method to include an IRE-related PRM analysis confidence interval and also incorporate multiple graded classification thresholds to facilitate visualization. A Gaussian IRE model was used to compute an expected value and confidence interval for PRM analysis. The augmented PRM (A-PRM) was evaluated using CT-perfusion functional image data from patients treated with RT for glioma and hepatocellular carcinoma. Known rigid IREs were simulated by applying one thousand different rigid transformations to each image set. PRM and A-PRM analyses of the transformed images were then compared to analyses of the original images (ground truth) in order to investigate the two methods in the presence of controlled IRE. The A-PRM was shown to help visualize and quantify IRE-related analysis uncertainty. The use of multiple graded classification thresholds also provided additional contextual information which could be useful for visually identifying adaptive RT targets (e.g. sub-volume boosts). The A-PRM should facilitate reliable PRM guided adaptive RT by allowing the user to identify if a patient’s unique IRE-related PRM analysis uncertainty has the potential to influence target delineation.

  13. High-Performance Computational Analysis of Glioblastoma Pathology Images with Database Support Identifies Molecular and Survival Correlates.

    PubMed

    Kong, Jun; Wang, Fusheng; Teodoro, George; Cooper, Lee; Moreno, Carlos S; Kurc, Tahsin; Pan, Tony; Saltz, Joel; Brat, Daniel

    2013-12-01

    In this paper, we present a novel framework for microscopic image analysis of nuclei, data management, and high-performance computation to support translational research involving nuclear morphometry features, molecular data, and clinical outcomes. Our image analysis pipeline consists of nuclei segmentation and feature computation facilitated by high-performance computing with coordinated execution on multi-core CPUs and Graphics Processing Units (GPUs). All data derived from image analysis are managed in a spatial relational database supporting highly efficient scientific queries. We applied our image analysis workflow to 159 glioblastomas (GBM) from The Cancer Genome Atlas dataset. In integrative studies, we found that statistics of four specific nuclear features were significantly associated with patient survival. Additionally, we correlated nuclear features with molecular data and found results that support pathologic domain knowledge. Proneural-subtype GBMs had the smallest mean nuclear Eccentricity and the largest mean nuclear Extent and MinorAxisLength. We also found that gene expression of the stem cell marker MYC and the cell proliferation marker MKI67 was correlated with nuclear features. To complement and inform pathologists of relevant diagnostic features, we queried the most representative nuclear instances from each patient population based on genetic and transcriptional classes. Our results demonstrate that specific nuclear features carry prognostic significance and associations with transcriptional and genetic classes, highlighting the potential of high-throughput pathology image analysis as a complementary approach to human-based review and translational research.
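
    A brief sketch of how the named nuclear features (Eccentricity, Extent, MinorAxisLength) can be read from a labelled nuclei mask with scikit-image; segmentation of the GBM images themselves is outside this snippet, and the mask below is synthetic:

      import numpy as np
      from skimage import measure, draw

      mask = np.zeros((200, 200), dtype=np.uint8)
      rr, cc = draw.ellipse(60, 60, 20, 10)      # a synthetic elongated "nucleus"
      mask[rr, cc] = 1
      rr, cc = draw.ellipse(140, 140, 15, 15)    # a synthetic round "nucleus"
      mask[rr, cc] = 1

      labels = measure.label(mask)
      for region in measure.regionprops(labels):
          print(region.label, round(region.eccentricity, 2),
                round(region.extent, 2), round(region.minor_axis_length, 1))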

  14. MSL: Facilitating automatic and physical analysis of published scientific literature in PDF format

    PubMed Central

    Ahmed, Zeeshan; Dandekar, Thomas

    2018-01-01

    Published scientific literature contains millions of figures, including information about the results obtained from different scientific experiments, e.g. PCR-ELISA data, microarray analysis, gel electrophoresis, mass spectrometry data, DNA/RNA sequencing, diagnostic imaging (CT/MRI and ultrasound scans), and medical imaging such as electroencephalography (EEG), magnetoencephalography (MEG), echocardiography (ECG), and positron-emission tomography (PET) images. The importance of biomedical figures has been widely recognized in the scientific and medical communities, as they play a vital role in providing major original data and experimental and computational results in concise form. One major challenge for implementing a system for scientific literature analysis is extracting and analyzing text and figures from published PDF files by physical and logical document analysis. Here we present a product-line-architecture-based bioinformatics tool, 'Mining Scientific Literature (MSL)', which supports the extraction of text and images by interpreting all kinds of published PDF files using advanced data mining and image processing techniques. It provides modules for the marginalization of extracted text based on different coordinates and keywords, visualization of extracted figures, and extraction of embedded text from all kinds of biological and biomedical figures using Optical Character Recognition (OCR). Moreover, for further analysis and usage, it generates the system's output in different formats including text, PDF, XML and image files. Hence, MSL is an easy to install and use analysis tool for interpreting published scientific literature in PDF format. PMID:29721305

  15. Automatic detection of cone photoreceptors in split detector adaptive optics scanning light ophthalmoscope images.

    PubMed

    Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina

    2016-05-01

    Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
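
    A minimal sketch of the Dice coefficient used above, shown for binary masks (detected cone positions can be rasterized or matched before scoring); the example masks are synthetic:

      import numpy as np

      def dice(a: np.ndarray, b: np.ndarray) -> float:
          # Dice = 2|A ∩ B| / (|A| + |B|), with the empty-vs-empty case defined as 1.
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      auto = np.zeros((64, 64), bool); auto[10:20, 10:20] = True      # automatic result
      manual = np.zeros((64, 64), bool); manual[12:22, 10:20] = True  # expert grader
      print(round(dice(auto, manual), 3))                             # 0.8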

  16. Photothermal technique in cell microscopy studies

    NASA Astrophysics Data System (ADS)

    Lapotko, Dmitry; Chebot'ko, Igor; Kutchinsky, Georgy; Cherenkevitch, Sergey

    1995-01-01

    The photothermal (PT) method is applied to cell imaging and quantitative studies. Techniques for cell monitoring, imaging and cell viability testing are developed. The method and experimental setup for optical and PT-image acquisition and analysis are described. A dual-pulsed laser setup combined with phase-contrast illumination of a sample provides visualization of the temperature field or absorption structure of a sample with a spatial resolution of 0.5 micrometers. The experimental optics, hardware and software are designed on a modular principle, so the whole setup can be adjusted for various experiments: PT-response monitoring or photothermal spectroscopy studies. The sensitivity of the PT method allows imaging of the structural elements of live (non-stained) white blood cells. The results of experiments with normal and subnormal blood cells (red blood cells, lymphocytes, neutrophils and lymphoblasts) are reported. The obtained PT-images differ from their optical analogs and deliver additional information about cell structure. Quantitative analysis of the images was used for comparative diagnostics of cell populations. A viability test for red blood cell differentiation is described. In a study of neutrophils in normal subjects and in sarcoidosis, differences in the PT-images of the cells were found.

  17. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.

  18. DICOM for quantitative imaging biomarker development: a standards based approach to sharing clinical data and structured PET/CT analysis results in head and neck cancer research

    PubMed Central

    Clunie, David; Ulrich, Ethan; Bauer, Christian; Wahle, Andreas; Brown, Bartley; Onken, Michael; Riesmeier, Jörg; Pieper, Steve; Kikinis, Ron; Buatti, John; Beichel, Reinhard R.

    2016-01-01

    Background. Imaging biomarkers hold tremendous promise for precision medicine clinical applications. Development of such biomarkers relies heavily on image post-processing tools for automated image quantitation. Their deployment in the context of clinical research necessitates interoperability with the clinical systems. Comparison with the established outcomes and evaluation tasks motivate integration of the clinical and imaging data, and the use of standardized approaches to support annotation and sharing of the analysis results and semantics. We developed the methodology and tools to support these tasks in Positron Emission Tomography and Computed Tomography (PET/CT) quantitative imaging (QI) biomarker development applied to head and neck cancer (HNC) treatment response assessment, using the Digital Imaging and Communications in Medicine (DICOM®) international standard and free open-source software. Methods. Quantitative analysis of PET/CT imaging data collected on patients undergoing treatment for HNC was conducted. Processing steps included Standardized Uptake Value (SUV) normalization of the images, segmentation of the tumor using manual and semi-automatic approaches, automatic segmentation of the reference regions, and extraction of the volumetric segmentation-based measurements. Suitable components of the DICOM standard were identified to model the various types of data produced by the analysis. A developer toolkit of conversion routines and an Application Programming Interface (API) were contributed and applied to create a standards-based representation of the data. Results. DICOM Real World Value Mapping, Segmentation and Structured Reporting objects were utilized for standards-compliant representation of the PET/CT QI analysis results and relevant clinical data. A number of correction proposals to the standard were developed. The open-source DICOM toolkit (DCMTK) was improved to simplify the task of DICOM encoding by introducing new API abstractions. Conversion and visualization tools utilizing this toolkit were developed. The encoded objects were validated for consistency and interoperability. The resulting dataset was deposited in the QIN-HEADNECK collection of The Cancer Imaging Archive (TCIA). Supporting tools for data analysis and DICOM conversion were made available as free open-source software. Discussion. We presented a detailed investigation of the development and application of the DICOM model, as well as the supporting open-source tools and toolkits, to accommodate representation of the research data in QI biomarker development. We demonstrated that the DICOM standard can be used to represent the types of data relevant in HNC QI biomarker development, and encode their complex relationships. The resulting annotated objects are amenable to data mining applications, and are interoperable with a variety of systems that support the DICOM standard. PMID:27257542
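
    A minimal sketch of the body-weight SUV normalization mentioned in the processing steps; the numerical values are placeholders, and in practice they come from the decay-corrected PET data and the DICOM headers:

      # Body-weight SUV = activity concentration * body weight / injected dose.
      activity_concentration_bq_per_ml = 12000.0   # voxel value after decay correction (placeholder)
      injected_dose_bq = 370e6                     # e.g. 10 mCi (placeholder)
      body_weight_g = 75000.0                      # 75 kg patient (placeholder)

      suv_bw = activity_concentration_bq_per_ml * body_weight_g / injected_dose_bq
      print(round(suv_bw, 2))                      # about 2.43 for these values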

  19. Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.

    1994-05-01

    An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained, yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual-echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of applying the algorithm to these datasets are presented and compared to manual lesion segmentation of the same data.

  20. Generating High-Temporal and Spatial Resolution TIR Image Data

    NASA Astrophysics Data System (ADS)

    Herrero-Huerta, M.; Lagüela, S.; Alfieri, S. M.; Menenti, M.

    2017-09-01

    Remote sensing imagery to monitor global biophysical dynamics requires thermal infrared (TIR) data at high temporal and spatial resolution because of the rapid development of crops during the growing season and the fragmentation of most agricultural landscapes. However, no single sensor meets these combined requirements. Data fusion approaches offer an alternative that exploits observations from multiple sensors, providing data sets with better properties. A novel spatio-temporal data fusion model based on constrained algorithms, denoted the multisensor multiresolution technique (MMT), was developed and applied to generate synthetic TIR image data at both high temporal and high spatial resolution. First, an adaptive radiance model based on spectral unmixing analysis is applied: TIR radiance data at the top of atmosphere (TOA) collected daily by MODIS at 1 km and every 16 days by Landsat TIRS, resampled to 30 m, are used to generate synthetic daily TOA radiance images at 30 m spatial resolution. The next step consists of unmixing the 30 m (now lower-resolution) images using information about their pixel land-cover composition from co-registered images at higher spatial resolution; in our case study, the synthesized TIR data were unmixed using Sentinel-2 MSI data at 10 m resolution. The constrained unmixing preserves all the available radiometric information of the 30 m images and involves optimizing the number of land-cover classes and the size of the moving window for spatial unmixing. Results are still being evaluated, with particular attention to the quality of the data streams required to apply our approach.
